When medical devices are hacked, is it finally time to accept that security should be an implicit requirement?
(Given that many of my posts are second-rate Gruber posts on the Mac, this one is a second-rate Schneier.)
I like Chip+PIN. I don’t think EMV is perfect: it has the complexity of a committee-driven standard created by competing companies, and it has flaws and oversights. I’ll still wager it’s more secure than someone glancing at a signature, and since skimming attacks immediately move abroad (the cloned cards are created from the legacy mag-stripe), behavioural analysis makes spotting the fraud a bit easier.
I do not feel the same way about Verified by Visa, which I continue to curse every time I use it.
Anyway, I very much disliked the UK Cards Association’s response to the excellent Cambridge Computer Laboratory when it published flaws and potential attacks, demanding the papers be taken down. They played the near-standard “oh, it’s very hard to do right now, we don’t think anyone could really do that; they’re very clever and most people won’t be” line. The only problem is that with each new vulnerability, the Cambridge team produce more plausible attacks. UK Cards were rightly told to go away.
It would have been nicer to hear:
“We thank the CCL for their work in exposing potential attacks on the EMV system. At the moment we consider these peripheral threats, but we will work with our EMV partners to take the findings on board and resolve them as the standard evolves.”
This of course blows the “Chip+PIN is totally secure” line out of the water – which matters because the industry is trying to move liability onto the consumer, and admitting the system is even partially compromised weakens that case.
At the end of the day, this is just money. There’s always been fraud, there always will be. Not life and death.
I used to work in Broadcast. Many of those systems were insecure, relying on sitting inside a partitioned network. DNS and Active Directory were frowned upon, seen as potential points of failure rather than useful configuration and security tools. The result was a known but brittle system. Hardening of builds was an afterthought, and the armadillo model of crunchy perimeter and soft centre meant that, much like the US Predator drone control pods, once you were inside, passage was easy.
Depressing, yes? Particularly because so many of these problems were solved before, and solved well. But it was just telly. Not life and death.
I mean, it’s not like you can remotely inject someone with a lethal dose of something.
Except it is: a few months back someone reverse-engineered the protocol of their insulin pump and was able to control it with just the serial number. That was bad enough. Devices that inject things into humans shouldn’t be controllable without some form of authentication beyond a six-digit number.
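You don’t need the write-up to see why a six-digit number is thin protection; the arithmetic alone makes the point. A purely hypothetical sketch in Python (nothing here reflects the actual pump protocol):

```python
# Hypothetical sketch: a six-digit identifier gives only 10^6 candidates.
# An attacker who can transmit guesses, or eavesdrop one valid packet,
# can walk the entire space almost immediately.

def enumerate_serials():
    """Yield every possible six-digit serial, zero-padded."""
    for n in range(1_000_000):
        yield f"{n:06d}"

if __name__ == "__main__":
    total = sum(1 for _ in enumerate_serials())
    print(f"{total:,} possible serials")  # 1,000,000 -- a speed bump, not a secret
```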
At the time, the familiar “it’s too difficult, you still need the number, you’ve got to be nearby” response was trotted out.
Two months later, another security researcher managed to decode the magic number, and used a long-distance aerial to send commands to the pump.
I’m sure it’s still “too hard to be viable” – because someone’s death obviously isn’t the sort of high-consequence outcome that would attract the kind of backing that makes hard things viable…
Security is hard to do well, and we need to start embedding it in everything – it is now a matter of life and death. But it’s hard, and the difficulty is as much psychological as technical. You should really use an existing, vetted algorithm implementation, because the chances are it’s better than yours: but that means licensing and IPR, so just roll your own cipher and believe your application is too trivial to be a target. Besides, your proprietary wire protocol is proprietary, so it’s already secret. People aren’t going to bother to figure it out.
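To be clear about what “use an existing implementation” looks like in practice, here is a hedged sketch using the third-party Python cryptography package (an illustration, not an endorsement of any particular product): authenticated encryption in a handful of lines, with the key handling, padding and integrity checks already reviewed by people who break this stuff for a living.

```python
# Sketch only: authenticated encryption via the third-party `cryptography`
# package (pip install cryptography), instead of a home-rolled cipher.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in real life, store and distribute this carefully
f = Fernet(key)

token = f.encrypt(b"set_basal_rate=1.2")  # ciphertext with integrity tag and timestamp
plaintext = f.decrypt(token)              # raises InvalidToken if anything was tampered with

print(plaintext)
```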
Security makes things harder: you can’t just wire-sniff your protocol any more to debug stuff. Your test suites become more complicated because you can no longer play back captured commands and expect the device to respond. That little embedded processor isn’t powerful enough to be doing crypto: it’s going to push up the unit price, increase the power usage and add latency.
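As a concrete, assumed-for-illustration example of why playback stops working: once a command frame carries a monotonic counter and an HMAC, a recorded frame is rejected the second time it is seen. A standard-library Python sketch:

```python
# Illustrative sketch: a counter plus an HMAC means captured traffic can't
# simply be replayed at the device -- which is exactly what breaks naive
# record-and-playback test harnesses.
import hmac
import hashlib

SHARED_KEY = b"example-key-not-for-production"

def make_frame(counter: int, command: bytes) -> bytes:
    body = counter.to_bytes(4, "big") + command
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return body + tag

def accept_frame(frame: bytes, last_counter: int):
    body, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False, last_counter            # forged or corrupted frame
    counter = int.from_bytes(body[:4], "big")
    if counter <= last_counter:
        return False, last_counter            # already seen: replay rejected
    return True, counter

frame = make_frame(1, b"bolus 2u")
print(accept_frame(frame, last_counter=0))    # (True, 1)  -- fresh frame accepted
print(accept_frame(frame, last_counter=1))    # (False, 1) -- replayed frame rejected
```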
Many programmers still belong to the “hit it and hit it until it works” school of coding. I don’t mean test-driven development; I mean those coders who think that if it compiles, it ships. These people don’t adapt well to working in a permissions-based sandbox; it’s harder to split your processes up so that only the parts that need privileges have them (we’ve all done ‘chmod 777 *’ just to get an application up and running).
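The cure for the ‘chmod 777 *’ reflex is mostly just being explicit about who needs what. A trivial sketch, with hypothetical file names:

```python
# Trivial sketch: give each file only the access it needs, rather than
# making everything world-writable until the application starts working.
import os
import stat

# Create a secrets file readable and writable by the owner alone (0o600).
fd = os.open("app_secrets.conf", os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"api_key=example\n")
os.close(fd)

# A launcher script executable by owner and group only (0o750).
with open("launch.sh", "w") as script:
    script.write("#!/bin/sh\necho hello\n")
os.chmod("launch.sh", stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
```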
Until everyone realises that every device with smarts – from batteries, to APIs, to websites – is an attack vector, we’re increasingly at risk. I guess that massive solar flare could always take things out for us.