Falsehoods Smart-Device people believe about Home Networks

A few years ago someone posted a great article about the bad assumptions programmers make about names; here’s a similar list of assumptions about home networks and smart devices.

We all remember the excellent Falsehoods people believe about names, don’t we?

Having lived with a few smart devices sharing my network for a while, I thought we needed a similar list about smart devices and home networking.

Items marked with a * were contributed or inspired by @davidmoss.

  • The WiFi is always available
  • The WiFi is continuously connected to the internet
  • The WiFi network isn’t hidden
  • The WiFi network isn’t restricted by MAC address, so devices can be hidden from the user
  • The WiFi network doesn’t use strong authentication like WPA2
  • The WiFi network definitely doesn’t use authentication mentioning the word ‘Enterprise’
  • The user knows the exact authentication type in use for the WiFi, so there’s no need to auto-detect it*
  • There is only a single WiFi network
  • The name of the WiFi network is ASCII*
  • There is only a single access point for the WiFi network
  • Any device connected to the home-network is trusted to control the smart devices on it
  • Smart devices and their controllers are on the same network
  • Devices on the network can connect directly to each other
  • The network is simple, and doesn’t use other technologies such as powerline [1]
  • All networks have a PC type device to install/configure/upgrade devices (and that device is running Windows)*
  • There is always a DHCP Server*
  • Devices will always get the same IP address on the internal network from the DHCP server
  • DHCP device names don’t have to be explanatory, because nobody ever sees them
  • Devices can have inbound connections from the internet [2]
  • The network is reliable without packet loss (see the sketch after the footnotes)
  • The connectivity is sufficient for all devices on the network
  • The performance characteristics of the network are constant and don’t change over time
  • The Internet connectivity isn’t metered, and there’s no problem downloading lots of data
  • Encryption of traffic is an overhead that isn’t needed on embedded devices
  • Predictable IDs like Serial-Numbers are good default security tokens
  • Unchangeable IDs like Serial-Numbers are acceptable security tokens
  • The device won’t be used as a platform for attacks, so doesn’t need to be hardened against threats internal and external to the network [3]
  • Devices can be shipped and abandoned. They won’t be used for years, so any future software vulnerabilities can be ignored
  • IPv6 is for the future, and doesn’t need to be supported [4]

What have I missed?

  1. These should be layer-2 transparent, but they can disrupt multicast, which can break Bonjour
  2. Aside from the security implications, ISPs are moving to carrier-grade NAT to work around IPv4 address exhaustion, so inbound ports may not be possible
  3. Many devices have a pretty complete Linux stack, at least complete enough for attackers to use
  4. Chicken and egg, this one
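
To make a couple of those concrete: the availability and reliability falsehoods mostly disappear if devices retry with backoff instead of assuming the network is up. A minimal Python sketch, where the host, port, and STATUS command are all hypothetical:

    import socket
    import time

    def fetch_device_status(host: str, port: int, retries: int = 5) -> bytes:
        """Poll a device without assuming the WiFi is up, the internet is
        reachable, or the network is loss-free."""
        delay = 1.0
        for attempt in range(retries):
            try:
                with socket.create_connection((host, port), timeout=5) as sock:
                    sock.sendall(b"STATUS\n")     # hypothetical wire command
                    return sock.recv(1024)
            except OSError:                       # DNS failure, timeout, refusal...
                if attempt == retries - 1:
                    raise                         # give up loudly after the last try
                time.sleep(delay)                 # back off instead of hammering
                delay = min(delay * 2, 30.0)      # exponential backoff, capped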

Security is hard, but the easy bits aren’t

The hard bits of security are hard, but the easy bits aren’t. As infrastructure gets more dynamic, we need to make sure it isn’t everyone else who’s redefining it.

Another week, another story about security.

Actually multiple stories about security.

And what’s upsetting about these ones is that the fixes for them are already available.

I don’t cut code anymore. I’m not a particularly adept coder, and I think my code is a bit ugly. But I still know what bad practice smells like, and what upsets me is how often we repeat the mistakes of old. [1] [2]

Yes, there are always deadlines, but if we’re working with advanced software-defined infrastructures, then we have to restrict who can redefine them.

If you’re in a Product Manager role, don’t be afraid to ask what you’re doing for security, or what the response plans are if something is compromised. Be mindful of the risk to your reputation if you don’t give developers time to improve security instead of piling on ever more features. The mitigations for the most obvious attacks are documented, and usually relatively easy to implement.

And now to the details

Code Spaces had all their data wiped. We don’t know all the details, but it sounds like:

  • They hadn’t enabled two-factor auth on their AWS account
  • Their backups weren’t to a different AWS account, or better still to another provider.

If you’re running a production service, and you’re hosting data for anyone else, then your backups need to be rock solid. Backing up to the same provider, in the same account, is like copying all the files from your desktop into a folder called “backup”. Sure, you’ve two copies, but when that disk goes bang, they’re both gone.
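
What “rock solid” can look like in practice: push each backup object into a bucket owned by an entirely separate account, under separate credentials. A sketch with boto3, where the profile and bucket names are assumptions rather than anything Code Spaces actually ran:

    import boto3

    # Two locally configured credential profiles (hypothetical names):
    # "prod" for the live account, "backup" for a separate backup-only account.
    prod = boto3.session.Session(profile_name="prod").client("s3")
    backup = boto3.session.Session(profile_name="backup").client("s3")

    def copy_to_backup_account(bucket: str, key: str, backup_bucket: str) -> None:
        # Read with production credentials, write with the backup account's,
        # so a compromised prod key can't delete both copies.
        obj = prod.get_object(Bucket=bucket, Key=key)
        backup.put_object(Bucket=backup_bucket, Key=key, Body=obj["Body"].read())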

And yes, 2-Factor is a pain when you’re logging into services, but if you’re hosting customer data, that’s a pain you need to cope with. Providers usually let you set up many secondary accounts with reduced privileges, so use those tools to protect your services, and let people do just what they need in order to do their jobs.

On a similar theme, people are leaving their AWS keys in Android apps. Amazon offers a ticket-granting service that’s ideal for this; it’s more work, but work that you should be doing.

Some people aren’t even using those permissioning tools to embed keys with limited access, which, just to reiterate, you shouldn’t be doing anyway. Instead they are embedding their main access key pair, which means that attackers could access and delete all their data, and spin up thousands of instances just for fun/profit.
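
That ticket-granting service is AWS STS, the Security Token Service; instead of shipping the main key pair, the app is handed short-lived, narrowly scoped credentials. A sketch, with the bucket and session names made up for illustration:

    import json
    import boto3

    sts = boto3.client("sts")

    # Allow read-only access to one bucket (hypothetical name), nothing else.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-assets/*",
        }],
    }

    creds = sts.get_federation_token(
        Name="mobile-session",        # hypothetical session name
        Policy=json.dumps(policy),
        DurationSeconds=3600,         # credentials expire after an hour
    )["Credentials"]
    # creds holds a temporary AccessKeyId, SecretAccessKey and SessionToken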

Security is hard. The recent problems found in libraries like OpenSSL are hard for an individual coder to work around, but decent libraries are still better than going it alone.

The 80:20 rule is ever present. Will you ever make your app fully secure? Unlikely. Can you prevent the most obvious attacks with the application of best practices, many of which your programming language can do for you? Yes.

Don’t leave keys lying around, don’t give apps or services any more permissions than they need, and don’t use predictable IDs for sensitive data…

Do sanitise data you’re given, protect against XSS attacks, turn on 2-Factor Authentication for anything serious, and always keep decent backups hosted on separate infrastructure…
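
The first of those is often a one-liner; a Python sketch of escaping user input before it reaches a page, so injected markup renders as inert text:

    import html

    def render_comment(comment: str) -> str:
        # "<script>alert(1)</script>" comes out as harmless text, not markup.
        return "<p>" + html.escape(comment) + "</p>"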

These lists go on, but they’re not new: best practice years ago is still best practice now.

  1. Don’t get me started on file-moving scripts that don’t use incoming and outgoing folders to avoid race conditions (the fix is sketched below)
  2. Or when we tolerate software from vendors that can’t run as anything other than root or Administrator
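
The fix for that first footnote is small. A sketch in Python, assuming the producer and consumer share a filesystem:

    import os

    def deliver(payload: bytes, incoming_dir: str, name: str) -> None:
        # Write under a temporary name, then rename into place: rename()
        # is atomic on a single filesystem, so a consumer watching
        # incoming_dir never picks up a half-written file.
        tmp_path = os.path.join(incoming_dir, name + ".part")
        final_path = os.path.join(incoming_dir, name)
        with open(tmp_path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())      # make sure the bytes reach the disk
        os.rename(tmp_path, final_path)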

Security becoming life and death

When medical devices are hacked, is it finally time to accept that security should be an implicit requirement?

(Given many of my posts are second-rate Gruber posts on the Mac, this one is a second-rate Schneier.)

I like Chip+PIN. I don’t think EMV is perfect: it has the complexity of a committee-driven standard created by competing companies, and it has flaws and oversights. I’ll still wager it’s more secure than someone looking at a signature, and since skimming attacks get immediately moved abroad (when the cloned cards are created from the legacy mag-stripe), behavioural analysis makes spotting fraud a bit easier.

I do not feel the same way about Verified By Visa which I continue to curse every time I use it.

Anyway, I very much disliked the UK Cards Association’s response to the excellent Cambridge Computer Laboratory when they published flaws and potential attacks, demanding they take the papers down. They played the near-standard “oh, it’s very hard to do right now, we don’t think anyone could really do that; they’re very clever and most people won’t be” line. The only problem is that with each new vulnerability, the Cambridge team appear to be producing more plausible attacks. UK Cards were rightly told to go away.

It would have been nicer to hear:

“We thank the CCL for their work in exposing potential attacks in the EMV system. At the moment we think these are peripheral threats, but we will work with EMV partners to take the findings onboard, and resolve these as the standard evolves”

This of course blows the “Chip+PIN is totally secure” line out of the water, which matters because they’re trying to move the liability onto the consumer; admitting the system is even partially compromised lessens that.

At the end of the day, this is just money. There’s always been fraud, there always will be. Not life and death.

I used to work in Broadcast. Many of those systems were insecure, relying on being in a partitioned network. DNS and Active Directory were frowned on, seen as potential points of failure rather than useful configuration and security tools. The result was a known but brittle system. Hardening of builds was an afterthought, and the armadillo model of crunchy perimeter and soft centre meant that, much like the US Predator drone control pods, once inside, passage was easy.

Depressing, yes? Particularly because so many of these problems were solved before, and solved well. But it was just telly. Not life and death.

I mean, it’s not like you can remotely inject someone with a lethal dose of something.

Except it is: a few months back someone reverse-engineered the protocol of their insulin pump, and was able to control it with just the serial number. This was bad enough. Devices that inject things into humans shouldn’t be controllable without some form of authentication beyond a 6-digit number.

At the time, the familiar “it’s too difficult, you still need the number, you’ve got to be nearby” response was provided.

Two months later, another security researcher managed to decode the magical number, and used a long-distance aerial to send commands to the pump.

I’m sure it’s still “too hard to be viable”: because someone’s death isn’t the kind of major consequence that could attract the kind of support that makes hard things viable…
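
For contrast, the kind of check the pump lacked doesn’t need to be exotic. A minimal challenge-response sketch, assuming a per-device secret provisioned at pairing time instead of a guessable serial number:

    import hashlib
    import hmac
    import os

    def make_challenge() -> bytes:
        # The pump sends a fresh random nonce for each session, so a
        # captured response can't simply be replayed later.
        return os.urandom(16)

    def sign(secret: bytes, challenge: bytes) -> bytes:
        # The controller proves it holds the pairing secret.
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
        # Constant-time comparison avoids leaking the answer byte by byte.
        return hmac.compare_digest(sign(secret, challenge), response)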

Security is hard to do well, and we need to start embedding it in everything; it is now a matter of life and death. But it’s hard, and hard for the psychology just as much as the technology. You should really use an existing algorithm implementation, because the chances are it’s better than yours: but that’s licensing and IPR, so you roll your own cipher, believing your application is too trivial to be a target for hacking. Besides, your proprietary wire-protocol is proprietary; it’s already secret. People aren’t going to bother to figure it out.
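
And the existing implementation is usually less work than the home-rolled cipher. A sketch using the widely used Python cryptography package (my choice for illustration; any vetted authenticated-encryption library does the same job):

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # provision once, keep in a key store
    f = Fernet(key)

    token = f.encrypt(b"set rate 1.2")    # hypothetical device command
    plain = f.decrypt(token)              # raises InvalidToken if tampered with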

Security makes things harder: you can’t just wire-sniff your protocol anymore to debug stuff. Your test suites become more complicated because you can no longer play back the commands and expect the device to respond. That little embedded processor isn’t powerful enough to be doing crypto: it’s going to up the unit price, it’s going to increase power usage and latency.

Many programmers still belong to the “hit it and hit it until it works” school of coding. I don’t mean test-driven development; I mean those coders who think that if it compiles, it ships. These people don’t really adapt well to working in a permissions-based sandbox; it’s harder to split your processes up so that only the things that need the privileges have them (we’ve all done ‘chmod 777 *’ to get an application up and running).
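
The better habit is small, too: do the privileged setup, then shed the rights. A Unix sketch, with “appuser” as a hypothetical unprivileged account:

    import os
    import pwd

    def drop_privileges(user: str = "appuser") -> None:
        # Call after the root-only work (e.g. binding port 443) is done.
        info = pwd.getpwnam(user)
        os.setgroups([])              # shed supplementary groups first
        os.setgid(info.pw_gid)        # then the group...
        os.setuid(info.pw_uid)        # ...and finally the user; order matters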

Until everyone realises that every device with smarts is a vector, from batteries to APIs to websites, we’re increasingly at risk. I guess that massive solar flare could take things out for us.