The Internet of Things (IoT) is growing. If you’re relaxed about that, maybe you should think again and read the latest book from security guru Bruce Schneier: Click Here to Kill Everybody: Survival in a Hyper-Connected World, due out on 18th September from W. W. Norton and Co., ISBN: 978-03936088.
Take a simple example: While you are out shopping, you whip out your smartphone and interrogate your smart refrigerator to see what is inside. Now you know what you need to cook that delicious-sounding Delia recipe. When you get home, you find that the fridge has told you porkies. A key ingredient is missing. Tonight it will be beans on toast instead of a Delia. It’s a bummer but hardly a disaster. At least your fridge hasn’t hacked your smartphone to get your bank card details. Yet this represents just the tip of the iceberg as regards what the IoT might foul up.
Next day you visit your mother in hospital. She’s been on a ventilator after a major op and you’re hoping she’ll be off it by the time you arrive. When you get there, the staff tell you that she died half an hour earlier. Investigation reveals that the ventilator settings had been altered from those set by the medical staff. While it was keeping your mother alive, the ventilator hot-downloaded a buggy software update that corrupted its settings. All you can do is ask the undertakers to collect mummy from the morgue.
In both of these cases the dependability of the system has been impaired by technically inadequate systems engineering. The systems should have been made robust against the situations that caused them to go wrong. The trouble is that making distributed systems dependable is difficult because any dependability property, be it reliability, security or safety, is a property of the whole system and not just one of its communicating nodes.
To analyse system dependability, you need white-box knowledge of each communicating node to verify that it implements the relevant secure communication protocols robustly. This is far from easy even for a single processing node, let alone the myriad that might communicate with your particular device in the IoT; and this is quite apart from gross system design howlers. The ventilator example is not far-fetched. This author has actually come across a market-leading ventilator that was running Windows 7 Embedded! … Enough said.
One thing that software engineers do superbly well is failing to learn from past mistakes. How often do you actually see best practice used for real? Sadly, the overwhelming likelihood is that as the IoT grows, poor software engineering will spawn dependability failures as if they are going out of fashion. Think of a failure scenario and sooner or later some IoT programming cock-up will be the cause of it.
Whether you are an optimist or a pessimist about such things, Schneier’s upcoming book promises to be a must-read for anyone involved in developing IoT products. Until you can get your hands on it, you might do well to mull over the lighter-weight recent posting by Scott Magready on lovemoney.com (https://www.lovemoney.com/news/75598/internet-of-things-security-concerns-iot-toys-gadgets-security-safe).
One of the ways that the IoT grows is by manufacturers adding wireless capabilities to existing products. (Whether consumers really want or even need it is a moot point.) Your kitchen appliances can have Wi-Fi and even your toothbrush can have Bluetooth. Every such device brings potential vulnerabilities and consequent security risks. If you get them from big manufacturers, then they should be quite quick to provide fixes when problems are identified. The greater concern is over the smaller players, out to make a fast buck by getting a cheap device quickly to market. Long-term support, and even short-to-medium-term existence, may not be part of their business plans.
Firms in the game for quick short-term profits have no incentive to spend much on software security. When they disappear, their web sites can easily be acquired by criminals who can then offer an online “software update facility” intended purely to infect your IoT device with malware. Much the same situation obtains when big-brand goods are faked. Cyber-criminals will ride on the faked brand-name to convince you that their goods are the real thing, while all the time their fake web-site exists solely for the purpose of cyber-fraud. Naturally the big brands have an interest in collaborating with legal authorities to scotch this sort of thing but they can do so only after the fraudulent activity has been discovered, by which time many customers will have already been ripped off.
Magready gives a great example of this concerning a smart teddy-bear product. By registering the defunct manufacturer’s old domain name, he was able to access the product’s web-enabled app. Potentially he could then quite easily have infected the bears with malware. Once again we see the power and insidious risks of commercial bandwagoning. What benefits does a web-enabled cuddly toy really have over the old-fashioned sort? I certainly wouldn’t give one to my grandchildren. Regardless of your views on whether tech toys are worth the candle, Scott Magready’s posting makes sobering reading.
The economic incentives facing small firms out for quick profits all act in the wrong direction. Making software secure takes a great deal of attention to detail. Fast-buck merchants have no time for it. They just want to stuff in marketable functionality, hit the market, then decamp with the gains. Even if they were to take security seriously, time-to-market constraints will inevitably put a stranglehold on software quality.
Another route into the wild for malware is in old products. Someone offering last year’s best buy at a knock-down price may well have bought up stocks of a product with a known security weakness. His business model is to re-badge the product and ship it out as quickly as possible. Once again, the unwary consumer unknowingly gets a dodgy product. Criminals can easily exploit IoT security holes to hack home networks, monitor communications and even control other connected devices. The moral for the security-conscious buyer is to avoid bargain-basement IoT suppliers like the plague.
As if this is not enough, you don’t need criminals and malware to make your IoT devices turn against you. The classic example here is the original internet worm, nicely described in a 1988 Purdue University research report by Eugene Spafford (https://spaf.cerias.purdue.edu/tech-reps/823.pdf). The worm was never intended to be malware. It was simply trying to estimate the size of the internet at the time by accessing host network data and following network links. Unfortunately a simple error turned it into the first widespread denial-of-service malware.
The worm’s creator, Robert Tappan Morris, never intended to attack internet hosts. His main error was in failing to simulate the worm’s propagation before releasing it onto the net. The problem was in how it spread. To stop local system administrators from killing it on their hosts, it used a randomisation process to disguise itself. Also, though it tested whether a copy of itself was already running on a new host, the “already-running” test was too weak, with the result that copies proliferated rapidly and soon led to hosts being deluged with traffic. The meltdown was caused by a simple programming error of the kind that even top-notch software engineers can make. (Morris was no ignorant neophyte and later became a tenured professor at MIT).
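The fatal combination described above, a randomised “run anyway” override sitting on top of a weak already-running test, is easy to reproduce in a toy model. The sketch below is my own illustration, not the worm’s actual code, and the parameter names (`hosts`, `rounds`, `resist_prob`) are inventions for the example: with a strict check the population of copies can never exceed the number of hosts, but once a fraction of copies ignores the check, copies pile up on already-infected hosts.

```python
import random

def simulate(hosts: int, rounds: int, resist_prob: float, seed: int = 0) -> int:
    """Toy model of worm spread. Each round, every running copy
    probes one random host. A strict copy backs off if a copy is
    already running there; with probability resist_prob it runs
    anyway (loosely mimicking the worm's randomised defence
    against faked 'already running' replies)."""
    rng = random.Random(seed)
    copies = [0] * hosts
    copies[0] = 1                      # patient zero
    for _ in range(rounds):
        snapshot = list(copies)        # copies active at round start
        for src in range(hosts):
            for _ in range(snapshot[src]):
                dst = rng.randrange(hosts)
                if copies[dst] == 0 or rng.random() < resist_prob:
                    copies[dst] += 1   # a new copy starts on dst
    return sum(copies)

# Strict check: at most one copy per host, so the total is capped
# at the number of hosts.
print(simulate(hosts=50, rounds=6, resist_prob=0.0))
# Weak check: some copies ignore the answer, so individual hosts
# accumulate multiple copies and the load keeps climbing.
print(simulate(hosts=50, rounds=6, resist_prob=1 / 7))
```

The exact totals depend on the seed, but the qualitative behaviour matches Spafford’s account: the strict version saturates at one copy per host, while the leaky one keeps stacking copies onto hosts that are already infected until they are deluged with work.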
The Purdue paper makes fascinating reading and is well worth a look for anyone who doubts the ease with which accidentally malicious software can cause chaos – yet more evidence for cock-up theorists who constantly remind us that it’s not only criminals we have to worry about.
So, what are the lessons for today’s software engineers? Largely they are the same as they have always been. To get security, you need to design it into your software. To do that properly means using best-practice methods supported by the best available tools. The good news is that those tools are getting better all the time (take a look for example at AbsInt – google for it). The bad news is that the working culture of software engineers is still barely into the Bronze Age and it changes with all the speed of an arthritic tortoise.
Meanwhile all that the prophets of doom can do is to keep on pointing out the myriad ways in which software dependability can be impaired. They will wail like Jeremiahs until some disaster occurs that makes people say that enough is enough. Tombstone mentality it may be, but it has made flying the safest form of travel ever. It is high time that software engineers started taking a few leaves out of aviation’s book.
Of course this will never solve the problem of perverse financial incentives. That will only be cracked when we have enacted very much strengthened product liability laws. Unfortunately, courts and legislatures are notoriously slow, so the smart money isn’t holding its breath. Until then, we will have to put up with mutinous refrigerators and delinquent teddy bears.
As the old saw goes, “Good luck with that one!”
Note: anyone developing any software that runs on any system that is on a network or can be connected to any other system, which I suspect is every software developer these days, needs to be subscribed to Crypto-Gram (https://www.schneier.com/crypto-gram/). It might be cool to keep up with the latest trend in programming, but it is essential to keep up with the security news.