2: The Dark Side of Convenience
The chapter opens with an account of some of the exploits of a hacker who specializes in exploiting the variety of services hotels make available to their guests through the convenience of the television remote control.
- He accidentally locked the minibar in his room using the remote, and the way to unlock it again was not obvious, so he set about figuring out how to do it.
- This exposed a back door by which he could bypass the billing system to access any service, including premium TV channels, and view his own guest account in the hotel's billing system.
- He could also access other guests' accounts to see what services they were using: he could hijack their TV channels, read the system messages sent to them, etc.
- By using the infrared transceiver on his laptop, or by plugging the coax cable directly into his own computer, he could dispense with the inconvenience of the remote control and get into even more systems, including the hotel's back-end billing server and four other back-end systems.
The problem is that these and other wireless conveniences operate on a simple set of codes, with no encryption or authentication: when the receiver recognizes a specific code, sent from any transmitter, it activates a command. The "trick" is discovering the code - and because the devices merely ignore invalid codes, a brute-force attack is very simple.
Garage door openers, for example, typically use a set of eight switches (256 possible combinations) to send a "unique" signal to open or close the door. It's fairly simple for a person with a laptop and a radio transmitter to cycle through them all in a short amount of time, and because the device simply ignores any invalid code, there's no obstacle to brute-force hacking.
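The brute-force attack described above can be sketched in a few lines. The `transmit` callback here is a stand-in for a real radio transmitter; the whole thing is an illustration of the search space, not an actual attack tool.

```python
# Sketch of the brute-force attack on a fixed-code opener. With 8 DIP
# switches there are only 2**8 = 256 possible codes, and the receiver
# silently ignores invalid ones, so cycling through all of them is trivial.
# The transmit() callback is a hypothetical stand-in for real radio hardware.

def all_codes(switches=8):
    """Yield every possible switch setting as a tuple of 0/1 bits."""
    for n in range(2 ** switches):
        yield tuple((n >> i) & 1 for i in range(switches))

def brute_force(transmit, switches=8):
    """Send every code in sequence; the one that matches opens the door."""
    for code in all_codes(switches):
        transmit(code)

codes = list(all_codes())
print(len(codes))  # 256 -- small enough to exhaust in seconds
```

The key point the text makes is visible in the numbers: because invalid codes produce no lockout or alarm, the attacker pays no penalty for the 255 wrong guesses.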
Hacking the Grid
The electric power grid in most locations has been put under the control of computerized systems. The use of Supervisory Control and Data Acquisition (SCADA) systems meant that, so long as the hardware wasn't damaged, the power company didn't need to send out service vans: it could monitor usage and perform basic functions, such as turning the power off or on, from a computer in the central office. The system had no security to speak of, meaning anyone who could access the network had complete access to the grid.
After the terrorist attacks of 2001, there arose grave concern about this critical vulnerability, and energy providers were required to upgrade their systems to provide greater security. Ten years later, "most of the electric industry had not completed the recommended migrations" despite the urging of the federal government.
The intelligence of the grid has since been expanded: in some areas, a power company offers discounts to residents who install equipment that allows the company to monitor major appliances (air conditioners, dishwashers, etc.) as proof that customers refrained from using them during certain hours. There was a plan, albeit an overly ambitious one, to have this degree of monitoring and control in every home by 2012.
One consultant reported logging over 38,000 software vulnerabilities on one SCADA system - though this included things as minor as employees having personal software on computers connected to the same network (which seems frivolous, until a software vulnerability is exploited). This and other evidence provides a clear indication that the industry, as a whole, is indifferent and lackadaisical regarding the security of their systems.
It's likely that they are indifferent because the potential for someone to gain access to their systems hasn't yet been realized. There have been no major incidents in the United States (allegations without proof have been offered), and while it was claimed that hackers brought down the power grid in parts of Brazil, that outage turned out to be due to poor hardware maintenance.
The proposed solution to the vulnerability of SCADA systems is the "smart" meter - somewhat less vulnerable, but by no means invincible. These meters require an authentication code stored in a ROM chip, similar to the technology used in credit cards and transit passes. They are still vulnerable to hacking: obtaining the codes for individual meters rather than a single system is more time-consuming, but not particularly difficult, and a "worm" can migrate from meter to meter, propagating itself to facilitate a large-scale attack on the system. Worse, the design makes such an attack harder to respond to: when hundreds of thousands of meters are compromised at once, an authorized user must likewise deal with each meter individually.
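The idea of a per-meter authentication code can be sketched as a simple challenge-response exchange built on a key held in the meter's ROM. The scheme, key handling, and message format below are illustrative assumptions, not the actual meter protocol; the point is only why per-device keys raise the cost of an attack compared with a single shared SCADA code.

```python
import hashlib
import hmac
import os

# Illustrative challenge-response using a per-meter secret key, in contrast
# to a SCADA channel where one shared code controls everything. The key
# names and message format are assumptions, not a real meter protocol.

def meter_respond(meter_key: bytes, challenge: bytes) -> bytes:
    """The meter proves it holds the key without ever transmitting it."""
    return hmac.new(meter_key, challenge, hashlib.sha256).digest()

def utility_verify(meter_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The utility recomputes the MAC and compares in constant time."""
    expected = hmac.new(meter_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)        # per-meter key burned into ROM at manufacture
challenge = os.urandom(16)  # fresh nonce issued by the utility
resp = meter_respond(key, challenge)
print(utility_verify(key, challenge, resp))             # True
print(utility_verify(os.urandom(32), challenge, resp))  # False: wrong key
```

This matches the text's assessment: an attacker must now recover each meter's key individually, which slows an attack down considerably without making it impossible.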
The author suggests that "this is neither the first time nor the last time" that gadgets intended to make tasks more convenient or efficient have been hastily implemented without sufficient regard for security flaws. Given our dependence on technology and the electricity that drives it, the potential for a mischievous or malicious individual to cut the power supply is a threat that should be, but is not, taken with the utmost seriousness.
Vulnerabilities of Connecting
Enabling devices to communicate with one another provides added convenience: streaming movies from the Internet to a television set, pausing the action to look up an actor's profile, chatting with friends who are watching the same program, and buying an item seen in a commercial are all made possible by enabling devices and services to communicate information to one another.
However, the ability of systems to send and receive data is seldom implemented with security in mind: each affordance intended to let one device exchange information with another is also a "leak" that can be exploited, and a system of several devices is only as secure as its least secure component.
It's disturbing to think that someone unknown to us might be monitoring what we watch on television and the Internet, and concerning to realize that our financial accounts may be breached - but it is truly alarming when the devices that may be breached are related to healthcare. The HITECH Act, passed in 2009, seeks to encourage greater use of electronic health records, protocols for sharing this information among various service providers, and gadgets to access it.
The fundamental concept is good: you could go to a clinic anywhere and the doctors would have complete access to your medical history. Emergency medical teams would know your conditions, allergies, etc. Specialists and doctors could refer to your information and share insights related to your care. A pharmacist would know exactly what medications you're taking. Used legitimately, there's great power in having ready access to health information.
But there are illegitimate uses that range from distressing to dangerous: your medical records can fall into the hands of someone who would use the information against you, or someone could use the data to defraud your health insurance provider, or a particularly malicious person could plant potentially dangerous disinformation in your records.
Security details are absent from the HITECH legislation, and the level of compliance with the security requirements of the earlier HIPAA act is not encouraging. Passed in 1996, HIPAA forbids using the patient's Social Security number as an identifier - but more than a decade later, many providers and insurers still ask patients to provide it.
The author also suggests that medical facilities will be in a "rush" to adopt and will likely use existing technologies - such as tablet computers connecting remotely to databases over hospital wireless networks - leaving their networks open and vulnerable.
(EN: No details are given to substantiate a cause for concern, just a hypothetical scenario contrived to provoke a panic reaction. The author goes on to provide a second chicken-little scenario about someone from another country hacking into your DVD player to gain access to your home network and all the devices connected to it.)
Medical Devices
The vulnerability of medical devices poses a threat not just to privacy or financial assets, but to the lives of those who depend on them. Implantable medical devices (IMDs) such as pacemakers or defibrillators need to be monitored and adjusted, and enabling doctors to do so remotely is a great convenience - but it opens the IMD to the same vulnerabilities as any other device connected to the Internet or accessible wirelessly.
The author cites experiments done with pacemakers, demonstrating how simple it is to access the device and turn it off - the result of such an attack on a human being is death - or to tamper with the settings to speed or slow the patient's heartbeat, with adverse effects and risk of death.
The manufacturer of the pacemaker in question was unimpressed, arguing that the existing safeguards were sufficient to prevent such an attack on a device actually implanted in a patient - but that's simply not true: implantation makes an attack more difficult, and less likely, but still entirely possible.
The reasons a person might exploit an IMD to harm or kill the user are roughly the same as to do so by any other means: it may be sociopathic, or for political or financial reasons. A person could effectively be held hostage by a threat of interfering with their IMD.
The value of modern devices is largely in the software. The hardware itself is inexpensive, and practically useless without the software that drives it - the iPod is just a portable hard drive. The same is essentially true of an IMD. The danger is that it's very easy to counterfeit a device by cloning the hardware and using cheaper software that is less reliable or functional. This might be an inconvenience in a knock-off music player, but it's a serious risk in a knock-off insulin pump.
On the other hand, implementing strong security measures may itself be an obstacle to using devices as intended. If security is too weak, the device is vulnerable to attack or unintentional malfunction; if too strong, it may not be accessible in a medical emergency.
Dangers of RFID
Radio Frequency Identification (RFID) tags are widely used in logistics and manufacturing. These tags are read electronically to monitor and manage the movement of goods quickly and accurately, and are also being used to identify personal property. When someone mentions implanting a chip in a pet or a human, they're usually alluding to RFID.
The scanners that read these tags merely take their data and drop it into a database without evaluating it; other software then interprets that information. The author suggests this is a vulnerability with the potential to completely disrupt the flow of materials - shutting down factories and ports - if an RFID tag is encoded with a SQL command that somehow gets executed rather than read as data.
(EN: I'm skipping the rest, as the premise is really rotten. The notion that user input would be executed as code has been played out by panic-mongers: it would take a colossal programming blunder and very shoddy testing for this ever to occur. While it's not entirely impossible, the possibility is so remote that stirring up a panic is melodramatic. They also completely miss that RFID used as identification is easily counterfeited.)
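The "colossal programming blunder" at issue is executing tag data as SQL, and the standard defense is to bind the payload as a parameter rather than splice it into the statement. A minimal sketch (the table and column names are illustrative, not anything from an actual RFID system):

```python
import sqlite3

# Minimal sketch of why RFID tag data need never execute as SQL: a bound
# parameter keeps the payload as opaque data. The schema is illustrative.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scans (tag_id TEXT)")

# A tag encoded with a SQL-injection payload, per the scenario in the text.
malicious_tag = "x'); DROP TABLE scans; --"

# Unsafe pattern would be string concatenation: "INSERT ... ('" + tag + "')".
# The safe pattern binds the value, so the payload is stored, not executed.
conn.execute("INSERT INTO scans (tag_id) VALUES (?)", (malicious_tag,))

rows = conn.execute("SELECT tag_id FROM scans").fetchall()
print(rows)  # one row containing the payload verbatim; the table survives
```

This is why the scenario requires both a careless query-building style and testing that never fed the system a hostile tag - possible, but as the note says, remote.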
Cars, Again
The modern motor vehicle is the most technologically complex gadget that most people own. A car's "computer" consists of fifty to seventy different systems, each of which controls a very specific task, some of which are evident (cruise control, automatic wipers) and others of which aren't (fuel injection, adjusting the flow of coolant and oil). The various electronic control units (ECUs) in a vehicle are made by different manufacturers and must communicate openly to work in conjunction with one another - so there are well-documented standards and little in the way of security, to make them easy to interoperate. These same qualities make the system more vulnerable.
The current implementation requires access to a physical diagnostics port in the vehicle - originally accessible only to dealerships, but later required to be accessible to mechanics. A computer connected to this port has a great deal of control over the vehicle's systems. A program called CarShark was developed to hack the system: it can display messages on the console, turn indicators off and on, tamper with the speedometer, disable the brakes, stop the engine, etc.
Currently, this can only be done by accessing a physical computer port on the vehicle, which is difficult to do while the vehicle is in motion (unless a transmitter was previously installed with the intent of a later attack) - but the port is accessible to a mechanic, a valet, or anyone else who has access to the vehicle (and per the earlier discussion, door locks are easily defeated).
However, some of these systems may go wireless in the future. A wireless tire pressure monitoring system was implemented after the Ford/Firestone disaster in 2000 - and it has already been demonstrated that such systems can be exploited to falsify or conceal a low-pressure warning to the driver. Other systems may follow, and there's even consideration of enabling the vehicle to transmit information over a wireless Internet connection.
(EN: This is already being done to some extent, by using systems that enable parents to monitor their teenagers' driving habits, or to enable insurance companies to collect information on driving habits to offer discounts to safe drivers. It's not very extensive, but it's a start.)
There's evidence that cars are becoming over-automated: Mercedes-Benz undertook an effort to remove over 600 functions that had accumulated - in most instances, functions that "no one really needed and no one knew how to use." Prior to that, the engineers had been overzealous, giving the driver the ability to "adjust everything that is adjustable" just because it could be done.
On the other hand, there's also ongoing research to make cars even more automated than they currently are, by enabling them to virtually drive themselves, or communicate to one another to reduce collisions. (EN: The author doesn't mention the "automatic parking" function of some luxury vehicles where the driver surrenders control of the vehicle to a computer system to pull into a parking space.)
Convenience Requires Complexity
Complexity is the consequence of convenience - making devices do more things with less effort. The simplest device performs a single function and has only a manually-activated on/off switch. The author considers the example of a mobile phone - without additional controls, all phones would ring with the same tone and the same volume, and your only choice would be to switch them off. Complex systems are composed of multiple parts, each of which can be flawed, and their interaction multiplies the flaws of the components.
(EN: Occurs to me the idea might be better illustrated by the example of a lamp. It's a simple device with an on/ off switch to do a simple thing, i.e. provide light. If we want to adjust the level of lighting, add a dimmer switch. If we want to turn it on or off from a distance, a remote control. If we want it turn on or off at certain times, a clock. If we want it to come on when someone enters the room, a motion sensor. It's possible to make a lamp much more convenient and functional, but each added "if we want" adds complexity to the system.)
The author suggests that "companies don't have an economic incentive" to test extensively enough to consider every possible flaw in every component in every possible combination. (EN: I disagree, as successful companies do exactly this and devote considerable expense to testing, their incentive being to provide a reliable product that wins market share. But it requires consumers to prefer reliable products and be willing to pay a premium that will cover the costs of testing. Where there is lack of competition and/or consumer apathy, companies can get away with providing shoddy products, but this is true even of simple, non-technical products.)
The complexity of systems is such that there are an astounding number of possible combinations: if a device has ten features that can be turned on or off, that's over 1,000 possible combinations. If each of those features instead has ten levels of settings, that's ten billion possible combinations. It's simply not possible to test every possible combination before releasing the product to the public. (EN: That is a good counterpoint to my previous note, but it is likely an exaggeration: is it necessary to test a device to ensure that changing the volume from 4 to 5 doesn't cause a cascade failure? Likely not, and it should be possible to test with two or three different settings - off, lowest, highest, and average - rather than all ten settings.)
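The counts above, and the reduction proposed in the note, can be checked directly. The representative setting values below are illustrative:

```python
from itertools import product

# Exhaustive counts for the feature-combination argument.
on_off = 2 ** 10       # 10 on/off features: 1,024 combinations
ten_levels = 10 ** 10  # 10 features with 10 settings each: 10,000,000,000
print(on_off, ten_levels)

# The note's proposed reduction: test only representative settings per
# feature (off, lowest, average, highest) instead of all ten. Shown here
# for three features; the setting values are illustrative.
representative = [0, 1, 5, 9]
reduced = len(list(product(representative, repeat=3)))
print(reduced)  # 64 cases instead of 10**3 = 1,000 for three features
```

Even with the reduction, ten features at four representative settings each still yields 4^10 (about a million) cases - so the author's broader point survives: exhaustive testing does not scale, and testers must prioritize.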
Studies of consumers regarding product features demonstrate a preference for complexity. When given options for video players, they indicated the most full-featured gadget as the one they'd prefer to own, and when given a list of options they would like to have, they wanted as many as they could get. Even when price was considered, they opted to include about 80% (19.6 out of a list of 25) of the available options.
But in using products, consumers often become frustrated and dissatisfied with the "cornucopia of features" that makes the device difficult to use even for its basic purpose. (EN: This is a general statement, with no research to back it, but from my own experience in UX, it very often bears out: when researching prior to development, interviewees provide a long list of "must have" features; in pre-release testing, subjects complain that the resulting interface is confusing and difficult; and when the interfaces are published, it's found that very few users actually access the "must have" features.)
The author presents the extremes as being two alternatives (EN: which is itself a fallacy - choosing a position between the extremes is the most common and effective solution.): The first, which he suggests is "the world we have today," is to demand ever-faster releases of products of ever-increasing complexity and ever-increasing vulnerability. The second is to stop abruptly and revert to using simple devices, adding additional functionality only when we can be sure that doing so doesn't pose an unacceptable level of risk.
Loose Notes
A study is cited in which a scanner script detected about six million gadgets connected to the Internet that were still set to accept the manufacturer's default password.
At one point, the author advocates an "FDA-like organization" to regulate the development of technology products, on the grounds that customers are incapable of comprehending the dangers or making informed or intelligent choices for themselves. (EN: This becomes more a matter of politics, and the debate over whether it is right, effective, or even possible for this to be done. The FDA, which the author uses as an ideal worth duplicating, is far from ideal - some suggest it should be doing even more to protect consumers, others suggest its current level of involvement is too much and withholds medicine from people who need it and would gladly bear the risk, and both sides concede that it's thoroughly corrupt and biased in choosing what it approves for public consumption.)