The notion of self-driving cars has been around in science fiction for at least 60 years. Now Google is leading the charge to make autonomous vehicles the typical mode of transport. Taking control out of human hands, the thinking goes, will make driving safer. The question is: will tomorrow’s computer chauffeurs really be infallible?
Google has a small fleet of self-driving cars on the road in Silicon Valley — the California information technology hub. Some employees use them to commute to work. The cars are easily identifiable by their roof-mounted spinning turrets. These contraptions house the Lidar system – essentially, laser radar. The system creates a 360-degree map of the car’s surroundings. Google says its driverless cars have collectively gone more than 800,000km without crashing.
Proponents of self-driving cars point out there are about 32,000 traffic deaths per year in the U.S., or roughly one every 16 minutes. That is almost three times the number of gun homicides. Perhaps computer-controlled cars, with faster reaction times and consistent algorithmic decisions, would substantially lower the accident rate, cut fuel consumption and move more people faster.
But perhaps not. Hardware and software systems are extremely complex. There will always be software bugs, hardware failures, inaccurate maps, improper maintenance and unforeseen situations no algorithm can handle.
Toyota on trial
Already, modern vehicles have as many lines of software code as an Airbus passenger jet. Computers run everything from a car’s windshield wipers to its acceleration and braking systems.
Lawsuits and studies have shown these computers are not risk-free.
A trial in the U.S. state of Oklahoma last October marked a milestone in the controversy over Toyota-brand cars and unintended acceleration. The case involved the crash of a Toyota Camry in 2007; one woman died, another was injured.
A 10-month study by the National Highway Traffic Safety Administration and NASA had concluded, in February 2011, that there were no electronic flaws in Toyota cars that would open the throttle enough to cause unintended acceleration. The Oklahoma court, however, heard testimony that digital systems could not be ruled out as a factor in that Camry accident.
Software experts had been given access to Toyota Motor’s top-secret source code for controlling the electronic throttle. The lead expert, Michael Barr, testified and delivered an 800-page report that said the code was defective.
Barr testified he had demonstrated that a driver could lose control of the accelerator due to a software glitch that no fail-safe backup system reliably detects. He explained that, as standard practice, software controlling potentially life-threatening devices must have built-in redundancies — yet in Toyota’s code those redundancies were absent.
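The kind of redundancy Barr described can be illustrated with a toy sketch: a throttle controller that cross-checks two independent pedal sensors and falls back to a safe idle state when they disagree. The function name, tolerance and fail-safe behavior here are illustrative assumptions, not Toyota’s actual design.

```python
# Toy illustration of a dual-channel fail-safe in a throttle-by-wire
# system. Two independent pedal sensors are read; if their values
# disagree by more than a tolerance, the controller assumes a fault
# and commands idle rather than trusting either reading.
# All names and thresholds are hypothetical -- not Toyota's design.

IDLE_THROTTLE = 0.0
TOLERANCE = 0.05  # maximum allowed disagreement between the two channels

def command_throttle(sensor_a: float, sensor_b: float) -> float:
    """Return a throttle command in [0, 1], or idle on channel mismatch."""
    if abs(sensor_a - sensor_b) > TOLERANCE:
        # Redundant channels disagree: fail safe instead of guessing.
        return IDLE_THROTTLE
    # Channels agree: use their average, clamped to the valid range.
    return max(0.0, min(1.0, (sensor_a + sensor_b) / 2))
```

The point of the sketch is the structure, not the numbers: a single glitched sensor (or a corrupted variable feeding one channel) cannot open the throttle on its own, because the disagreement itself triggers the safe state.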
Each hardware supplier writes its own software to control the devices it is contracted to produce – the braking system, the engine throttle control and so on. This leads to complex coupling between software modules, increasing the likelihood of errors. Moreover, Toyota itself did not possess the source code for some of this software.
Beware of hackers
The Oklahoma jury found Toyota liable — the first such verdict against the company. While the automaker said it strongly disagreed, it reached a confidential settlement to avoid punitive damages.
Software bugs are not the only threat. In 2010, researchers from the University of Washington and the University of California, San Diego, demonstrated how cars’ electronic control units are vulnerable to hackers. It was the first experimental study of the real security risks with modern vehicles.
Using their own software, called CarShark, the researchers ran lab and road tests and found it is possible to hack into almost any ECU. They put a laptop in a car, connected it to the vehicle’s computer network via a standard interface normally used for diagnostics and aftermarket add-ons such as entertainment systems, then controlled that laptop remotely over Wi-Fi.
With the car traveling at up to 65kph, the researchers were able to honk the horn, kill the engine, prevent a restart, blast out the heat and stop the driver from braking. The potential for a mass attack is obvious; a virus could even be written to erase all evidence of its own presence.
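What made these attacks possible is that a car’s ECUs exchange short, mostly unauthenticated messages over a shared internal bus (CAN). A minimal sketch of packing such a frame’s identifier and payload, using only the standard library, gives a feel for how simple these messages are; the ID and data bytes are invented for illustration and do not correspond to any real ECU command.

```python
# Minimal sketch of serializing a classic CAN 2.0A frame's ID and
# payload -- the kind of short message CarShark injected onto the
# in-vehicle bus. Real CAN controllers handle arbitration, CRCs and
# bit stuffing in hardware; this only shows the ID/length/data shape.
# The arbitration ID and data bytes below are hypothetical.
import struct

def pack_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack an 11-bit identifier and up to 8 data bytes into a simple
    wire layout: 2-byte big-endian ID, 1-byte length, then payload."""
    if not 0 <= arbitration_id <= 0x7FF:
        raise ValueError("standard CAN identifiers are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack(">HB", arbitration_id, len(data)) + data

# A made-up "sound the horn" message for illustration only.
frame = pack_can_frame(0x2A0, bytes([0x01, 0xFF]))
```

The striking part, as the researchers showed, is what is missing: there is no field for a signature or sender authentication, so any node that can transmit on the bus can impersonate any other.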
What are the takeaways?
The Toyota case suggests a principle that civil law should adopt: when a technological failure inflicts damage, the vendor and independent investigators must place all known diagnostic information in the public domain. That would help society learn from such failures and reduce the risks they pose.
The hacking study should put Google and Silicon Valley, with all their Sturm und Drang, on notice: there are real risks to weigh against the hype. Rebooting a computerized car at 65kph is not a viable option.