In September, the president of the lobbying organization Advocates for Highway and Auto Safety, an alliance of insurance companies and consumer, medical, public health and safety groups, warned a U.S. Senate committee of the dangers of the push for large-scale public testing of autonomous vehicles.
Deploying these self-driving vehicles “before they can be safely operated on public roads and without common-sense government oversight and industry accountability,” Catherine Chase wrote, “is not only reckless and ill-advised, but it will also substantially reduce public confidence in this new technology.”
Many members of the public, it seems, are already there: The organization pointed to polling — its own and surveys by the American Automobile Association, Reuters/Ipsos and others — that found about 70 percent of Americans saying they are concerned about the safety of self-driving vehicles and would be too afraid to ride in one.
And in Chandler, Ariz., where the Google spinoff known as Waymo started testing its driverless vans in 2017, some residents are registering their concerns in ways that transcend opinion surveys: They are pelting the vehicles with rocks, slashing their tires, trying to run them off the road with their own cars or braking hard in front of them, and threatening the vehicles’ emergency backup drivers — in one case, according to The New York Times, with a .22-caliber handgun, and in another with a length of PVC pipe.
Behavior like this is indefensible, but the fear it reflects is not unjustified.
In May 2016, the driver of a Tesla Model S operating with its driver-assist Autopilot technology was killed when the vehicle failed to brake to avoid hitting a tractor-trailer. In January 2018, another Tesla on Autopilot crashed into a parked fire engine (no one was injured). That March, yet another Tesla on Autopilot, a Model X, left the roadway and struck a safety barrier at 70 mph, killing the driver. The same month, a self-driving Uber sport utility vehicle, a Volvo, struck and killed a pedestrian in Tempe, Ariz.
There’s no doubt that human error and bad decisions contribute mightily to the wreckage and slaughter on America’s roads, with millions of crashes and tens of thousands of people killed every year (there were 37,133 fatalities on U.S. highways in 2017, according to the National Highway Traffic Safety Administration). Self-driving vehicle technology has the potential to help reduce those tragic numbers, among other possible advantages, but at this point it remains just potential.
Efforts to rush through corporate-backed legislation like the AV START Act (the American Vision for Safer Transportation through Advancement of Revolutionary Technologies Act) are simply wrongheaded. That legislation, which stalled in the Senate last year, could have unleashed millions of self-driving vehicles — all of them exempt from federal motor vehicle safety standards — onto American roads in what would have amounted to a massive and potentially bloody beta test. That, in part, is why New Hampshire Gov. Chris Sununu last summer vetoed legislation that would have allowed more testing of autonomous vehicles, known as AVs, in the Granite State.
“The artificial urgency to deploy immature AVs is disconnected from public opinion as well as the reality that serious and fatal crashes have revealed significant flaws in this still developing technology,” Chase, of the highway safety lobbying group, wrote.
We understand the attraction of self-driving vehicles. Ours is a rural region with limited public transportation options. The population is growing older, and some may find driving more difficult with each passing year. Certainly people with disabilities could benefit from easy access to autonomous transportation. Others could find it convenient and productive to read or answer email, for example, while their car took care of the chore of commuting. If most or all of those vehicles got their power from renewable energy sources, such as solar, it could be a boon for the environment. And if the technology works and is adopted on a large scale, it could make some huge — and expensive — transportation infrastructure projects, such as new rail lines, unnecessary.

On the other hand, we might need more or wider roads. And then there’s the question of where all these vehicles will park, and how they might handle challenges such as a snow- or ice-covered road surface.
As anyone with experience with technology, especially new technology, knows, there are glitches, freezes and system failures large and small. Capabilities are routinely overstated, and computer security is, we’ve come to find out, an oxymoron. It’s obvious that Google, Tesla and the other tech companies working on autonomous and self-driving vehicles are eager to get their products on the road, and generating revenue, as quickly as possible. But when the systems running these vehicles crash — and they will crash — people stand to lose more than a report or a spreadsheet. They could lose their lives.