We’re asking the wrong question about self-driving cars


As recently as 2013, the phrase ‘self-driving car’ was nowhere to be found in documents published by the US Department of Transportation. Oh, what a difference half a decade makes.

Less than five years after the term first appeared in a 14-page memo produced by the National Highway Traffic Safety Administration (NHTSA), the US House and the Senate Commerce Committee each unanimously passed separate bills regulating self-driving cars. The US Department of Transportation is presently working on a third revision of the Federal Automated Vehicle Policy it first released in September 2016.

Each January is simultaneously exciting and maddening for transportation wonks of all stripes. In the space of three weeks, four major conferences converge to produce a barrage of flashy demos at the Consumer Electronics Show, new vehicles at the North American International Auto Show (NAIAS) in Detroit, academic research at the Transportation Research Board Annual Meeting and glimmers of forthcoming policies at the SAE Government/Industry Meeting.

Automated vehicles also made their way into discussions at this week’s World Economic Forum meeting in Davos, where Uber CEO Dara Khosrowshahi tried to dispel concerns that driving jobs would evaporate.

As self-driving cars began to appear on the roads in 2009, developers were quick to point at regulators as the roadblock to widespread deployment: regulations would be too slow, too restrictive, or a ‘patchwork’ of rules across states would hamper any broad rollout. These fears have thus far proved unfounded.

Although NHTSA’s first Federal Automated Vehicle Policy was already strictly voluntary, by the time the 2017 revision was released a year later, the policy had shrunk from 112 pages to a mere 26.

NHTSA eliminated entire sections, and in other areas simply replaced the words ‘should’ with ‘are encouraged to’ as if to reassure skittish automated vehicle engineers – and lawyers – everywhere that, no, these really aren’t regulations.

NHTSA’s existing policies aren’t binding, so there’s not much stick to worry about. Still, the new House and Senate legislation (Self-Drive and AV-Start) approved in October 2017 provides some powerful additional carrots: preemption of any state regulation of the design, construction, or performance of vehicles, and potential exemption of tens of thousands of vehicles per year from existing safety regulations.

Automotive manufacturers and technology companies are slowly beginning to release technical details in response to NHTSA’s softly worded requests.

Last October, Waymo released its safety report, Intel/Mobileye released its ‘Plan to Develop Safe Autonomous Vehicles. And Prove It’, and this month GM/Cruise released a report of its own. To varying degrees, these reports attempt to address issues of risk, the causes of errors or accidents, and responsibility.

Questions of responsibility and fault have also begun to arise in the small number of accidents to date involving partially- and fully-automated vehicles.

In the most serious of these, a fatal accident involving a Tesla Model S in May 2016, both NHTSA and the National Transportation Safety Board (NTSB) conducted independent investigations. The two agencies approached the accident with slightly different objectives.

For NHTSA, the objective is to answer the question ‘Was a safety-related defect identified?’

NHTSA’s investigation found no such defect. The NTSB’s mission in crash investigations is slightly different: to objectively determine the facts of the accident and to identify probable causes.

The NTSB’s investigation found plenty of fault to go around: “the probable cause of the Williston, Florida, crash was the truck driver’s failure to yield the right of way to the car, combined with the car driver’s inattention due to overreliance on vehicle automation, which resulted in the car driver’s lack of reaction to the presence of the truck.”

The NTSB also noted that the design of the driver assistance systems on the Tesla permitted prolonged disengagement by the driver.

In the context of this accident and others, I’ve found myself taking a more personal look at the issue.

Last July, I was walking through a parking lot to drop my children off at summer camp when my 5-year-old son suddenly jumped and ran – he saw a spider and was playfully overreacting.

I always keep the children farthest away from the traffic lanes, but in a matter of a second or two he ran behind me and about 15 feet straight into the path of a car on its way out of the parking lot. I leaned back to try to intercept him but he was out of my reach, and the best I could do was step out into traffic together with him and gesticulate at the driver.

Fortunately the driver was (a) not traveling quickly, and (b) paying close attention to the task, or that morning might have had a very different ending. It was the next moment that got me thinking. As the driver stopped I nodded and mouthed ‘thank you’ to her. She smiled in return, in what I interpreted as an indication that my son’s actions were innocent but that she had protected him.

Fundamentally, what struck me is the degree to which driving involves compensating for the mistakes of others. Had there been an accident that day, my son (and I) would have been at ‘fault’ to the degree it could be assigned.

However, assigning fault isn’t always the point – I like to think that had the situation been reversed, I would have also been driving carefully and taken similar evasive action.

I can’t think of another common situation in life in which we frequently make potentially fatal mistakes and depend on others to notice and compensate for them.

This is a profound form of a social contract, and a shift to self-driving vehicles would be a fundamental movement away from this idea, towards one where we expect vehicles to make fewer mistakes.

I’m less confident that they’ll be able to fulfill the same role in compensating for the mistakes of others. This social contract of driving overarches the questions that might be asked by NHTSA (e.g. ‘Did this system fail or act as intended?’) or the NTSB (e.g. ‘Is the system driving this vehicle at fault?’).

As a society, there is a third, related question we could ask – that of the counterfactual: ‘Had a human been operating this vehicle, would the accident have happened?’

This question captures the fact that we compensate for the failures and inattention of others nearly every time we drive. At the same time, we’ll need to begin asking the reverse question of each human failing: ‘Had a computer been at the wheel, would the accident have happened?’

The challenge that we’ll face in answering these questions lies in the fundamental problem of causal inference: we can’t simultaneously examine two different conditions in the same accident.

Another challenge we’ll face will be our inability to internalise the causes of accidents. Human-caused accidents are often terrible, but at some level we can usually empathise with a driver who fell asleep, drove too fast or looked down at the wrong moment. Computer systems have fundamentally different strengths and weaknesses than humans do, and some of the accidents of the future will be hard to comprehend.

As we begin to accumulate statistics on accidents involving self-driving cars, those accidents will occur for reasons we may be able to explain technically but cannot imagine causing ourselves.

Accepting this change will involve a new social contract: rather than depending on other drivers to compensate for our mistakes, we’ll depend on engineers to make fewer of them. — Reuters

Stephen M. Zoepf is the executive director of the Center for Automotive Research at Stanford University. The opinions expressed here are his own.