A Tesla Driver Died in a Crash While His Car Was on Autopilot

Source: Slate
June 30, 2016 at 18:19
A Tesla driver died in a crash while his Model S was on autopilot, the company disclosed in a blog post Thursday.

It’s not immediately clear to what extent Tesla’s autopilot system, which has been billed as the most advanced of its kind on the market, was at fault. According to the company, the U.S. National Highway Traffic Safety Administration has opened a “preliminary evaluation” into the system’s performance leading up to the crash. Here’s how Tesla described the accident:

What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.

Tesla’s claim that neither Autopilot nor the driver noticed the white trailer against a brightly lit sky is exactly the sort of excuse that everyone involved in self-driving cars had hoped never to have to hear in conjunction with a deadly accident. It’s a classic “edge case” for computer vision, the sort of thing engineers are supposed to solve thoroughly before we entrust their software with our lives.

According to Tesla, this is the first known death involving its autopilot system. The company reported that its drivers have collectively traveled more than 130 million miles in autopilot mode. On average, Tesla noted, one person dies for every 94 million vehicle miles traveled in the United States. “It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations,” the company added.
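To make Tesla’s comparison concrete, here is a back-of-the-envelope sketch in Python that simply restates the two figures cited above side by side; taking both numbers at face value, with no adjustment for road type, vehicle age, or driver population, is an assumption made for illustration, not something claimed in Tesla’s post.

    # Back-of-the-envelope comparison of the fatality rates cited above.
    # Assumes one Autopilot-related death to date (the crash described here)
    # and takes Tesla's mileage figures at face value.

    autopilot_miles = 130_000_000    # miles driven with Autopilot engaged, per Tesla
    autopilot_deaths = 1             # the crash described in this article

    us_miles_per_death = 94_000_000  # U.S. average Tesla cites for all driving

    autopilot_miles_per_death = autopilot_miles / autopilot_deaths

    print(f"Autopilot: ~1 death per {autopilot_miles_per_death / 1e6:.0f} million miles")
    print(f"US average: ~1 death per {us_miles_per_death / 1e6:.0f} million miles")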

The implication is that people shouldn’t rush to label Tesla’s autopilot feature as dangerous. It’s a case the company makes persistently throughout its blog post announcing the crash, making for a tone that’s more defensive than apologetic. CEO Elon Musk did offer public condolences, however.

Tesla’s blog post is worth reading in full, as it lays out a blueprint for the ways in which the company is likely to defend itself in the face of the intense scrutiny that is sure to follow. It’s a test case for how the public and media will respond to the occasional deaths that will inevitably come as carmakers move gradually toward self-driving technology. Tesla appears ready to defend itself with statistics, reminding people that human drivers set a relatively low bar when it comes to safety. Whether statistics are enough to trump people’s fear remains to be seen.

It’s important to note, as Tesla does, that the company’s autopilot system officially requires the driver to keep his hands on the wheel, beeping warnings when it senses inattention. That differentiates it from the fully autonomous driving technology that Google, Uber, and others are developing and testing. And perhaps it will convince regulators and others that accidents such as this one are not to be blamed on the company or its software. Autopilot, Tesla insists, is a safety feature that is meant to be redundant to the driver’s full attention.

Yet there are pitfalls to this approach, as illustrated by YouTube videos showing drivers going “hands-free” or even vacating the driver’s seat while their Tesla is on autopilot. The company has taken steps to prevent this.

Still, as I argued last year after test-driving a Model S on autopilot, the technology—impressive as it is—does seem to tempt drivers to relax their focus. That’s why many other automakers and tech companies are taking a different approach to vehicle automation. Toyota, for instance, views its role as that of a backstop against human error, rather than a substitute for human effort. And Google has removed human drivers from the equation entirely, reasoning that it’s impossible to expect them to drive safely when they know a computer is doing much of the work.

Tesla’s description of the accident does not make it sound as if the autopilot system went rogue and crashed into something. Rather, it seems to have failed to avoid an accident that a fully engaged human driver may or may not have managed to avoid. And while Tesla doesn’t say so, it certainly seems possible that the driver in this case was devoting less attention to the road ahead than he might have if autopilot were not engaged.

I’ve emailed NHTSA for comment and will update when the agency responds.

Future Tense is a partnership of Slate, New America, and Arizona State University.
