Elon Musk is angry at the media for focusing on an Autopilot-related crash that occurred in March in Mountain View, California, while ignoring what he sees as the real safety benefits of autonomous vehicles.
“They should write a story about how autonomous cars are really safe,” Musk said during Wednesday’s earnings call. “But this is not a story people want to click on. They write inflammatory headlines that are fundamentally misleading to readers. It’s really horrible.”
Musk believes that the negative media coverage of Autopilot puts lives at risk.
“People are reading things in the press that cause them to use Autopilot less; that makes it dangerous for our customers,” he said Wednesday. “That’s not good.”
So what is Tesla’s proof that “autonomous cars are really safe”?
A few days after the Mountain View crash, Tesla published a blog post acknowledging that Autopilot was engaged at the time of the accident. But the company argued that the technology improves overall safety, citing a 2017 report by the National Highway Traffic Safety Administration (NHTSA).
“A year ago, our first iteration of Autopilot was found by the US government to reduce crash rates by as much as 40 percent,” the company wrote. It was the second time Tesla had cited the study in connection with the Mountain View crash; another blog post three days earlier had done the same.
Unfortunately, there are some big problems with that study. In fact, the flaws are so serious that NHTSA issued a surprising statement this week distancing itself from its own research.
“NHTSA’s safety defect investigation of the MY2014-2016 Tesla Model S and Model X did not assess the effectiveness of this technology,” the agency said in an email to Ars on Wednesday afternoon. “NHTSA performed this cursory comparison of the rates before and after installation of the feature to determine whether models equipped with Autosteer were associated with higher crash rates, which could have indicated a need for further investigation.”
Tesla has also said that its cars have a crash rate 3.7 times lower than average, but as we’ll see, there’s little reason to think that has anything to do with Autopilot.
This week, we spoke to several car safety experts, and none could point us to evidence that Autopilot’s semi-autonomous features improve safety. That’s why news sites like ours haven’t written stories “about how safe autonomous cars really are.” Maybe that will be true in the future, but right now the data isn’t there. Musk has promised to publish regular safety reports going forward—perhaps they will provide the data we need to establish whether Autopilot really improves safety.
One thing we know works: emergency braking

Talking about the safety of Autopilot is tricky because the term is used in two different ways. Technically, Autopilot is an umbrella term for various safety and driver assistance technologies in Tesla vehicles. That includes forward collision warning (FCW) and automatic emergency braking (AEB) features that come standard with every new Tesla. Autopilot also includes optional lane keeping and adaptive cruise control features that cost an additional $5,000.
In practice, however, people who talk about “Autopilot” are often talking specifically about the more advanced features that allow the car to drive itself (albeit with human supervision). This is clearly what Musk is referring to when he talks about the risk that drivers will “use Autopilot less.” The NHTSA report mainly focused on Autosteer, the official name for Autopilot’s lane keeping function, but Tesla’s blog post summarized it as finding that “Autopilot” reduced crash rates.
So in this article, we’ll use “Autopilot” the same way Musk and Tesla have: to describe Tesla’s optional lane keeping and adaptive cruise control technologies—but not the automatic emergency braking and forward collision warning features that come with every car.
One of the biggest problems with NHTSA’s finding of a 40-percent reduction in crashes is that most—perhaps even all—of that reduction can be attributed to AEB and FCW rather than Autopilot. Tesla began shipping vehicles with Autopilot hardware in October 2014, but the Autosteer function didn’t go live until October 2015. NHTSA compared crash rates before and after this activation date to see how the crash rate had changed.
But Tesla activated automatic emergency braking and forward collision warnings a few months earlier, in March 2015. And we know that these technologies have significant safety benefits. A good source here is the Insurance Institute for Highway Safety, an insurance industry-funded research group with extensive access to insurance claims data. IIHS data (PDF) shows that, across the car industry, the combination of AEB and FCW has reduced claims for damage to other vehicles by 13 percent and claims for injuries to people in other vehicles by 21 percent. The technology reduces rear-end crashes by 50 percent.
Unfortunately, NHTSA does not seem to have attempted to separate the effects of AEB and FCW from those of Autopilot. Therefore, it is likely that most—if not all—of the safety gains NHTSA found are actually a result of AEB and FCW rather than Autopilot. Granted, 40 percent is larger than 21 percent or 13 percent. But we don’t know the confidence interval for that 40-percent figure. And NHTSA uses a measure—air bag deployments—that may not be directly comparable to those used by the IIHS.
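To see how this kind of confounding can inflate a before/after comparison, here is a toy calculation. Every number in it is invented for illustration—the crash rates and the 80/20 mileage split are assumptions, not Tesla’s or NHTSA’s figures:

```python
# Toy illustration with invented numbers (NOT Tesla or NHTSA data):
# why a before/after-Autosteer comparison can credit Autosteer with
# safety gains that actually came from AEB/FCW.

# Assumed crash rates per million miles (hypothetical):
base_rate = 1.30       # no driver-assistance features active
aeb_rate = 1.03        # AEB/FCW alone (~21% lower, an IIHS-style figure)
autosteer_rate = 1.03  # Autosteer adds no further benefit in this scenario

# "Before Autosteer" miles: suppose 80% were driven before AEB/FCW
# activated in March 2015, and 20% after.
before_rate = 0.8 * base_rate + 0.2 * aeb_rate

# "After Autosteer" miles: AEB/FCW and Autosteer both active.
after_rate = autosteer_rate

reduction = 1 - after_rate / before_rate
print(f"apparent crash-rate reduction: {reduction:.0%}")
# The naive comparison shows a sizable drop even though Autosteer
# contributed nothing at all in this scenario.
```

Under these made-up assumptions, the naive before/after comparison reports roughly a 17-percent reduction despite Autosteer having zero effect—which is why NHTSA’s 40-percent figure, on its own, tells us little about Autosteer specifically.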
And, interestingly, IIHS told Ars this week that it has observed a 13-percent drop in Model S claim rates since Autopilot features went live in October 2015. Unfortunately, Tesla did not provide IIHS with data on which vehicles’ owners had paid for the Autopilot upgrade, so this statistic lumps together vehicles with and without Autopilot. That means it’s not a direct measure of Autopilot’s safety benefits.
However, 13 percent is roughly the drop in claims we would expect if Tesla had only enabled FCW and AEB—suggesting that Autopilot’s additional safety benefits may be minimal or even nonexistent.
Tesla could resolve the dispute by releasing more data
Tesla has a lot of data about its cars. It knows how many miles each vehicle has driven, whether or not the customer has paid to activate Autopilot, and how many accidents each vehicle has had. If Elon Musk really wants to resolve the controversy over Autopilot safety, Tesla could release the full dataset in anonymized form, allowing independent experts to analyze the data.
Musk could also work with the IIHS—as a number of other automakers do—allowing IIHS researchers to look up, by vehicle identification number, which vehicles have Autopilot enabled. That would allow the IIHS to determine the Autopilot status of each car involved in an insurance claim, enabling a rigorous analysis of whether Tesla cars with Autopilot generate insurance claims at a lower rate than cars without it.
Instead of providing this type of data to researchers, Tesla has gone out of its way to block access to it.
NHTSA has broad authority to collect data from automakers, a power it used in its 2017 investigation of Autopilot safety. Tesla provided NHTSA with the data but said it was confidential. When an independent consulting firm called Quality Control Systems sought the data in a freedom-of-information request, NHTSA denied the request, citing Tesla’s confidentiality requirement.
QCS has sued NHTSA for access to the data, which the group argues should not be confidential. The matter is currently being litigated.
“I don’t see Tesla doing themselves any favors by keeping this data secret,” said Randy Whitfield, director of QCS. “I hope it’s true that this kind of technology can be beneficial. But if it is, you’ll have to get people to use it, and they’ll want to trust it first.”
To summarize: Tesla’s main evidence for Autopilot’s safety is an NHTSA report that NHTSA itself dismissed as a “cursory comparison.” That study did little to distinguish the safety benefits of Autopilot from those of other safety technologies on Tesla vehicles. And when an independent researcher tried to get NHTSA’s data to do his own analysis, NHTSA blocked access, citing Tesla’s confidentiality requirement.
It’s worth noting that Tesla isn’t shy about releasing data when doing so helps its case. Indeed, Tesla was so eager to release information about the Mountain View crash that it sparked a high-profile conflict with the National Transportation Safety Board, an investigative agency that requires companies to maintain confidentiality during an ongoing investigation.
When Tesla’s Model S received an unfavorable review from John Broder of The New York Times in 2013, Musk used data about Broder’s trip to try to debunk Broder’s story (Broder responded here). In short, Tesla is more than happy to use data from its cars when doing so supports Tesla’s case. So it’s telling that the company has been so reluctant to publish data that could prove its claim that Autopilot actually improves safety.