Self-Driving Cars Will Succeed

About: Tesla, Inc. (TSLA), Includes: MBLY, NVDA
by: Mark Hibben
Summary

It's been claimed that self-driving cars will fail because they are not perfect.

While self-driving cars face regulatory and insurance hurdles, it's not necessary for them to be perfect.

They only need to be better than humans.

Even as self-driving car technology rapidly matures, a new set of objections is being voiced by naysayers. These objections revolve around assumed regulatory and liability issues. In fact, the problem of licensing and insuring autonomous vehicles may prove less challenging than licensing inexperienced first-time human drivers.

Source: Tesla

The School of Red Herrings

Recently fellow SA contributor Austin Craig wrote an interesting piece (Self-Driving Cars Will Fail) in which he declares self-driving car technology a "fad" that is "not ready for prime time" and warns that the "potential liabilities are huge". I believe that Austin Craig doesn't understand the technology or the regulatory and liability issues involved.

Not that he's alone. He quotes Philip Koopman, a computer scientist at Carnegie Mellon University, from an IEEE Spectrum article, regarding the difficulty of verifying deep learning software:

You can't just assume this stuff is going to work.

But the IEEE article makes clear that Koopman's concern is not whether self-driving cars can work, so much as the problem of testing and validation of the software:

Traditionally, he says, engineers write computer code to meet requirements and then perform tests to check that it met them.

But with machine learning, which lets a computer grasp complexity - for example, processing images taken at different hours of the day, yet still identifying important objects in a scene like crosswalks and stop signs - the process is not so straightforward. According to Koopman, "The [difficult thing about] machine learning is that you don't know how to write the requirements."

Koopman is approaching the problem as a software engineer, and makes a good point that the AI software cannot be tested the way software traditionally is. He's right.

Craig also points out that self-driving cars cannot possibly cope with every situation that could arise. He asserts, correctly, that self-driving cars will make mistakes and that people will be injured or killed as a result. This problem of creating, let alone verifying, a perfect self-driving car also seems to plague Koopman.

Finally, Craig points out that, human nature being what it is, it's unrealistic to expect that human drivers will hover over the controls, ready to take over at a moment's notice should the self-driving car fail to make the right decision. Also correct.

These objections are mostly red herrings. The standard being asserted, that self-driving cars should be perfect, is simply inappropriate. It isn't necessary for self-driving cars to be perfect. It's only necessary for them to be demonstrably better than human drivers.

Shadow Mode

Demonstrating that self-driving cars are better than humans will not be difficult, although it may be a lengthy process. Humans are not particularly good drivers. Let's take motor vehicle fatalities as a typical metric. In 2015 there were 35,092 motor vehicle fatalities in the US, an average of 96 every day, over 3.148 trillion vehicle miles traveled, which works out to about 1 fatality for every 100 million miles.
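For readers who like to check the math, here is that arithmetic spelled out, a back-of-the-envelope sketch using only the figures cited above:

```python
# Rough check of the 2015 US figures cited above.
fatalities = 35_092            # motor vehicle fatalities in 2015
miles_traveled = 3.148e12      # vehicle miles traveled in 2015

per_day = fatalities / 365                            # ~96 fatalities per day
per_100m_miles = fatalities / (miles_traveled / 1e8)  # ~1.1 per 100 million miles

print(f"{per_day:.0f} per day, {per_100m_miles:.2f} per 100 million miles")
```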

For instance, Tesla (NASDAQ:TSLA) will need to show that its Autopilot II Full Self-driving option is significantly safer (it claims a factor of 2 safer) than human drivers. This can only be approached statistically, through a large fleet of Tesla cars equipped with Autopilot II operating over an extended period of time.

Suppose a fleet of 100,000 Tesla cars logs an average of 10,000 self-driving miles per car in a year. If driven by humans, these cars would only be expected to cause 10 fatalities on average in the year. Even if the Tesla fleet performed significantly better than the human average in its first full year (or billion self-driving vehicle miles), that might not be sufficient to absolutely convince insurance companies and regulatory bodies that Tesla self-driving cars are safer than humans.
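To see why a single year might not settle the question, here is a rough statistical sketch, my own illustration of the point rather than Tesla's or any regulator's actual methodology. If the fleet were genuinely twice as safe, we would expect about 5 fatalities instead of 10 over that billion miles, yet the chance of a result that good occurring by luck at ordinary human-level risk is still several percent:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for a Poisson(mu) random variable."""
    return math.exp(-mu) * sum(mu**i / math.factorial(i) for i in range(k + 1))

fleet_miles = 100_000 * 10_000              # 1 billion self-driving miles
human_rate = 1 / 100e6                      # ~1 fatality per 100 million miles
expected_human = human_rate * fleet_miles   # ~10 fatalities at the human rate

observed = 5  # what a fleet twice as safe would produce on average
p = poisson_cdf(observed, expected_human)
print(f"P(<= {observed} fatalities | human-level risk) = {p:.3f}")  # ~0.067
```

A result like that is encouraging but would not clear a conventional 5% significance threshold, which is essentially the point: one billion self-driving miles is a good start, not conclusive proof.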

But it would be a good start and probably allow self-driving cars to be licensed and insured with some restrictions. After all, insurance companies take similar risks with novice drivers, knowing full well that new drivers are much more accident prone by virtue of age and inexperience.

There's a chicken and egg problem, though. How does the self-driving fleet get deployed in order for the statistics of its performance to be verified without first being licensed and insured? The answer to this problem is Tesla's current approach to rolling out Enhanced Autopilot features.

Following the announcement of the replacement of Mobileye (NYSE:MBLY) hardware with Nvidia's (NASDAQ:NVDA) Drive PX 2, Tesla suspended the deployment of standard Autopilot features pending verification of the software for the new Nvidia hardware platform. Tesla promised that the new Enhanced Autopilot would provide new features in addition to the features of the older Mobileye-based system.

Not surprisingly, Tesla is still working on the new software and is gradually rolling out the features to Tesla owners. Tesla is currently pushing some Enhanced Autopilot features that operate in "shadow mode." As described by Electrek.co:

...shadow mode, which means that the system is active in the vehicle, but it doesn't take the controls. It gathers data to improve the system and assure that it is safe.
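Conceptually, shadow mode amounts to running the autonomy stack in parallel with the human driver and logging disagreements rather than acting on them. The sketch below is purely illustrative; the function names and logged fields are my own assumptions, not Tesla's actual software:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShadowLog:
    disagreements: List[dict] = field(default_factory=list)

def plan_action(sensor_frame: dict) -> str:
    # Stand-in for the real perception/planning stack (hypothetical).
    return "keep_lane" if sensor_frame.get("lane_clear", True) else "brake"

def shadow_mode_step(sensor_frame: dict, human_action: str, log: ShadowLog) -> str:
    """Compute what the system would do, log any disagreement, never touch the controls."""
    planned = plan_action(sensor_frame)
    if planned != human_action:
        log.disagreements.append(
            {"t": sensor_frame.get("t"), "planned": planned, "actual": human_action}
        )
    return human_action  # in shadow mode the human driver's input always wins

# Example: the system would have braked, the human kept the lane; the disagreement is logged.
log = ShadowLog()
shadow_mode_step({"t": 0.0, "lane_clear": False}, "keep_lane", log)
print(log.disagreements)
```

Disagreement data like this, collected across the whole fleet, is what would eventually back up the safety statistics discussed above.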

Shadow mode will undoubtedly be used to roll out the Full Self-driving option of Autopilot II when it's ready, probably this year. It might even be used in all new Teslas with the new Nvidia hardware, regardless of whether the owner bought an Autopilot II option. This would facilitate data collection and verification of the self-driving functionality on a fleet-wide basis prior to release.

Almost certainly, for some time after full self-driving transitions from shadow mode to normal operation, drivers will still be expected to assume control of the car if needed. This requirement will be characterized as due to the "beta" nature of the software and will be mandated in order to gain regulatory approval and insurance. Although owners will be required to agree to the requirement in order to obtain the software update, there will undoubtedly be some who ignore the restriction at their peril.

Even after the software moves out of "beta", there will probably still be a requirement for the driver to be able to resume control after being warned by Autopilot. That functionality already appears to be built into the Nvidia system, which can warn the driver when the system doesn't have "high confidence" of being able to drive in a particular environment. So in that sense, even the Autopilot II self-driving option may never be "fully autonomous".

Investor Takeaway

Introducing new automotive technology that directly affects safety is always fraught with a certain amount of legal risk. In the case of Autopilot II, it's likely that the software license that the driver must agree to in order to obtain Autopilot II updates will always require the driver to accept legal responsibility even when Autopilot is engaged, and to be ready to resume vehicle control when called upon.

Whether any such license restrictions will actually protect Tesla from liability remains to be seen. But I don't see self-driving cars as fundamentally altering the legal landscape. Automobiles are dangerous machines, but we already have a legal and insurance system that manages the risk involved and for the most part shields automobile companies from liability for the actions of drivers. In the future, that will probably include the action of engaging an autopilot-type system, wherein the legal responsibility still rests with the driver.

If Tesla's Autopilot II lives up to expectations, I don't doubt that it will be welcomed by the industry and especially insurance companies. Once again, the bar is not that high. Self-driving cars don't need to be perfect, or prevent all accidents. They only need to be better than humans.

Being better than humans is an entirely achievable goal. Self-driving cars will succeed, and Tesla, with Nvidia's help, will probably be first. Tesla remains a key area of interest for potential investment in 2017, as I make clear in the DIY Investing Summit*. I continue to consider Tesla a hold, while Nvidia I rate a buy.

*The DIY Investing Summit is a joint project of Seeking Alpha and SA Contributor Brian Bain. The Summit brings together 25 of the top SA contributors, including myself, for in-depth interviews with tips for successful investing in 2017. Normally, the Summit requires a fee, but you can get free access by clicking on the link here.

This is a limited time offer, so please don't wait too long to listen to your favorite SA contributors on the Summit. Thanks for reading, and good luck in the new year!

Disclosure: I am/we are long NVDA. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.