Tesla Has A Self-Driving Car, Here's How It Compares


Summary

Tesla has a self-driving car.

Here's how Tesla got it.

Here's how it compares to some other self-driving cars out there.


The Undisputed Champion

In this article, I will cover Tesla's (NASDAQ:TSLA) self-driving car. Yes, amazingly, Tesla actually does have one. It's very recent - it didn't exist as of December 2015. We know this for a fact because otherwise Tesla would have filed a report on its performance with the California DMV. Since Tesla now has one such car, several questions beckon:

  • How did Tesla get it so quickly?
  • How does it perform?
  • How does it compare to several other automakers working on the same problem?
  • How does it compare to what's needed for full autonomy?

So let's answer these questions.

How Did Tesla Get A Self-Driving Car So Quickly?

First, there are two considerations to make:

  • Tesla did not have a self-driving prototype as of December 2015. We gather this from the fact that, while it was licensed to test one, it wasn't doing so (as per the California DMV "Autonomous Vehicle Disengagement Reports").
  • Tesla lost all its ADAS features (automatic emergency braking, collision warning, lane holding, active cruise control) when it swapped out the Mobileye (NYSE:MBLY) EyeQ3 hardware for a new Nvidia (NASDAQ:NVDA) solution.

Based on these two considerations, Tesla's self-driving ability was acquired within the last 10 months - more likely (as we will see) within the last six. It should be noted that, for the public at large, Tesla's ADAS features were proof of Tesla's already-existing self-driving prowess. As we know today, that belief was misguided (since Tesla lost those features as soon as it lost the Mobileye EyeQ3).

Now, months ago - on April 25, to be exact - Nvidia published a paper titled "End to End Learning for Self-Driving Cars." This paper described Nvidia's efforts to produce its own self-driving demonstrator. Nvidia's approach was unique and rather revolutionary. While based on a 2004 DARPA project, Nvidia's approach took the original work much further. It basically consisted of feeding its NN (neural network) a training set made up of video collected during regular human driving, paired with the steering wheel angle the driver applied at the same time.

This constituted a labeled training set in which each frame (sampled from the video at 10 FPS) was paired with the "right answer" (the label - the steering wheel angle). Together with some other adjustments, like artificially generated images of a car straying off course paired with the correct angle to get back on course, this training set quickly enabled the NN to gain driving aptitude. The result can be seen in Nvidia's demo video (link).
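For readers who want a concrete picture of what this looks like in practice, below is a minimal sketch of the idea: a network that regresses the steering angle directly from a camera frame, trained on human driving data. The architecture, tensor sizes, and names are my own illustrative choices, not Nvidia's actual PilotNet code.

```python
# Sketch of end-to-end "behavioral cloning": a CNN regresses the steering
# angle directly from a camera frame. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),   # expects 3x66x200 frames
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                           # predicted steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# Each video frame is "labeled" by the steering angle the human driver
# applied at that moment; the network is simply fit to reproduce it.
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

frames = torch.randn(32, 3, 66, 200)   # stand-in for a batch of camera frames
angles = torch.randn(32, 1)            # stand-in for the recorded steering angles

for epoch in range(10):
    pred = model(frames)
    loss = loss_fn(pred, angles)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```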

Why was this unique and rather revolutionary? It was revolutionary because the NN learned the task "holistically." That is, the NN optimized for the full driving task. Up until then, NNs had been trained for parts of the driving task, such as recognizing objects, the road, the lanes within the road, etc. These partial tasks were then glued together by human logic or human logic further aided by another NN. While each sub-task was optimized, the full driving task arguably was not.

Moreover, this revolutionary approach also made training the NN massively simpler. After all, you just had to feed it video of humans mundanely doing their driving (along with their inputs as labels) and it would keep learning. This might start to sound familiar: what Nvidia did here is very similar to what Tesla now proposes doing.

Also importantly, Nvidia got to a very respectable demonstrator in less than one year. And, of course, anyone building on Nvidia's work would take even less time - Nvidia's paper is detailed enough that a dedicated team could probably have something very similar working within weeks.

So, it is my opinion that this is how Tesla jump-started its self-driving project. Nvidia's demonstrator (and the underlying logic) was shown to Tesla as a way of marketing Nvidia's self-driving hardware. Tesla - impressed by the demonstrator, the theory behind it, and the speed at which it was all put together - bought into it. Hence the quick falling out with Mobileye. Hence the announcement now that Tesla is going with Nvidia hardware.

In short, Tesla got its present self-driving capability by emulating Nvidia's work and building from there. This explains why, in a few short months, Tesla was able to get something that can pass for a self-driving effort.

How Does Tesla's Self-Driving Car Perform?

We have two clues to answer this question.

Nvidia Performance

The first clue is how Nvidia's self-driving demonstration car performed as of April 2016. Nvidia talked about this in its paper:

7.2 On-road Tests

After a trained network has demonstrated good performance in the simulator, the network is loaded on the DRIVE PX in our test car and taken out for a road test. For these tests we measure performance as the fraction of time during which the car performs autonomous steering. This time excludes lane changes and turns from one road to another (1). For a typical drive in Monmouth County NJ from our office in Holmdel to Atlantic Highlands, we are autonomous approximately 98% of the time. We also drove 10 miles on the Garden State Parkway (a multi-lane divided highway with on and off ramps) with zero intercepts.

What is Nvidia saying here? It's saying that, excluding lane changes and turns from one road to another, the car drove itself autonomously 98% of the time on what is likely the path shown below (New Jersey):


Source: Google Maps

We're thus talking about a trip of ~23 minutes and 9.9 miles. Taking into account Nvidia's formula for calculating the percentage of autonomous driving time (each intervention is counted as roughly six seconds of non-autonomous time), this corresponds to ~5 disengagements within 10 miles. Importantly, this excludes lane changes and turns from one road to another.
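To make the arithmetic explicit, here is the calculation in illustrative Python. The six-seconds-per-intervention convention is my reading of Nvidia's paper; the function name is mine.

```python
# Back out the implied number of interventions from Nvidia's reported 98%
# autonomy over a ~23-minute trip, assuming each intervention is booked as
# ~6 seconds of non-autonomous time.
def implied_interventions(autonomy_pct, trip_minutes, secs_per_intervention=6):
    non_autonomous_secs = (1 - autonomy_pct / 100) * trip_minutes * 60
    return non_autonomous_secs / secs_per_intervention

print(implied_interventions(98, 23))  # ~4.6, i.e. roughly 5 disengagements in ~10 miles
```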

Tesla Performance

We also have an inkling of how Tesla's self-driving car did in a similar task. This clue comes from Tesla's own "self-driving" demo video, which the company just posted with the AP2.0 unveiling.

The video depicts a trip along the following path:


Source: Google Maps

Here, we're talking about a ~10 minute drive and 6.8 miles. While we don't have a precise count of how many disengagements occurred, we do know that Tesla cut the video several times. SA Member "Model S Owner - It's Not That Great" put together a non-exhaustive list, chronicling just the major cuts:

  • Klamath to Sand Hill Road.
  • After the first red light on Sand Hill Road to the 280 overpass.
  • 280 Overpass, onto 280 South.
  • On exiting 280, lane changes to the far right lane for the exit.
  • On the 280 exit ramp, first half of the exit.
  • After the left turn onto Page Mill Road until Dry Creek Road making the right turn.

This list is not exhaustive. There are many more, briefer, cuts in the video where the footage switches to other camera views. Also, some of the cuts accounted for above are very long and probably included not one but several disengagements. Moreover, most of the trip occurs on the 280 highway - this is important, because Nvidia has already shown a 100% autonomous drive over a 10-mile highway stretch (not counting lane changes or exits onto other roads).

Anyway, if we, on a best-case basis, account for just one disengagement in each of those six cuts, we arrive at 6 disengagements over a 6.8-mile drive. Importantly, this would be 6 disengagements including lane changes and turns onto other roads - so arguably a better performance than Nvidia's. This wouldn't be surprising, since Nvidia's figure dates from April 2016 and its car now likely performs better. Using Nvidia's formula and with 6 disengagements, the Tesla could be said to have been autonomous 94% of the time.
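Applying the same six-seconds-per-intervention accounting to the Tesla demo (6 assumed disengagements over a ~10-minute drive), a quick sanity check - again my own illustrative helper, not anything from Tesla or Nvidia:

```python
# The same 6-seconds-per-intervention accounting applied to Tesla's demo:
# 6 assumed disengagements over a ~10-minute (600-second) drive.
def autonomy_pct(interventions, trip_minutes, secs_per_intervention=6):
    return 100 * (1 - interventions * secs_per_intervention / (trip_minutes * 60))

print(autonomy_pct(6, 10))  # 94.0
```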

How Does Tesla's Self-Driving Car Compare To Others?

Thanks to the California DMV, we actually have some data on how several other builders' self-driving prototypes performed as of December 2015. The testing period the reports refer to is for the year ending at the start of December 2015, so we are talking about where these cars were, on average, one year and five months ago.

Normalizing these automakers (and Tesla's trip) to "miles per disengagement," this is what we come up with (higher is better; I used a logarithmic scale to make the differences easier to see):

As we can see, Tesla compares extremely poorly even to where other automakers were 1.5 years ago. Indeed, several automakers are 10-40x better than Tesla, and Google (NASDAQ:GOOG) (NASDAQ:GOOGL) is 1,000x better - that's 3 orders of magnitude. Again, this is versus 1.5 years ago. As of Q4 2015, Google was already at ~2,800 miles per disengagement.
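For those who want to reproduce the chart's setup, here is a minimal sketch using only the two figures quoted in this article (Tesla's ~6.8 miles over 6 assumed disengagements, and Google's ~2,800 miles per disengagement); the other automakers' values would come straight from the DMV reports and are not reproduced here.

```python
# Miles per disengagement on a logarithmic scale, as in the comparison chart.
# Only the two figures quoted in the text are filled in here.
import matplotlib.pyplot as plt

miles_per_disengagement = {
    "Tesla (demo video)": 6.8 / 6,   # ~1.1 miles per assumed disengagement
    "Google (Q4 2015)": 2800,
}

fig, ax = plt.subplots()
ax.bar(list(miles_per_disengagement.keys()),
       list(miles_per_disengagement.values()))
ax.set_yscale("log")                 # log scale keeps a multi-order-of-magnitude gap readable
ax.set_ylabel("Miles per disengagement (log scale)")
plt.tight_layout()
plt.show()
```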

How Do Tesla And Others Compare To What's Needed For Full Autonomy?

When we talk about disengagements per mile, or percent of time in full autonomy, numbers like 2,800 miles per disengagement on Google's cars or 98% on the Nvidia car might seem awfully large and close to what's needed for full autonomy. Indeed, if we convert Google's Q4 2015 performance to Nvidia's standard and assume, say, a 30 mph average speed, Google comes out at 99.9982% autonomous.
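The conversion is straightforward; here it is as a small Python helper. The 30 mph average and the six-seconds-per-intervention accounting are the assumptions stated above, and the function name is mine.

```python
# Convert miles per disengagement into an Nvidia-style "percent autonomous",
# assuming a 30 mph average speed and ~6 seconds of non-autonomous time per
# disengagement.
def pct_autonomous(miles_per_disengagement, avg_mph=30, secs_per_intervention=6):
    seconds_between_disengagements = miles_per_disengagement / avg_mph * 3600
    return 100 * (1 - secs_per_intervention / seconds_between_disengagements)

print(f"{pct_autonomous(2800):.4f}%")  # ~99.9982% for Google's Q4 2015 figure
```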

So is Google close? Is Tesla close? How do humans do anyway?

Let's consider an accident to be a "human disengagement," and assume that a disengagement in an autonomous car would lead to an accident without human intervention. If we further consider that humans have around 4 accidents per million miles (1 "disengagement" per 250,000 miles), then Google is ~2 orders of magnitude (~100x) worse than humans.

Tesla? Tesla is ~100,000x, or ~5 orders of magnitude, worse than humans. The problem here is that humans are incredibly proficient at this driving thing. Machines won't just have to be 99% or 99.9% or 99.99% autonomous. They'll have to go over and beyond 99.9996% autonomous.
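The orders-of-magnitude arithmetic, again using only the figures already cited (a human baseline of 1 "disengagement" per 250,000 miles, Google at ~2,800 miles per disengagement, Tesla at ~6.8/6 miles per disengagement):

```python
# Rough orders-of-magnitude comparison against a human baseline of
# 1 "disengagement" (accident) per 250,000 miles (~4 accidents per million miles).
import math

HUMAN_MILES_PER_DISENGAGEMENT = 250_000

for name, mpd in [("Google (Q4 2015)", 2800), ("Tesla (demo video)", 6.8 / 6)]:
    ratio = HUMAN_MILES_PER_DISENGAGEMENT / mpd
    print(f"{name}: ~{ratio:,.0f}x worse than humans "
          f"(~{math.log10(ratio):.0f} orders of magnitude)")
```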

Conclusion

There are several conclusions one can take from this exercise:

Tesla does indeed have a self-driving prototype now. It got it basically by following Nvidia's work.

This prototype does not work in the same manner as Tesla's previous system. Instead, it learns holistically, just by receiving data from a car being driven by a human and seeing how the human reacts to what the car is seeing. This is in contrast to having NNs learn specific sub-tasks and segments of the driving environment and then fusing it all together using programmed logic, or programmed logic further aided by NNs.

Learning holistically makes for a much faster ramp in producing initial prototypes with decent performance. The code broadly remains unchanged, and performance simply improves as the system gets more training data. It remains to be seen whether learning holistically can exceed the performance attained by dividing the task into sub-tasks and learning those (right now, it doesn't). This is because the machine lacks actual comprehension of the task at hand, and thus, while it does generalize part of the driving task, it likely can't generalize all of it.

There will be problems with this process. Namely, it won't be known at what point the learning will simply stop improving. Thus, relying on this method to get to full autonomy is entirely speculative and unlikely to work without severe changes to the entire strategy. The bar to be cleared is extremely high, and the learning process is likely to stall before that bar is cleared. Elon Musk is betting that additional training miles will always deliver improvement, and this is not a certainty - there is such a thing as overtraining.

However, the nature of the learning process means that this time Tesla's "autonomy drive" will indeed be learning from its fleet on the ground. But again, limits will show up and they'll show up soon in the learning process (given how much data Tesla will be collecting).

As of right now, Tesla massively trails other self-driving efforts from 1.5 years ago.

Even the best of those projects, Google's, is arguably still 2 orders of magnitude away from a human driver. Tesla's is at least 5 orders of magnitude away and is 1000x worse than Google's effort.

Predictions:

  • This mode of learning means it's likely that Tesla will be able to implement lane following soon. It's a particularly simple problem within this logic. Indeed, lane following without lane markings ought to be possible.
  • In a regular system trying to do lane following, the system most of the time knows when it doesn't have enough data to solve the problem and warns the driver. Given an NN's nature and the limited visibility into its workings, it's likely that a purely NN-based system will not know when it doesn't have the data to solve the problem, and neither will an exterior system know that the NN is incapable of solving it. As such, when the system fails, it will fail more unpredictably (without warning). The difference between the two failure modes can be exemplified by a crash preceded by a "crash warning" versus a crash preceded by nothing (like the fatal AP1.0 crash). The same applies when the system has wider responsibilities than just lane following - thus, such a system is likely to always require driver attentiveness.

All in all, Tesla did very well in seeing how Nvidia's effort could jump-start its own, previously nonexistent, self-driving program. However, Tesla seems overenthusiastic about how far this new approach can take it. It's likely that the learning process will stall before reaching the objective (self-driving), and for now Tesla greatly lags the performance of other self-driving systems, even compared to where those systems stood 1.5 years ago. It can easily happen that the learning stalls before Tesla even attains the levels other competitors were at 1.5 years ago.

A side note regarding Tesla's promised LA-to-NYC trip: using the same logic Tesla used to produce its 6.8-mile video, Tesla could do that trip even today. If Tesla gets to cut and edit any effort and doesn't have third-party verification, Tesla can do anything. If Tesla gets third-party verification, those parties would also have to be able to choose the routes. The kind of system Tesla is using is particularly adept at learning a very specific path (and indeed, this might have been done for the 6.8-mile video, since there are reports of multiple trips along that route when the filming took place).

Before anyone bombards me with "ah you said they didn't have a self-driving car" or "it learns (holistically) after all!", well:

  • Tesla did NOT have a self-driving car until recently, and would still not have one if not for Nvidia's (and perhaps Comma.ai's, which followed a similar path to Nvidia's) efforts.
  • Tesla's cars did not learn holistically until this Nvidia development, and it's still uncertain (even for Tesla) how much they will be able to learn before the learning process stalls.
  • When the facts change, I change my opinion. And you, sir, what do you do?

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.