Tesla Isn't Telling The Truth Regarding Lost Features

About: Tesla Motors (TSLA)

Summary

Right now, the new AP2.0 Tesla cars are less capable than AP1.0 cars.

I explain why this is so, and why Tesla is not telling the truth regarding the motive.

The nature of this exercise allows me to produce two interesting conclusions.


Frame grab showing some of the information Mobileye (NYSE:MBLY) provides, including lane marking detection, vehicle detection and range, etc.

Tesla (NASDAQ:TSLA) recently presented its Autopilot 2.0 hardware. When it unveiled this new hardware, it said, as I had predicted, that the hardware would be capable of full autonomy down the road. At the same time, however, it also said the following:

Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features. As always, our over-the-air software updates will keep customers at the forefront of technology and continue to make every Tesla, including those equipped with first-generation Autopilot and earlier cars, more capable over time.

Put another way, a new Tesla with AP2.0 hardware is, right now, less capable than an old Tesla with AP1.0 hardware. It lacks:

  • Automatic emergency braking.
  • Collision warning.
  • Lane holding.
  • Active cruise control.

Tesla, as quoted above, says these features are absent because they need to be validated, and because the system needs data from the new cars running in "shadow mode" to become good enough, or better. It's easy to show this is not true. Let me explain why.

First, The True Reason

The reason Tesla lost those features (ADAS features) is that AP1.0 relied entirely on Mobileye's EyeQ3 to provide them.

You see, there is indeed AI (Artificial Intelligence) involved in delivering those features. However, this AI was applied by Mobileye, and its output was part of what the EyeQ system delivered. Said another way, it was the EyeQ3 system that delivered ready-made information to Tesla (object identification/classification, road and lane geometry, range of vehicles, range-rate of vehicles, etc.), which allowed Tesla to deliver the ADAS features. Look at the left-hand menu on Mobileye's Artificial Vision Applications page and read about some of them; this is immediately obvious.

Source: Mobileye

When Tesla moved away from Mobileye and adopted an Nvidia (NASDAQ:NVDA) hardware solution, it lost all of this ready-made information. Thus, it immediately lost the ability to deliver its existing ADAS features - the very features that the public at large thought represented the visible face of Tesla's self-driving leadership.
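To make concrete what "ready-made information" means here, below is a minimal, purely illustrative Python sketch of the kind of perception output an EyeQ3-class chip hands to the carmaker's ADAS software. Every name in it is hypothetical (this is not Mobileye's or Tesla's actual API); the point is simply that the ADAS layer consumes object lists and lane geometry, not raw camera pixels.

```python
# Illustrative sketch only: hypothetical data structures standing in for the
# "ready-made" perception output an EyeQ3-class system provides to the ADAS layer.
from dataclasses import dataclass
from typing import List


@dataclass
class DetectedVehicle:
    range_m: float         # distance to the detected vehicle, in metres
    range_rate_mps: float  # closing speed in metres/second (negative = closing)
    in_ego_lane: bool      # whether the vehicle is in our own lane


@dataclass
class LaneModel:
    left_offset_m: float   # lateral distance to the left lane marking
    right_offset_m: float  # lateral distance to the right lane marking
    curvature_1pm: float   # lane curvature, 1/metres


@dataclass
class PerceptionFrame:
    """What the ADAS layer consumes: objects and geometry, not raw pixels."""
    vehicles: List[DetectedVehicle]
    lane: LaneModel


def forward_collision_warning(frame: PerceptionFrame, ttc_threshold_s: float = 2.5) -> bool:
    """Trivial collision-warning logic built on top of the ready-made output."""
    for v in frame.vehicles:
        if v.in_ego_lane and v.range_rate_mps < 0:
            time_to_collision = v.range_m / -v.range_rate_mps
            if time_to_collision < ttc_threshold_s:
                return True
    return False


# Example: a car 30 m ahead, closing at 15 m/s, gives a TTC of 2 s, so we warn.
frame = PerceptionFrame(
    vehicles=[DetectedVehicle(range_m=30.0, range_rate_mps=-15.0, in_ego_lane=True)],
    lane=LaneModel(left_offset_m=-1.8, right_offset_m=1.8, curvature_1pm=0.0),
)
print(forward_collision_warning(frame))  # True
```

Take away the supplier of those object lists and lane models, and the ADAS logic sitting on top of them has nothing to work with.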

Now, to get back to where it already was, Tesla will have to re-create, by itself, the recognition of all these objects, ranges and boundaries. This is a neural network learning task, but not one where high accuracy can be achieved without supervised learning. And what is "supervised learning"? It's a neural network learning process where the training set contains both instances of what you want to recognize and labels giving the correct result.
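As a purely illustrative example of what that looks like in code (toy data, and scikit-learn standing in for whatever framework Tesla actually uses), the defining feature of supervised learning is that every training example comes paired with the correct answer:

```python
# Minimal supervised-learning sketch with toy data, not real driving imagery.
# The point is only the shape of the problem: inputs X paired with labels y.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row stands in for a feature vector extracted from an image crop;
# each label says what the crop actually contains (0 = background, 1 = vehicle).
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)           # "supervised": the correct answers are given

print(model.predict([[0.85, 0.75]]))  # -> [1], i.e. "vehicle"
```

Raw, unlabeled footage streaming back from customer cars is not, by itself, a training set of this kind; somebody (or some other system) still has to supply the labels.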

This is the main learning process that can, in today's world, deliver very high accuracy. It's the process used by Google (NASDAQ:GOOG) (NASDAQ:GOOGL) to provide object recognition in its Google Photos app/service. It's the process now starting to be used by Google in its Google Translate app/service to improve its accuracy. It's likely also the process Mobileye used to provide accurate outputs (Mobileye describes it as a statistical learning process).

So this is the true reason why Tesla isn't delivering the ADAS features right now: Tesla is still working on them, at this very low level. This also produces an interesting prediction:

  • Not all neural networks perform to the same level of accuracy, even if Tesla has higher-performance hardware to work with. Thus, when Tesla finally (again) delivers its ADAS features, it is a certainty that these features will behave differently from the same features in AP1.0. "Differently," in this case, won't necessarily mean "better." In time, it might, but initially, it won't necessarily.

Second, Tesla Isn't Telling The Truth

So we already know the true reason why Tesla lost its ADAS features. Now, why do I say Tesla is not telling the truth about why it can't validate and field equivalent ADAS features right now? Well:

  • Either Tesla was not telling the truth about collecting data from AP1.0
  • Or Tesla is not telling the truth about needing to collect data from AP2.0

Why is this so? Because the sensor suite on AP2.0 includes everything from AP1.0 and more. So, if Tesla was collecting data good enough for machine learning from AP1.0 (as it stated), then it could use that data to train its AP2.0 hardware to at least match the performance of the AP1.0 ADAS features. Then, later on, it could train the system further using AP2.0 data, both to improve performance and to extract additional capability from the more extensive sensor suite.
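To make the logic explicit, here's a toy sketch (hypothetical names and random stand-in data; this is my illustration, not anything Tesla has described): a model trained purely on AP1.0-style labeled data can be applied to an AP2.0 sensor frame simply by ignoring the channels AP1.0 never had.

```python
# Illustrative sketch of the "superset of sensors" argument, with hypothetical
# names and random stand-in data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend AP1.0 logs: forward-camera feature vectors plus correct labels.
ap1_camera_features = rng.normal(size=(200, 8))
ap1_labels = (ap1_camera_features[:, 0] > 0).astype(int)  # stand-in for "vehicle present"

model = LogisticRegression()
model.fit(ap1_camera_features, ap1_labels)  # trained entirely on AP1.0-era data

# An AP2.0 frame carries more sensors, but the overlapping channel is still there.
ap2_frame = {
    "forward_camera": rng.normal(size=8),
    "side_cameras":   rng.normal(size=24),  # extra data AP1.0 never had
    "radar":          rng.normal(size=4),
}

# Matching AP1.0-level functionality requires only the overlapping input.
print(model.predict(ap2_frame["forward_camera"].reshape(1, -1)))
```

If Tesla really had AP1.0 data that was good enough for machine learning, this is, in essence, the path back to parity; the extra AP2.0 sensors would only matter for going beyond it.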

So either Tesla does not have adequate AP1.0 data (something I have always argued), and thus was not telling the truth about collecting that data, or it isn't telling the truth about needing to collect AP2.0 data. This is so because, if it had adequate AP1.0 data, that data would be enough to train the system back to AP1.0 standards even on AP2.0 hardware.

My own opinion, let it be known, is that Tesla does not have AP1.0 data that's good enough to train its system, and that it will train the system on something else (properly labeled data) rather than on data coming from AP2.0 hardware in customers' cars. So, in my view, Tesla is failing to tell the truth not once but twice.

Conclusion

Based on this exercise, I will make two predictions:

  • One, which is less robust, is that Tesla might miss its December 2016 deadline for re-delivering the ADAS features. This prediction is less robust because, given the nature of the learning process, Tesla can deliver something at any point; it's just that the earlier it delivers, the less accurate the product will be. Tesla, of course, can choose to deliver early and improve later on. That Tesla didn't choose to do so right when it launched the AP2.0 hardware just means that Tesla is still very early in the process of delivering something. In short, this makes it more likely that Tesla will miss the December 2016 deadline.
  • Two, which is more robust, is that whatever Tesla delivers will perform differently than the equivalent features on AP1.0. The entire "recognition backend" will be different and will have different accuracy in different circumstances, so the entire system will behave differently. "Differently," here, is not necessarily better. It may turn out that, when the AP2.0 ADAS features are delivered, they are obviously worse than the AP1.0 features - it depends on how tolerant Tesla is of delays. One would surmise that Tesla will try to avoid "obviously worse," but only to the extent that such avoidance doesn't imply having to sell cars without those features for many months.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.