New Research May Vindicate Tesla's Choice To Forego LIDAR

Nov. 13, 2017 1:55 PM ET · Tesla, Inc. (TSLA) · 283 Comments
Yarrow Bouchard
1.82K Followers

Summary

  • New research finds that a self-driving car can use cameras to determine distances to within 10 centimeters (3.9 inches) of accuracy.
  • 10 cm of accuracy is most likely sufficient for self-driving, making LIDAR unnecessary.
  • The accuracy of cameras was only tested at low speeds, but the evidence points toward accuracy being good enough for high speeds.
  • Foregoing LIDAR may be a strategic master stroke, giving Tesla a multi-year lead in self-driving over all competitors.

The most common criticism of Tesla’s (NASDAQ:TSLA) self-driving hardware strategy is that, unlike competitors, Tesla is not using LIDAR. LIDAR is a sensor that operates by firing laser pulses at the objects around it. By measuring the time it takes a laser pulse to bounce back, the LIDAR unit calculates distance. LIDAR, proponents say, is needed to determine the exact distance between a self-driving car and objects in its environment, such as pedestrians, cyclists and other cars.
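The time-of-flight principle described above reduces to a one-line calculation: the pulse travels to the target and back at the speed of light, so the distance is half the round-trip path. A minimal sketch (the example timing value is illustrative, not from any real LIDAR unit):

```python
# Illustration of the LIDAR ranging principle: distance from round-trip time.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target given a laser pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path traveled during the round trip.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse returning after roughly 66.7 nanoseconds indicates a target
# about 10 meters away.
print(round(lidar_distance_m(66.7e-9), 2))  # prints 10.0
```

Note the nanosecond timescales involved: resolving distance to 1.5 cm requires timing the pulse to within about a tenth of a nanosecond, which is part of why LIDAR hardware is expensive.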

It turns out this may not be true. New research finds that using just four cameras and an inertial measurement unit, a self-driving car can determine the distance between itself and other objects to within 10 centimeters (3.9 inches) of accuracy. For reference, 10 cm is the length of a credit card plus the width of one finger. The caveat is that the system was only tested at low speeds. At high speeds, motion blur and other visual artifacts may occur that would need to be corrected for.

CEO Elon Musk describes Tesla’s approach to self-driving at TED 2017.

Why this matters

In the next section, I’ll get into the technical details of using cameras to determine distance. But first, let me spell out the potential implications for Tesla. LIDAR has an accuracy of 1.5 cm (less than one finger width). However, it is too costly to include in production cars. It may be several years before the cost comes down enough to use LIDAR at scale. If it turns out that the ultra-high accuracy of LIDAR is overkill, and cameras are accurate enough, then Tesla is making the right decision to forego LIDAR and rely on cameras and radar, plus short-range ultrasonics.

By deploying self-driving hardware in production cars potentially years before competitors who are waiting around for affordable LIDAR, Tesla will continue to widen its driving data advantage. Between now and 2020, Tesla will train its neural networks with data drawn from over 11 billion miles of driving.

With more and more data, Tesla can train its cars to drive in a larger and larger share of the situations, conditions, and environments that humans drive in until that share reaches 100%. While no one can yet say how much data is required to develop full self-driving safe enough for widespread public use, we can at least say that (all else being equal) the company with the most data is the closest to that goal.

Tesla’s suite of sensors. Source: Tesla.

By 2020, competitors may not have even launched a production car with self-driving hardware due to the high cost of LIDAR. Even if affordable LIDAR becomes available before 2020, car manufacturers often only use components that are available at the time the car is designed, meaning there could be a multi-year delay between when affordable LIDAR is available and when it is used in a production car.

The longer it takes competitors to deploy their first production car with self-driving hardware, the better for Tesla. It gives Tesla more time to launch the Tesla Network, its planned autonomous ride-hailing service. I roughly estimate that Tesla could earn over $12.7 billion per year from the Tesla Network, increasing its market cap several-fold and making it one of the most profitable companies in the United States. Tesla would benefit from launching the Tesla Network before a competitor can launch a similar service. Autonomous ride hailing has a first-mover advantage, due to consumer habit and an early advantage in driving data.

If competitors are delayed due to their dependence on LIDAR, that also gives Tesla more time to scale production. Scaling production helps Tesla retain its data advantage. It also will be needed to satisfy demand. Total demand for self-driving cars could eventually be in the hundreds of millions, since the lower cost, higher safety, and higher convenience of autonomous ride-hailing will quickly render the global fleet of 1.2 billion manually driven vehicles obsolete. Following the launch of the Tesla Network, Tesla’s ability to grab market share will largely be a function of its ability to scale production to meet demand.

Foregoing LIDAR, then, could be the strategic master stroke that allows Tesla to become the largest company in the automotive industry. But that’s only if LIDAR is overkill and cameras can do the job as well as is needed. So, what’s the truth?

Using cameras to determine distance

First question: Is 10 cm of accuracy enough for driving? Remember, 10 cm is the length of a credit card plus the width of one finger. My gut sense is that 10 cm is enough, since I don’t think I achieve that level of precision when I’m driving. Generally speaking, if my car is within 10 cm of anything, that’s too close. That’s almost touching.

My gut feeling is somewhat backed up by a study on parking. The study found that 95% of the time cars are parked with a misalignment greater than 10 cm between the center of the car and the center of the parking spot. This was true even when participants were asked to align the center of their car as closely as possible with the center of a pad, although in this part of the study drivers did not have the typical painted lines of a parking spot to assist them.

Put it this way: A self-driving car that always left at least 30 cm (11.8 inches) between itself and other objects would do just fine. It would not be accused of leaving too much room. In fact, the more room it leaves the better, since it provides a larger margin of error. By keeping at least 30 cm of distance from all other objects, a self-driving car could miscalculate by 20 cm and still be 10 cm away from touching.
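The margin-of-error reasoning above can be made explicit with a toy calculation (the 30 cm and 20 cm figures are the article's illustrative numbers, not parameters from any real planner):

```python
# Toy illustration of the clearance-margin argument: plan a gap large
# enough that even a worst-case measurement error leaves real clearance.

def worst_case_gap_cm(planned_gap_cm: float, max_error_cm: float) -> float:
    """True clearance remaining if the distance estimate was off by the
    maximum possible error, all in the unfavorable direction."""
    return planned_gap_cm - max_error_cm

# Planning for a 30 cm gap while miscalculating by up to 20 cm still
# leaves 10 cm of actual clearance.
print(worst_case_gap_cm(30.0, 20.0))  # prints 10.0
```

In other words, the required measurement accuracy is set not by the gap itself but by how much buffer the planner adds on top of it.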

Three front-facing cameras, found in all new Teslas. Source: Tesla.

Second question: Can this level of accuracy be maintained at high speeds? The answer here is less definitive. The recent experiments with a four-camera system only tested the system’s accuracy at very low driving speeds in a parking garage. In discussing the matter with one of the researchers, I learned that the concern about adapting the system to high driving speeds is motion blur and other visual artifacts that may occur.

Motion blur occurs when an object moves far enough during a single exposure that it appears smeared in the image. A related visual artifact results from the object moving during exposure, but without the blurring effect. With a camera that uses an electronic rolling shutter, pixels are captured one after the other rather than all at once. At highway speeds, a car may move noticeably between the time the first and last pixels are captured. So the roof of the car may appear to be slightly behind the wheels underneath it.
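The size of this rolling-shutter skew is easy to estimate: it is just the relative speed multiplied by the time it takes to read out the full frame. A back-of-the-envelope sketch (the speed and readout time here are assumptions for illustration, not Tesla’s actual camera specifications):

```python
# Back-of-the-envelope estimate of rolling-shutter skew: how far an
# object moves between the capture of the first and last pixel rows.

def rolling_shutter_skew_m(relative_speed_m_s: float,
                           readout_time_s: float) -> float:
    """Apparent positional offset between the top and bottom of the frame."""
    return relative_speed_m_s * readout_time_s

# A car closing at 30 m/s (~108 km/h) with an assumed 1/60 s full-frame
# readout appears shifted by half a meter between the first and last rows.
print(rolling_shutter_skew_m(30.0, 1 / 60))  # prints 0.5
```

A global-shutter camera, or a rolling shutter with a much faster readout, shrinks this skew proportionally.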

I have three thoughts on this.

First, motion blur can be reduced using software. One study used a neural network to estimate motion and reduce blur in images.

Second, Tesla’s cameras may have a fast enough shutter speed to avoid these visual artifacts. Shutter speed is how long an exposure lasts and hence how fast a camera captures an image. A faster shutter speed means faster moving objects can be captured clearly.

Hackers at the Tesla Motors Forum believe they have identified the model of camera that Tesla uses. That model’s shutter speed is 1/60th, meaning it exposes each frame for 1/60th of a second, and its frame rate is 60 frames per second, meaning it captures 60 images per second.

According to car photographer Paddy McGrath, when a photographer is in a car following and photographing another car, a “good rule of thumb is to set the shutter speed to whatever speed the cars are doing i.e. 40mph at 1/40th, 80mph at 1/80th.” This is enough to ensure that “the car’s body remains sharp.” At 1/60th, a car’s body would remain sharp at 60 mph (96 km/h).
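McGrath’s rule of thumb reduces to a simple comparison: the shutter denominator should be at least the speed in miles per hour. A minimal sketch of that check (this encodes only the quoted rule of thumb, not any validated imaging model):

```python
# Sketch of the photographer's rule of thumb quoted above: at X mph,
# a shutter speed of 1/X s or faster keeps a followed car's body sharp.

def shutter_fast_enough(shutter_denominator: int, speed_mph: float) -> bool:
    """True if 1/shutter_denominator seconds satisfies the rule of thumb
    for the given speed, i.e. the denominator is at least the mph figure."""
    return shutter_denominator >= speed_mph

print(shutter_fast_enough(60, 60))  # prints True: sharp at 60 mph
print(shutter_fast_enough(60, 80))  # prints False: 1/60 s is too slow at 80 mph
```

By this rule, the purported 1/60 s shutter is right at the limit for 60 mph, and speeds above that would start to introduce blur.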

Even at high speeds, then, Tesla’s cameras should be able to get a clear image of the cars around it.

One of the side cameras included in all new Teslas. Source: Tesla.

Third and finally, at higher speeds, less longitudinal (forward-backward) accuracy is required with respect to other cars, because more distance needs to be left between the self-driving car and other cars. The system will perform fine as long as the vision neural network can detect a blurry car or a car with a slightly misaligned roof and wheels.

Following the two-second rule, a safe following distance while driving at 130 km/h (80 mph) is about 72 meters, or 7,200 centimeters. Accuracy can deteriorate 10x, 20x, or 30x from the 10 cm measured at low speeds, and the car just needs to hang back a few extra meters to maintain a safe following distance. (Plus, radar is used as a backup.)
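The two-second-rule arithmetic above works out as follows: convert the speed to meters per second (divide km/h by 3.6), then multiply by two seconds.

```python
# The two-second rule: a safe following distance is the distance
# traveled in two seconds at the current speed.

def two_second_following_distance_m(speed_km_h: float) -> float:
    """Distance covered in two seconds at the given speed."""
    speed_m_s = speed_km_h / 3.6  # km/h -> m/s
    return 2.0 * speed_m_s

d = two_second_following_distance_m(130.0)
print(round(d, 1))  # prints 72.2 (meters), i.e. roughly 7,200 cm
```

Against a 7,200 cm gap, even a measurement error of 100-300 cm is a small fraction of the following distance, which is the point of the argument.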

It’s really lateral (left-right) accuracy that is needed. Most importantly, the car needs enough accuracy to stay centered in its lane and to correctly change lanes while avoiding other vehicles. There is strong evidence that Tesla’s sensor suite has sufficient accuracy to do this. The evidence: Tesla’s cars already do autonomous lane keeping and lane changes on highways! You can see that in the video below.

You also can see in the video that Tesla’s cars maintain a safe following distance behind other cars on the highway. To do that, the cars likely use both cameras and radar.

LIDAR is (probably) overkill

In summary:

  • At low speeds, cameras can be used to determine the distance between a self-driving car and other objects to within 10 cm.

  • Determining distance to an accuracy of within 10 cm is most likely good enough for the purposes of driving a car. Both my gut feeling and parking data support this.

  • At high speeds, the concern is that motion blur and other visual artifacts may occur, harming accuracy.

  • If it occurs, motion blur can be reduced using software.

  • Motion blur may not occur since (according to hackers) Tesla’s cameras have a high enough shutter speed to produce crisp images even at 60 mph (96 km/h).

  • These visual artifacts may not matter since simply detecting a blurry or misaligned car should suffice for high-speed driving.

  • Already today Tesla’s cars can autonomously stay in their lane, change lanes, and maintain a safe distance behind other cars when traveling at high speeds. This suggests cameras and radar are accurate enough for high-speed driving.

If cameras and radar are accurate enough for high-speed driving, and are even more accurate at low speeds (where they are also augmented by ultrasonics), then LIDAR is overkill. 1.5 cm of accuracy is not needed.

The brilliance of Tesla’s hardware strategy

On the latest earnings call, Tesla CEO Elon Musk said this about the company’s self-driving hardware strategy:

...We feel confident of the competitiveness of our hardware strategy. I would say that, we are certain that our hardware strategy is better than any other option, by a lot.

If LIDAR is indeed overkill, as I have surmised, then Tesla’s strategy gives it a multi-year lead over all competitors. Tesla could end up launching the Tesla Network before a LIDAR-equipped production car rolls off the line.

Perhaps this is what Nvidia (NVDA) CEO Jen-Hsun Huang had in mind when he said the following:

...I think what Tesla has done by launching and having on the road in the very near-future here, a full autonomous driving capability using AI, that has sent a shock wave through the automotive industry. It's basically five years ahead. Anybody who's talking about 2021 and that's just a non-starter anymore. And I think that's probably the most significant bit in the automotive industry. I just don't – anybody who is talking about autonomous capabilities in 2020 and 2021 is at the moment re-evaluating in a very significant way.

Huang added that vehicle autonomy is “not a detection problem, but it's an AI computing problem and that software is really intensive.”

Competitors who are waiting for LIDAR to become affordable may find themselves years behind Tesla.

Tesla demonstrates full self-driving without LIDAR. Source: Tesla.

Please tell me why I’m wrong

Writing this article involved research into several topics in which I'm not an expert, including photography, computer vision, machine learning, and robotics. It’s possible that I waded out beyond my depth. If you are knowledgeable about one of these topics and see that I’ve made a mistake, I want to hear from you. Please contact me using this form or message me via Seeking Alpha. My goal is to know the truth. Any help you can provide is much appreciated. Thanks in advance.

Disclaimer: This article is not investment advice.

This article was written by

Yarrow Bouchard
I write about autonomous vehicles and other AI robots on Seeking Alpha and in my Substack newsletter. I've been long TSLA (which is 95%+ of my portfolio) since February 2017.

Disclosure: I am/we are long TSLA. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
