Tesla: Physics First Principles Of Automatic Emergency Braking

About: Tesla Motors (TSLA)

Summary

Political risks of AEB/Autopilot issues possibly misunderstood by Tesla.

There is an asymmetrical distribution between 25,000 "one percenters" using Autopilot and millions of other drivers concerned about auto safety.

Election year politics could result in a negative outcome for Tesla concerning its "driving assistance" features.

Political pushback on Autopilot would not be viewed as interfering with Tesla’s stated objective of a "green" and "sustainable" future in transportation.

Introduction

It was a very busy week for Tesla (NASDAQ: TSLA). One activity was a giant tent show revival meeting in the desert near Reno where the faithful gathered to hear visions and revelations from the Prophet. Given that SolarCity (NASDAQ:SCTY) was hatched at a nearby revival meeting in 2004, the one known as Burning Man, I think an appropriate name for the new gathering would be Burning Cash. Meanwhile, far to the east on the much less enlightened coast of our great nation, there was a more tedious little affair: an appearance before the Senate Commerce Committee staff responsible for auto safety issues.

Tesla's testimony at that Senate briefing (as reported in this Reuters article) could also be considered a set of visions and revelations. The visionary part has several aspects.

The first is that the vision of Tesla's Automatic Emergency Braking (AEB) system, meaning its radar, image sensors and software, is apparently myopic: it has trouble distinguishing a trailer that is 53 feet long, nine feet high and four feet off the ground from an overhead sign or bridge, which usually sits at least 14 feet above the road. The other aspect of Tesla's visionary myopia is that, more than two months after the tragic incident in Florida on May 7, the company's representatives seemed to have no clear vision of what actually caused their vehicle's AEB system to fail.

The revelation part of the hearing was that, if any of the Senate staff members were really listening, as opposed to reviewing focus group data about what sound bites their boss's worthy constituents wanted to hear next in this election campaign, it should have been a clear revelation that:

  • a system which cannot distinguish between large trailers and overhead signs or bridges is not ready for deployment at all.

Since there are apparently unknown and undocumented blind spots in the current configuration of Tesla's radar and image sensors, it is also egregiously irresponsible of the company to have additional Autopilot functions "beta tested" by its own customers.

The reason all of this is also connected to Tesla's Autopilot system is that the same radar and image sensors used for the AEB system are also used for Autopilot. If there are blind spots in the AEB system, it is reasonable to assume the same blind spots exist in the additional Autopilot features, which include traffic-aware cruise control (TACC), autosteer (lane assist), and automated lane changes.

Tesla's own website also includes braking functionality in its description of Autopilot:

" Autopilot allows Model S to steer within a lane, change lanes with the simple tap of a turn signal, and manage speed by using active, traffic-aware cruise control. Digital control of motors, brakes, and steering helps avoid collisions from the front and sides, and prevents the car from wandering off the road."

Since the same radar and image sensors are used for both the AEB and Autopilot functions, and since Tesla representatives, as I will describe below, still apparently don't know the cause of the May 7 accident, I believe regulatory action is required to have Tesla disable all of its "driving assistance" functions (both AEB and Autopilot) until the company can provide comprehensive data about all sensor functions, including full "field of vision" performance for both its radar and image sensors.

It could have been this or it could have been that…

If I were in a meeting and I heard employees trying to explain the supposed cause of something, and I started hearing things like "it could have been this or it could have been that," my conclusion would be that the employees didn't have a clue about what they were doing.

That would be bad enough amid the usual silliness that goes on in a lot of organizations, but when it concerns supposedly automated safety systems and overall driver safety, my response would be a quick "DING, we're done here. Come back in a few years, once you have a lot more experience with employee testers of your vehicles rather than members of the general public."

Some specific comments from the Reuters article are especially notable in support of such a perspective:

  • Tesla staff members told congressional aides at an hour-long briefing on Thursday that they were still trying to understand the "system failure" that led to the crash, the source said.
  • Tesla is considering whether the radar and camera input for the vehicle's automatic emergency braking system failed to detect the truck trailer or the automatic braking system's radar may have detected the trailer but discounted this input as part of a design to "tune out" structures such as bridges to avoid triggering false braking, the source said.

As for "still trying to understand the system failure" - the system failure happened over ten weeks ago at this point. If the company still doesn't understand why the system failed then it is not safe for customer deployment in general. Such lack of knowledge and insight strongly suggests that there are probably a lot of other things that Tesla also doesn't know or understand about how its systems are operating and that is another reason that such functions should be disabled.

As for the second point - not knowing whether the radar and camera failed to detect the trailer or whether the AEB system did detect "an object" but "tuned it out" to avoid "false braking" - that lack of knowledge is, to me, yet another reason to immediately have all of Tesla's automated functions disabled. Not only does Tesla not know what failed, but not knowing also suggests that Tesla cannot anticipate the other circumstances in which its systems might fail.
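To make the "tuning out" point concrete, here is a minimal sketch of how an overhead-clutter filter of the kind described in the Reuters reporting could end up discarding a high-riding trailer. This is my own toy model, not Tesla's algorithm; the roof height, margin and height estimates are all assumptions for illustration.

```python
# Toy model of an "overhead clutter" filter; NOT Tesla's algorithm.
# All thresholds and height estimates below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class RadarObject:
    bottom_height_m: float   # estimated height of the object's lowest point above the road
    top_height_m: float      # estimated height of the object's highest point

VEHICLE_ROOF_M = 1.45        # assumed roof height of the car
OVERHEAD_MARGIN_M = 0.5      # assumed extra clearance margin

def is_overhead_clutter(obj: RadarObject) -> bool:
    """Discard returns whose lowest point clears the roof plus a margin (bridges, signs)."""
    return obj.bottom_height_m > VEHICLE_ROOF_M + OVERHEAD_MARGIN_M

def should_brake(obj: RadarObject) -> bool:
    return not is_overhead_clutter(obj)

bridge = RadarObject(bottom_height_m=4.3, top_height_m=6.0)    # ~14 ft clearance
trailer = RadarObject(bottom_height_m=1.2, top_height_m=2.9)   # ~4 ft under-ride gap

print(should_brake(bridge))   # False: correctly ignored as overhead structure
print(should_brake(trailer))  # True, but only if the bottom-edge estimate is accurate

# If the radar's vertical resolution is poor and the strongest returns come from the
# trailer's tall flat side, the estimated bottom height can be biased upward, and the
# same filter then discards the trailer exactly as it would a bridge:
misread_trailer = RadarObject(bottom_height_m=2.1, top_height_m=2.9)
print(should_brake(misread_trailer))  # False: braking suppressed
```

The point of the sketch is simply that a filter designed to suppress false braking under bridges has no safe behavior when the height estimate itself is unreliable, which is exactly the ambiguity Tesla's staff described.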

First Principles of Automatic Emergency Braking

In his continual efforts to display both what an erudite visionary he claims to be and an impression of his seemingly massive intellect, Elon Musk is very fond of a particular expression: "the first principles of physics." The context in which he uses the phrase is usually some sort of amorphous or unprovable topic or issue. Since most people have never studied physics, there is then usually some collective shoulder shrugging and submission that the Master (of Master Plans) must be right yet again.

What is laughable to me about his continual use of the phrase is that it is really just a catchphrase for what one might learn very early in an introductory physics class. The phrase is also essentially plagiarized from the title of an introductory physics textbook originally published in 1912 (somewhat before Autopilot). As such, I have come to view any topic where he uses the phrase with about as much belief as when he talks about being "cash flow positive."

I have my own view of first principles, one I believe is actually closer to the technical meaning of the phrase than the empirical contexts in which Musk uses it. That difference, between the pure physical theory underlying the concept of first principles and mere empirical data, is the critical differentiator between two very different ways of working on any issue or problem.

Ironically, Musk also seems to rely mainly on empiricism in "managing" most of Tesla's activities. Such examples would include:

  • charging his customers $2,500 to be "beta testers" (umm, guinea pigs) for new driving assistance software whose outcomes are then beamed up to Mission Control on Mars
  • hyping up new vehicle launches, collecting deposits, and then seeing if the company has any capability at all to actually manufacture the vehicles (the first Model X was 18 months late, and it has then taken another nine months to possibly get it into consistent volume production)

It should be obvious that such a trial and error feedback loop confirms the company has little or no insight into figuring out the first principles of any important issue.

In addition to my own view of what following first principles on any theoretical topic means, I also have my own view of what the outcome should be: the outcome should be the intended objective. As such, a function called "Automatic Emergency Braking" (AEB) should actually provide automatic emergency braking, with fully tested capabilities to detect ANY and ALL objects in the path of a vehicle and to then either completely stop or significantly slow the vehicle before it contacts another object.
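Those first principles are not complicated. A minimal kinematics sketch, using assumed reaction latency and deceleration values of my own rather than any Tesla specification, shows roughly how far ahead such a system must reliably detect an object in order to stop in time:

```python
# Required detection range from basic kinematics (illustrative assumptions only):
# stopping distance = reaction distance + braking distance = v*t + v^2 / (2*a)

def required_detection_range_m(speed_mph: float,
                               latency_s: float = 0.5,          # assumed sensing/actuation delay
                               decel_mps2: float = 8.0) -> float:  # ~0.8 g on dry pavement (assumed)
    v = speed_mph * 0.44704                                      # mph -> m/s
    return v * latency_s + v ** 2 / (2 * decel_mps2)

for mph in (45, 65, 75):
    print(f"{mph} mph -> ~{required_detection_range_m(mph):.0f} m needed to stop")
# Roughly 35 m at 45 mph, 67 m at 65 mph and 87 m at 75 mph; any blind spot inside
# those distances means the "emergency braking" objective simply cannot be met.
```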

Any failure of such a system, particularly one promoted as actively as Tesla's supposed driving assistance capabilities, displays, at the least, overall incompetence in incorporating all of the factors that such a system needs to process and act upon. Given that Tesla apparently also knows there are aspects of how its systems operate that it does not understand, selling such systems could arguably also be considered fraud. Such possible deception then cascades into much broader issues for the company when one considers the interconnected web of Tesla's vehicle sales, supposed technological advantages and continual need to raise more money to fund its money-losing operations.

Underappreciated Risks for Tesla from its driving assistance systems

Just as Tesla's supposedly automated systems have blind spots that the company apparently doesn't understand or know about, everyone also has blind spots that result in them not understanding both broad and interconnected issues in any situation. The blind spot for Tesla in this situation is that automated functions such as AEB and Autopilot are not currently a required component for its supposed mission of providing "green" and "sustainable" transportation solutions. An additional blind spot is not understanding the imbalances between asymmetrical constituencies - particularly in a political environment.

The politics of deploying driving assistance functions is not really on the radar screen yet (just as "high ride height" trailers are apparently not yet on Tesla's radar), but the unfortunate incident in Florida on May 7 has started to create much broader public awareness of the issue. With that growing awareness, the asymmetrical risk to Tesla is that on one side you have 25,000 Tesla customer beta testers, and on the other side you now probably have millions of other drivers who are very concerned that out-of-control vehicles, which may not be capable of seeing all objects around them, are randomly cruising down highways.

We are also in an election year in which both sides will pull out all the stops to show the electorate how much they care, and safety issues are an easy way for politicians to show such concern. As such, some of the Senate Commerce Committee members who focus on auto safety would find it a "no-brainer" to do the math: disconnecting Autopilot for 25,000 "one-percenters" would have broad support among millions of other constituents.

Tesla would likely counter with more of its skewed and misleading statistics, which compare Autopilot's record against overall fatality statistics while ignoring the much lower fatality rates of the demographic that drives Tesla vehicles and of the road types on which those vehicles are driven. Additional mumbo jumbo Tesla could offer is that "real world" driving experience from current Autopilot users is on the critical path to more energy-efficient autonomous vehicles in the future, but I don't think such a message would get much support in the current, more populist environment.

What is really outrageous about the way Tesla uses its highly selective statistics is that it violates basic principles of statistical significance and test design, since there are no controls. One very interesting statistic that we have not heard at all is the incident rate of Tesla drivers who do not use Autopilot.
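To illustrate how little a single event proves, here is a small sketch using an exact Poisson confidence interval. The mileage figure is purely hypothetical and stands in for whatever exposure Tesla might cite; the point is the width of the interval, not the specific numbers.

```python
# Illustration of the statistical-significance problem (hypothetical exposure figure).
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure_miles: float, conf: float = 0.95):
    """Exact (Garwood) confidence interval for a Poisson event rate."""
    alpha = 1 - conf
    lo = 0.0 if events == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * events) / exposure_miles
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / exposure_miles
    return lo, hi

miles = 130e6                       # hypothetical miles driven with the feature engaged
lo, hi = poisson_rate_ci(events=1, exposure_miles=miles)
print(f"95% CI: {lo * 1e8:.2f} to {hi * 1e8:.2f} fatalities per 100 million miles")
# Roughly 0.02 to 4.3 per 100 million miles: an interval spanning two orders of
# magnitude, and still with no control group of comparable drivers without Autopilot.
```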

If Tesla really wants to gather real-world experience with Autopilot, it should have 1,000 Tesla employees drive the cars around to gather that experience. If that sounds like an unreasonable, extraordinary expense, just add it to the overall "mission" costs that Tesla already spends money on in the name of its vision for a sustainable future.

A recent external review of Autopilot

Within the last week there was also another discussion of Autopilot that I saw, in an article by Alex Roy from The Drive comparing Autopilot and Mercedes-Benz's new Drive Pilot. The article is well written, worth reading and a strong endorsement of the current Autopilot capabilities, but, in my opinion, it also contains comments about Autopilot that should be regarded with concern.

While I have not yet test driven either a Tesla with Autopilot or a Mercedes with Drive Pilot, I do have some experience with a "driving assistance" system, Drive Pilot's precursor Distronic Plus, which is installed on a Mercedes I currently own. Maybe I just prefer to rely on my own driving skills and awareness of what is going on around me, but I found that the continual "adjustments" Distronic Plus was making left me uncomfortable while it was engaged, so I no longer use the system.

While I would like to respect the opinions of others about the utility of such systems, I also have a broader perspective about the possible cascading absurdity of such systems being more broadly deployed. As anyone who lives in a major metropolitan area knows, it really only takes one idiot to completely disrupt the traffic flow for an entire freeway and cause delays that could be 30 to 60 minutes in duration.

From that perspective, I have a dim view of a whole bunch of automated systems being deployed that encourage general driver complacency while they continually speed up and slow down, possibly in response to the erratic driving patterns of other drivers or of many other vehicles using similar driving assistance systems. I also find it completely absurd that a direct view of the road in front of and around a driver would be substituted with a virtual image of the road, and of the supposed positions of other vehicles, on a display inside the vehicle. Switching back and forth between the actual view outside the vehicle and the virtual images on the dashboard display also seems to me to be a huge distraction and an additional driving risk.

While all of the above are just my views, I believe some of the excerpts from Mr. Roy's article that I've included below describe some of the same concerns. The excerpts are in italics, and I've followed each one with my comments.

"Autopilot: Situational Awareness

The Tesla's display is the heart of the relationship between Autopilot and its user. Everything the car sees, the driver can see. If the lane lines are blue, the car can see them. If one or both turn grey, the driver will likely have to take over. Cars and trucks are indicated as cars and trucks. Walls, curbs and unknown objects are color coded based on distance. If a vehicle ahead turns blue, Autopilot will track it and remain engaged even in the absence of lane markings, up to a point."

Up to what point will the other vehicle be tracked? This sounds like yet another unknown about how Autopilot operates. It also sounds very distracting, and dangerous, to continually monitor whether the lane lines on the display are blue or grey instead of looking directly at the road markings and traffic around the vehicle.

"Autopilot: Steering

Very good. This is Tesla's crowning achievement. In perfect conditions it tracks lane markings and remains centered better than I would. It does such a good job, one shouldn't linger near exit lanes, because it may track right off the road. Anyone who wants to live will only let this happen once."

Hearing that one shouldn't linger near exit lanes, because the automated steering may guide the vehicle right off the road, sounds dangerous to me. Maybe Mr. Roy is making a joke in the final statement, but he is also expressing the real danger of this function by saying that anyone who wants to live will only let Autopilot's steering do unintended things once.

"It also shouldn't work as well as it does at night or in weather, which is why some users grow overconfident and get in trouble."

This comment highlights a critical issue: there are currently no controls or monitoring systems that automatically turn the system off in "weather" or "at night," leaving such decisions to the uncontrolled judgment of each driver. Such possible inappropriate use of these systems is a risk not only to the driver using them but also to every other vehicle in the same area.

"Autopilot: Hands-Off Interval

In perfect conditions, Autopilot will drive itself as long as...that's unclear."

If it is unclear how long Autopilot will continue controlling the vehicle, that is another reason why the system should not currently be used.

"Tesla won't say, but in my experience the average is about six minutes, after which you get..."

Someone could completely doze off in six minutes, which has actually been documented in videos of a Tesla driving in rush-hour traffic with a driver who had nodded off, and that is very dangerous. The following is the continuation of the previous statement:

"Autopilot: Warnings & Involuntary Disengagement

Pretty straightforward, but not good enough. Visual warnings appear onscreen and tones will sound. If music is playing, it will be interrupted. I've never missed these warnings, and it's hard to understand how one could, but this isn't even the biggest problem.

It's entirely possible for Autopilot to remain engaged until the last possible moment, then provide a warning and disengage precisely when an unready driver is least able to take over."

What was just described is the fundamental issue with Autopilot: knowledge of when or how it may disengage is apparently not well understood. The driver in the accident on the Pennsylvania Turnpike apparently didn't realize that Autopilot was supposedly no longer engaged, and the driver in a rear-end collision in California was apparently under the impression that her vehicle's AEB system was supposed to stop her car before a collision occurred. Ironically, this distinction initially seems to be Tesla's main line of defense regarding Autopilot incidents - that the system was not actually engaged at the time of a collision - but apparently drivers don't know that is the case.

"There are two solutions here: 1) MUCH larger, louder warnings about keeping your hands on the wheel, and 2) geo-fencing Autopilot based on location and conditions."

The final comment, that Autopilot needs a geo-fencing capability based on location and conditions (such as weather or time of day), describes what should have been a minimum requirement before drivers were allowed uncontrolled use of the system. Without such additional functionality, it is egregiously irresponsible for the current system to be deployed. As I've described in the earlier section on the potential political risks of driving assistance features, politicians may come to the same conclusion.
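For what it's worth, the kind of gate Mr. Roy is suggesting is conceptually simple. The sketch below is my own hypothetical illustration, not Tesla's implementation; the road-type, daylight and precipitation inputs are assumed to come from map and sensor data the vehicle already has.

```python
# Hypothetical geo-fence / conditions gate for an Autopilot-style feature.
from dataclasses import dataclass

@dataclass
class DrivingContext:
    road_type: str            # e.g. "divided_highway", "undivided_highway", "surface_street"
    daylight: bool
    wipers_on: bool           # crude proxy for precipitation
    cross_traffic_possible: bool

ALLOWED_ROADS = {"divided_highway"}   # assumption: limited-access roads only

def autopilot_permitted(ctx: DrivingContext) -> bool:
    """Permit engagement only on divided highways, in daylight, in dry conditions,
    and where cross traffic cannot appear in the travel lane."""
    return (ctx.road_type in ALLOWED_ROADS
            and ctx.daylight
            and not ctx.wipers_on
            and not ctx.cross_traffic_possible)

# An undivided highway with crossing traffic, like the Florida crash site, is rejected:
print(autopilot_permitted(DrivingContext("undivided_highway", True, False, True)))  # False
```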

The Price of being a Tesla Beta Tester or Investor

There is another related issue to the unknowns about the current capabilities of Tesla's AEB and Autopilot systems and that is the culture of Tesla and its leaders. Essentially, if Elon Musk is willing to let Tesla vehicle owners possibly risk their lives providing "real world experience" for the future development of the company's Autopilot system, he also probably has no qualms about letting investors and creditors risk their capital in what is still an extremely speculative enterprise.

Investors have been misled about Tesla's capital needs with statements about the company soon being "cash flow positive," and a lot of Model X reservation holders experienced long delays in receiving their vehicles. Many Model X owners have also apparently been unpleasantly surprised by fit-and-finish issues and door malfunctions, all part of Tesla's empirical "learning on the fly" approach to running its business. In 15 months we will see another data point on whether Tesla has the manufacturing competence to introduce a third model on time and in volume, at production rates supposedly four times previous levels.

Conclusion

The unknowns about the functionality and capabilities of Tesla's various "safety" (AEB) and "driving assistance" (Autopilot) systems are such that I believe any serious regulatory scrutiny would result in the systems being required to be disabled until extensive testing is done under controlled conditions, rather than through random "real world experience" with no comparative controls.

I also believe that Tesla has underestimated the possible political response to the introduction of such systems, given what I have described as the asymmetrical distribution of 25,000 "one-percenters" serving as beta testers while millions of other drivers may be concerned for their safety. Given the current political environment, action concerning Tesla's systems could happen very quickly, depending on the initial response to the company's Senate Commerce Committee appearance last week.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: This article expresses the author's opinions and perspectives about various investment-related topics. Since all statements in the article are represented as opinions, rather than facts, such opinions are not a recommendation to buy or sell a security. My own investment position described in the disclosures is not intended to provide investment advice or a recommendation of a specific investment strategy but is a required disclosure item by Seeking Alpha. My own investment position may have been initiated at very different price levels than current price levels, which is another reason my disclosed position is definitely not intended as an investment recommendation. All investors should do their own research before making any investment decision.