Seeking Alpha

NVIDIA Corporation (NVDA) Management Presents at Deutsche Bank Autotech Conference (Transcript)

by: SA Transcripts

NVIDIA Corporation (NASDAQ:NVDA) Deutsche Bank Autotech Conference Call December 10, 2019 2:00 PM ET

Company Participants

Rob Csongor - Vice President, Autonomous Machines

Conference Call Participants

Ross Seymore - Deutsche Bank Securities

Ross Seymore

All right, everybody. Let’s get started with the next presentation. We’re very pleased to have Rob Csongor, who is the VP - you have the coolest title overall, here’s the clicker for the...

Rob Csongor

I have the coolest title?

Ross Seymore

The coolest title, the VP of Autonomous Machines. It sounds like it’s a surprise to you too. But either way…

Rob Csongor

I didn’t even know what my title is.

Ross Seymore

Either way, NVIDIA has a really exciting automotive business that’s evolved very, very rapidly over the years. So Rob is going to hit us with a very brief presentation that includes a video for you all to watch, and then we’ll go into Q&A. During the Q&A, if you have any questions, just raise your hand. We’ll call on you, just wait for a microphone to come over to you as you’ve been so wisely doing thus far. So with that, let me pass it over to Rob to just go through a quick presentation.

Rob Csongor

Hey, good morning everybody. This is our fourth year here, right? And we now have the record?

Ross Seymore

Yes. Exactly.

Rob Csongor

So we should be getting a fruit basket or something, right? Pretty nice. Okay, I just want to reaffirm for you guys what our strategy is, okay? And if there are any touch-ups on the strategy, just to align you guys with it. And then sometimes, it’s just easier to show you some things as opposed to talking about them. So I’ll just show a video of some of the work that we’re doing.

So you guys know that our strategy is pretty simple. All right? And I guess, a quick way to summarize it: what makes NVIDIA special is really four things. First, we have a powerful platform. Powerful meaning that we’re able to leverage all of the experiences that we’ve gone through in creating artificial intelligence, general-purpose computing and solving a lot of compute problems in a wide variety of different applications. And NVIDIA’s strategy is to leverage that. So as a result, we’re building upon about 20 years of compute experience, specifically around GPUs and around artificial intelligence, and leveraging it into automotive.

Secondly, our platform is open. It’s very easy to say. It’s so easy to say and so hard to do, right? For those of you familiar with our CUDA platform, it took us 10 years to develop the CUDA platform and make it something that the entire industry works on and builds on. And we’re very patient, we’re in this for the long haul. We will be the last one standing. So our investment in this is based off of some experience on what it takes. It’s not just about chips.

The third component of what makes us special is that our platform is software defined. ADAS is largely fixed function and autonomous vehicles are software defined. Again, very easy to say but it is extremely complex in terms of what your strategy is and how you go about building autonomous vehicles.

And then finally, our solution is end-to-end. We’re not just doing edge computing in the car. We’re doing simulation, and we’re doing training, okay? Now that’s kind of the overview of what makes us special. But I’m going to run a short video with a very eloquent spokesperson in the video to show you what we’re doing.

[Video Presentation]

It’s just easier to show you some of that stuff. But basically, if you divide our solution into three, there’s the edge computing, the driving portion, all of the compute that goes into that. To go through all of it, we just don’t have the time. But fundamentally, we have a wide variety, a scalable solution, that’s built off of the ability to leverage common cores and the same software to then have a family of products. So you can scale from Level 2+ up to robotaxis. There’s a base layer of software that allows you to connect and allows you to have an ecosystem.

So we have almost 400 different partners in the ecosystem. They write software, they develop products, they tie into APIs that we have. And as a result, our customers and carmakers are able to build products, and they’re also able to leverage a lot of solutions from a third-party ecosystem environment. Then, on top of that base level, we provide you with a bunch of pre-done code, because we have our own self-driving cars. We have a fleet of cars we call BB8. We write our own software.

We develop them and then we go to customers, and you’re welcome to use as little or as much of our software as you like. We then package these things, and we provide them in the form of different tools. You can either have an SDK, or in the case of a vehicle itself, we have a vehicle called Hyperion, which allows you to have a development vehicle, which includes the computer. And then as a result, you can build on top of it.

Now the evidence of this strategy, everything that I’ve described to you, it isn’t just carmaker design wins. The evidence of this strategy is that nearly every robotaxi in the world is building on NVIDIA. All of the start-up companies, building shuttles, last mile delivery, delivery bots, commercial vehicles are doing so based on NVIDIA. So these are kind of proof points or evidence that you can look at to say, is this platform usable? Is it useful? And how can the world build on it?

Aside from the computing solution on the driving side, we have a training solution that even customers who are using Mobileye are using. You probably know that BMW selected Mobileye in the current design, and yet BMW was presenting at GTC on how they’re using our DGX server solutions in order to train deep neural net models.

And then finally, we’re leveraging our expertise in graphics and physics and compute to create simulation solutions. And again, the reason is, is that testing and validating self-driving cars would just simply take too long. It would take hundreds of years, thousands of drivers, billions of miles to actually test and validate self-driving cars. And of course, the path to get there is through simulation. And what we can do is not just test and validate but we can actually generate and train deep neural network models using synthetic data.

So if you look at NVIDIA’s business, when we talk about ray tracing for gamers, that same technology can now be used to create photorealistic data that you can use to train deep neural net models. So we’re heavily leveraging technologies to create this end-to-end open platform. All right. So again, just in summary for us, the world of AV is a lot bigger; it’s not just cars itself. Commercial vehicles and delivery bots and last-mile deliveries, all of these things are part of our key strategy.

So I think I went through those. All of those are game changers for AV makers. And the result is a wide variety of autonomous vehicles being developed right now. All of the ones that I listed on that slide are actual autonomous vehicles that are being developed today. Collecting, training on and analyzing data is essential for autonomous vehicles. Today, over 60 automotive companies are using our DGX, and we’re literally just getting started.

And then finally, the DRIVE Constellation simulation system. Not only is it a tool that’s available, but it’s also developed in the same trademark way that NVIDIA operates, which is to build everything as an open platform. The simulation tool is actually open itself. So partners out there, like ANSYS and other companies, can connect, and as a result you can leverage sensor models and things like that and plug them into our simulation solution. Okay. That’s it. Just wanted to give a quick summary.

Ross Seymore

Perfect. As we did before, I’ll ask a bunch of questions up here. And then if you have a question, just raise your hand and get my attention. We’ll get a microphone over to you. So Rob, like you said, you guys have been an anchor tenant at this conference, whether it be here in San Francisco or in London, for which we’re very, very grateful. Over those three or four years, we’ve seen a bit of an evolution in NVIDIA’s strategy. Originally, you seemed to be one of the most optimistic about Level 4 or 5, full autonomy happening sooner rather than later. Now, it seems like you’ve somewhat bifurcated that, where you have a consumer strategy that’s much more L2+, and then you have the robotaxi and those sorts of things, which are still L4/5. So my question is, do you agree with that assessment that you’ve pivoted a little bit? And if so, why do you think it’s taken longer to get to L4 or 5? Because it doesn’t seem like it’s based on any issues with NVIDIA and your technology; other companies, even the president of NXP, have said the same thing, that it’s taken longer from their point of view too.

Rob Csongor

Yes, I think it’s largely correct. There are a couple of things that I think are a little different, and I’ll try to answer your question that way. First of all, it’s become clear to us that Level 2+ and Level 4 are essentially the same driving functionality. I don’t think it’s going to be too much of a surprise to anybody that Tesla is going to announce Full Self-Driving, right, in their car and on their platform. You just won’t go into the backseat and take a nap, right? So driving-functionality-wise, we see Level 2+ as a way to get a solution out to market faster. The same driving functionality essentially is used for Level 4, but now you have to add failover, redundancy and whatever delays regulation will bring.

And how long will regulatory delays be? Well, we don’t know. I don’t think anybody knows, because it’s never been done before. So Level 2+, I think, is a logical step to deliver essentially a fully self-driving car in which you have to stay alert. The failover procedure is that you take over as the driver. And it allows companies to develop solutions, get something on the road, get revenue, and then, in parallel, develop their Level 4.

Level 5, we actually see as something that’s available sooner, and the reason is that Level 5 doesn’t have to drive everywhere in the world. Level 5 can be geofenced. You can – I read an article this morning, Mercedes and – announced that they were deploying their robotaxi based on NVIDIA, very soon here in the Bay Area and actually putting it out on the road. And the reason they can do that, again, is because it’s – you can decide as a robotaxi, you can decide, I will drive in these areas and I won’t drive everywhere.

So as a result, Level 5 is something that I think is coming to market as planned, possibly faster, especially in commercial vehicles. I think you guys have probably read about the announcement that Volvo Trucks is using NVIDIA’s solution and deploying a wide variety of different vehicles, so mining vehicles and haulers, short-range haulers. And all of those, again, can be geofenced. So I think there’s a little bit of puts and takes and – but otherwise, I think what you said is roughly right.

Ross Seymore

So sticking on the consumer side, we can go to the commercial side and the robotaxi later. We talked earlier on some of the panels that we had up here about people future-proofing, putting in the technology that can evolve pretty easily, some of it just through software, to go up to Level 3 and Level 4. How do you handle the economics of that, if the technology is robust enough, and theoretically expensive enough, to handle those driver-free applications? Is it more of an economic challenge early on for the adoption of Level 2+, or does NVIDIA kind of tamp down the size of the solution for Level 2+ and make it more a la carte for each level of autonomy?

Rob Csongor

Yes. That’s a good question, and this is part of what defines software-defined; that’s where software-defined comes in. AI is a development tool, and if you give AI to engineers, 10 seconds after they give you a set of requirements and say, we need these five things done, they will all of a sudden discover that AI can solve these other 20 things. This is very much like other markets; in my years at NVIDIA, this is what happens. And as a result, people end up building and developing more and more capability because this tool can now solve a problem. So I think the proliferation of deep neural networks in the car is going to grow much, much more than people realize. And that’s why there’s a big difference between an autonomous vehicle and ADAS.

In ADAS, you have requirements that are largely fixed. You can say this is what your [lane] [ph] cap is, you meet the requirements, and that’s essentially it. But for autonomous vehicles, I think the more you use it, the more you realize you have to figure out a solution for this situation or that situation and as a result, you need to have a runway and a scalable architecture that lets you build not just the hardware component of it, which is really just the beginning, but the software. And the software is a big, big part of this whole solution. So on the economic side, I mean, from a cost point of view, hardware-software will simply scale, right? And then in terms of funding for NVIDIA, we’re also of course, selling other parts of the solution, training and simulation. So…

Ross Seymore

You guys have talked – I think you even mentioned in your slides – you’re up to nearly or maybe even over 400 engagements now. I think you said somewhere between 300 and 400; it’s a huge number either way. Talk a little bit about the go-to-market strategy. That’s a ton of engagements. Obviously, we know that the design wins happen significantly earlier than the revenues in this market, so everybody understands that. How exclusive are these relationships? The phrase I used earlier was that everybody is still dating everybody; we’ve been talking about this for four years, and people haven’t seemed to pair off too much. But you, probably amongst everybody, had the splashiest design win announcements, with the likes of Volvo and Toyota at your analyst meeting earlier this year. Are those exclusive? How important are those wins with OEMs versus Tier 1s? Just talk a little bit about how that go-to-market strategy works.

Rob Csongor

Yes. I mean, typically, when we engage these companies, we start out with a proof of concept, a POC. And then once we get past a certain point, it evolves into something where we’re now talking about how to scale across a fleet of vehicles, wrestling with issues like: on this car, the cameras are in this position, but on this model, they’re going to be in a slightly different position. So how does that affect your deep neural networks? And how do you adjust it, and things like that. And then the other thing is that the business, the go-to-market, scales from just an edge computing engagement to one where we’re involved in training the data, in developing the software, in doing simulation.

And I think you’ve seen that. Toyota and Volvo are just the two examples you listed; both of them started out like that, where we’re involved in a pilot program that then eventually became an end-to-end engagement. So it’s not even so much a deliberate strategy as a logical evolution of a relationship, because everything that we have to go through to get our cars to drive, it stands to reason that our partners do also.

Ross Seymore

So if you did the training and validation for people, I would think it’s a natural evolution that you would also do the driving assistance side of things. Has that kind of feeder system of doing the training worked in the majority of the cases that you’ve been involved with?

Rob Csongor

It is definitely for new engagements. But for older engagements, like the BMW engagement I mentioned, it came after the fact, where they realized, hey, there is this whole other side of the solution that we’re not doing right now, and we need NVIDIA for that. So in that particular case, they’re using our training and our simulation tools, but in the current design, they’re not using the edge computing.

Ross Seymore

Current might be the operative word, I’m thinking. That was my word, not yours. So we talked earlier about the processing power in the vehicle and the partitioning of that between the sensors and the core processor, kind of the decision-making engine of the car. How does NVIDIA envision that? Early on in this evolution, we had supercomputers in the trunk. We knew that was never meant to be the eventual go-to-market strategy. But how do you see that working, from the processors at the edge, regardless of what type of processor it is, to how important the core AI engine is that’s making the decisions? Where do you think that processing power needs to sit? Is it a balance? Is it all in NVIDIA, the core processor? How do you see that partitioning?

Rob Csongor

Yes. If I have my autonomous machines hat on, I would say that we’re not religious about it. It really depends on the application. We have robot applications today with a single, very high-resolution camera, where we’re detecting defects on a gear in a production system. And in that case, having a very intelligent, low-cost, high-performance camera is the right solution for that robot. But a self-driving car is going to be one of the most complex computers in the world. It will be running tens of deep neural network models simultaneously. It will be fusing surround cameras, radars and lidars, and it will be doing so under the requirement of a high degree of safety at high speeds.

So for all of those things, I think, a centralized computer architecture is the proper solution as opposed to an intelligent sensor. And then as a result – and then we start getting into the weeds but there’s a lot of details about how you set that up and how you divide compute tasks and requirements. But that’s kind of where the rubber hits the road. It’s – that’s a lot of complex expertise.

Ross Seymore

And talk a little bit about how the DRIVE system has evolved over time. People look at NVIDIA and say, your strength is you’re a GPU company, parallel processing, et cetera. But as the system has evolved to lower power consumption, there’s different sorts of processors on the board. Just talk about how that entire system has evolved. And where do you think the next important steps are for your business from that technology point of view?

Rob Csongor

Yes. Well, first of all, on power: performance per watt is the first fundamental metric that we use to design our silicon. Power matters. Most people think of power as, "Hey, you’ve got to have low power for a cell phone, or you have to have low power for a small camera." But power is probably the single biggest issue in designing a supercomputer, right? Which you guys, I’m sure, can imagine. So the kinds of things that we’ve done, for example: right now, the success we’re enjoying in the notebook business is largely due to a new power innovation called Max-Q, which we rolled out to the market, and now we’re able to provide ray tracing capabilities in a thin-and-light notebook. They used to be big and bulky, and now they’re very, very thin and light. So power technology is fundamental across the board, in every single vertical market that we engage in.

So, I think that there are certain realities about a car. And in every one of the applications that we have, there’s a power budget that we define, customers define it. And then we simply have to deliver the absolute maximum amount of performance within that power budget. That’s the hard part. Many, many companies can be low power because you might have a chip that is unencumbered with transistors. It might be a very small chip, which is very low power. But that’s not what the goal is. The goal is to deliver something that can power tens of deep neural networks and operate within a power budget, and that’s really where we have demonstrated excellence. I think if you want to see more of the impact of these things, I encourage everybody to go look at the recent results of the MLPerf benchmarks.

You guys know that AI finally has a benchmark, right? It’s called MLPerf, and the first one was launched this year for edge inference. I can talk all you want, and you can listen to lots of people selling and marketing, but I encourage you to just go look at the results. NVIDIA pretty much dominated across the board, not just on the training side but also on the inference side. So the details matter, and performance per watt means more than just having the lowest power. It’s also how you deliver the performance to drive a self-driving car within a power budget.

Ross Seymore

I think we have a question.

Question-and-Answer Session

Q - Unidentified Analyst

Yes. A question here, more on the sensor node. Are you driving the market towards what types of sensors you’d be able to compute? Or do you take the input from the road maps of all these different sensor companies? And how is that evolving over time? For example, would the restriction be the standardization bodies, or proprietary chipsets you would be able to get within the processing unit? Or is it more driven by what’s currently available out there on the sensor market, as that would require an enormous amount of work on your side?

Rob Csongor

Yes. It’s a good question. It’s a little of both. The customers are obviously – in the partnerships that we do, one of the benefits of having a large ecosystem is that we get lots of feedback all the time. So we’re always learning what people are doing. Hey, check this out, look at what happened in this particular case. So that’s one of the benefits of an ecosystem, and we learn a lot from what people are doing. But honestly, in a lot of cases – and I joined NVIDIA in 1995, and in all my years here, the one thing that’s clear is that every single business that we’ve gone after, we created. If we had gone originally to a customer and said, what would you like? They probably would have said, "Well, we’d like this $5 Windows 95 accelerator to be $4."

So there’s a large part of what we’re doing that we’re doing because we believe. We just believe that this really, really complex computer problem can be solved uniquely using our expertise. And therefore, we’re going to go to customers with a lot of things and say, we think this is important, check this out. So it’s a little bit of both. It’s a little bit of us developing and going to customers and saying, this is something we think you’re going to need. And then a lot of it is them coming back and saying, this is what we need.

Ross Seymore

Just get another question over there.

Unidentified Analyst

Yes. Thank you. Two questions, please. The first one is about what you said regarding regulation being one of the biggest hurdles to overcome for autonomous vehicles. Are you involved in the discussions with the regulator? Or is it primarily done by the OEM?

Rob Csongor

It’s primarily done by the OEM. We are involved in safety panels, but most of the requirements for carmakers and regulators, we learn from them. Having said that, what I said was that a lot of the regulatory obstacles are unknown. I just don’t know what they are. How long will it take to get Level 4 certification? I don’t know.

Unidentified Analyst

And when we talk about deep learning and AI, one of the problems is obviously, you cannot really tell the regulator what the system will be capable of doing in two to three years down the road because it continues to improve itself, right? What do you say to BMW, Daimler, VW, what they then should say to the regulator to overcome that obstacle? Is that even really possible?

Rob Csongor

Yes. That was something that was a lot more of a conversation a couple of years ago, where people would say, hey, how does AI work? What if you want to go back and debug something; how does that work? For us, artificial intelligence is now in full production, on the hyperscale side, in so many different industries and markets. We’re past the point of arguing whether AI is useful, or whether it can do things, or how you do this or that. It’s actually a pretty involved answer, but the bottom line is that there are ways now to test and validate. For example, even if you handwrote code, in a lot of cases it’s not such a given that, oh, I know exactly what’s going on, now I’m protected and I’m completely free from many problems because it’s handwritten code. It sounds funny even just to say it, right?

But with artificial intelligence, as with handwritten code, ultimately it comes down to testing and validation, and that’s where simulation makes a big impact, because instead of having to drive four-corner testing to test and validate an algorithm, you can now run it in simulation. The other thing that we do is we very consciously use diverse algorithms and run them simultaneously. So for example, you can have an algorithm to detect an object and, therefore, don’t drive there. We also have an algorithm called free space, where you detect where you’re allowed to drive. So this is where you’re not allowed to drive, and this is where you are allowed to drive. That diversity of algorithms is actually purposeful, so that you have a check and balance to make sure that you’re able to do what you’re supposed to do. And then ultimately, test and measure.

Unidentified Analyst

And then just because we’ve heard it a couple of times today, automakers are very excited about connectivity because it enables them to have recurring revenue streams, right? You spoke about in the past about this possibility as well with software updates, for example. Is that still something that you see as exciting and interesting? Or is it still primarily you supplying the software and the chip in the first instance?

Rob Csongor

It’s inconceivable that a software-defined computer would not have updates, right? Imagine your cell phone or your notebook not having updates. It’s just inconceivable, right? So this is a big part of it. That’s why, when I mentioned in the beginning that one of the things that makes NVIDIA special is the fact that the platform is software defined: this, of course, takes into account that you’re going to update, and you have to have a mechanism for doing so. And then you have to have a software architecture in order to enable that to happen. The second thing that I didn’t talk about is that, running alongside the autonomous vehicle computer, we are developing a cockpit computer, and I think you’ve seen demos of that work.

Now, the two are connected to each other. Whatever, for example, is happening in the autonomous vehicle and outside the vehicle, you need to visualize to the driver internally, so that the driver can trust the car. You see a car coming on one side, and you almost want to tap on the car and say, “Hey, are you seeing this?” And I think you guys are familiar with lots of discussions going on right now about BERT, conversational AI: extremely powerful, very, very compute intensive. Why wouldn’t you talk to your car and have the same capabilities that you could get from Alexa or Siri? Now, we happen to have expertise in that area because we work with these companies, and we work in the data center. So taking that capability and putting it into the car is certainly an upgrade I would like to get in my car, and it’s certainly something that our customers would like to get. So the fact that NVIDIA is working on these things in the data center and with other companies is something, again, that we’ll leverage into the automotive world, and it’s very exciting.

Ross Seymore

Thank you. Another question?

Unidentified Analyst

Yes. Thanks. Given all the data you’re seeing as every sensor signal will essentially run through your solution, can you talk a little bit about data protection but also specifically data ownership? Kind of what are you – in most of the contracts you have with these Tier 1 sort of carmakers, kind of what are you allowed to do with the data? How can you use them? And then how can you, I don’t know, keep them and potentially monetize them? Or is that not really an aspect?

Rob Csongor

Yes. That’s a big topic. In general, our customers’ data is their data. And then we have agreements with customers where, if we’re using data to train deep neural network models which they will get the benefit of, then we’re allowed to use that data. But it’s kind of on a case-by-case basis. And then there are some exciting developments coming in that space. So stay tuned to this channel.

Ross Seymore

Any other questions from the audience? Why don’t we talk a little bit about the competitive environment. There’s – certain OEMs are trying to go this route on their own, and then there’s the merchant side of things. And we’ve even had discussions today about some of the Tier 1s trying to attack the market. So how do you view that competitive environment having evolved, if you look back over the last couple of years? And how do you see it evolving looking forward?

Rob Csongor

Yes. Well, I think right now, if you look at cars, for the most part what’s shipping is ADAS. So for ADAS, there is an established market out there, and there are probably going to be a number of different players, and they’re going to fight it out. For autonomous vehicles, I think it’s really us and Mobileye. And of course, the biggest difference is that we’re open and they’re not. So that’s a simple answer, but it’s probably accurate.

Ross Seymore

Do you see the vertical integration at OEMs as being a threat? Or do you think that’s kind of unique to the one an hour south of here?

Rob Csongor

Yes. I think it’s unique to Tesla. I mean, Elon is designing his own seats.

Ross Seymore

Amongst plenty of other things, I guess.

Rob Csongor

It really is unique. Even there, I mean, it’s not unheard of that customers decide they want to try to build their own solution, and we still work with Tesla in other areas. But I think what we communicated to Tesla originally is still the same: we’re here for Elon if he needs us. So he’ll do what he wants to do, and this is what he wants to do. I think, however, that dynamic is actually a positive for us, because for any carmaker that would like to catch up to and/or pass Tesla’s capability, there’s only one platform that you can go to, and that you can also use to build your software expertise on, right? And I think one of the things that people have learned, one of the evolutions of this market, is that carmakers are increasingly going to be reluctant to outsource the driving software of their car.

So if you need a bug fix, or if you want to develop new capabilities, you have to go to Jerusalem and beg. So the question is, is a carmaker going to be comfortable and willing to outsource the driving of their car to a third party? Or is that something that you gradually, over time, want to bring in house and control yourself, even if you don’t have it all today? And that’s, I think, where our open platform really is ideal, because you’re welcome to use as little or as much of what we have.

Ross Seymore

So in the last minute we have, if we wrap all this up: I think at your last analyst meeting, you talked about the better part of a $30 billion TAM, I think in 2025. And like you said to start off, you’re going to be the last company standing, you’re committed to this for the long haul, et cetera. We know this takes a long time from design wins to revenues in automotive in general, and even more so in the markets that you’re attacking. When do you think we’re going to see the financial rewards of these investments start to be immediately apparent when people look at your quarterly and annual results? How do you see that evolving from where you are today to that 2025 TAM?

Rob Csongor

Yes. It’s a question. I think we talked about some of it in the beginning. Things like Level 5, I think, are happening on track or possibly even sooner. I think it’s not a secret that some parts of the automotive world are under some amount of pressure; Germany and Dieselgate and some of the financial conditions have, I think, pushed out some designs. So I think roughly 2022, 2023 is when you’ll see an inflection of designs, where you get some ramps for those Level 2+ designs. And then, for Level 4, I think those designs will eventually become Level 4 over time. But just getting those Level 2+ designs out there, I think, is going to be the big thing. And then, of course, Tesla is going to drive the market. The more they drive their solutions out there, the more they’re going to create the conditions by which a carmaker could conceivably not be competitive. And that may have the end result of affecting carmakers’ hunger to accelerate schedules.

Ross Seymore

Fear of missing out.

Rob Csongor

Yes, fear of missing out. But for now, I think it’s fair to say 2022 is roughly the right time. And I guess, we’ll see.

Ross Seymore

Great. Well, Rob, thank you very much for coming, not only this year, but to like three or four in a row of these. We appreciate you as an anchor tenant and for sharing your views.

Rob Csongor

All right. Yes, my pleasure. Thanks, guys.