Seeking Alpha

Intel Corporation (INTC) Management Presents at Mobileye Media & Customer Conference (Transcript)


Intel Corporation (NASDAQ:INTC) Mobileye Media & Customer Conference January 7, 2020 2:30 PM ET

Company Participants

Amnon Shashua - SVP, Intel, and President and CEO of Mobileye

Conference Call Participants

Unidentified Company Representative

Good morning and welcome to the Mobileye 2020 CES Press Conference. Before we get started, I just want to read a quick Risk Factor Statement, and then we will get the show started.

Today's presentation contains forward-looking statements. All statements made that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially. Please refer to Intel's most recent earnings release and Form 10-Q and 10-K filings for more information on the risk factors that could cause actual results to differ.

With that, it's my pleasure to introduce Professor Amnon Shashua, Senior Vice President, Intel, and President and CEO of Mobileye.

Amnon Shashua

Thank you, Tushar [ph]. Welcome everyone to our annual CES press conference. At Mobileye, we are dealing with the entire spectrum from driving assist to fully autonomous driving, so it's a complex domain, and every year it's getting more and more complex. So I'll try to simplify things and point out the major pillars, the major challenges, and then how we at Mobileye and Intel are handling those challenges.

So, I'll first start with the numbers of 2019, all crammed into one slide. We shipped 17.4 million EyeQ chips this year, and over 54 million EyeQ chips overall since 2007. It's really an amazing number -- 54 million cars with our technology. We have a 46% CAGR year-on-year. Today, we have 47 ongoing production programs -- it takes about two to three years from when we get a design win until the product is on the road. So, we have 47 design wins ongoing; these are production programs that are ongoing. We had 16 product launches. A product launch is a new car model with a new bundle, normally with an upgraded EyeQ chip. So, we had 16 product launches, and 33 design wins this year, 2019. So, this is 2019 in numbers.

As I said, it's a complex world, since we cover the full spectrum from driving assist to robotaxis and consumer autonomous vehicles. So, here is a breakdown of our business pillars before I get into technology. There is driving assist, which is a fascinating, exciting and growing field, and there we are a Tier 2 supplier: we sell a chip with our content, algorithms and software to Tier 1s, and the Tier 1s integrate it into a module sold to the car manufacturer. Then we have our crowdsourced technology for building maps. This was designed for powering autonomous cars, but over the past two years we have found new and quite interesting business models for these maps. So, it's a branch on its own -- a data branch. We have another data branch for smart cities; I'll say a few words about it. This is again a derivative of our crowdsourced data technology.

Then we have Mobility-as-a-Service. Mobility-as-a-Service is exactly what it means. It means that we're not just supplying the chip, or supplying a full self-driving system -- hardware, software, sensors that we purchase and, in the future, sensors that we also have an activity of building ourselves. So that's the full stack, end to end: give me a car and I'll make it autonomous. But Mobility-as-a-Service is more than that. Mobility-as-a-Service is all the service layers of becoming a service provider: fleet optimization, route optimization, fleet management, mobility intelligence, customer-facing apps, tele-operation, back office. All of that we are gradually building -- it is building, it is partnering, and in the future we're open to inorganic growth, as was stated by Intel in the past. So, it's a full and very, very deep entry into Mobility-as-a-Service. To say it in words that we all know, it's becoming an Uber/Lyft in the area of robotaxis.

Then comes the last pillar, which is the full stack self-driving system, but for consumer autonomous vehicles. That, we believe, will come a step after robotaxis, but we are designing things today so that by 2025 we'll be able to price a self-driving system, end-to-end -- sensors, hardware, cables, everything -- below $5,000. That price point enables installing these systems as built-in into consumer vehicles -- at the beginning premium vehicles, but this is where things start; it starts with premium and then trickles down. So, as you see, it's kind of a complex world from the business side; from the technology side, it's even more complex.

So, this is in more detail the kind of offering in driving assist: the Level 1, Level 2 driving assist where we are a Tier 2, and a new category in driving assist called Level 2+. We talked about it last year; I'll give a bit more detail today. Then there is the Level 4, Level 5 Mobility-as-a-Service, where we are building the self-driving system stack, completely end-to-end, to power our own robotaxis, but we can sell the self-driving system to robotaxi makers as well. And then the holy grail of this entire business is when you as a consumer can go and buy a passenger car which is autonomously enabled -- so whenever you want, you sit in the back seat, press a button, and it will take you wherever you want to go. This is the holy grail, and this is what we're planning for. Okay? There we go back to being a supplier to a carmaker. And then there is the data opportunity, which is becoming a business on its own.

So, let's say a few words about the driving assist segment. What I want to point out is this new category called Level 2+. Level 2+ means a lot of things; it's a term that many suppliers are starting to use, so let's define it in some way. We define Level 2+ as when you have multi-camera sensing -- so more than just a front-facing camera; it could be multiple front-facing cameras, it could be a front-facing camera plus the parking cameras, it could be this plus surround cameras, a full surround sensing setup -- and you are using HD maps as part of the driving assist. When I talk about HD maps, I really refer to our REM technology, the crowdsourced technology for building maps automatically using ADAS-enabled vehicles.

In terms of the functionality -- so that was the attribute of the hardware; in terms of functionality -- we're talking about everywhere-and-all-speed lane centering. So you can also have lane centering in a city or metropolitan environment, and also conditional hands-free driving, or conditional automation, in which the driver is responsible for a period of time -- it could be seconds, it could be minutes. There could be a driver monitoring system that watches the driver and, in certain conditions, enables the driver to let go of the driving experience. This is called Conditional Automation, and you would like this Conditional Automation to be available at all speeds and everywhere. This is the promise of Level 2+. And again, we're talking about a price point that fits volume production, so I am not talking about many, many thousands of dollars per car. This is why the camera system is so critical. First, it enables this type of functionality, and second, the price point of cameras is really nothing -- it's the lowest-cost sensor that you can imagine. So, you have cameras plus compute, and that's it.

Okay? And you see here a chart from Wolfe Research in terms of the growth of this category, Level 2, Level 2+, so it's becoming a significant category. You can also see, from a business standpoint -- I'll switch to the next slide -- that we have more than 70% of the capacity of Level 2 systems running today, and here are a number of car models that have Level 2+, like the Nissan ProPILOT, which has a trifocal camera at the front and also uses HD maps, and the Volkswagen Travel Assist, which uses our HD maps, and so forth. On top of that, there are 12 active programs and 13 open RFQs in this field. So it's a growing field, and it's an important field, because from a business standpoint we have here the possibility of an ASP that grows by a factor of 3 to 15 over normal Level 1/Level 2. So, it's exciting both as a value proposition and as a business proposition.

And then what comes after that, still in driving assist, is what we call Vision Zero. We adopted a known term and simply redefined it. We call this Vision Zero: the car would take preventive measures, not emergency measures like today's AEB systems. If you have a full surround system, we can use our RSS technology -- we published a paper last year showing that we can adapt this technology to human driving policies -- and use the RSS guarantees to show that if all cars have the system, there will be no accidents. So I think this is more on the horizon. There is nothing yet in this category, but we are working diligently on making this a new category, say 5 to 10 years from now. So, driving assist has the potential of moving from a front-sensing camera to a full surround sensing suite. It's a huge value proposition, but also a huge business potential.

What is powering this driving assist opportunity, and also the autonomous driving that I'm going to show, is computer vision. So I'd like to get into a bit more detail about what we are doing in surround computer vision. In surround computer vision, we have cameras around the car, 360 degrees, and we're processing the information from those cameras to enable functional value like Level 2+, and I'll say why it is also relevant for autonomous driving.

So, this slide is a bit of a mouthful, but it is an important slide. What is our goal? The first goal is that we would like to enable, just with cameras, a full stack autonomous driving -- full stack from beginning to end, doing everything that is necessary to support autonomous driving without relying on any other sensor: no radar, no LiDAR. Now, we are targeting a mean time between failures of one every 10,000 hours of driving. Just to give you an idea, if we're driving two hours a day, it will take about 10 years to travel 10,000 hours. So this is a significant number, and I'd like to motivate why we are targeting this number and how we're building the technology to support it. When you look at human statistics, 10 to the power of minus 4 -- one every 10,000 hours of driving -- is the probability of an accident with injuries; 10 to the power of minus 6 -- one every 1 million hours of driving, or around that number -- is the probability of an accident with fatalities. Taking safety margins on top of that, we look at 10 to the power of minus 7, so we would like a critical error once every 10 million hours of driving, which is an incredible number. There is no technology today that can reach any of those numbers. So, we were scratching our heads a few years ago.

So, how do we reach this number? The way to reach it is to create redundancies, but through redundancy -- not a fusion system, which is the dominant school of thought in the industry, but actually separate streams: one stream is only cameras, the other stream is only radar and LiDAR, and each one of them reaches 10 to the power of minus 4. Then, because those systems are approximately independent, the product of them will give us 10 to the power of minus 8, and with safety margins, and because they're not really statistically independent, we will reach 10 to the minus 7. So now let's go back to the computer vision stream. At Mobileye we have two separate vehicles, and when we have customers coming to visit us, we show them a demonstration of those two vehicles separately. One vehicle, which I'm going to spend more time on today, is only cameras. The other vehicle is only LiDARs and radars. And both vehicles do a complete end-to-end autonomous driving.
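To make the arithmetic behind the redundancy argument concrete, here is a minimal sketch, assuming per-hour failure probabilities and treating the two streams as truly independent; the numbers are the round figures quoted above, not measured values.

```python
# Sketch of the redundancy arithmetic described above (illustrative numbers).
camera_only_failure = 1e-4   # critical error per hour of driving, camera-only stream
radar_lidar_failure = 1e-4   # critical error per hour of driving, radar/LiDAR-only stream

# If the two streams were truly statistically independent, a system-level
# critical error would require both to fail at the same time:
combined_if_independent = camera_only_failure * radar_lidar_failure   # 1e-08

# Because the streams are only approximately independent, a safety margin is
# kept and the stated target is the more conservative one-in-10-million hours.
stated_target = 1e-7

print(f"product if independent: {combined_if_independent:.0e}")   # 1e-08
print(f"stated system target:   {stated_target:.0e}")             # 1e-07
```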

So let's go back to the cameras. We need to prove to ourselves first that we can reach this incredible number of having a computer vision system making a critical mistake once every 10,000 hours of driving. And I just gave you an example: if we drive two hours a day, this is the equivalent of about 10 years of driving. So, making a critical mistake once every 10 years of driving. Then the question is, how do you do that? It's not enough to show a demo that we can drive using cameras. We need to convince ourselves that we're building the technology in a way that can support this incredible number.

Okay, because we always tell ourselves, we're not in the business of science projects. It's not an academic exercise. We want to build a business, and we want to build a business that can survive the, God forbid, accidents that are going to be out there, right? So we need to convince ourselves that we're doing something very, very responsible before we put these machines on the road. So when you look at our sensing system, who are the customers? Now, there is no more mention of radars, no mention of LiDARs -- whatever I say is only cameras. Okay. So, we have three customers for this camera sensing system. One of them is our driving policy. We're building a car that drives autonomously, and the car needs to make decisions. The decision process is called driving policy -- policy is a word from control theory and reinforcement learning, and since we're doing a policy in the area of driving, we call it driving policy. So the driving policy is a customer of the output of the camera system. You can think of all sorts of outputs that you would like from the camera system, but that would be just groping in the air, right? You need to have a real customer in order to define what you want out of the camera system, and the customer is the driving policy. If the output from the camera system is actionable from the point of view of the driving policy, and you can do end-to-end autonomous driving, then you have defined correctly what the output of the camera system should be. So this is one customer.

The second customer is our REM technology -- the harvesting to build HD maps automatically. What the camera does in an ADAS environment is capture information from the scene, package it up at a very small bandwidth of 10 kilobytes per kilometer, and send it to the cloud, and in the cloud we have technology to build high definition maps automatically. I'm going to have a section about that a bit later today.
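The exact REM payload is not described here beyond the roughly 10-kilobytes-per-kilometer figure; as a rough illustration of how sparse landmark observations can stay within that budget, here is a hypothetical, simplified record layout (field choices and counts are assumptions, not Mobileye's format).

```python
import struct

# Hypothetical, simplified road-segment record -- NOT Mobileye's actual REM
# format -- just to illustrate how compact crowdsourced map data can be
# relative to the stated ~10 KB per kilometer budget.
# One landmark: type id (1 byte), longitudinal/lateral/vertical offsets in
# centimeters relative to the segment origin (3 x int16), confidence (1 byte).
LANDMARK_FMT = "<BhhhB"                       # 8 bytes per landmark
PATH_POINT_FMT = "<hhh"                       # 6 bytes per drive-path sample

landmarks_per_km = 400                        # signs, poles, lane marks (assumed density)
path_points_per_km = 200                      # sampled drive-path geometry (assumed density)

payload_bytes = (landmarks_per_km * struct.calcsize(LANDMARK_FMT)
                 + path_points_per_km * struct.calcsize(PATH_POINT_FMT))
print(f"~{payload_bytes / 1024:.1f} KB per km before compression")   # ~4.3 KB/km
```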

The third customer for our sensing system is our driving assist products -- the Level 1, Level 2, and what I mentioned at the beginning, the Level 2+. This is also a customer for our sensing system. So when we break down the computer vision in this fully autonomous world, in which driving assist becomes a special case, we divide the detections into four buckets. One of them is road users: detecting all vehicles and other road users -- cyclists, pedestrians, people on scooters, whatever road users -- these are dynamic objects. This area is very, very rich in terms of how you specify it, and I'll show some more details about it.

Then there is the road geometry -- everything about the drivable paths that we see. So it's not only the landmarks; we need to break down the scene into drivable paths and understand what each drivable path means. Then there are road boundaries: all the delimiters -- it could be a curb, it could be a guardrail -- all the delimiters that we find on the road. And then the road semantics: what we can infer from a semantic point of view from the information that we see -- traffic signs, traffic lights, pavement markings, and so forth. So these are the four buckets. Now comes the task of reaching the 10 to the power of minus 4. This is an incredible number. The way we approach this is to create internal redundancies. These redundancies are not statistically independent, but take, for example, vehicle detection: rather than building one humongous black-box network, deep technology for detecting vehicles, we do it multiple times using different algorithms. These are the internal redundancies, and each one of those algorithms is end-to-end, so it doesn't rely on the other algorithms. This way, again, we don't need to fool ourselves about statistical independence; it's a software design approach to do the same task using different approaches, where each approach doesn't rely on the rest.

So, the way we look at it, there are several sources of information. One is appearance: you have pixels in the image -- this is appearance. A vehicle has a typical appearance, and you train a network on that typical appearance to detect the vehicles, for example. Then there is geometry: things that we find from perspective, from motion -- all sorts of classical cues that we as humans use in order to extract 3D, even though our sensors are two-dimensional. Our eyes are two-dimensional sensors, the camera is a two-dimensional sensor, yet we see 3D implicitly. We extract it, we infer it through the information; we don't have direct access to 3D information. This is called geometry-based. The same holds for detection and also for measurement -- measurement meaning lifting the two-dimensional information to 3D, because in order to control the vehicle, everything needs to be situated in a 3D environment, so we need to lift all the two-dimensional detections into 3D. Here also we have appearance, and we have geometry. And again, the idea is to use them as separate tracks -- not combining all this information into one algorithm, but using separate tracks.
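As a sketch of what "separate tracks" can look like in software, here is a minimal structure in which each detection engine runs end to end on its own and no engine consumes another's output; the engine bodies are placeholders, not Mobileye's algorithms.

```python
from typing import Callable, Dict, List, Tuple

# Minimal sketch of the "separate tracks" structure: each engine detects
# vehicles end to end on its own, and results are only compared at the very
# end. The engine bodies are placeholders, not real detectors.
Box3D = Tuple[float, float, float]   # (x, y, z) of a detected vehicle, in meters

def appearance_engine(frame) -> List[Box3D]:
    # Placeholder for a pattern-recognition network trained on vehicle appearance.
    return [(2.0, 0.0, 25.0)]

def geometry_engine(frame) -> List[Box3D]:
    # Placeholder for a detector based on geometric cues (parallax, motion, perspective).
    return [(2.1, 0.0, 24.6)]

ENGINES: Dict[str, Callable] = {"appearance": appearance_engine, "geometry": geometry_engine}

def detect(frame) -> Dict[str, List[Box3D]]:
    # Each engine runs independently; no engine consumes another's output,
    # so a failure mode in one track does not propagate into the others.
    return {name: engine(frame) for name, engine in ENGINES.items()}

print(detect(frame=None))
```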

So let me give you an example. I'm going to focus now on road users. Let's look at this scene -- there is a white bus here -- and we're detecting a road user using six different algorithms. One of them is classical pattern recognition: you're detecting a three-dimensional box situated on a vehicle using pattern recognition, so you have your trained network that scans the image and tries to find an area around which you can put a box that will fit the appearance of a vehicle. This is the classical approach; when people talk about vehicle detection, they normally refer to this. There's another one which is called Full Image Detection. Say you have a large truck or large bus very, very close to you, a few centimeters away from you. This is not something you can easily put a bounding box on; you need to approach it differently. It's still appearance, but it's a completely different algorithm for how to handle very, very close objects. I'll show you some examples later.

Then we have free space -- we call this top-view free space. All these green pixels are where the road is, and whatever is not road is suspected of being an object. So this is another way to do object detection. Then we have features like wheels; we use them also separately to detect objects. Then we have what we call VIDAR -- it's like LiDAR, but with cameras. The fact that we have multiple cameras means we can use them to generate a three-dimensional percept of the world using triangulation -- classic triangulation, but with the multiple cameras in the car -- and then we can feed that data into the LiDAR processing algorithms. As we said, we have two separate streams, so we already have algorithms that take LiDAR information and create an environmental model; instead of feeding them LiDAR, we feed them the information coming from the multi-camera stereo.
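The geometric core of building a 3D percept from multiple cameras is classic triangulation; here is a minimal two-view sketch using the standard direct linear transform, with made-up camera parameters -- it illustrates the principle, not the VIDAR network itself.

```python
import numpy as np

# Classic two-view triangulation (direct linear transform), the geometric core
# of turning multi-camera matches into a dense 3D percept. This illustrates
# the principle only; it is not the VIDAR network itself.
def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """P1, P2: 3x4 projection matrices; x1, x2: matching (u, v) pixels."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                        # 3D point in the reference camera frame

# Example: two cameras with a 1 m baseline along x (a long-baseline stereo pair).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([2.0, 0.5, 20.0, 1.0])        # a point 20 m ahead of the car
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))             # recovers approximately [2.0, 0.5, 20.0]
```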

Okay, so this is a completely different track, both for detecting objects and for measuring. So rather than developing one algorithm that takes all these sources of information and outputs the objects, we have six different algorithms, and we also create those internal redundancies. The same thing for measurement -- I want to lift from two dimensions to 3D. VIDAR, of course, gives me 3D information. A visual road model, using classical computer vision -- plane plus parallax, perspective, optic flow -- gives us a road model that we can use in order to get depth. We have a map, an HD map, which we build using our REM technology, that also gives us information about the shape of the road, which we use to extract depth. And we have another network that uses appearance in order to estimate range.

So, each one of them is a separate track. Rather than combining them all together, we have different engines to create the lift from 2D to 3D. And again, what's underlying all of that is reaching this 10 to the minus 4 probability, because it's not enough just to show that we can do autonomous driving. We have to do it in a way that, at the end, when we put all those streams together, we get a very, very low probability of mistake -- at least as good as humans, but really much better than humans; it has to be much better than humans.
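One simple way to picture how several independent 2D-to-3D engines can back each other up is a consistency check over their range estimates; the engine names, values, and tolerance below are purely illustrative.

```python
import statistics

# Hypothetical cross-check of independent range estimates for one detected
# vehicle. Each value stands in for a separate engine: multi-camera (VIDAR)
# depth, the visual road model, HD-map road geometry, and the appearance-based
# range net. Values and tolerance are illustrative.
estimates_m = {"vidar": 41.8, "road_model": 42.5, "map_geometry": 42.1, "range_net": 43.0}

def fuse_ranges(estimates: dict, tolerance_m: float = 2.0):
    """Return the median range and any engines that disagree with it."""
    median = statistics.median(estimates.values())
    outliers = [name for name, value in estimates.items() if abs(value - median) > tolerance_m]
    return median, outliers

range_m, disagreeing = fuse_ranges(estimates_m)
print(f"fused range: {range_m:.1f} m, disagreeing engines: {disagreeing}")   # 42.3 m, []
```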

Okay, so now let's go to examples of each. This is the full image detection. If you look here, when the object is very, very close to you, it becomes a very big challenge to detect it as an object, because there's really nothing here to say that there's an object. Let's look at this as well. Now, this is important when you're maneuvering in an autonomous driving situation, when you're maneuvering in very narrow situations: it's a two-way road, there's oncoming traffic, there are blockages, and now you're driving and the other road users are centimeters away from you. So it's a completely different approach for how to detect this kind of object. Let me show you an example of what this looks like. What you see here is the top view from the system. We're getting closer to a blockage, and now you can see this vehicle coming in here very, very close. These are the kinds of situations in which you need this full image detection to pass through. And you can see -- this is really autonomous; this is a drone view.

I'll show plenty more examples. So this is one of them. Another is: how do you track? How do you track an object among cameras? This is a network that uses what is called a signature. This is a technique from face recognition, where you take an image of a face and map it into a vector of numbers, which is called a signature, and that vector of numbers is invariant to facial expressions, whether you change your hairstyle and so forth. The same thing is being done here: you use a signature of an object and find that signature in other cameras, so you can do very, very long baseline tracking of objects. This is the range net -- using appearance in order to estimate range. What you see here is the classical way of measuring range using Kalman filtering; it stabilizes over time, this is the number of frames, and the red bar is the true range. And you see here, using a range net -- using the appearance to estimate depth -- we can get the range immediately from a single frame. This doesn't come instead of the classical; it is on top of the classical. Then another engine is pixel-level segmentation. You can see here that every pixel is segmented; the color of the pixel tells you what it is, so green is road, and you see here delimiters like the road edge in red.
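The cross-camera tracking described above amounts to matching appearance embeddings ("signatures") between cameras; here is a minimal sketch using cosine similarity and a greedy match, with the embedding size and threshold as assumptions.

```python
import numpy as np

# Sketch of signature-based association across cameras: each detection carries
# an appearance embedding (a "signature"), and the same physical object should
# have nearly identical signatures in different cameras. The embedding size
# and matching threshold are illustrative assumptions.
def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_across_cameras(tracks_cam_a: dict, detections_cam_b: dict, thresh: float = 0.8):
    """Greedily match track IDs from camera A to detection IDs in camera B."""
    matches = {}
    for track_id, sig_a in tracks_cam_a.items():
        best_id, best_score = None, thresh
        for det_id, sig_b in detections_cam_b.items():
            score = cosine(sig_a, sig_b)
            if score > best_score:
                best_id, best_score = det_id, score
        matches[track_id] = best_id        # None means no match above threshold
    return matches

rng = np.random.default_rng(0)
bus_signature = rng.normal(size=128)
tracks_front = {"track_7": bus_signature}
detections_side = {"det_1": bus_signature + rng.normal(scale=0.05, size=128),
                   "det_2": rng.normal(size=128)}
print(match_across_cameras(tracks_front, detections_side))   # {'track_7': 'det_1'}
```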

Let me run this again. What you see below is a segmentation into objects -- each color is a separate object. Here we're coloring over the road users, the cars and the pedestrians; let me run this again so that you can see. So, this is the pixel segmentation: every type of object has the same color. And here, this is the object detection: every color is an object. And this is the same thing at night. Again, this is a stream running parallel to everything else in the system; it is not an engine that is combined with other engines of the system, it is a completely separate subsystem. This is from surround cameras, so the same thing is being computed from every camera.

Then we tell ourselves, there are certain objects that occur frequently, and for those objects we would like to have a model-based detection, so we train a network for that particular class -- for example, doors. A door is tricky because it's hovering above the ground plane, and getting measurements of something hovering above the ground plane is tricky when we talk about cameras. And it's sufficiently important from a safety-critical point of view that you want to do something dedicated for it. So you can see that doors are detected as a separate class of objects. On the right-hand side you can see it -- this is the door -- and how the autonomous car maneuvers its way around the open door, calculating a new trajectory in order to bypass it. Another separate class of objects is strollers. You can see here the stroller; it is sufficiently important from a safety-critical point of view -- the last thing that you want to do is hit a child -- so it warrants a separate detector. And you can see on the right-hand side an example of a stroller coming in and the autonomous car taking notice of it. Then comes another engine, the parallax net.

So, this is using geometry. Every pixel is now colored based on its height from the ground plane: red is above the ground plane, blue is below the ground plane, and the ground plane is not necessarily a plane -- it could be something that is curved. This is a network that is trained also on moving objects, not only a stationary world. Again, this is a separate engine that is used for measurement and also for detecting objects -- if something is above the ground plane, at least you know there's something there, before you even know whether it's a car or not.

Then there is what we call VIDAR. We didn't coin the concept -- in academic circles it's called pseudo-LiDAR. It's a deep network that takes the multiple cameras that we have in the car and combines them all together to do triangulation and get a dense 3D percept. There are many uses for it. For one, there are all sorts of hovering objects, protruding objects, that you need to know about, and for those you need precise 3D. These are examples of the cameras whose images are fed into this kind of network. And then when you look at it, you get a 3D percept just like from a LiDAR, but at much higher resolution, because we're talking about the existing cameras in the car.

So it's not that we have stereo units; we have 12 cameras around the car, so it's long-baseline stereo, in academic terms. Here's another clip while we're moving. So again, this is a separate route, a separate engine. It's not combined with all the other engines that I mentioned before; it's a separate engine that can be used for detecting objects and also for measuring the range to those objects. And here's an example of how we take the VIDAR data and feed it into the LiDAR processing stream that we have in order to detect objects. We take the same algorithms that we have in the vehicle with radars and LiDARs and feed them with the VIDAR information rather than with LiDAR information, in order to get a stream where we detect objects and measure the range to them.

Here we can see how these different types of streams of information help us to define obstacles. Here is a route which has many cars that are obstacles, and the system needs to figure out whether it's an obstacle that you need to overtake or whether it's a traffic jam where you need to simply wait until it clears. It's kind of tricky, because your view is limited and the object which is the obstruction is occluding, but there are all sorts of tells. For example, if you do pixel labeling and you know the pixels of the road, and you see pixels of road behind the obstruction, then you know it's not a traffic jam. So it's putting all this information together in order to understand what the obstacles are and overtake them. Then there are the semantics: knowing about blinkers, emergency vehicles, knowing what pedestrians are doing -- let me run this again so that from the beginning we see what this is. Okay, so we know the orientation of the pedestrian, and we can go deeper than that and also understand hand gestures -- to know what it means: come closer, you can pass, stop, I'm on the phone. This is also part of the system, part of the semantics.
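The obstacle-versus-traffic-jam decision described above can be pictured as a small rule over cues like road pixels visible beyond the stopped vehicle and blinker semantics; the flags and the rule below are a hypothetical illustration, not Mobileye's actual driving policy.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "obstacle or traffic jam?" decision described
# above. The input flags (from pixel labeling and semantics) and the rule
# itself are illustrative, not Mobileye's actual logic.
@dataclass
class StoppedVehicleContext:
    road_pixels_visible_beyond: bool   # free-space labeling shows road behind the stopped car
    hazard_lights_on: bool             # semantic cue from the blinker detector
    queue_ahead_is_moving: bool        # vehicles ahead in the same lane creep forward

def should_overtake(ctx: StoppedVehicleContext) -> bool:
    # A moving queue suggests a traffic jam: wait. Road visible behind the
    # blockage, or hazard lights, suggest a stationary obstruction: overtake.
    if ctx.queue_ahead_is_moving:
        return False
    return ctx.road_pixels_visible_beyond or ctx.hazard_lights_on

print(should_overtake(StoppedVehicleContext(True, False, False)))    # True  -> plan an overtake
print(should_overtake(StoppedVehicleContext(False, False, True)))    # False -> wait for the jam
```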

So now let me put all this together, and I'm going to show you a clip from our autonomous car -- again, we're talking about the stream with only cameras. It's a car fitted with 12 cameras: eight long-range cameras and four parking cameras. In terms of cost, it's really nothing -- it's hundreds of dollars -- and it's all running on one EyeQ5 chip. The hardware has two EyeQ5 chips, but right now what is being processed is only one EyeQ5 chip. Just to give an idea of what this means, since everybody talks about chips and so forth: an EyeQ5 chip has two dies, each die is a 7-nanometer process with a 45-square-millimeter area.

So we're talking about a 90-square-millimeter area running what I'm going to show you, and 90 square millimeters for a chip is nothing. All these humongous chips that you know -- Xeons and others -- are hundreds of square millimeters, closer to hundreds or even 1,000. So we're talking about a very, very small footprint, very, very lean compute, doing what I'm going to show you right now. So this vehicle has 12 cameras and one EyeQ5 chip taking all the information from the cameras and doing end-to-end autonomous driving. Now, we are developing this with the mindset of being as agile as a human. So it's not only from an algorithmic point of view; there's also our model of safety, which is called RSS -- I'll say a few words about this later.

Putting those together enables agile driving. Now, agile driving is very, very important because the first use case of autonomous driving is going to be robotaxis. Before we can go and purchase a car, it's going to be robotaxis for a number of years. Now, a robotaxi is a business. As a consumer, you go and hail a robotaxi and it should take you from point A to point B. Who is going to hail a robotaxi knowing that it's going to take twice as much time to reach the destination? So it has to be agile, it has to drive like a human. This is in the center of Jerusalem. The driving style of Jerusalem is Boston plus, I would say. So it's really challenging -- people are not patient. If you want to merge into traffic, you need to show your intent, and you need to be serious about your intent; just the turn signal wouldn't do it, right?

Okay, so now let's have a look at this. We have here three panes. This is from a top view -- this is a drone; this is the autonomous car, you see here the Intel Mobileye logo. This is what the computer sees, kind of 360 degrees. And this is just to show you that we're not cheating -- it's really autonomous driving. This is sped up to twice as fast, because in the presentation I want to finish this quickly. So here's the autonomous car; you can see that it's a very narrow corridor, oncoming traffic, and it needs to find its way. It's a one-minute clip, so it will be over quickly. And now it's going to enter into an unprotected left turn -- now see what's going to happen. Okay, now we're showing you the pixel labeling. So, this huge bus simply stops. Okay, so now there's occlusion. So what do you do? You need to do an unprotected left, so now it's pushing its way. Now, there's a pedestrian coming -- you see this pedestrian; this is Jerusalem. Okay, and it has to push its way through, okay. So we're talking about a system with a bill of materials of about $1,000 -- that gives you a sense of the proportions. And this is one stream. There's another stream of radars and LiDARs that, at the end of the development, will be put together to reach the 10 to the minus 7 probability.

Okay, I'll skip this. So, in order to show that we're not simply taking a section of a few seconds out of 10,000 sections and showing only the successful one, we put up this morning the entire route. It's a 20-minute ride starting from Mobileye, ending at Mobileye, going into the heart of Jerusalem. It's available, it's in 4K, and it's at original speed -- it's not fast-forwarded -- so you can see everything. You can see the strengths, you can see the weaknesses -- I don't think that there are weaknesses, but if there are, you can see them, and it's there for all to see, right? So I think it's very, very powerful. I've seen this morning a discussion on Reddit -- pages and pages and pages of discussion. I think this is what the industry needs: to be transparent, to show what we're doing and not to be shy about it. The entire route is available on our YouTube channel.

Okay, so now let's go into mapping, which is a very important pillar. It's an important pillar to power autonomous driving, but it has other business use cases. As I mentioned before, it's ADAS-enabled vehicles -- there are tens of millions of those vehicles -- powered in a way in which critical information from the scene can be extracted and sent to the cloud. In the cloud -- we call this aggregation -- we build high definition maps, and then we send them back to cars in the Level 2+ category, and of course our autonomous driving critically relies on these HD maps.

So, in terms of harvesting, the three carmakers that we announced two years ago -- BMW, Nissan, Volkswagen -- have already launched, and we receive about six million kilometers of data per day. We'll be able to map all of Europe by next quarter and the entire U.S. by the end of the year, and we have three additional OEMs whose names I am not at liberty to say, so we have six OEMs with contracts for sending data. By this year, we will have 1 million cars sending data. By 2022, it's supposed to be about 14 million cars sending data. So this is very, very powerful. You can see here data from the BMW vehicles -- BMW was the first to launch -- so you see the coverage in the U.S. after a few weeks. This tells you where the areas are where people can afford to buy a BMW. Okay.

And this is Europe, and as I said, all of Europe by next quarter, most of the U.S. by the end of this year. Okay, so what are we doing with this data? Today, with our technology, we can automatically -- again, no manual intervention, the data comes from the cars and goes to the cloud -- build high definition maps for all roads above 45 miles per hour. I'm not talking about highways; it's more than highways. A 45-mile-per-hour road is El Camino in Palo Alto, where you drive traffic light after traffic light after traffic light. So we're talking about a rich set of roads that today are mapped completely automatically. Of course, our ambition is more than that; we believe in full automation of all types of roads by 2021. Here's an example of the mapping -- this is in Las Vegas -- and all of this is done automatically.

In terms of China, China is a very important territory for us, and it's growing in all aspects, across the whole spectrum of our activity in ADAS: 25% of all our design wins in 2019 were in the Chinese market. In China, again, we need to comply with the regulations, which are very complex. We have a joint venture that was signed a couple of months ago with Tsinghua Unigroup, with a mapmaker as part of the joint venture, to allow us to start collecting data for maps. We announced a month ago with NIO that they will be using their fleet of tens of thousands of vehicles to start harvesting data. With NIO, it's more than that: they'll be taking our self-driving system and using it built into their cars starting by 2022. That will be our first kind of trial in consumer AV, in China. And then recently -- this is news that was announced this morning -- with SAIC, together with their mapmaker, called Heading, we will start harvesting data and using the high definition maps to power Level 2+ in China for Shanghai Automotive. So you can see that we are taking the right steps in order to be very active in China.

Now, there are also Smart Cities opportunities: the data that we collect from vehicles can power smart cities. And the more we study this market, the bigger it looks. There's infrastructure asset inventory -- we have a strategic collaboration with a U.K. mapping company that we announced a year ago. There is pavement condition assessment, a survey that municipalities pay for every year. There's dynamic mobility -- what happens with road users, with pedestrians, jaywalkers and so forth -- that is data that municipalities pay for. The market is billions of dollars, really, and we have studied it in great detail. And here's an example of harvesting, in which we collect everything from the scene that is useful in terms of survey and asset inventory and send it to the cloud.

Here's an example of pavement condition. This is an area in Israel in which we color code the pavement conditions, so you can see where it is poor; green is fine. Here's an example of a poor pavement condition -- the next picture shows you how bad the road is. Here's another example where you have a pothole, also poor pavement condition. By the way, detecting potholes, and cracks that could become potholes, is very, very important, because when consumers damage their vehicle on a pothole, the city is liable, and it's millions of dollars, right? So if you can know in advance that you have a crack and you go and handle it, then it saves the municipality a lot of money. Now, all of this is done automatically. There's really zero cost to all of this: it's a car with an ADAS-enabled camera, this kind of information is very low bandwidth sent to the cloud, and we can then create value for cities. Here's another example of a road condition score -- you see here the crack. Again, all of this is automatic.

Okay. Let's go into the next pillar, which is RSS. One of the ingredients -- a critical ingredient in enabling the very, very nice demonstration that you have seen before in Jerusalem -- is not only the power of the algorithmic approach to decision-making, but also the model of safety, because you need to define what it means to be safe. Now, it sounds simple, but it is not. When you look at traffic laws, there are many things that are well defined: when you reach a red traffic light, you need to stop; when you reach a yield sign, you need to slow down; when you reach a stop sign, you need to stop. These are well-defined things. But then there is this duty-of-care kind of bucket, which is: you need to be careful.

Now, what does it mean to be careful? There are societal norms -- being careful in Boston is different from being careful in Phoenix, Arizona. So what does it mean to be careful? There are societal norms about that, there are legal precedents. Humans can live with ambiguity; machines cannot. So we need to define what it means to be in a dangerous situation, because if you don't have a formal definition, then you become conservative -- you don't take risks because you haven't defined what dangerous is. And this is what RSS is doing. In layman's terms, the idea is to define the assumptions that we humans make about other road users. For example, if I'm driving on a multilane highway and there's a car next to me, I'm assuming that that car is not going to spontaneously turn the steering wheel and hit me, because if this is something that could happen, I'd never drive side-by-side with another vehicle, right? So we make assumptions. If we're driving behind another car, we keep a safe distance; we are making an assumption about what the braking force is. Again, this is very, very implicit -- we're not mathematicians -- but we are making assumptions about what a reasonable braking force is, and based on that we maintain a safe distance. When we have the right of way and there is another vehicle on a secondary road that needs to stop, we know that right of way is given, not taken. So we kind of calculate to ourselves: can this car stop to give me the right of way? If it cannot stop -- it's coming too fast, it cannot stop -- I'm not taking my right of way; I'll give it away, right?
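The safe-following-distance assumption is the part of RSS that the published paper makes fully explicit as a closed-form bound; here is a sketch of that longitudinal formula, with the response time and acceleration/braking parameters as illustrative values (in RSS, these are exactly the parameters meant to be set with data and with regulators).

```python
# Sketch of the longitudinal safe-distance bound from the published RSS paper:
# during its response time rho the rear car may accelerate at up to
# a_accel_max, after which it brakes at no less than a_brake_min, while the
# front car may brake as hard as a_brake_max. Parameter values below are
# illustrative only.
def rss_safe_distance(v_rear, v_front, rho=0.5, a_accel_max=3.0,
                      a_brake_min=4.0, a_brake_max=8.0):
    v_rear_worst = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_worst ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# Example: both cars traveling at 20 m/s (about 72 km/h).
print(f"{rss_safe_distance(20.0, 20.0):.1f} m")   # roughly 43 m with these assumed parameters
```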

So we make assumptions, and the idea is to make those assumptions formal, make those assumptions clear, so that we can engage with regulatory bodies about them. Some of these assumptions are parameters, and we want to set these parameters based on data and based on engagement with regulatory bodies. Then the model takes the worst case, meaning that we're not predicting what the other driver would do; we're assuming the worst. Once we make clear what our assumptions are, we take the worst case. And then we need a mathematical model to show that we can apply the inductive principle such that we never get into a dangerous situation as defined by the assumptions. And if we are in a dangerous situation because others put us into it, there is a proper response to get out of it without causing anyone else to be in a dangerous situation. And then there is theory that proves that if all actors behave this way, there will never be accidents. That's what RSS is, and that says everything there is to say about it.

This paper was published two years ago, and we have been working diligently -- this area has been led by Intel -- to start having discussions with regulatory bodies, with industry players, with partners, with carmakers, to start standardizing it. And again, it is built in a way that is technology neutral. It doesn't have any benefit to Mobileye or to any particular actor. It is the understanding that if we don't standardize it, no one will be able to launch autonomous driving -- maybe for demonstrations, but not as a business.

The latest, which was announced a few weeks ago, is a new IEEE body that has been established. It is inspired by the RSS principles, and it is chaired by Intel -- by Jack Weast from Intel. There has also been a safety report mentioning RSS, and endorsements by all sorts of actors. In China, it's gone even further than that. There was a technical body that was established a month after the RSS paper was published, and it has finished its work -- it basically standardized RSS in China, and it will take effect in March 2020. So the Chinese are working fast, and we believe it's a long process, a long journey, but eventually this needs to be standard wherever autonomous driving is going to be deployed.

Okay. And the last thing is our Mobility-as-a-Service -- the business status. Again, remember, we have Level 2+, we're building autonomous driving, we're building a self-driving system, and we're also building the entire stack for a service, Mobility-as-a-Service. We mentioned a few months ago -- it was last year -- that we have a joint venture with Volkswagen and Champion Motors in Israel to launch a commercial Mobility-as-a-Service in early 2022 -- no driver behind the steering wheel -- with about 200 vehicles in Tel Aviv, and then it will scale to all of Israel. And then, of course, we'll use all the lessons learned to ramp up outside of Israel. In parallel -- this was announced a few weeks ago -- RATP is the biggest public transport operator in France, and we have an agreement to work together on deploying self-driving commercially in Paris. It will start in October this year with a safety driver, using the kind of car that I've shown. There are going to be a number of them in Paris, taking passengers in October 2020, and the work is towards removing the driver in a 2022-2023 timeframe.

We announced today that we have something very similar to this joint venture with Daegu City in South Korea, with commercial deployment in 2022. So while we're deploying in Tel Aviv, we will deploy in France, and we will deploy in South Korea. We mentioned a month ago the deal with NIO, which is also a vehicle deal for our robotaxis: NIO will be taking our self-driving system and, in a 2022 timeframe, they will launch consumer AV vehicles in China, and we can purchase those vehicles and use them for our robotaxi fleet as well. So it's a double advantage: one, it's our first experiment with consumer AV in China, and also, we have a platform for our robotaxi.

In terms of hardware, the hardware that's running today has two EyeQ5s. What will be launched in 2022 is hardware with six EyeQ5s. I don't think that we will need six, but we don't want to take risks, so we'll have enough computing power to do whatever we wish to do -- and this is still very, very lean in terms of compute. This is going to be ruggedized and automotive qualified.

The EyeQ6 is coming out -- it is going to be sampled this year -- and we will deploy it in 2023. Then the entire hardware will be running on a single EyeQ6 chip, and this is our path towards 2025 to reduce the cost of the self-driving system to below $5,000. Our self-driving system for the joint ventures that I mentioned before -- Tel Aviv, France, South Korea -- will cost somewhere between $10,000 and $15,000; we will be able to downsize it to less than $5,000 in 2025. And this is critical: $5,000 becomes a reasonable number for consumer AV. Again, premium cars, but this is how things start -- it starts with premium cars.

So, to summarize the main takeaways. First, Level 2+ is a growing new category, both in terms of incredible value to consumers and as a very, very interesting business potential. Surround computer vision unlocks considerable value: realization of a safe Level 4, and also unlocking the full potential of Level 2+, requires surround computer vision, and here we need to build it as a standalone, end-to-end capability. As I showed you in the clip, just with cameras we can do self-driving. On its own it's not going to be safe enough -- we need radars and LiDARs as a separate stream -- but in a Level 2+ context, in which a driver is responsible, this is very, very critical. The fact that we can support full end-to-end autonomous driving just with computer vision requires considerable thought. These are all the internal redundancies that I mentioned: it's not enough to show that we can detect an object; we need to show that we can detect an object at infinitesimal probabilities of mistake, and for that we need all those internal redundancies.

Level 2+ requires HD maps, but those HD maps are not confined to a zip code; they have to be everywhere. This is where the power of our crowdsourced technology comes in. Consumer AV requires HD maps everywhere -- again, this is where the power of our crowdsourced technology comes into play. The crowdsourced REM data is of great value to smart cities; this is a new business line that we are building.

RSS -- we call this regulatory science. This needs to spread more and more, to be evangelized, to be transparent, in order to engage with regulatory bodies and take autonomous driving from the realm of a science project to the realm of a real business. And then, the road to consumer AV goes through robotaxis -- there is no way to jump from where we are today to consumer AV without going through the robotaxi phase.

There are a number of reasons. One of them is the cost that I mentioned. Another is that, from a regulatory standpoint, it is much easier to regulate an operator than to regulate a consumer. An operator operates a fleet of vehicles and has reporting responsibilities, back office responsibilities, tele-operation responsibilities -- all of that you cannot put on a consumer. So you need a few years of robotaxis, from a regulatory standpoint, before you go to consumer AV. This is why robotaxi is our next step. And all these ventures that we mentioned -- with Volkswagen in Tel Aviv, with Daegu, with France -- are in the robotaxi area. We build a robotaxi as a step towards consumer AV, and while we're doing that, we build a business, which is Mobility-as-a-Service. Thank you.

Question-and-Answer Session