Seeking Alpha

Intel Corporation (INTC) Management Presents on J.P. Morgan Tech Conference at CES (Transcript)

About: Intel Corporation (INTC)
by: SA Transcripts

Intel Corporation (NASDAQ:INTC) J.P. Morgan Tech Conference at CES January 7, 2020 3:25 PM ET

Company Participants

Tom Lantzsch - Senior Vice President and General Manager, IoTG

Conference Call Participants

Harlan Sur - J.P. Morgan

Harlan Sur

Okay, let’s go ahead and get started. Okay, great. Alright. Good afternoon, and thank you for attending our 18th Annual Technology Investor Forum at CES. Very pleased to have Tom Lantzsch, Senior Vice President and General Manager of Intel’s Internet of Things Group, or IoTG, which is particularly relevant given the focus here at CES on more compute intelligence and more connectivity in areas like retail, manufacturing, transportation and industrial. I’ve asked Tom to start us off with a description of his role and responsibilities at Intel and to provide a brief background on the Internet of Things Group.

But before I turn it over to Tom, let me just read their Safe Harbor statement. As a reminder, today’s discussion may contain forward-looking statements, which are subject to risks and uncertainties. Please refer to Intel’s SEC filings for risk factors that could cause actual results to differ materially.

So, with that, Tom, thank you for joining us. Let me turn it over to you.

Tom Lantzsch

Great. Thank you, Harlan, and thank you everyone, for making the time to spend with us today. Yes, let me give you a little perspective about this business, probably the most unknown business at Intel, although we’re pretty proud of what we’re doing over here.

I joined Intel about three years ago. Prior to joining Intel, I had spent 10 years at ARM, so I think I have pretty good exposure to the entire semiconductor industry, and I’ve been in this business now for, I hate to say it, I think it’s my fourth decade. So, it’s sort of scary.

But about 2.5 years ago, we embarked on what we thought was the right strategy for Intel in this IoT space. At the foundation of that strategy, taking a look at the entire semiconductor world, we believed that our focus needed to be, first of all, on businesses: enterprises, schools, governments, non-consumer.

Harlan Sur

Yes.

Tom Lantzsch

So, that’s the first point. As part of that, to deal with what we thought were the compelling events and compelling applications, we believed that, with all the data being created by these connected things, the solution wasn’t going to be simply sending all of that data to the cloud. And there were a lot of reasons for that and you guys have heard about them: latency, the amount of data created, blah, blah, blah, lots of reasons.

And so, we embarked on a fairly aggressive strategy and focused our efforts on what we called at the time consolidation of workloads at the edge of the network. We weren’t talking about edge computing then, but it was really about this notion of consolidating applications closer to where the data was created. And we leveraged a lot of the traditional software resources that were historically in the cloud and brought them down closer to where the data was created.

So, that is the foundation of our strategy. And we believed that required us to do a couple of other things to support it. One was, we now needed differentiated silicon to enable that. So, no longer were we purely taking silicon from businesses like our client group or our data center group; we were also beginning to do differentiated silicon to enable those applications, both on the CPU side and on the accelerator side of the business.

So, we embarked and started to do differentiated silicon, and that will become more and more apparent this year. Actually, we announced our first product at the end of last year; I’ll talk about it briefly in a few minutes.

The other thing we looked at was, what’s the killer application that’s going to drive these applications across this very fragmented vertical space? Because as you guys know, this IoT space is made up of hundreds of different companies and applications, so the question is how you deal with this fragmentation issue. And we thought that the killer app, the real impetus for this change for these customers across these verticals, was computer vision applications.

Adding cameras changed everything. And it didn’t matter if it was in the retail space, the smart city space, the manufacturing space or education. You name it, integrating cameras with inference technology at the edge was sort of the killer app. So, we doubled down heavily there.

So, that was sort of the killer app. And then there were two other pieces of supporting technology, or activities, that we did. One was, we needed to fundamentally change the developer environment around enabling these applications. There we invested pretty heavily, initially in computer vision applications, to enable a developer community around that. We announced this developer environment called OpenVINO. Really, it was about democratizing AI and inference technology at the edge of the network, and enabling our customers to use AI in these various computer vision applications.

As you can well imagine, pick whoever you want, your favorite retailer, your favorite fast food restaurant, your favorite manufacturing location, it’s very difficult for those customers to get access to these developers. And so, we needed to create this environment to address that.

And as an outcome of that, we created this thing called OpenVINO, which is this developer environment that actually makes it very easy for customers to develop AI-based computer vision applications and write the code once. It determines which hardware it runs best on. It could run on a CPU, it can run on a GPU, it can run on an FPGA, it can run on a neural net accelerator.

So, we did that, long story short. We announced it in June, and we’re adding about 10,000 developers a month. So, I mean, it’s going great. We’re over 150,000 developers worldwide and continuing to grow.
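
To make the "write once, run on whatever hardware is there" idea concrete, here is a minimal sketch using the OpenVINO Inference Engine Python API as documented for the 2020-era releases. The model files, the input image and the device choice are hypothetical placeholders, not anything Intel showed at this event.

```python
# Minimal sketch of OpenVINO's "write once, pick the target later" flow
# (Inference Engine Python API, circa 2020). File names are placeholders.
import cv2
from openvino.inference_engine import IECore

ie = IECore()

# A model already converted to OpenVINO's IR format (hypothetical files).
net = ie.read_network(model="person-detection.xml", weights="person-detection.bin")

# The same network loads onto whatever hardware is present:
# "CPU", "GPU", "HETERO:FPGA,CPU", or "MYRIAD" (Movidius).
exec_net = ie.load_network(network=net, device_name="CPU")

# Reshape one camera frame to the network's expected NCHW input layout.
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
frame = cv2.imread("frame.jpg")
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

# Inference runs at the edge; only the (small) result would leave the box.
result = exec_net.infer(inputs={input_name: blob})
print({name: out.shape for name, out in result.items()})
```

The point of the sketch is that only the device_name string changes between a CPU, a GPU, an FPGA (via the HETERO plugin) or a Movidius VPU; the application code itself stays the same.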

And then, last but not least, our end customers needed solution providers, ISVs, to help them with their use cases and solve their problems. So, we created an activity we call Market Ready Solutions. It’s effectively ISVs around the globe, focused on very specific verticals, creating applications on top of these solutions that we’ve done.

And just to put numbers on it, we had a thousand different applications from ISVs this year trying to be a part of this program. We picked 300, and we scaled out with our end customers, over 10,000 of them in year one. So, we got a lot of traction across the board on this.

So, that’s a little bit about the business. One thing that, again, most people don’t realize is that our average selling price on our products is over $100. We’re not cheap-and-cheerful guys. We’re high-performance compute guys.

Harlan Sur

Great. That was a great snapshot. IoTG is on roughly a $4 billion annualized revenue run rate, and it’s been growing at a double-digit percentage year-over-year clip. Can you just shed some light on the sustainability of this growth? What are the greatest revenue drivers for your businesses? And what have been some of the key end markets and applications that have driven this growth?

Tom Lantzsch

Yes. Like I said, I think the catalyst of vision systems has been the application that spans across all of these verticals. I can honestly say, every one of our verticals is growing pretty fast. I mean, they’re all growing double digits. I inherited a core business that clearly is not growing at that rate; it’s traditional. I call it traditional embedded compute, but even that business is pretty nice and has grown at probably a high single-digit rate.

Our business is geographically dispersed, and everywhere from education to healthcare to factory automation, it’s fairly distributed. I can’t say that there’s one vertical that’s more prevalent than the others. They’re all growing fairly strongly.

The other indication on the sustainability is, as you said, we passed our first billion-dollar quarter in the third quarter, which is nice. Obviously, we’re pretty proud of that. The question that I ask my team is not whether we passed a billion-dollar quarter, but how fast we were going when we passed it, what the speed past it was. And we track design wins, as most companies do. Through the first three quarters of this year, we had more design wins than we did in all of last year.

So, it gives me confidence that we can continue this growth at least for the foreseeable future. Our challenge this year, as you know, and it’s been well publicized, is that unfortunately we haven’t had enough product for customers, which has been a little bit of a struggle through the first three quarters of the year. So, that’s been a little bit of an inhibitor, even though we’ve had incredible growth rates year-to-date.

Harlan Sur

Are customers getting what they need from you now?

Tom Lantzsch

You know, I just met with some today, and so far so good, but it’s hard, right? It’s just painful for our sales teams. Certainly, we don’t like putting our customers in this situation, but so far, we’ve been able to scrape and claw.

Harlan Sur

A follow-up question on the growth and the sustainability of the growth. How does Intel’s approach for IoT differ from that of, you know, the traditional PC and data center markets? You mentioned a vertical approach, partnering with market leaders, differentiated silicon, Market Ready Solutions and system-level support. Can you just provide some examples?

Tom Lantzsch

Yes. I think what probably makes us most unique is the fragmentation of this space versus the businesses the rest of my colleagues run. I mean, maybe take our PSG, what was our Altera business; let’s put that aside for a second.

With the exception of that business within Intel, our business is really a big channel play. And we’ve got really strong channel partners around the world that we have developed over the years, at multiple layers of the value chain: ODMs, distributors, ISVs.

What I think we’ve done a really good job of, and one of the reasons that we have to have this vertical focus, is that at the end customer, the real consumer of this technology, for each of these vertical solutions that we take to market, the ISVs are typically not large. They’re smaller firms. They’re focused on a specific vertical and they’re geographically oriented.

So, you may have a great manufacturing ISV partner in Western Europe, but it’s totally irrelevant to us in Japan. So, I think one of the secret weapons we have is our brand and our global position, and the strong channel that we’ve been able to establish over the years. We just had an event in India, okay, I’ll pick India, and 1,200 customers and partners showed up to that event. I mean, that’s the kind of scale that we can bring to a partner. I had a similar one in China this year, hundreds of people, right?

We did a webcast with 18,000 people. 18,000 people logged into a one-day webcast where we were talking about AI at the edge. So, it’s a great advantage, frankly, that we have, well beyond the technology that obviously we love and create every day.

Harlan Sur

Intel, to my knowledge, started talking about edge computing a while back, and more and more of your compute peers are talking about edge computing, but I think you guys defined it about two or three years ago. Can you tell us about Intel’s journey to the edge, and how does Intel define edge?

Tom Lantzsch

Yes. Well, the way we sort of define it, it’s not things, right?

Harlan Sur

Yes.

Tom Lantzsch

I mean, we do some silicon for what I call things; a camera would be a thing in my world.

Harlan Sur

End points, end points there?

Tom Lantzsch

Yes, a camera would be a thing. I sort of think things that think are interesting; things that are smart are not so interesting. I think other people can do that better. So, we’ll do neural network-based technologies on smart cameras, because we believe there’s a natural loop with the next connection.

So, we define the edge in two different dimensions. There’s an on-premise edge, which is predominantly where I focus, really on vertically specific use cases: factories, electrical grids, stores, whatever. That’s one activity, where all these things get connected and then this processing happens.

Then there’s a network edge, where I collaborate with a different group, our networking group, which mostly serves the carriers. That’s CDN-like technology. So, we don’t define the edge in terms of things. Things are not really our sweet spot. We do some things, but that’s not really our sweet spot.

Harlan Sur

So, I’m sort of a visually oriented guy. When I’m thinking about, let’s say, a smart factory floor, where do I see your products?

Tom Lantzsch

So, here would be an example of the areas where we would participate…

Harlan Sur

Yes.

Tom Lantzsch

…and things that we’re doing differently. So, if you take a look at a discrete manufacturer, a car manufacturing facility, at one of their manufacturing modules, traditionally they’ll have several different controllers running that module. They’ll have programmable logic controllers, PLC systems. Those PLCs are usually cards, and they usually have MCUs on them. Here’s what we’re doing differently: we’re virtualizing all those PLCs. So, what was a microcontroller business? We virtualized them in software and put them on a Xeon server.

Harlan Sur

Yes.

Tom Lantzsch

Why do customers want that? Why is that important? Because they can upgrade these things on the fly. They can have redundancy on the fly. They don’t have cards to go manage. They can remotely manage these activities. So, that would be like one module. But they’d also then have a camera system to do visual inspection. Today, most of that’s in a separate box. It’s a separate box that they have to manage, a separate application that they have to manage. And so, what we’re doing now is, we just make that an application and put it on the same basic platform.
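
As a rough illustration of what "virtualizing a PLC" means, here is a minimal, purely hypothetical sketch (not Intel’s actual software) of the classic PLC scan cycle written as an ordinary software task. Once the control loop is just code, it can run alongside other workloads, such as the vision-inspection application, on the same server.

```python
# Purely illustrative sketch of a PLC scan cycle as software: read inputs,
# evaluate ladder-style logic, write outputs, on a fixed period.
import time

SCAN_PERIOD_S = 0.010  # 10 ms scan cycle, a typical PLC-style period


def read_inputs():
    # A real soft PLC would poll fieldbus I/O here; we fake two sensor bits.
    return {"part_present": True, "jam_detected": False}


def evaluate_logic(inputs, state):
    # Ladder-logic-style rules expressed as plain code.
    run_conveyor = inputs["part_present"] and not inputs["jam_detected"]
    state["parts_seen"] += int(inputs["part_present"])
    return {"conveyor_motor": run_conveyor, "alarm": inputs["jam_detected"]}


def write_outputs(outputs):
    # A real system would drive actuators; here we just log the output image.
    print(outputs)


def plc_task(cycles):
    state = {"parts_seen": 0}
    for _ in range(cycles):
        start = time.monotonic()
        write_outputs(evaluate_logic(read_inputs(), state))
        # Sleep out the remainder of the scan period to keep timing regular.
        time.sleep(max(0.0, SCAN_PERIOD_S - (time.monotonic() - start)))


if __name__ == "__main__":
    plc_task(cycles=5)
```

A production soft PLC would of course talk to real fieldbus I/O and meet hard real-time guarantees; the sketch only shows the structure of the scan cycle that gets consolidated onto the server.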

So, if I had to explain this to, let’s say, my mother in simple terms, it’s sort of like 2007, before we all had one of these things: we had a GPS unit, we had an MP3 player, we had a camera, we had a phone, we had all these separate things. And all we did was put them together and make them into this one thing. That’s sort of what we’re doing.

Harlan Sur

Right.

Tom Lantzsch

And as these sensors get connected in these various locations, we’re putting that all together, increasing the amount of compute, and then only sending a subset of that data to the cloud. I’ll use a car as an example, our Mobileye team, just to talk about the amount of data that’s getting created. For every kilometer a single-camera Mobileye system gets driven, it creates about 4.5 gigabytes worth of data. You can’t send all that to the cloud. It’s just impossible.

Harlan Sur

Right.

Tom Lantzsch

And so, in the mapping applications that they do, we actually run inference on that data. We pick up things like pedestrians and bikes and traffic signs and all sorts of information, and we only send about 10 kilobytes to the cloud. That’s roughly a 400,000-to-one reduction of data created versus data that gets sent to the cloud. And the same thing is happening in the industrial space. They can’t handle that much data, and they don’t need to.
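
As a quick back-of-envelope check of the figures quoted above, assuming decimal units (1 GB = 10^9 bytes, 1 KB = 10^3 bytes):

```python
# Back-of-envelope check of the Mobileye data-reduction figures quoted above,
# assuming decimal units (1 GB = 1e9 bytes, 1 KB = 1e3 bytes).
data_created_per_km = 4.5e9   # ~4.5 GB of raw camera data per kilometer driven
data_uploaded_per_km = 10e3   # ~10 KB of distilled map data per kilometer

ratio = data_created_per_km / data_uploaded_per_km
print(f"Reduction: about {ratio:,.0f} to 1")  # ~450,000:1, in line with the
                                              # "roughly 400,000 to one" cited
```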

Harlan Sur

Right. There's someone here. Wait for the microphone.

Question-and-Answer Session

Q - Unidentified Analyst

Thank you. I appreciate the time. I’m a little out of my depth here, but on the processing that’s happening at the edge, you talked about the incredible amount of data that’s being created and the difficulty of managing it. Are you supporting the various Big Data frameworks at the edge? And is that really where the value proposition is, deploying the chip, low power, and then supporting TensorFlow or whatever the case might be to do analytics at the point of ingestion?

Tom Lantzsch

Yes.

Unidentified Analyst

So, you still need those frameworks for AI analytics, data ingestion and analytics on top of the architecture?

Tom Lantzsch

So, yes, for inference, computer vision inference or inference-based applications like you’re talking about, be it TensorFlow or PyTorch or whatever. This OpenVINO development framework, basically, what it does is take those frameworks and the models that were developed for those frameworks.

It puts them into a format for our end customers so that, when they go to deploy, it adapts to the hardware that’s available to them. Literally, they only have to write once, and it runs depending on the hardware that’s available, because a lot of these applications that we’re dealing with are constrained.

I mean, there is not an infinite amount of power in some cases. There’s not an infinite amount of bandwidth. They may be in a rough environment temperature-wise. So, it’s constrained compute, versus going to the cloud and just spinning up another instance. So, the answer to the question is yes. Those inference algorithms, those native frameworks, are all running locally at the edge to do this inference.

The training today still predominantly happens in the cloud; customers train those inference models in the cloud, so that doesn’t go away. But yes, the inference actually happens at the edge. So, the answer is yes.
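
For concreteness, here is a hedged sketch of the deploy-time half of that flow using the OpenVINO Inference Engine Python API (circa 2020): a model trained in TensorFlow or PyTorch and converted offline to OpenVINO’s IR format gets loaded onto whatever device the box actually has. The preference order and file names below are illustrative assumptions, not a prescribed Intel policy.

```python
# Pick the best available inference target at deploy time (OpenVINO Inference
# Engine Python API, circa 2020). File names and preference order are assumptions.
from openvino.inference_engine import IECore

ie = IECore()
# IR files produced offline by OpenVINO's Model Optimizer from a
# TensorFlow/PyTorch model (hypothetical names).
net = ie.read_network(model="model.xml", weights="model.bin")

preferred = ["MYRIAD", "GPU", "CPU"]  # Movidius VPU, then GPU, then CPU fallback
available = ie.available_devices      # e.g. ['CPU', 'GPU', 'MYRIAD'] on this box
target = next(d for d in preferred if any(a.startswith(d) for a in available))

exec_net = ie.load_network(network=net, device_name=target)
print(f"Deployed on {target}; devices present: {available}")
```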

Unidentified Analyst

Of your, of [indiscernible]?

Tom Lantzsch

Yes. And we can do it, like I said, on a CPU with accelerators. We’ll do it with a GPU if there’s a GPU accessible. Sometimes we may use FPGA accelerators. We also have another acceleration technology of our own, from a company we bought several years ago called Movidius. So, we use Movidius acceleration, which is neural net technology. It sort of depends on what the customer is trying to do. And it could be as simple as a PCIe card that’s slapped into that same system, so the customer can optimize their solution.

Unidentified Analyst

It seems like this AI at the edge is going to become increasingly important, whether it’s a factory floor, retail floor, smart building, whatever, because you just get inundated with all this data that has to be analyzed, and decisions need to be made in real time. And so, as you think about your design win pipeline and your engagements, what percentage of your future customer engagements and deployments at the edge are going to be sort of AI-focused?

Tom Lantzsch

I think, two or three years from now, we won’t be talking about AI-focused or not AI-focused, because it will all just be part of it. I mean, if you went to the press briefing yesterday, we’re integrating AI technology into new CPUs even for laptops. And if you saw the application where we showed how Adobe uses it for content creation, AI will just be, like, air, right? It’s going to be everywhere. And so, I don’t think we’ll be talking about it much, because it’s going to be everywhere. That’s my take.

Unidentified Analyst

So...

Tom Lantzsch

So, the answer is, three years from now, 100%, I don’t know about 100%, but the vast majority.

Unidentified Analyst

So, that leads into the next sort of discussion, which is that software and firmware become extremely important in this kind of scenario, right? So how important is it? How does Intel capture mindshare with developers and engineers working on these embedded applications and these AI-focused capabilities, when everybody’s working with different software architectures? How is Intel helping your partners sort of bring all this together?

Tom Lantzsch

Yes, like I said, I think we’re trying to enable sort of the next generation of developers. I’m going to pick an age group only because I don’t know how else to say it, but if you’re in your mid-20s and you want to be a developer, you have a specific developer mindset, which is probably not like the traditional developers that worked on embedded factory automation systems, who are probably my age. So, we’re trying to modernize that entire development framework and make it almost cloud-like in how we engage the developer community.

Again, we started with the computer vision activity, because we thought that was the first one. And the way we talk about it internally is democratizing these applications, and organizing. So, it’s this combination of democratizing and organizing. We don’t create frameworks; other people create frameworks. We just enable those frameworks to be used. We aren’t the best data scientists. We enable data scientists.

And more importantly, we try to enable people who aren’t data scientists to be data scientists, because there’s a finite number of them, and the appetite for applications is infinite across these various markets. So, we want to just create more and democratize more, and I think our scale does that. And here’s an example of sort of the progress we’ve made on that. There’s an online training company called Udacity. They do a lot of online training. They’re well-known; Microsoft uses them, Amazon uses them.

We offered a scholarship for training on AI inference at the edge. 15,000 people signed up in 48 hours; 30,000 people signed up in two weeks. It was the number one training class ever for Udacity, versus anything they’ve ever done. So, the appetite for this stuff is insatiable. And so, we’re just fueling it, enabling this next generation of developers to go do cool stuff.

And again, we’re Intel. We’re fortunate we can recruit some really good interns, so I get the opportunity to meet with them and some of that bright talent from universities and colleges around the world. And I ask them, “Do you like working on industrial automation stuff?” because you would think that they wouldn’t. And actually, they love it if you give them the right tools.

The problem is, people don’t give them the right tools and enablement. But if we give them tools that they’re used to, to do cool development, more like the way they’re used to doing development, not having to run around and chase boards, but actually just going to the website, spinning stuff up, getting the instances they need and doing evaluation, with the right kind of development environment, they love doing this stuff.

Harlan Sur

Yes.

Tom Lantzsch

And so that’s really our quest: to just create more and more developers in this democratized world, globally.

Harlan Sur

In addition to AI, the other strong technology adoption curve will be 5G. So, how does 5G impact the direction of your business going forward?

Tom Lantzsch

Yes. So, if you look at this business probably 10 years from now, I think one of the biggest verticals will certainly be industrial manufacturing. It’s a great opportunity. With 5G, you have to look at it over time horizons.

In the short term, probably the biggest impact it’s going to make is actually in manufacturing, enabling private networks in manufacturing systems, largely due to the latency capabilities, as a lot of these advanced factories want to get to more and more autonomous systems, moving from traditional factory lines to more 2D manufacturing, where you have AGVs moving around. 5G is great for this because of, first of all, its reliability, its manageability and also its latency characteristics.

So, it really enables these next-generation applications. And I think you’ll see more and more of that in the short term, and then we’ll go from there, but in the short term, that’s really where our focus is.

Harlan Sur

At the silicon level, you mentioned in your opening remarks that the IoT Group will now have the capability to provide differentiated silicon to address customer requirements in areas such as acceleration technology, I/O and so on. Can you give us some examples of where Intel has actually delivered more customer-focused silicon to customers?

Tom Lantzsch

Yes. So, again, we started this a couple of years ago, and unfortunately it takes a while for us to get these chips out and show them to everybody, but we announced our first one in November, called Keem Bay. It’s an accelerator technology, a neural net technology. Just to put it in context, it competes with who you would think, the company that talks a lot about using GPUs for this space. It’s 4x the performance of an Nvidia TX2, and it’s on par with the performance of the Xavier at one-fifth the power.

So, that’s the first accelerator chip that we’ve announced, and, again, it’s all supported by the same software framework called OpenVINO. Again, in this AI world, software is really the big issue. This stuff is hard. And we’ve got a lot of people working on compiler technologies to make it easier for customers to deploy. So, that’d be the first example. There’ll be more. Tiger Lake was announced as a product yesterday.

Harlan Sur

Yesterday.

Tom Lantzsch

Well, it was announced as a PC product. There will be a Tiger Lake IoT product coming down the path, and we baked a lot into that product early on: a lot of really good features like time-sensitive compute and functional safety capabilities, which are going to be really important for autonomous systems, plus some acceleration technology, like you saw, to do more AI.

So, you’ll see more and more of these differentiated products as we go through 2020. We’ve got a couple that we’ll be announcing, so it’s happening at both the accelerator and the AI level.

Harlan Sur

Okay. And then my last question, from a financial perspective: the team has been driving high-20%, low-30% operating profitability over the past several years. Given the growth opportunities ahead, we assume that near to mid-term the team is going to be prioritizing R&D spending, just given the big opportunities in front of you, but how should we think about the segment’s operating profitability profile over the next several years? If you continue to drive strong mix and double-digit top-line growth, is there the opportunity to drive some OpEx leverage and expand operating margins?

Tom Lantzsch

Yes. So, if you just sort of look at the competitive space in our market, our goal is basically to grow at least twice as fast as the market. And we’re pretty happy with where the operating margin is.

Harlan Sur

Yes.

Tom Lantzsch

It’s, let’s call it, 30%, in that range right now. Three years ago, when I joined, it was 22%. I thought that was too low.

Harlan Sur

Yes.

Tom Lantzsch

I think, versus our competitive space, we’ve now got it at 30%, and I’m pretty comfortable with that. And if we can continue to grow at twice the rate of the market and hold a 30% op margin, I think it’s a pretty attractive business for everybody’s investment portfolio. So far, we’ve been able to do that over the last three years, and I don’t see any reason why we shouldn’t continue looking like that. So, I think the answer to your question is sort of flat-lining a little bit on the op margin and more emphasis on growth.

Harlan Sur

Yes. Great. Well, we’re just about out of time, Tom. Thank you very much for joining us. Really, I appreciate it. Very insightful.

Tom Lantzsch

Thank you.

Harlan Sur

Thank you.

Tom Lantzsch

Thanks for the opportunity.