Intel (INTC) Presents at Barclays Global Technology, Media and Telecommunications Conference (Transcript)


Intel Corp. (NASDAQ:INTC)

Barclays Global Technology, Media and Telecommunications Conference Call

December 06, 2017 03:15 PM ET

Executives

Dan McNamara - VP and GM, Programmable Solutions Group

Analysts

Blayne Curtis - Barclays Capital

Blayne Curtis

All right. We'll go ahead and get started. Welcome, everyone. I'm Blayne Curtis, semiconductor analyst. Very happy to have Intel for the lunch keynote. From the Company is Dan McNamara. He came over with Altera, and he's Vice President and GM of the Programmable Solutions Group.

So, Investor Relations asked me to read this, so stay with me. This is in terms of risk factors. Today's presentation contains forward-looking statements. Any statements made that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially. Please refer to the most recent earnings release and Forms 10-Q and 10-K for more information on the risk factors that could cause actual results to differ. If Intel uses any non-GAAP financial measures during the presentation, you'll find on its website, intel.com, the required reconciliations to the directly comparable GAAP financial measures. How did I do?

Dan McNamara

Very well.

Question-and-Answer Session

Q - Blayne Curtis

Perfect. Welcome. I thought maybe a good way to start: it's been about two years since the acquisition. I was wondering if you could look back over those two years. Obviously, you're already in the process of moving to an Intel fab, so maybe you can just look at the 14-nanometer product, where it's come from, and where you are today?

Dan McNamara

Sure. So, it's been, yes, almost two years. And if you look at it, there are a couple of things. I can talk about the integration and then talk about 14-nanometer, so why don't we do that? So, if you look at the integration and the acquisition, the question I get all the time is, how is it going? And I would say that it's gone remarkably well, and for many reasons. I want to go through those.

So, the first is, Intel set us up as an independent business group, which allowed us to keep our marketing, engineering, sales and customer support all in a cohesive group, and allowed us to really focus on our execution strategy and maintain the customer engagement. One of the issues or concerns from customers when the acquisition was announced was that at Altera we had, I think, a differentiated customer support model, and they were concerned that if that went away, that would be a problem. So, we wanted to really double down and make sure that we didn't do anything to interrupt that flow. So that's gone well.

Secondly, even though we are an independent business group, we are allowed to leverage the broad scale of Intel. And what that means is the process technology you just mentioned -- TMG, the Technology and Manufacturing Group; the Platform Engineering Group, which is our centralized engineering; and then, thirdly, what we call our software group, which is getting ever more important as we go forward. If you look at the trend line on deliverables since coming into Intel, across any one of the products, the work with those three groups has probably accelerated our schedules by about six to eight months. So that's gone extremely well.

And by the way, that acceleration was a key part of the thesis: Intel felt that they could accelerate our engineering and manufacturing. So that's gone extremely well. The third piece is really the synergy, and this is where the customer value increases. So, when you look at a processor plus FPGA -- whether it be Xeon in the cloud, Atom or Core at the edge, or an SoC in the infrastructure -- this tightly coupled FPGA-plus-IA technology is a really strong value proposition. So, in every different vertical, we are bringing to market different solutions. And the interesting part is, it's the software overlay and the IP library that's actually the real value. So that's gone very well.

And then, lastly, think about the timing of this, with Intel's move from a PC-centric to a data-centric company and the flood of data. FPGAs -- if you think of them, think of them as a data shovel. They've always been very good at digital signal processing: very high throughput, very low latency and very high flexibility. So that has sat extremely well with the overall corporate strategy. And in every vertical segment, we have a very good solution. So, across the board, it's gone very well. And I wanted to make sure that was stated, because there were a lot of concerns early on, and I wanted to make sure that point came across -- I think we're getting it right on this acquisition. If you look at 14-nanometer, we are really excited. That was our first process with Intel's foundry, and that's the Stratix 10 family.

So, we were a bit delayed. If you think about it, we were delayed at 20-nanometer as Altera. And if you had to pick something that didn't go well on the integration, it would be 20-nanometer execution. Once the deal closed, we were still getting our 20-nanometer products to market, which in turn delayed Stratix 10.

So, when you look at 14 though -- I just had a very good and detailed overview last week -- it's our largest pipeline ever, over 30-plus years going back to Altera, and we are converting that pipeline. I think Brian and Bob mentioned this on our earnings call: Q3 was our largest design win quarter in well over three years. And over the next month or next quarter, we are also bringing to market 56-gig SerDes technology, integrating HBM2 DRAM, and shipping a quad-core A53 in the FPGA. So, it's really going well. We were definitely delayed relative to our competitor, but we're picking up some pretty good momentum now.

And what's most interesting about this is this. What we're building is really a platform for the future of heterogeneous integration. So, if you look at this -- and I'm not sure they're going to be able to see it -- you have a 14-nanometer FPGA from Intel; you have SerDes E-tiles, or transceivers, from TSMC; you have HBM2 DRAM from third-party vendors; and you have an integrated quad-core ARM A53, all on the same substrate. But the beauty of this is, it's all connected via Intel's Embedded Multi-die Interconnect Bridge, or EMIB. Basically, all the connectivity gets built right into the substrate. Everybody else that does this has an interposer here, which adds power and cost and slows the performance down. So this is unique to Intel. We see this as -- as I mentioned -- really the platform for the future, because now we can do very fast derivatives. And in 2018 you'll see a lot of new derivatives, whether it be different tiles or different FPGA slices -- you're going to see a lot of different variants. So that's what's really exciting about this family.

Blayne Curtis

It's a good entry point. And when you look forward, can you maybe give us a perspective on what that roadmap looks like? You're at 14; the next Intel node would be 10, which they'll be ramping in more volume next year. How does that dovetail with what you're working on?

Dan McNamara

Sure. So, on 14, we're going to proliferate the family more, all through 2018 and maybe into early 2019, with effectively different tiles, as I just showed you, but also different densities for the Stratix 10 family.

If you look at 5G radio or 5G baseband, there is a power and cost concern. This is our largest device here, so we're going to get smaller devices and more derivatives out. 10-nanometer is another good story for the integration. If you recall, with Altera as an independent company, we were what I would call a sequential shop. So, if we got delayed on 20-nanometer, that just pushed out 14, and effectively that's what happened coming into Intel. So, immediately, day one, working with BK and some of the team, we decided we needed to go parallel. We've been working on 10-nanometer for two years now, and we're going to deliver 10-nanometer devices in 2018, sampling. We have a very good engagement model with the key customers for 10-nanometer. And we believe we'll have a very good performance-per-watt solution and obviously be in good timing from the standpoint of the major growth sectors for us, so very exciting.

Blayne Curtis

Good transition. Obviously, there is a broad audience, and they may not even know what 10-nanometer is. Maybe just take a step back and talk about the end-market drivers. Since the acquisition, NVIDIA has had a pretty good run -- since 2015, obviously, AI is now huge. And I think you guys as Altera were very early to talk about a large TAM within the data center. Maybe you can just address what types of applications you see FPGAs fitting into within the data center?

Dan McNamara

Yes. So, we saw the data center opportunity early -- even before the acquisition we talked a lot about it being a $1 billion opportunity for the FPGA market. We believe that, and more, for a number of reasons. Think about the innovation cycle we're in, and think about the cloud and the data center: it's all about total cost of ownership and really efficiency. And I think Navin Shenoy talked a lot about the broader TAM we see at Intel across the data center, which is a large TAM, and we're going at it with a complete portfolio of products. FPGA is one of them.

But, let me talk a little bit about FPGAs in particular. We see three main areas of opportunity for us in terms of the overall base. The first one is what I would just call infrastructure. Think of the network -- think of smart network interface controllers, think of compression and encryption algorithms. All of that is really running the network in the data center, and you want to offload that as much as possible from the Xeon, because you don't want Xeon cycles going to just moving, storing and fetching data. So, we see a big opportunity there. And Microsoft has talked very loudly about what they're doing there, so big opportunity in infrastructure.

The second area is what I would call predictable high-cycle applications. Think of video transcoding; think of AI, as you mentioned. These are workloads that come with large amounts of data, typically non-standard data formats, and this is where FPGAs play extremely well. And you mentioned GPUs. The advantage of an FPGA over a GPU is latency. GPUs like to batch up data -- large amounts of data -- which is why they're really good at training. FPGAs can make decisions or process on a frame-by-frame basis. The reason is that you can customize a circuit, and you also don't have the burden of an OS, so every cycle you can do something very efficiently. So FPGAs play well in that heavy, data-intensive area.

And then there are what I would call higher-level applications: think of genomic acceleration, think of high-frequency trading, think of -- the buzzword now is FPGA-as-a-service, which we're hearing a lot about. Those are areas where we also see a good opportunity. And in that area, we're really building an ecosystem of partners, because there, it's all about the solution set. It's the software stack and the IP that you enable. If you think about your bank and you want to do some work, you really don't want to be dealing with an FPGA; you want that FPGA to be abstracted away, and that's what we're looking at there.
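To make that batching-versus-latency tradeoff concrete, here is a toy model in Python. The overhead and per-item timings are hypothetical, not measured vendor figures; the point is only that a batch-oriented accelerator makes every request wait for the whole batch, while a streaming pipeline bounds per-frame latency.

```python
# Toy model of batch vs. streaming inference latency (hypothetical numbers).

def batch_latency_ms(batch_size: int, launch_overhead_ms: float, per_item_ms: float) -> float:
    # Every request in the batch waits for the whole batch to finish.
    return launch_overhead_ms + batch_size * per_item_ms

def stream_latency_ms(pipeline_overhead_ms: float, per_item_ms: float) -> float:
    # Each frame flows through the pipeline as soon as it arrives.
    return pipeline_overhead_ms + per_item_ms

print(f"batch of 64: {batch_latency_ms(64, 5.0, 0.2):.1f} ms per request")  # 17.8 ms
print(f"streaming:   {stream_latency_ms(0.1, 0.2):.1f} ms per frame")       # 0.3 ms
```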

So, three main areas of thrust for us in the data center, and we do see a pretty big opportunity -- AI in particular, as you mentioned. A couple of things there. It's not just another workload; it's the exciting workload, and it probably will change virtually everything we do, every industry, going forward. I think the Intel position is really unmatched in value: you have Xeon for inference, we're working on Nervana ASICs for training, we have FPGAs for inference -- and I'll talk more about that -- we have Movidius for very, very low power at the edge, and we have Mobileye in auto. The beauty of that is not the individual silicon pieces; it's the software stack. All of this is integrated under one software flow and IP library, which is really the value for the customers. So that's what's exciting.

Now, FPGAs in particular -- as I just talked about, FPGAs, again with a parallel architecture and embedded memory, are very, very good at real-time inference. You've probably heard of the Microsoft Brainwave demo that they did at Hot Chips: they were getting less than a millisecond of latency at very high throughput. So, we see that. We've also announced NEC's NeoFace facial recognition system, which is using our FPGAs, and there are a number of others. From an FPGA standpoint, we play an important role as part of Intel. It's a broad portfolio and we think we are well positioned. But we also believe that within FPGA, from the edge to the data center, we have a very important role in AI, and the reason is that there are so many different topologies, and the topologies are changing. If you want to do a convolutional neural network or an RNN or some different sort of topology -- those are changing almost daily, and everyone is customizing them and wants their own. Again, a beautiful situation for FPGAs, because you can change it on the fly. So, we are excited about that.

The other thing about an FPGA in the data center is -- think about this -- the flexibility. You can do one of two things in the data center: you can optimize for efficiency or you can optimize for flexibility. I'll give you an example that I think is really a good way to think about it. We just finished Cyber Monday, and China has a similar day, 11/11 day, where there are just tremendous online transactions. So, an FPGA in the data center can accelerate search all day long. And then, after midnight, when the search stops, you can -- on the fly, in milliseconds -- change that circuit in the FPGA to do analytics. And that's the beauty of the FPGA versus an ASIC. You can't do that with a GPU or an ASIC. You can change that circuit on the fly to react to different workloads, and that's really where the value is.
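A minimal sketch of that day/night swap, assuming a hypothetical load_bitstream() reconfiguration call -- the file names, schedule, and API are invented for illustration, not an actual Intel runtime:

```python
# Hypothetical scheduler that swaps FPGA circuits by time of day.
import datetime
import time

def load_bitstream(path: str) -> None:
    # Placeholder: a real flow would hand the bitstream to the device's
    # reconfiguration controller here.
    print(f"reconfiguring FPGA with {path}")

def run_scheduler() -> None:
    current = None
    while True:
        hour = datetime.datetime.now().hour
        # Search acceleration during the day, analytics after midnight.
        wanted = "analytics_accel.bit" if hour < 6 else "search_accel.bit"
        if wanted != current:
            load_bitstream(wanted)  # circuit changes in place; servers stay up
            current = wanted
        time.sleep(60)              # re-check once a minute
```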

Blayne Curtis

It's interesting -- you've seen some very big TAMs thrown out there that make $1 billion look small. Even on the inference side, you've heard $10 billion, $15 billion. And I think you're right when you look at it -- it may not be one solution fits all. GPUs definitely need a batch process to keep that big parallel chip running. You may also be competing against a CPU that's idle, so the economics work pretty well -- it's free -- and you can do some things at night, in idle cycles, where you look at what pictures I have uploaded and figure out what to show me. So, that's what I try to dissect: which applications really do need that latency. I was wondering if you had some examples where people are really going to want a millisecond of latency to get something done.

Dan McNamara

So, a lot of the low-latency needs are at the edge -- smart city infrastructure, facial recognition. Think of the whole surveillance application; it's getting smarter and smarter. In years past with surveillance, you were limited in the number of cameras you could put out there, because no one could consume it all and it was just going to storage. Now, with AI and deep learning inference, you can react in real time -- within milliseconds there's an ID. Another area is the factory floor. You can use FPGAs on a production line to look for defects. Again, you need to make a decision almost every clock cycle, because you have large amounts of image data and you are trying to make a decision on that image frame by frame. And in an auto -- if you are going on the Autobahn at 100 kilometers per hour, you don't have time; you need to ID an object and make a decision very, very quickly. So, I think you'd be surprised at the number of applications for inference that really, really need that latency.
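As a quick sanity check on the Autobahn point, the arithmetic below works out how far a car travels per millisecond of latency at 100 km/h:

```python
# Distance traveled per millisecond of inference latency at 100 km/h.
speed_kmh = 100
speed_m_per_s = speed_kmh * 1000 / 3600          # ~27.8 m/s
cm_per_ms = speed_m_per_s / 1000 * 100           # ~2.8 cm per millisecond

print(f"{speed_m_per_s:.1f} m/s -> {cm_per_ms:.1f} cm per millisecond")
# At ~2.8 cm/ms, roughly 36 ms of latency is a full meter of travel.
```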

Blayne Curtis

I do want to ask you -- obviously Intel brings scale, but also, when you look at the way GPUs are deployed, they're on some sort of card sitting next to a CPU. If you look at the Microsoft deployment, that's a card sitting next to a CPU. You're the one company that has both sides of the equation. I was curious whether you've identified any particular advantage to having these pieces of silicon, or is there something you could do down the road to take advantage of that opportunity?

Dan McNamara

Yes. So, we look at this -- I think you're absolutely correct. There are so many different platforms, so to speak, to support. You can look at it in a package: instead of having all these tiles, you can put in a Xeon and an FPGA. You can do it discretely on a board. You can build a separate add-in card. You can put FPGAs on the motherboard. So, there are a number of different platform choices to solve this problem. The overarching theme for us is, if you look at any one vertical, it's a different solution, and we're building out those solutions right now. But the key is the software flow. It shouldn't matter what the platform is. We really want to abstract that hardware layer away and provide the complete solution as Intel. If you think about it, we already have our Xeon Acceleration Stack for the cloud; we have a computer vision SDK, a software development kit; for packet processing, we're delivering what we call DPDK, the Data Plane Development Kit; and for AI, we have deep learning acceleration. So, we're building out all these different software flows and IP for virtually any hardware platform. We're also looking at areas where we may need to go monolithic -- there are certain areas where you might need to, but I'm not really ready to talk detailed roadmap. If you think about the cloud; the infrastructure, so 5G; the network, the transition of the network; and the edge -- those are four different application sets that we will probably service differently. And again, the silicon platform is not the important part; it's the software and IP over it.
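To illustrate that "abstract the hardware away" idea, here is a minimal sketch of a uniform dispatch layer. The back-end classes and workload names are hypothetical -- this is not Intel's actual acceleration-stack API, just the shape of the idea:

```python
# One call site, multiple hardware back-ends chosen by deployment, not by the caller.
from abc import ABC, abstractmethod

class Accelerator(ABC):
    @abstractmethod
    def run(self, workload: str, data: bytes) -> bytes: ...

class XeonBackend(Accelerator):
    def run(self, workload: str, data: bytes) -> bytes:
        return data  # software fallback on the host CPU

class FpgaBackend(Accelerator):
    def run(self, workload: str, data: bytes) -> bytes:
        # A real back-end would DMA `data` through a loaded FPGA circuit.
        return data

def accelerate(workload: str, data: bytes, backends: dict) -> bytes:
    # The caller names the workload; the platform is a deployment detail.
    return backends.get(workload, backends["cpu"]).run(workload, data)

backends = {"cpu": XeonBackend(), "inference": FpgaBackend(), "compression": FpgaBackend()}
result = accelerate("inference", b"frame-bytes", backends)
```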

Blayne Curtis

I do want to ask you -- obviously, in the valley, there is a certain fixed number of people who know AI, and there is probably a much smaller percentage who know that and also how to work with an FPGA. So, how do you make your solution accessible to these AI engineers, if you can find them and hire them, so they would use your system versus just using something like a GPU, because that's the established solution?

Dan McNamara

Yes. It's a really good question. That's our biggest challenge. Our biggest challenge right now is the good and bad of an FPGA. The good news is, it's infinitely programmable, and you can optimize it at a very low level, down to the gate level. In AI, that's what's happening: in deep learning inference or training, you can optimize, as I mentioned, on every cycle. The challenge is, with all these emerging topologies, how do you build frameworks to enable everyone? And that's what we're doing -- building out, across Intel, a complete software stack that will support different frameworks, TensorFlow and Caffe and things like that. But the topologies are the hard part for us. So we're basically developing IP primitives, so that the whole software stack becomes so much easier. If you think about it, that's probably why GPUs actually ended up in AI: they were there, they brought a lot of computing power, and the software was there. But we have really accelerated our software, so we believe we're getting close to having this sort of seamless approach to deep learning.
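A rough sketch of what "IP primitives" buys the software stack: ops in a framework graph get matched against a library of pre-built FPGA blocks, and anything unmatched falls back to the CPU. The library contents below are invented for illustration:

```python
# Hypothetical placement pass: framework graph ops -> FPGA primitives or CPU fallback.

FPGA_PRIMITIVES = {"conv2d", "matmul", "relu", "maxpool"}  # invented library contents

def place_ops(graph_ops: list) -> dict:
    """Partition graph ops between the FPGA primitive library and CPU fallback."""
    return {op: ("fpga" if op in FPGA_PRIMITIVES else "cpu") for op in graph_ops}

# e.g. a small RNN cell: the matmuls hit the primitive library, the rest don't.
print(place_ops(["matmul", "sigmoid", "tanh", "concat"]))
# {'matmul': 'fpga', 'sigmoid': 'cpu', 'tanh': 'cpu', 'concat': 'cpu'}
```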

Blayne Curtis

And I do want to ask this question. FPGAs have ups and downs in particular end markets, because anytime there are volumes and the economics work, someone does something more dedicated, and a tailwind becomes a headwind. Obviously, the size of the AI TAM has created a lot of interest, and there are a dozen semi companies getting into it -- we hadn't seen VC money in semis for a long time, and now there are all these startups. Nervana would be similar, more training-oriented, but it's in that context. So I'm just kind of curious, as you look out: if this is really a multi-billion-dollar TAM, why wouldn't you see more dedicated silicon, and is there still room for FPGAs?

Dan McNamara

Yes. It's back to what I talked about: optimize for efficiency or for flexibility. And right now -- if you just look at 2017, 2018, 2019 -- we're in what I would call a massive innovation phase. And what I mean by that is, it's all about the data. I read an article last week where a McKinsey analyst mentioned that 90% of the data that exists today wasn't available two years ago. That's only going to multiply by orders of magnitude. It's going to create new uses and new devices, and the strain on the infrastructure -- and the need for flexibility in the data center -- is going to continue. You're going to have to react to all of these new uses and new workloads. That's why I think we're in a very good innovation phase. I do believe that, sure, ultimately we'll lose some to ASICs. When a data center sees a workload that's not changing anymore, they'll look to do something dedicated. And that's why this platform is so important to us, because you can pull out that one workload and integrate an ASIC. So that's why we believe we're building a platform to really enable the innovation and stay in it as well.

Blayne Curtis

So, we've been talking about the data center; I want to take the opportunity to look more broadly. A direct adjacency would be auto, but obviously you have the broader business that you were running before. Where else do you see opportunities in the next year and beyond?

Dan McNamara

Yes. So, my job before this was the embedded side of the house, so it's very near and dear to me. We do believe that there is a big edge acceleration opportunity, and that's growing, so that's big for us. The other thing is that the DSP capability we have in Stratix 10, and also the security features we've put in, give us a really good position in military. If you think of military -- think of electronic warfare; they're going to digital beamforming, which is right for FPGAs. So, we see a big opportunity in FPGAs for military. And in auto, we also see a very good opportunity. We're growing pretty aggressively in auto today across three main areas. One would be infotainment, the traditional area. Two would be discrete use cases: we announced the DENSO stereo camera, a discrete use case where our device is doing all of the processing. And then you have this move to Level 3, 4, 5, where it's centralized architectures. There, we can play any number of roles, whether it be I/O expansion or sensor fusion -- I think it was announced that in the Audi A8 we're doing sensor fusion right next to a Mobileye device.

So, again, when you look at auto, the total Intel silicon portfolio is very, very strong, and we believe we're going to be a piece of that. And then the other area -- really our bread and butter -- is 5G. We also believe that this transformation of the wired network, or SDN, is really important, because the FPGA is a companion to Xeon. If you think about what the carriers want to do, it's really to be able to fine-tune and deliver differentiated services, so you have to get to those deeper-level packet-processing types of tasks, and the FPGA is a really good accelerator to Xeon. So, I would say that the cloud has emerged as a very big opportunity for FPGAs, but the traditional comms infrastructure and the embedded business are still there, and we are still servicing those.

Blayne Curtis

I do want to go back to a bit of a geeky level, so I apologize to everybody. In terms of the issues that you had with 14-nanometer Stratix, they are behind you. If you were to compete for a socket today versus a 16-nanometer part, how does your 14-nanometer part stack up? I know they're not directly comparable, so let's just forget what the number is -- whether it's performance or power or cost, how do you line up?

Dan McNamara

It's a good question. So, we have a very good competitor, and we've been competing with them for many, many years; I think the competition makes us better. If you look at their 16-nanometer, they have more breadth, because they've been at that node longer, so they no doubt have more options. But I'm going to go back to what we are doing. I believe we have advantages in DSP for deep learning and some of these data-parallel workloads. We have security features that are differentiated. And then, early next year, we are going to have differentiated devices for I/O, so we will be driving 56-gig SerDes E-tiles. We are going to have this differentiated device with HBM integrated, which will give you unparalleled memory bandwidth and latency. And we are also the only high-end FPGA with an ARM processor. You've heard them talk about Zynq a lot, but that's typically focused on their mid-range and low-end families; we have it in our high-end. So, there are some key differentiators, we believe, in spite of the breadth that they have out there. And by the end of this year, we will be in volume production, and as I mentioned, we're going to get more derivatives out in 2018. So, we feel very good about the situation right now.

Blayne Curtis

And I guess as a follow-up, how long is this window going to be open at this node before people start looking at the next one, which would be, I guess, 7 or 10?

Dan McNamara

Right. As I mentioned, we will be sampling 10 in 2018. And actually, I think that, much like 20 and 14/16, the nodes go on together. It's no longer just one node at a time, because, as I mentioned, we need to go more parallel in our development cycles. 10 offers much better performance, and customers in certain applications need that 10-nanometer. So that's going to continue.

If you look at what we call advanced products, which is 28, 20 and 14, those product lines are growing very aggressively for us. They were later than we expected, but overall we are pretty excited about where that is going. They grew over 25% in Q3, and we expect that to continue into Q4 and into next year. So, we feel good about the traction. I think you will see us delivering 10-nanometer devices and derivatives of 14 well into 2019.

Blayne Curtis

So, maybe just to conclude -- we're running out of time -- as you look to next year: when you sit down at the end of 2018 with BK and he's grading you on your performance, what do you need to have accomplished? The derivatives are one part, as you just explained, and you talked about developing the ecosystem and other areas. What are the main couple of things you have to accomplish next year?

Dan McNamara

Yes. So, as I mentioned, the next two years are critical for design wins and innovation, and for us, the opportunity is there. I would say it's really stick to the strategy. One is build the best silicon in the world and leverage this platform. Two is really enable a new set of users -- software developers, data scientists. They don't really care about embedded memory and logic; they want to be able to leverage the power and performance of the device by just writing code, so we're building out a software suite there, and we need to do that more quickly. And three is to really vet out and develop the synergy of tightly coupled FPGAs plus processors and SoC technologies, so that one plus one is three for Intel across virtually every vertical segment. That's where we're focusing. If we can just stay on those three vectors and deliver, we'll be in good shape.

Blayne Curtis

Okay. We’ll end with that. Best of luck. Thanks for joining.

Dan McNamara

Thanks, Blayne.
