Intel Corporation (NASDAQ:INTC) Barclays Global TMT Conference December 7, 2016 11:15 AM ET
Diane Bryant - EVP Data Center Group
Blayne Curtis - Barclays
Q - Blayne Curtis
I extend my welcome to everyone as well. I am very happy to be kicking off the conference with Diane Bryant from Intel. She is EVP and runs the Data Center Group, a long-time Intel exec, one of the few EVPs. One question that we're asking everybody is, as they look into next year, what are the drivers? And maybe expand a little bit more, since you have been at Intel so long, seeing the transitions through PCs, and you've been a big part of the server transition. So just maybe your thoughts on what's next for Intel as you look out, maybe even beyond next year?
A - Diane Bryant
Yes, so as you know, earlier this year we made a very explicit shift in our corporate strategy. Our corporate vision is: if it's smart and connected, it runs best on Intel. And the shift in our corporate strategy was really a move towards that virtuous cycle of growth that we all see, and we feel it's our job to help drive the transformation: billions of connected devices, connected through a highly reliable, highly capable network, connected into a data center with more and more cloud services being deployed to those devices, generating lots and lots of data that give great insights.
And it's that virtuous cycle, where more devices drive more services, which drive the build-out of the cloud, that we believe we're core in delivering, and we also clearly believe we're the only one that can deliver technology solutions truly end-to-end. And so that's the focus. It's not a shift away from PCs; PCs remain one of those connected devices. But it's really a reflection of the diversity of connected devices and things.
Q - Blayne Curtis
Great. As you look into the server market, I think we'll get into the new things you're working on to expand the growth there. But if you look at just the core server business, it's obviously still growing high single digits this year. As you look forward, what is your visibility on both the enterprise side as well as into the deployments in the cloud, which is truly the biggest growth factor there?
A - Diane Bryant
Yes, the enterprise has been an interesting market this year; as you know, we saw it in our Q3 results. The things we do know about enterprise IT with high confidence are, number one, businesses are more and more dependent upon and relying on technology. So the demand for compute cycles by enterprise continues to grow at about a 50% CAGR, so significant growth in compute demand by businesses.
The other thing we know with certainty is that it is a hybrid world. Businesses have applications that have affinity to being on-premise, and businesses have applications that gravitate nicely to public cloud. The fact that it is a hybrid world was, I think, nicely reinforced last week by Andy Jassy at the Amazon re:Invent event. He clearly said enterprises are going to be deploying applications on-premise for many, many years to come, and I was particularly excited about the big announcement that came between Amazon and VMware, just putting an exclamation point on the fact that it is a hybrid world.
And you take Amazon, the largest cloud service provider of infrastructure, and you take VMware, the largest private cloud deployer, and you put them together, and it creates a wonderful environment for enterprises to deploy applications on-prem or in the cloud without fear of lock-in.
So the world is hybrid, and what we've seen in 2016 is that those applications that do have affinity to the cloud, and a clear example of that is collaborative workloads, have made that shift. So if you look at the public cloud market in 2016, enterprise services will grow at about twice the rate of consumer-based services.
So enterprise services in the public cloud will grow at about 35% this year, with consumer services growing about 13%. So enterprise this year is moving some of those workloads that naturally gravitate to the public cloud. I am still a firm believer that enterprise is going to grow on-premise, but we simply need to make it easier to build a private cloud, and that's why the announcement between Amazon and VMware is so important.
Microsoft's announcement earlier this year of Azure Stack on-premise, so you have seamless movement of applications between Azure in the public cloud and Azure Stack on-premise, is another great announcement that's going to make it easier for enterprises to refresh their on-prem environment in an efficient private cloud manner.
Q - Blayne Curtis
One of the themes that has risen this year is AI, and you recently had an AI Day. We had a bus tour in the valley looking at autonomous driving, but it is obviously much wider than that as you look at deep learning and the applications people are coming up with that they hadn't even thought of before. Maybe you can talk about your portfolio to address AI, from Xeon Phi to FPGAs to now Nervana; maybe outline that? I think people don't necessarily think of Intel as a play on AI.
A - Diane Bryant
Oh, really? Okay, you got me going. So yes, we just had our AI Day. It is interesting: the term artificial intelligence was coined back in 1956, and then it fell out of favor because nothing really materialized through the 70s, and so people stopped using the term. If you think about it, in 2012 the big buzzword was Big Data, and then the big buzzword went to data analytics, and then machine learning, and then deep learning got a lot of buzz.
And so we reinvigorated the term artificial intelligence, because in the end that's the goal. The objective here is amassing large amounts of data and driving large amounts of compute on ever more sophisticated algorithms in order to deliver artificial intelligence, artificial intelligence being where the computer system exhibits human-like behavior in that it can both learn and predict. And so we've been investing in big data, data analytics, machine learning, artificial intelligence. It has been one of our three growth drivers for the Data Center Group since 2012. Among our investments, you may remember in 2014 we made our big investment in Hadoop, because you need to amass that data, and so we have our collaboration with Cloudera. Then in 2015 we announced 3D XPoint memory, so we now have nonvolatile memory solutions that, I'm happy to say, we're shipping samples of and getting great results, so you can amass massive amounts of data in a persistent manner for in-memory analytics solutions. We announced Knights Landing earlier this year, and we just acquired Nervana in August.
So we have been investing in this broader space of artificial intelligence for the past five years, and it is an exciting space. We have a broad portfolio, and it is the fastest growing workload in the data center. It will be the number one workload, growing 12x by 2020, and it is the future. It is still a small market: only about 7% of all servers this year will run analytics workloads, and that is consistent with last year, so it is still growing with the market because it is still rather nascent. But we see tremendous potential in it, and we'll have a portfolio that runs all of the many different algorithms and implementations of artificial intelligence.
I know deep learning gets a lot of focus from the GPU gang, but it is 0.2% of all servers, and really it is the broader algorithms running today that deliver things like customer recommendation engines, cybersecurity solutions and fraud detection. All of those algorithms are not deep learning algorithms, and they run very broadly on Intel architecture today.
Q - Blayne Curtis
You do bring up the point that GPUs have gotten a lot of the early wins and attention. You acquired Nervana, more dedicated silicon for those applications; what do you need to do to get those solutions to market, and what work is needed to port over any of the work that has been done on a GPU to run on this new silicon?
A - Diane Bryant
First I have to correct you on GPUs having gotten the early wins, because 94% of all servers running analytics workloads run on two-socket Xeon servers today, 3% run on a two-socket Xeon server with a GPU, and 2% run on some other architecture like Power or SPARC. So we dominate that small world of artificial intelligence, and we have the vast majority of it.
Now, to your point, though: when we acquired Nervana, Nervana was building an ASIC that would go into the PCI Express slot as an accelerator, and GPUs, GPGPUs, are also accelerators; they sit in PCI Express slots. When we talk about the accelerator market, the objective is to pull that accelerator into the processor, and there is a lot of value in doing that, because then you have access to the large memory footprint, you have low-latency access, and you can scale the workload. And so the objective here is to pull those engines in and integrate them. What we announced at AI Day is that Nervana will first launch as an ASIC. We want to get it out the door; they have customer commitments. But the next generation will integrate it into the Xeon processor, so it will be a bootable processor with the Nervana deep learning engines, and we've stated that we will deliver a 100x performance improvement over today's GPU solutions by 2020, so a big path.
Q - Blayne Curtis
Yes, obviously autonomous driving is talked about a lot in your group; we had the bus tour in the valley, and there are lots of efforts going down this path. You've had some announcements and partnerships; maybe you can describe where you are aligning and the opportunity for Intel?
A - Diane Bryant
Yes, autonomous driving is one of those really exciting use cases of that virtuous cycle. The car becomes the connected thing, connected to a cloud delivering cloud services, certainly autonomous driving services, but all kinds of cloud services. And so you saw we had an announcement with BMW, but really all of the car manufacturers are looking at this world of autonomous driving.
There is general consensus that highly autonomous driving will be available by 2021, and that means that as a car manufacturer you become a technology provider, you become a cloud service provider in many ways. And so they need to find technology partners, and what we find is they come to us because we can provide the end-to-end solutions. You've got a car that becomes chock-full of high-tech compute solutions, connected across a 5G network for high reliability, high bandwidth and low latency, connected into a data center where you have cloud services being deployed.
So we can provide end-to-end technology solutions for autonomous driving. It is a really exciting proof point of artificial intelligence and machine learning algorithms, very rapidly understanding the environment and taking the correct action. So there is a lot of innovation going on in that space, and we have booked over $1 billion in deals so far this year on autonomous driving. We cannot talk about all the wheres and whens, but it is a market that's really growing.
Q - Blayne Curtis
And there are definitely a lot of varying opinions about the architectures and deployments. Obviously there are opportunities in the data center, but then within the car, I think you have described it as a server on wheels, and obviously there will be a range of different solutions. Where do you think you will see traction initially, and do you actually think you'll have a Xeon chip in a car one day?
A - Diane Bryant
Oh, absolutely. Xeons are in cars today, actually, Xeon and Xeon Phi, and I think there is a lot of value, sometimes underappreciated, in having a consistent architecture end-to-end. So we can deliver Xeon and Xeon Phi in the car doing the scoring of the model, the rapid inference of the model, as well as Xeon and Xeon Phi in the data center doing the training of the model. You have a consistent architecture that you are building your solution on, whether it is training or scoring, whether it is development or deployment: one architecture. It is a very compelling value proposition, and it is why we feel we have great traction in the automotive industry today and will continue to.
And you talked about the stickiness of GPUs with CUDA; part of the beauty of these analytics solutions is that they are built on frameworks. You can build a deep learning solution on the TensorFlow framework from Google or Caffe from Berkeley, and those frameworks abstract away the architecture. So if you are developing a deep learning solution, it could pull the libraries for a GPU solution or it could pull the libraries for an IA solution, and as the developer of the solution, you don't know that. So the stickiness doesn't really exist. There is a perception that once there is a win on a GPU it's a sticky solution, and it is really not, just based on the way these frameworks have evolved for analytics.
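The framework argument above can be sketched in miniature. This is a toy illustration with entirely hypothetical names (it is not TensorFlow's or Caffe's actual API): a framework's op dispatch hides the backend library from the model developer, so the same model code runs on any registered backend.

```python
# Toy sketch of framework-level backend abstraction (all names hypothetical).
# The model developer calls framework ops; a backend registry supplies the
# hardware-specific kernel at run time.

class Backend:
    """One entry per hardware target, e.g. a GPU kernel library or a
    CPU math library. Here both are plain-Python stand-ins."""
    def __init__(self, name, matmul):
        self.name = name
        self.matmul = matmul

def cpu_matmul(a, b):
    # Naive matrix multiply; a real backend would call an optimized kernel.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

BACKENDS = {
    "cpu": Backend("cpu", cpu_matmul),
    "accelerator": Backend("accelerator", cpu_matmul),  # same math, different target
}

def run_model(inputs, weights, device="cpu"):
    """Model code references the framework op, never the device library."""
    return BACKENDS[device].matmul(inputs, weights)

# The developer's model is identical for either backend:
x = [[1, 2], [3, 4]]
w = [[1, 0], [0, 1]]
assert run_model(x, w, device="cpu") == run_model(x, w, device="accelerator")
```

Real frameworks do the same thing at much larger scale: the model is written once against framework ops, and the dispatch layer picks whichever kernel library is present, which is the sense in which the architecture underneath is not sticky.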
Q - Blayne Curtis
One other area in the server market where you've seen good growth is the telco side; maybe you could just describe where you are seeing this growth and what types of applications, and then I do want to cover 5G too?
A - Diane Bryant
Yes, the telco space is one of the places I continue to talk about; it's one of our three big growth strategies. Our growth vectors are cloud, the network and analytics, or AI. The network is a $19 billion silicon market, so it's the same size as the server market. We have made good traction in winning the network space because the carriers are stuck with a whole lot of high-cost, proprietary, fixed-function boxes, and they fundamentally can't support the build-out of the network, from a CapEx and an OpEx perspective, with these fixed-function devices.
So there is a lot of envy; I always say the telcos are very envious of the cloud service providers, because the cloud service providers are able to deploy their infrastructure in a very dynamic, agile way, because it is the virtualized, automated cloud. And that's the direction the telcos are going: taking all of those functions, AT&T will say there are over 200 unique functions in the network, and virtualizing them. And if you are going to virtualize them, the definition is Intel architecture, because that's where virtualization runs today.
So we are making a lot of progress; we're trending to 12% of that overall market this year and continuing to grow. The acquisition of Altera adds another 4 points of share, and we don't count that because we acquired it, but you could say Intel has 16% share of the network market.
And last year we had trials on 40 unique functions, this year another 70. So we just keep taking each one of those network functions, whether it's load balancing or security or intrusion protection or virtualized radio access networks or customer premise equipment; everything is getting virtualized, and we just keep cranking away at each one of those and proving that they run best on a virtualized Intel platform. So it's a very exciting growth opportunity for us, and we're doing quite well.
Q - Blayne Curtis
And you said 12%. With 5G coming, I guess, around 2020, it's not that far away. What do you think needs to happen for the workloads to be virtualized by then, because obviously 5G will bring new demands?
A - Diane Bryant
Yes, so in many ways the deployments happening today are in anticipation of 5G, because you need to have a virtualized environment when you get to 5G. 5G just drives up the demands on the network from a reliability perspective and a bandwidth and latency perspective; everything just gets bigger when you go to 5G. So the carriers are moving to get to a virtualized environment running on Intel architecture by 5G. AT&T said their network was 5% virtualized last year, they're now on track for 30% this year, and their goal is 75% by 2020. In general, 2021 is the year folks are talking about deploying 5G.
We're on track for the first preproduction deployment of 5G in Korea in 2018, so 5G is accelerating the carriers' move to a modernized, virtualized network environment.
Q - Blayne Curtis
How much is the play when you look across the different segments? You're selling modems, and obviously you are becoming a bigger player in the 5G landscape as those standards are being set. Your group would sell into the base station in addition to whatever is virtualized in the server; maybe you can talk about the opportunities on your side?
A - Diane Bryant
Yes, Intel as a whole, between myself and my peer organizations, is responsible not just for the air interface side but also the modem side of 5G, and Intel as a whole has taken a leadership position in defining the 5G specification, and it's obviously going very well. It's a big opportunity. I would also say, in the telco space, as an indicator of the momentum of the carriers' move to a virtualized environment, this year we have closed over $5 billion in deals with the carriers. Those deals will roll out over time, but it shows the momentum behind the carriers needing to get to a virtualized infrastructure in support of the move to 5G. So Intel is a leader in setting the 5G specification, and we will participate on the air interface side, but then also the base station and all the core infrastructure behind it, so both ends of that interface.
Q - Blayne Curtis
You mentioned 3D XPoint, and you have a lot of other projects you're working on, Silicon Photonics or XPoint or Omni-Path. Maybe you could highlight some of these vectors, and as you're looking at next year, which ones are you most hopeful will contribute growth as an additive to the [indiscernible]?
A - Diane Bryant
So we talked about our double-digit growth out to the horizon, and a significant portion of it is growth beyond microprocessors and diversification of our product lines. We have always been in the Ethernet controller business, boards and systems, and we have always had products other than the CPU, but there are three new product lines that will contribute significantly to the growth because we fundamentally have a unique and compelling value proposition in those spaces.
And you listed them: the Omni-Path fabric, Silicon Photonics and then 3D XPoint memory. I will quickly cover Omni-Path and Silicon Photonics, which launched this year. Omni-Path is a high-performing, low-latency message-passing fabric for high-performance computing. We went from zero percent share of the 100-gig market to 50% share in six months, and I think that is a world record, zero to 50% share of a market in six months.
And if you saw the Top500 list in November, we have 67% share of the Top500 systems running 100 gig, so a lift from 0 to 67% of the Top500 in nine months. It is a very compelling value proposition; we can integrate the fabric with the Xeon Phi or with the Xeon processors, so you get all of the value propositions of integration, and that is going very well for us.
Silicon Photonics, again, addresses a $1 billion 100-gig market. We launched the 100-gig product in June. The cloud service providers are moving to 100 gig within the data center; they need to get to 100-gig optical, but traditional fiber optics is just way too expensive. We have a compelling value proposition, and we are the only one on the planet that can integrate the laser material, the indium phosphide light-emitting material, directly onto the silicon IC wafer.
And through that we get precision alignment. Alignment of the optics is the biggest problem; it is where you get all the losses and where you get the costs. By fabricating it through the lithography process we get precision alignment, and you can't be more precise than lithography.
So it's a compelling value proposition: higher density, lower power, lower cost, all the compelling value props you can imagine. So we're very excited about that. And then 3D XPoint memory. When we talk about analytics, it is all about how big a data set you can have; the bigger the data set, the more accurate the model, the training of the model, is going to be. With 3D XPoint memory, we can double the memory capacity of a DRAM system, and we can deliver it at 40% lower cost per gigabyte.
So it's a very compelling value proposition. As I said, I am excited that we're shipping samples and getting very nice feedback. So those three things together, they are all multibillion dollar markets and we believe wholeheartedly we have a competitive advantage and we can win.
Q - Blayne Curtis
Okay, just following up [indiscernible], it is obviously very difficult to attach that laser with the dissimilar materials, and you've tried it a couple of times, but you're now feeling confident you will be able to ship this with the [indiscernible]?
A - Diane Bryant
So we can now. We launched in June and started shipping to the largest cloud service providers in June; the second half of this year has been ramping volume production, and it is going great. Yes, it is highly manufacturable; yes, we can affix that indium phosphide right on the silicon wafer.
Q - Blayne Curtis
And then on 3D XPoint, what applications are you seeing the most demand for, since obviously it sits between two different existing memory technologies?
A - Diane Bryant
Yes, it's anything that benefits from a large memory footprint, and that's true across many different workloads. Take an example: if you were to buy a two-socket Xeon server with three terabytes of memory, a big memory footprint in support of a data analytics solution, you've spent $6,000 to $7,000 on the CPU and over $20,000 on the DRAM, so it just becomes prohibitive.
So we can take that system, move it from DRAM onto 3D XPoint memory, and reduce that $20,000 DRAM cost by 40%. Now you've made it cost-effective to deploy these very large data sets, and we believe we will have unleashed this artificial intelligence and analytics market. As I said, it is a small market today, but it's got crazy potential; we just need to unleash that potential. So it is targeted at anything with a large memory footprint: data analytics, high-performance computing workloads in general, especially HPC visualization workloads, and cloud hosting solutions. If you think about a hosting solution, you want to have as many virtual machines per server as possible.
The first thing you generally run out of is memory per VM, so by having a large memory footprint you can have more virtual machines per server. Cloud hosting is the other obvious application.
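The cost argument above can be checked with quick arithmetic, using the round figures from the talk. The $6,500 CPU number below is the midpoint of the quoted $6,000 to $7,000 range, an assumption for illustration only:

```python
# Quick check of the round numbers from the talk (illustrative only).
dram_cost = 20_000      # approx. DRAM cost quoted for a 3 TB two-socket system ($)
cpu_cost = 6_500        # assumed midpoint of the quoted $6,000-$7,000 CPU cost ($)
savings_rate = 0.40     # quoted cost-per-gigabyte reduction vs. DRAM

xpoint_cost = dram_cost * (1 - savings_rate)
print(f"memory cost drops from ${dram_cost:,} to ${xpoint_cost:,.0f}")
print(f"memory vs. CPU cost before: {dram_cost / cpu_cost:.1f}x")
print(f"memory vs. CPU cost after:  {xpoint_cost / cpu_cost:.1f}x")
```

At these figures, memory goes from costing roughly three times the CPU to under twice the CPU, which is the sense in which the large-memory configuration becomes cost-effective.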
Q - Blayne Curtis
So we started at a high level and went down to all the different products. Maybe just going back to the high level here, and back to the real question: as you look out to next year, you have a lot on your plate and lots of different vectors. What are the most important ones that you're putting the most time into, to make sure you get them right and continue to grow like you want in your group?
A - Diane Bryant
Yes, so next year we will be launching the next-generation Purley platform, our next big Xeon platform. Next year we will be ramping Knights Landing and then launching Knights Mill; Knights Mill is the Xeon Phi solution targeted at analytics. So our strategic vectors remain the same: drive cloud deployments, which is both the public cloud, where we continue to see growth over 20% across consumer services and enterprise services, and growing private cloud deployments, where I think that Amazon and VMware announcement is critical for letting CIOs relax and realize they're not going to get locked in if they deploy a certain solution on-premise; they have the opportunity to move to the cloud.
So, driving that deployment with our next-generation Purley platform. And then the analytics space: we will have our next-generation Xeon Phi out next year, targeted to deliver a 4x improvement over the current Knights Landing on deep learning workloads, so accelerating the artificial intelligence world. So those three vectors, and then the network; I obviously can't leave out the network, so continued growth of our share in the network as it moves to a virtualized environment. Our three vectors of growth remain constant, and it's just executing against, you're right, a very diverse product line to serve all of those emerging workloads.
Q - Blayne Curtis
Well, great, with that I will leave it there. I appreciate your time, Diane.
A - Diane Bryant
Thank you. I appreciate it.