
Executives

Diane Bryant - Vice President and General Manager, Datacenter and Connected Systems Group

Paul Santeler - Vice President and General Manager, Hyperscale Business Unit, Industry Standard Servers and Software, Hewlett-Packard Company

Jeffrey Snover - Distinguished Engineer and the Lead Architect for the Windows Server Division

Frank Frankovsky - Director, Hardware Design and Supply Chain at Facebook

Jason Waxman - General Manager, High Density Computing, Data Center Group, Intel Corporation

Mark Miller - Press Relations

Analysts

Damon Poeter - PC Magazine

Don Clark - Wall Street Journal

Intel Corporation (INTC) Intel Datacenter Press Briefing December 11, 2012 11:30 AM ET

Unidentified Company Representative

Good morning ladies and gentlemen. Please welcome Diane Bryant.

Diane Bryant

Okay. So, good morning, and thank you very much for joining us for what is a very exciting launch for us, the launch of the Atom SoC, the S1200. With this launch, we are adding to an already extremely extensive portfolio of products for the data center. The Atom S1200 is the world's first enterprise-class SoC, developed and delivered for enterprise applications. It's a single-chip solution.

Did I go too fast? Back up one, I am sorry. Did I hit the button too many times?

So, it is truly an SoC, developed for ultra-low-power, high-density, enterprise-class computing. As you will hear today, we have lots of industry support for our new product line. We'll hear from our great customer HP, from our end user Facebook, and from our industry partner Microsoft. We have over 20 designs already. It's obviously targeted at the microserver space, but we are seeing wonderful pickup in other portions of the data center, including storage and comms.

And by adding the Atom SoC product line to our data center portfolio, we are reinforcing our commitment that whatever the workload is, it will run best on Intel architecture. We will deliver the application-optimized computing solution. It's our commitment across servers, storage and network.

So, it's all about scale. As more consumer services come online, as businesses become more and more dependent on IT for delivering both top-line and bottom-line growth, and as governments around the world continue to invest in IT build-out in support of their countries' economic growth, it becomes all about scale. And when you talk about scale, it drives an incredible focus on total cost of ownership: how do we manage this massive environment at a lower and lower total cost of delivery?

The cloud service providers today are deploying tens of thousands, if not hundreds of thousands, of servers every year. If you look at the Top500 supercomputer list that was just published back in November, the number one Intel-based supercomputer had over 9,000 compute nodes in a single computer. It's just massive scale, and it intensifies the focus on total cost of ownership.

Now, how you measure total cost of ownership depends a lot on which segment you reside in. So, if you are enterprise IT, total cost of ownership starts by driving up the utilization of your existing assets. That's why we've all spent the last eight years virtualizing our servers, aggregating those enterprise applications onto a single footprint through virtualization, driving up utilization and reducing total cost of ownership.

For enterprise IT, total cost of ownership is also about reliability, keeping the business running, and doing so at the lowest possible cost of support and maintenance across that wide variety of applications.

If you are in the technical computing space, total cost of ownership is all about how many FLOPS you can get per dollar, so FLOPS per dollar is the key from an acquisition TCO perspective, and from an operational perspective it's about FLOPS per watt.

And if you are a cloud service provider, you may measure total cost of ownership by performance per TCO: performance per capital dollar spent from an acquisition-cost perspective, and performance per watt from an operational-cost perspective. There is also a very clear total cost of ownership focus around consistency.

Consistency in the data center drives down the cost of support and maintenance. Variation in the data center drives up the cost and so there is a clear focus on reducing the amount of variation, driving a level of consistency across the data center.
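To make those ratios concrete, here is a minimal sketch of the acquisition and operational TCO metrics described above (FLOPS per dollar, FLOPS per watt, performance per dollar, performance per watt). The node figures in the example are hypothetical placeholders, not Intel-published numbers.

```python
# Minimal sketch of the TCO ratios described above. All example figures are
# hypothetical placeholders chosen only to show the arithmetic.

def flops_per_dollar(peak_gflops: float, acquisition_cost_usd: float) -> float:
    """Technical-computing acquisition metric: GFLOPS per dollar spent."""
    return peak_gflops / acquisition_cost_usd

def flops_per_watt(peak_gflops: float, power_watts: float) -> float:
    """Technical-computing operational metric: GFLOPS per watt drawn."""
    return peak_gflops / power_watts

def perf_per_dollar(throughput: float, acquisition_cost_usd: float) -> float:
    """Cloud acquisition metric: e.g. requests/sec per dollar of capex."""
    return throughput / acquisition_cost_usd

def perf_per_watt(throughput: float, power_watts: float) -> float:
    """Cloud operational metric: e.g. requests/sec per watt."""
    return throughput / power_watts

if __name__ == "__main__":
    # Hypothetical HPC node: 500 GFLOPS, $5,000, 350 W.
    print(round(flops_per_dollar(500, 5_000), 3), "GFLOPS per dollar")
    print(round(flops_per_watt(500, 350), 3), "GFLOPS per watt")
    # Hypothetical web node: 2,000 requests/sec, $1,200, 60 W.
    print(round(perf_per_dollar(2_000, 1_200), 3), "req/s per dollar")
    print(round(perf_per_watt(2_000, 60), 3), "req/s per watt")
```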

So, whichever of these segments you are in, there is obviously a very wide range of applications to be supported, and we have a very simplified graph here showing that range. You have applications that are very compute-intensive, applications that are I/O-intensive and applications that require a large memory footprint and memory bandwidth, very memory-intensive, so you have a wide range of applications.

Of course, the objective is to deliver the right server solution stack to support that workload, but it's a clear balance between having the perfectly optimized solution for a given application and having as little variation in the environment as possible.

So, this is Amazon's view of their spectrum of workloads. Amazon started with the standard instance, which you can see down in the bottom left-hand corner, and then added unique instances within their data center environment to support the range of workloads.

So, for instance, up at the top there, when Amazon started supporting high-performance computing as a service, it was very clear they needed to deploy an HPC cluster, so they deployed a Xeon E5 cluster that actually ended up number 42 on the Top500 list. Quite impressive, so it's the perfect instance for high-performance computing workloads, but it would obviously be tremendous overkill for web hosting, at the bottom there. So, it's all about having the right solution for the workload while minimizing total variability; even at the scale of Amazon, there are only seven unique instances across the entire data center.

So, our strategy continues to be that we want to address all of those workloads, all workloads in the data center, server, storage and network workloads, all segments, and to do so we provide a pretty extensive range of product lines. Obviously, at the top there is Itanium for those very high-end, mission-critical applications. We have the Xeon Phi product line that we just launched as a coprocessor, targeted at the high-performance computing space, so very technical-computing-intensive, memory-, I/O- and compute-intensive workloads. We have the Xeon family, three distinct families inside the Xeon brand, to cover the wide range of applications running inside the data center. And then we also have the Core as part of our product lineup, targeted at those less compute-intensive workloads, such as our storage roadmap, so Core makes for a good storage controller at the lower end of the segment.

And for those I/O-intensive workloads, we have made substantial investments over the years in building out a very strong fabric, or networking, portfolio. We've actually been in the Ethernet business for 30 years now, starting with the 10-megabit Ethernet controller, and Ethernet continues to be the backbone of the data center. It's what the data center runs on, it's what the cloud runs on, but we also recognize that there are other fabrics that are critical. For instance, in high-performance computing, you need lower-latency, higher-bandwidth solutions, hence our investment in QLogic and the InfiniBand product line, as well as the Cray interconnect IP and team, in building out a high-end fabric for high-performance computing.

So, we are invested in the fabric side and we have a roadmap of integrating these fabric components into our microprocessor product line consistent with the workloads for each of those segments. And now, today, of course we are announcing the addition of the Atom product line into that broad portfolio of data center products.

So, with the launch of Atom, we see design wins across the full spectrum. In the microserver category, when we talk about microservers we are talking about workloads that are low-compute and low-I/O, lightweight workloads, but you want many, many of them packed into a small space. So density, low power and low cost become the critical attributes, and a good application of that is the dedicated web hosting arena.

On the communications side, we have design wins in control-plane processing for the switch environment, and we actually have a design win that is a conversion off of a PowerPC onto Intel architecture, recognizing, back to low variability, that when you have a consistent architecture running across your data center, across server, storage and network, you lower the total cost of ownership.

And then in the storage space, we also have design wins for the Atom S1200 at the low end of the storage market. As we all know, the amount of stored data continues to grow, and with that massive growth we are seeing additional segments of storage emerge, and so is the need for low-cost, low-power storage solutions; the Atom S1200 is very well fitted for that, and one of our design wins there is actually a conversion from ARM onto Intel architecture, onto Atom. So, over 20 design wins to date, as you can see, across a wide range of customers and a wide range of OEMs, and this is just the beginning. We are excited about more and more design wins coming on board as the benefits of the Atom S1200 become realized.

So, this is our first custom-built Atom-based SoC targeted at the data center, but we have been delivering products into this microserver segment for three years now. As you all know, the trend towards more energy-efficient computing is not new. We've seen it over the past six or seven years: an ever-increasing focus and desire to get lower and lower power and greater and greater compute within that power envelope.

Back in 2009, we launched the category of microservers; we defined an SSI industry specification around it, an open specification inviting the industry to innovate around microservers. We launched our first low-power Xeon processor in 2010 at 45 watts. At that time, our standard Xeon power envelope was 90 to 120 watts, so 45 watts was a pretty significant drop. And of course, this year we launched our low-power Xeon processor at 17 watts, so a significant reduction in power, a significant improvement in energy-efficient computing.

Now, today what we are launching is the Atom SoC processor family, and that Atom processor runs at just about 6 watts as a single-chip solution, so integration of the chipset and the I/O functionality with the processor, a single-chip solution at just about 6 watts. Beyond that 6-watt attribute, it's dual-core, and each core is threaded, so a total of four threads. It is, as I said, the first SoC targeted at the data center, which means it has those critical enterprise features that are simply mandatory to run an application inside a data center environment. It has 64 bits, required for the virtual address space footprint that enterprise applications require. It has ECC memory, so you have a reliable solution that protects your data. And it includes virtualization technology. As I mentioned, we've all spent the last eight years or so virtualizing our environments; no one wants to go backwards. You want to be able to continue virtualizing and get the greatest value out of that capital investment, so we support virtualization, and of course all the different hypervisor solutions, open source, Microsoft, VMware, the full range, run very well on Atom.
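As an aside, here is a minimal, Linux-only sketch of how an operator might verify the enterprise features listed above (64-bit, virtualization support, ECC reporting) on a deployed node. The flag names and the EDAC sysfs path are standard Linux conventions, not anything specific to the S1200.

```python
# Minimal Linux-only sketch: check for 64-bit, Intel VT-x and ECC reporting.
# Flag names come from /proc/cpuinfo; ECC visibility relies on the kernel's
# EDAC subsystem being loaded, so a False here does not prove ECC is absent.

from pathlib import Path

def cpu_flags() -> set:
    """Collect CPU feature flags from /proc/cpuinfo."""
    flags = set()
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags

def main():
    flags = cpu_flags()
    print("64-bit (long mode, 'lm' flag):", "lm" in flags)
    print("Intel VT-x ('vmx' flag):      ", "vmx" in flags)
    # ECC is a platform/memory-controller property; EDAC exposes memory
    # controllers under sysfs when ECC error reporting is active.
    edac = Path("/sys/devices/system/edac/mc")
    has_ecc_reporting = edac.exists() and any(edac.glob("mc*"))
    print("ECC reporting (EDAC present): ", has_ecc_reporting)

if __name__ == "__main__":
    main()
```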

Density is the key metric around microservers: packing many, many compute nodes into a single rack footprint. If you do a contrast and compare, take a rack at 15,000 watts: you can include well over 1,000 nodes. Some of our OEMs have gotten very creative and exceeded that number quite substantially, but that's 1,000 nodes or more of compute capacity in a single 15,000-watt rack. Compare that to what we are all used to, a traditional rack-mount server, an RMS server, half-wide: you can get only one-tenth or even one-twentieth of the number of compute nodes into that same form factor. So, a dramatic increase in compute density. And of course, it goes without saying that it is Intel architecture, so it runs the pervasive range of applications, middleware, operating systems and virtualization solutions that exist across the industry.
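For a rough sense of the arithmetic behind that comparison, here is a back-of-envelope sketch. The 15,000-watt rack budget and the "one-tenth to one-twentieth" ratio come from the talk; the per-node power budgets are assumed values for illustration only.

```python
# Back-of-envelope rack-density arithmetic for the comparison described above.
# Per-node power budgets are assumptions (SoC plus memory, storage, NIC and
# board overhead), not measured values.

RACK_POWER_W = 15_000

ATOM_NODE_W = 14        # assumed: ~6 W SoC plus platform overhead
RMS_NODE_W = 250        # assumed: traditional half-wide rack-mount server

atom_nodes = RACK_POWER_W // ATOM_NODE_W
rms_nodes = RACK_POWER_W // RMS_NODE_W

print(f"Atom microserver nodes per rack: ~{atom_nodes}")   # ~1,071
print(f"Traditional RMS nodes per rack:  ~{rms_nodes}")    # ~60
print(f"Density ratio: ~{atom_nodes / rms_nodes:.0f}x")    # lands in the 10x-20x range
```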

I was going to show you; we actually have one here. It's very small, but this is the Atom S1200. So, given that we have both the Xeon and the Atom product lines in our microserver segment, I am often asked which one is better, and of course the answer is: it depends. It depends on what your workload is and what the usage is. To demonstrate that it depends, we have here a web hosting example.

So, on the right you can see a rack that is filled with Atom S1200 SoC microserver solutions, and on the left we have the low-power Xeon microservers packed into that rack. Let's talk about what the different usages would be. Let's say you are a web hoster, so you are trying to host many, many websites in your single rack environment; small business is a great example.

I'll pick on Lisa, so we have Lisa's flower shop. It's a small flower shop, but it has to have a website, it has to have a web presence. Unfortunately, that website doesn't get hit more than maybe once a week if she is lucky, but she wants to have her own dedicated web server, and she is hosting it at godaddy.com. Okay? So, godaddy.com wants to pack as many of those Lisa's-flower-shop small businesses into a single footprint as possible. Go to the other side and you have a web tier with much higher compute demands and much higher transaction rates. You need the performance, but you still want the density in your web tier, and so you have the Xeon microserver as the optimal solution.

So, on the right-hand side, with the Atom-based microserver, you can pack five times more dedicated web server nodes into a single rack. On the left-hand side, with the Xeon microserver, you can get up to double the number of transactions per minute per rack. So, based on the workload, based on the usage, you are going to choose the Atom-based microserver or the Xeon-based microserver.
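As a small illustration of that "it depends" choice, the sketch below applies the 5x site-density and 2x transactions-per-minute multipliers from the slide to hypothetical baseline numbers; the baselines are assumptions made only to keep the arithmetic concrete.

```python
# Sketch of the "it depends" comparison. The 5x and 2x multipliers come from
# the slide; the Xeon baseline figures are hypothetical placeholders.

XEON_SITES_PER_RACK = 2_000       # assumed baseline: dedicated sites per Xeon rack
XEON_TPM_PER_RACK = 1_000_000     # assumed baseline: transactions/minute per Xeon rack

ATOM_SITES_PER_RACK = 5 * XEON_SITES_PER_RACK   # "five times more dedicated web server nodes"
ATOM_TPM_PER_RACK = XEON_TPM_PER_RACK // 2      # Xeon gives "up to double the transactions"

def pick_rack(workload: str) -> str:
    """Pick the microserver flavor by what the workload actually needs."""
    if workload == "dedicated_hosting":   # many tiny, mostly idle sites
        return "Atom S1200 rack"
    if workload == "web_tier":            # fewer sites, high transaction rates
        return "Xeon low-power rack"
    return "depends on the workload"

print(pick_rack("dedicated_hosting"), "->", ATOM_SITES_PER_RACK, "sites per rack")
print(pick_rack("web_tier"), "->", XEON_TPM_PER_RACK, "transactions/minute per rack")
```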

So, for greater insight into what it means to create a system that directly benefits and is valued by the end user, I am going to invite up onto the stage Paul Santeler. Paul is a Vice President with HP, so come on up and tell us what you guys are doing.

Paul Santeler

Thanks. I am excited to be here. It's kind of interesting that this is actually the second Intel launch I have been at in the last 30 days with Diane. I found myself in Salt Lake City about 30 days ago participating in the launch of the Intel Xeon Phi, a really cool coprocessor. It's interesting, because these two parts are, exactly as Diane talked about, on opposite sides of the spectrum.

You have Xeon Phi, which is a very high-power, high-memory-bandwidth, high-I/O, top-end part of the line that enables some really cool high-performance technical computing solutions. And then on this side, there's something exactly the opposite. I think what it points out is that in today's environment, as everything goes to these large-scale applications at scale, we're now at a point where it's possible, and also a good idea, to be designing processors, systems and complete solutions that really focus on those various different applications. One size no longer fits all, and that's what you see going on. Intel has a good broad range in the middle, but now there are certain applications where something like Xeon Phi is required, and there are other applications where Atom is required, okay? And that's kind of an interesting dynamic that's happening in our marketplace.

So, this product, I really love it. I think this product is exciting in a lot of different ways, and Diane already covered these, but first off, what Intel has done is take a product that was designed and developed to address long-battery-life applications, which is what makes it different, and bring in all of the core requirements for a server. So, this is a true server-class part built on a very energy-efficient core. Fundamentally, it has 64-bit processing, it's got ECC capabilities, and it's integrated into a very tight package. Also, as Diane covered, it supports x86, and this is important for a lot of different customers, especially a host provider.

A host provider has to provide an application for Lisa's flower shop, but Lisa may choose to run any application that she wants, and therefore the operating system they want to load on there is very dependent on what Lisa's flower shop needs. So, we are excited about this part, we are excited about where it is, and now that Intel has it in production, we can start shipping, and have been shipping, beta products to our end users so that they can try it.

Now, it's important to look at, as Diane indicated before, why this part is exciting. Why do I really care? If you look at some of the really large application providers, the products they have to deploy and the applications they are running, these large hyperscale web applications are very different. If you look at the front-end web tier, if you look at memcached, if you look at offline analytics, those solutions do not require a lot of CPU. And if you look at the chart on the left, what we've shown is what we're seeing from a performance-per-watt perspective, what you would call the power efficiency.

So, you are not going to run Atom head-to-head with the Xeon and get more performance out of an Atom, but what you will get is more performance per watt than Xeon. And so, for the right applications, on the left you can see you get 2x the performance per watt with an Atom.

Now, when you are running a huge application at scale and you don't want to have to plow a mountain over to build 3,000 new data centers, the power efficiency of the part really matters. And when you combine a very power-efficient part put together at scale in something like a Moonshot architecture, what you end up with is being able to lower the overall purchase price, lower the overall cost for the customer to run that solution, and actually reduce the data center footprint, which is incredibly important to all of us. So, really, it's almost a triple play.

One of the customers that received the first Gemini system was a company called LeaseWeb, and LeaseWeb is one of those host providers that has to serve web pages for Lisa's flower shop. It's really interesting to see that their first impressions were very positive. For them, being able to increase the density and lower the power that it takes to run that application is incredibly important.

So, overall, we are excited about what our customers are seeing and we are excited about what we have. I think the future around this product is very bright. The balance of being able to put the new Intel Atom, which has very good power efficiency, together in a Moonshot federated architecture allows us to do something very different and unique for these customers that are running this new wave of applications at scale.

I look forward to seeing you next quarter, when we will actually be introducing our next-generation Gemini product, so until then, thank you very much.

Diane Bryant

Thank you. Very nice. Thanks so much, Paul. Appreciate it.

So, adding Atom to our data center product line is truly seamless. There is a massive installed base of applications, middleware and operating systems that all simply run on the Atom SoC, and we have large software providers like Red Hat and Oracle that can speak first-hand to the benefits of having a single hardware architecture running in your data center. That common architecture lowers the investment level for the software developers, not just the initial development but the ongoing support cost; it leverages the broad range of developers that exist around Intel architecture and x86, as well as the development infrastructure, frameworks and testbeds. And maybe more importantly, it lowers the total cost of ownership for those that are managing and running large data centers; that consistency of architecture, that consistency in the solution stack, drives down support costs and drives down total cost of ownership. So now I would like to invite on stage someone who knows well the investment required to develop enterprise-class software solutions and who is running data centers at massive scale.

I would like to invite Jeffrey Snover up on stage. He is a Distinguished Engineer at Microsoft in the Windows Server division.

Jeffrey Snover

Thank you.

Diane Bryant

Thank you so much.

Jeffrey Snover

I am really excited to be here at the launch of Intel's Atom processor. Microsoft has been very successful with the Atom processor. Windows on Atom has produced great low cost energy efficient client systems, but today we now have an Atom processor appropriate for the data center, so we are really quite excited about that.

You know servers in data centers consume a lot of power and that power generates heat and therefore it has to be cooled, which requires more power, okay? So, if we can reduce the power consumption of the server in the data center, we can reduce our environmental impact and we can save a lot of money.

Now that's important to Microsoft, because we have many hundreds of thousands of servers running in our data centers. These servers are running our services like Azure, Bing and Office 365, so it makes sense for us to invest heavily in power efficiency. That's what we did in Windows Server 2008 and Windows Server 2008 R2, and we continue those heavy investments in Windows Server 2012, where we invested in improved core parking algorithms and in collaborative power management, where the operating system negotiates with the hardware to achieve power objectives. So, this is really exciting stuff. Obviously, we were very excited when we heard that the new Intel Atom processor was going to support the features required to run a server operating system: things like virtualization, ECC memory and of course 64 bits.

The benefits of a large, flat 64-bit address space are just critical for a server operating system, so much so that Microsoft stopped supporting 32-bit chips a couple of releases ago. So, we are very excited by the fact that we now have a very low-energy part that can handle the demands of a server. The neat thing is that Windows Server and all of our partners, tools and products will be able to run on a low-cost, energy-efficient Atom processor, but they will also be able to run on high-end Intel-based servers.

The new version, Windows Server 2012, runs on machines that have up to 640 cores and up to four terabytes of RAM, and it supports every server configuration in between, okay? So that's the power of compatibility. Windows Server 2012 is one of the most dramatic transformations of the operating system we've ever had. We are no longer an operating system for just a server; we're an operating system that is cloud-optimized and can run on single servers, on racks of servers or on data centers full of servers, and manage the servers, the network devices and the storage devices necessary to bring all that together.

So, indeed, it's one of the most transformative releases we have ever had, and yet adoption is going incredibly well. Why is that? The answer is that Windows Server 2012 has a very high degree of compatibility with previous versions of Windows. What that means is that the things that used to work continue to work, and therefore customers find it safe and easy to adopt.

They are able to adopt it, the things that used to work continue to work, and they are able to pick up and use the new features and capabilities at a time of their choosing, and so this is incredibly powerful. And again, that is one of the reasons why we are so excited about the Atom chip: because it's compatible.

It's compatible with the Xeon processors, and so this incredible dynamic range of servers means that our customers will be able to pick the right server for their needs, pay just enough to get what they need, and then Windows, Windows Server, System Center, the entire Windows ecosystem, will be able to run on that server. So, we are really excited about this.

Once again, Intel and Microsoft are working together to provide the best platform for our customers and to make it safe and easy for those customers to adopt, so thank you.

Diane Bryant

Thank you and thank you for the great partnership. Appreciate it.

Okay. So, as the data center continues to build out around the world and as the focus on total cost of ownership continues, we will continue to advance our low-power Xeon processor line and we will continue to innovate and advance our new Atom SoC product line. As you can see, our next-generation Atom SoC takes advantage of Intel's latest process technology, 22 nanometers. We now enjoy a four-year lead in our process technology over the rest of the industry, which means we have the most energy-efficient transistors on the planet, and it allows us to integrate more and more capability onto a single component at ever lower power levels. We will demonstrate that as we move across our Atom SoC product line.

And when we speak of scale and continued data center build-out, you can't think of anyone but Facebook, so we are very happy to have Frank Frankovsky here. I struggled to get that out. Frank is with Facebook, and he has done an amazing job of making total cost of ownership a competitive advantage for Facebook, so thank you for joining.

Frank Frankovsky

Good morning. Thanks for that intro. As Diane said, we do face unprecedented scale at Facebook, and that's one of the reasons why we are so highly motivated to figure out the most efficient way to scale the infrastructure and support all the people that are using Facebook.

As you've seen, we've had relatively quick growth in the number of people who are actively engaged on Facebook, but in addition to that, just the new services that we roll out to those people drive a huge amount of computing needs. So, about three years ago, we started to take more control over designing our infrastructure, and that's why we launched the Open Compute Project as well, to share that with others.

So, before I talk a little bit about SoCs and why we are excited about the direction that Intel and others are taking here, I'll talk a little bit about the scale of what we are facing at Facebook and why we are so motivated to get really involved in the guts of the infrastructure itself. You've seen that 1 billion-plus users are actively engaged. The really cool thing here is that it's not just the raw number of people, but the number of minutes they are spending on the site every day, engaged, sharing things with the friends that they care about.

There are 140 billion friend connections; we call this the social graph. So it's not just that 1 billion people are on the site, actively engaged every so often; connecting those 140 billion friend connections, the social graph, connecting all those people with the most relevant content that their friends are sharing, is a huge computational challenge.

There are lots of photos on the site today; I believe it's probably the largest archive of photos anywhere in the world, and the number of photos added every day is also pretty stunning. So that comes back to what was mentioned earlier about right-sizing the CPU to the workload that's required.

I have to thank Intel, actually; the Xeon-class processors have helped us scale Facebook very effectively so far, but we've applied what I would call brawny cores pretty much unilaterally across our environment. What's interesting about the smartphone-class CPUs is that we can right-size them to the needs of, say, a photo storage tier, for example. Maybe that's a great place to start, where we don't need a brawny core; what we need is a smartphone-class CPU that also includes 64-bit and ECC.

The amount that people are sharing is also growing tremendously, right? So, it's not just the number of users or the number of minutes engaged; the amount of content they are sharing every day is also exploding at an exponential pace. And on top of that, we have lots of ranking algorithms that help us decide which are the most relevant bits of content your friends are sharing, how your news feed gets ranked and presented to you.

And check-ins, this is another interesting thing: when people show up, and hopefully some of you that showed up today checked in here at the Intel launch event, all of those bits of data also get rolled into all this content. That clearly requires a lot of computing power, a lot of disk space, a lot of networking technology. And again, that's why we are so interested in this space, and why we are so interested in how we get the right CPU applied to the right workload and really maximize efficiency.

The metric that we use internally, we call it RCUs, or relative compute units, and that's really just a ratio of how much useful work you can get done per watt per dollar. That is really the only metric that matters as you are scaling the site: as long as you can provide a reliable, high-performance service, it really comes down to how efficiently you can deliver that service, and that's just the useful-work-per-watt-per-dollar equation.
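One way to write that metric as a formula, reading the description above (the exact internal definition is not given in the talk), is:

$$ \mathrm{RCU} \;\propto\; \frac{\text{useful work}}{\text{watts} \times \text{dollars}} $$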

So, to just further what Paul was showing you about different CPUs for different workloads, let's normalize. Envision for a minute that I am going to get the same amount of useful work done, that I am going to present the same amount of data to end users with two different types of CPUs. I have wimpy cores, the SoCs or smartphone-class CPUs, on one side and brawny cores on the other. Just for the sake of comparison, the same amount of work gets done on the back end of the infrastructure, so that's normalized.

How many cores might it take? If I am using really low-powered cores, it might be two or three times as many cores required for an Atom class versus the Xeon, for example. So on the surface it's like, wait a minute, why are you going to use more CPUs to do the same amount of work? But again, remember it's an equation: how much useful work per watt, per dollar. If you look at the amount of watts you would consume in order to deliver that work with wimpy cores, at least based on some measured and some predicted results that we've seen, we believe it is going to be a half to a third as many watts, right? So, the current CPU that we are talking about today, the Atom-class CPU, is a very low-powered CPU; some of these will range all the way from 6 watts up to, call it, 30 watts or so, but that's still a third of the TDP, the thermal design power, of the Xeon-class parts.
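A short sketch of that normalized comparison, using only the ratios quoted above (two to three times the cores, a half to a third of the watts, same useful work); the baseline core count and wattage are assumptions for illustration, not measured Facebook numbers.

```python
# Sketch of the normalized wimpy-vs-brawny comparison. Only the ratios come
# from the talk; the absolute baseline figures below are assumptions.

BRAWNY_CORES = 16          # assumed baseline count of Xeon-class cores
BRAWNY_WATTS = 1_000       # assumed watts to deliver the reference workload

for core_mult, watt_frac in [(2, 0.5), (3, 1 / 3)]:
    wimpy_cores = BRAWNY_CORES * core_mult
    wimpy_watts = BRAWNY_WATTS * watt_frac
    # Same useful work in both cases, so the efficiency gain is simply the
    # inverse of the power ratio.
    gain = BRAWNY_WATTS / wimpy_watts
    print(f"{core_mult}x cores at {watt_frac:.2f}x watts -> "
          f"{wimpy_cores} wimpy cores, {wimpy_watts:.0f} W, "
          f"{gain:.1f}x useful work per watt")
```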

So, again, we are facing unprecedented scale requirements, not only in the number of users but in how much they engage, how much content they share, and how efficiently we want to scale this. That's why we are excited about Intel and others really promoting this move towards system-on-chip, right? When you take a really efficient manufacturing process to deliver CPUs at this low a power and you integrate the companion chipsets and other things into a single chip package, you dramatically drop the power required to run that chip. And theoretically, you also drop the cost of those chips significantly; obviously, enterprise-class CPUs are manufactured at a different volume than smartphone-class CPUs, and we hope that ends up being a positive for the useful-work-per-watt-per-dollar equation.

So, thank you very much for coming today. Appreciate the time to talk to you.

Diane Bryant

Thank you, Frank. Wonderful. Thanks. Thanks for all the innovation.

Okay. So, in conclusion, I want to thank you for attending our launch of the world's first enterprise-class, low-power SoC, the Atom S1200. We are delivering in our product all of the features that IT and the enterprise demand: 64 bits, reliable memory through ECC, virtualization capability and clearly superior performance per watt as well.

We continue to support the microserver space with even greater levels of density as we move from the low-power Xeon product line to adding the new Atom SoC product line. And with the S1200 we also address these very light workloads that are not just in the microserver space, but also in comms and storage.

So, as hyper-segmentation of the data center continues, as more and more workloads evolve and emerge, we are committed to delivering the best product line, the best solution targeted at each one of these workloads, delivering to the data center the best total cost of ownership, the best performance per watt, the optimal solution for the given workload. So, it is an application-optimized computing world that we are supporting.

So, with that, I will open it up for Q&A, and I would like to invite Jason Waxman on stage. Jason is the general manager of our Cloud Platforms Group, so Jason, come on up and we'll open it up to the audience for any questions you might have. I am sorry, and Mark Miller from our press relations group is also on stage, and he will handle the Q&A logistics.

Question-and-Answer Session

Mark Miller

Yes. Yes. So, in the room, we have Josh walking around. Just raise your hand. He will get you a microphone to ask your questions. We'll also be taking questions from the web, so why don't we go ahead and get started in the room. Josh, we have somebody here.

Unidentified Analyst

Morning, Diane. While we're on the subject of SoCs, if I look at next year, you have out-of-order execution coming to the Atom core, and I wondered, as you improve the performance of Atom, how do you differentiate between a very power-efficient Xeon and Atom? Where does the dividing line hold in terms of where each fits?

Diane Bryant

I am glad you raised that, because we do want to make it clear that we will not hold back performance in any of our product lines. We are making significant micro-architectural investments that will come into the Avoton product line to boost the performance of that Atom core, and we'll put more cores down, integrate fabric and integrate other capabilities onto that die. But we also continue, as you know, to invest quite heavily in our Xeon product line, so it too will continue to reach greater and greater performance levels. So we continue to see a differentiation in performance and performance per watt between the Xeon product line and the Atom SoC product line, with both of them continuing to advance, and the magic, as we've all been saying, is that it's all about workloads, right?

There are some workloads that will continue to benefit from many smaller cores even as those cores become more capable and other workloads that will continue to benefit from our Xeon product line, so we will see them both continue to move up, but thank you for acknowledging the investments we are making in Atom.

Jason Waxman

Great. Thanks. (Inaudible), do we have one from the web?

Unidentified Company Representative

Yes. So, first question: when is the Atom S1200 available, and how much does it cost?

Jason Waxman

It's available today; we are shipping production parts to customers, assuming they are ready. There are a couple of different price points, but it's as low as $54 in quantities of 100, just as we published.

Unidentified Company Representative

Okay. And a follow-up question: how does this Atom processor compare to similar-class ARM servers?

Diane Bryant

Okay, I'll get started, and he can chime in. So, today, there are no ARM-based enterprise-class servers, right? As we keep saying over and over again, if you want to put a compute platform into a data center, you need the fundamental capabilities that enterprise applications demand: the reliability, 64 bits, virtualization capability. So today that comparison is not an apples-to-apples comparison.

We obviously take all competition very seriously. We know that those investments are being made, and we believe we have a very good view into the alternative architectures. We believe we've got a substantial performance and performance-per-watt advantage, and it's not just about the compute capability; from an overall SoC integration perspective and in capabilities at the system level, we believe we have a very compelling solution.

Jason Waxman

Anyone in the room? Yes.

Damon Poeter - PC Magazine

Yes. Damon Poeter with PC Magazine. There are microserver builders out there who use Atom already. Do you see those products transitioning to your new purpose-built Atoms, or...?

Jason Waxman

Yes. I think most, if not all, of them will transition to the SoC. I mean, if you compare it to the N570, which was the previous generation that Diane showed earlier, you are talking about lower power, more integration and much higher performance. So, really, it's kind of a no-brainer to move to the next generation, and that's our goal. The roadmap that we've shown has continued to deliver more value in each generation, and so for people that like that class of part for those sorts of workloads, we expect they will upgrade.

There in the front.

Unidentified Analyst

(Inaudible). Did I understand correctly that Avoton is going to include a network fabric?

Diane Bryant

Yes. We will integrate the fabric into the processor with Avoton.

Unidentified Analyst

Okay. Following up on that point, then: do you see OEMs signing on to and using Atom now, or do you think they will prefer to wait until they can get that benefit?

Jason Waxman

Well, we have over 20 design wins that are using it now, and we do have PCI Express integrated into Centerton. And by the way, there are people that are going to do PCI Express switching in their system designs, so absolutely, people are using it now, and some people will take advantage of the enhancements that we make in the next generation.

We'll go back to the web. [Ronic], are there any others?

Unidentified Company Representative

Yes. So, will the 14-nanometer generation of Atom be a tick or a tock? We don't know whether the 14-nanometer part will be the same architecture, an extension of the 22-nanometer one, or an entirely new architecture.

Diane Bryant

Yes, so look. So, I am not quite sure I got the question quite correctly, but the…

Unidentified Company Representative

The question, pretty much, is whether 14 will be a shrink of 22 or whether it will be an entirely new architecture.

Jason Waxman

Yes. Basically, we'll talk more about that part later. We are excited about it; it will be another exciting announcement. Unfortunately, today we are not really going to talk about what that product looks like.

Diane Bryant

I think the clear statement we are making is that we have a commitment to this roadmap at a one-year cadence. We will continue to deliver innovation every single year, moving across that spectrum.

Jason Waxman

I would tell you it's very cool. That's as far as I'll go. Anyone else in the room? Yes.

Unidentified Analyst

I was wondering how big a percentage of the server market you see the new Atom taking. (Inaudible).

Diane Bryant

So, there is a lot of speculation as to how big this market is. We have our opinion, but it's probably best to ask the industry analysts for an independent third-party assessment; we don't want to come across as biased. You hear ranges anywhere from single digits of the total server market to 10%, maybe 15% to 20%. We believe that the entire microserver segment will stay in the 10% range. And as you can see, and I think Frank from Facebook did a nice job of showing which workloads it applies to, there's just a finite number of workloads in that space, but I think the industry analysts are doing a good job assessing it, and we'll see as it evolves.

Jason Waxman

Yes. And just to add on to that: we spend a lot of time trying to figure out if we can predict the future, and unfortunately our magic eight ball doesn't work quite as well as it should, so what we decided to do is say, you know what? We're just going to deliver. It's very simple. We're going to deliver the best products in the Atom class segment and the best in Xeon. To your point, let the lines blur, let the customer choose; we are much more focused on getting the products out there than on trying to figure out how big they are going to be.

While we're at it, is there anyone else, (Inaudible), on the web?

Unidentified Company Representative

No.

Jason Waxman

Any other last questions? Yes?

Unidentified Analyst

This is [Gaber Moe] from BMO. I just have a follow-up question: you had showed two systems with Intel CPU revenues of roughly $30,000 to $35,000 from the Xeon and the Atom. Do you expect they have a similar gross margin profile?

Diane Bryant

Yes. I am glad you raised that, because I missed that point when I presented it. I often get asked: you obviously would prefer the Xeon space and would not want the Atom space to grow. The point we made on the slide is that from an Intel revenue perspective it really doesn't matter, because of the density of compute; our revenue off of either one of those racks, Xeon versus Atom, is really quite a wash. In fact, the Atom is slightly greater. Across our entire server product line, each of our products has a slightly varying margin profile, but holistically it is still a very good margin for us, and we are absolutely fine if the Atom SoC does very well in the market.

Jason Waxman

Okay. One last question.

Don Clark - Wall Street Journal

Excuse me, Don Clark from Wall Street Journal. Was that dollars per rack?

Jason Waxman

Yes. It was the revenue per rack, if you looked at it, and we had an assumption of how many Atom systems could fit into the rack and then how many Xeon systems.

Don Clark - Wall Street Journal

Do you happen to have those numbers off the top of your head as to how many cores per rack or chips per rack?

Jason Waxman

Yes. So, granted, that didn't even have the 4,000 that Diane was talking about, so you can do even better, but in that particular instance we had assumed about 560 Atom servers in the rack on the right, and then on the left-hand side it was roughly a fifth of that, because we said 5x, so about 100; rough order of magnitude, 560 to about 100. All right, well, thank you again. Thank you to everyone. I am sorry.
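As a rough consistency check of those figures, the sketch below multiplies the stated rack counts by per-CPU prices; the $54 Atom figure was quoted earlier in the Q&A, while the Xeon per-unit figure is an assumed number chosen only for illustration, not one given in the talk.

```python
# Back-of-envelope check of the revenue-per-rack answer: ~560 Atom servers vs
# ~100 Xeon servers per rack. The $54 Atom price was quoted earlier; the Xeon
# per-CPU figure is an assumption for illustration.

ATOM_SERVERS_PER_RACK = 560
XEON_SERVERS_PER_RACK = 100

ATOM_CPU_PRICE = 54       # quoted earlier: "as low as $54"
XEON_CPU_PRICE = 325      # assumed per-CPU figure, illustration only

atom_rack_revenue = ATOM_SERVERS_PER_RACK * ATOM_CPU_PRICE   # ~$30,240
xeon_rack_revenue = XEON_SERVERS_PER_RACK * XEON_CPU_PRICE   # ~$32,500

print(f"Atom rack CPU revenue: ~${atom_rack_revenue:,}")
print(f"Xeon rack CPU revenue: ~${xeon_rack_revenue:,}")
# Both land in the $30,000-$35,000-per-rack ballpark raised in the question,
# which is why the two lines come out "quite a wash".
```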

Unidentified Analyst

I have one last question. Will integration of things such as fabric switches be a focus on the high-end Xeon line as well?

Diane Bryant

Yes. We have committed to integrating fabric across our entire product line over time. Integration of the fabric is the next obvious step. Intel's cadence has been to innovate and then integrate, so we have plenty of room to integrate additional capabilities like the fabric, and from an overall performance standpoint, eliminating that I/O bottleneck is the next step across our product lines.

Jason Waxman

Great.

Diane Bryant

Thank you very much for attending.

Jason Waxman

Yes. Thanks everyone for attending and thank you everyone on the web. Have a great day.

Diane Bryant

Okay. Thanks.
