Intel Corporation (NASDAQ:INTC)
March 06, 2012 12:00 pm ET
Diane M. Bryant - Vice President and General Manager, Data Center and Connected Systems Group
Mario Müller - Vice President of IT Infrastructure, BMW Group
Derek Chan - Head of Digital Operations, DreamWorks Animation
Unknown Executive -
Alex Rodriguez - Vice President of Systems Engineering and Product Development, Expedient Communications
Ladies and gentlemen, please welcome Diane Bryant, Intel Vice President and General Manager, Data Center and Connected Systems Group.
Diane M. Bryant
Thank you. So thank you so much for joining us on what is a very big day for us, the launch of the Xeon Processor E5 family. The E5 is truly the heart of the Data Center, and we mean that very explicitly. And when we say Data Center, we mean all data centers, whether it is the public cloud service provider data centers, the foundation for fueling those services; whether it's enterprise IT data centers; telco service provider data centers; or the home of large high-performance computing clusters. All of those data centers, all of the infrastructure inside of those data centers is what we're targeting with the E5 family.
If you think about it, we are in a period of incredible growth. Today, there's 2.2 billion Internet users worldwide. That's projected, as you've probably heard, to go to 3.1 billion users by 2015. You think about the growth in the number of devices -- total number of devices, the growth in the number of devices per user continues to grow. And as we have this massive innovation on the client side, we equally have innovation inside of the Data Center side with technologies such as cloud computing.
Cloud computing, in and of itself, allows new services to be rapidly invented and rapidly deployed to all of those devices. So you can argue the cause and effect is the innovation in cloud and in devices, enabling all these new usages or are these new usages driving the innovation in the cloud and in the devices. But either way, the result is an ever-increasing demand on the Data Center infrastructure.
Yes, the number of innovations in new usages is quite impressive, that pace of innovation in new usages, new services, new capabilities that are being delivered, both services to the consumer side, as well as to business. If you think about it, right, we used to have a bank in every town, then we had an ATM machine on every corner, and now we have a banking solution on our smartphone in our pockets, just a massive breadth of new usages and new capabilities.
Another great industry that has shown remarkable innovation, completely transforming itself thanks to technology innovation, is the automotive industry. You may have heard that last week at Mobile World Congress, Intel announced a $100 million venture capital fund to fuel innovation around the automotive industry. Automobiles are becoming yet another mobile device, consuming services on the go. So whether these services are for consumers or for business, whether it's for entertainment, for convenience or for improved productivity, they all require a connection back to the Data Center. They all drive and fuel Data Center growth.
On the IT side of the house, enterprise IT is transforming itself once again. I just last month finished a 4-year stint as Intel's CIO. And just in those 4 years that I was CIO, I saw remarkable change in the role that IT plays. IT transforms itself every 6 to 8 years, so this is nothing new. But what I saw over those 4 years is a dramatic shift in the role that IT plays. The old days of IT as a support organization -- delivering homogeneous services to the business against a standard Service Level Agreement, independent of the line of business; the organization that keeps the network up and closes the books each quarter -- those days are gone.
Today, IT is truly inextricably linked with the business. IT delivers the business solutions. IT delivers the scale to the business to allow top line growth, and IT delivers the efficiency, the automation and the speed that deliver bottom line growth to the company. The pace of business continues to increase: worldwide, 24/7, real-time decision-making, all automated, all online. It's very clear that businesses today rely on IT more than ever, again driving the demands on the Data Center and on IT infrastructure.
And so all this translates to scale. IT must scale. Again, whether you're talking about the public cloud service providers, core enterprise IT, telco, or high-performance computing on the scientific side of the house, they all require the same capabilities, the same scale, the same innovation. We have to continue to deliver greater responsiveness: pure performance, responsive, on-demand capabilities. Energy efficiency: power is and will continue to be a constraint in the buildout, so a focus on energy efficiency is demanded across the entire spectrum of data centers. Security: we're living in an environment where the security threats just continue to rise, and having the confidence of end-to-end security -- knowing that your corporation's data and the personally identifiable information of your employees, consumers and customers is safe -- is key. Self-service: IT has got to be intuitive, easy to use, available -- a complete transformation in the way those services are delivered, whether it's consumer or business. And then, of course, it's all about scale in a balanced fashion across the Data Center: scale across servers, but also storage and network. That balanced environment, that balanced scale, lets us continue to deliver more and more to our customers.
I can't think of a better example of a technology innovator that has transformed and continues to transform the experience you feel when you're in your car than BMW, and so I'd like to invite up on stage Mario Müller, who's Vice President of IT Infrastructure, covering both the design and the operations side. Mario, please come up and tell us how you're using the E5 and the results. Thank you.
Thank you very much, Diane. So good morning, ladies and gentlemen, a very warm welcome also from my side. The BMW Group is one of the most successful manufacturers of automobiles and motorcycles in the world with its BMW, MINI, Husqvarna Motorcycles and Rolls-Royce brands. As a global company, the BMW Group operates 25 manufacturing and assembly facilities in 14 countries.
The BMW Group concluded 2011 with its best sales results ever. Worldwide sales across all of our brands rose by 14.2% and reached a total of nearly 1.67 million vehicles. The company has therefore strengthened its position as a leading provider of premium vehicles. The success of the BMW Group has always been built on long-term thinking and responsible action. The company has therefore established ecological and social sustainability throughout the value chain, comprehensive product responsibility and a clear commitment to conserving resources as an integral part of its strategy. As a result of these efforts, the BMW Group has been ranked industry leader in the Dow Jones Sustainability Indexes for the last 7 years.
Coming to our IT: the BMW Group IT is 99.9x percent -- I do not know the x exactly -- based on Intel machines in our Data Center, and the same, 99.9xx, on our client systems. We have roughly 100,000 clients in our organization. We have 2,700 IT employees in the company and many projects there -- 50,000 mobile phones, 3,800 smartphones right now, and the numbers are growing and growing and growing. We have, of course, huge HPC clusters in our company; all of the development there for computer-aided design and crash simulation tests, and many things more, relies on very, very strong HPC systems. And therefore, we are lucky to use now the E5, especially the 2660, in our company. That gives us the performance we need, the scalability we need, the I/O throughput and, most important also, the security. So those are the things that we need in our company.
Besides that, machine-to-machine communication. Our vehicles are connected to our cloud, our internal cloud that we have there at the BMW Group. So we offer many services to you wherever you go with your car. There is Internet connection available in the car. There are TeleServices available that give you, if you have any issue with the vehicle, direct service functionality. We have BMW tracking of cars. We have connected navigation and intelligent drive -- that means there's real-time traffic information also coming to the car -- and everything is handled in our own Data Center. We have there a connection right now with roughly 1 million vehicles, with 1 million requests a day and with about 600 megabytes of data volume, but that will grow heavily. Soon, we will have more than 10 million vehicles connected, and that will lead us to 1 terabyte of data volume a day. So therefore, Diane, it's good that we now get the E5 into our Data Center.
Diane M. Bryant
So I have to say thank you very much for your time.
We're very happy to help you with that problem. Thank you.
Mario Müller
Thank you, Diane.
Diane M. Bryant
Thank you, Mario. So we are continuing on the path of delivering on our cloud vision for 2015. If you think about the cloud and why it is such a big deal, why we all spend so much time talking about it, it is one of those examples of a true win-win. The users of the cloud, whether consumers or business, get the wonderful advantage of on-demand, instant availability. And on the IT side of the house, the cloud delivers reduced total cost of ownership by driving up utilization with virtualization and by reducing OpEx costs through automation. You drive down the cost of running IT. So it's a big win-win. But there are still limiters today that get in the way of cloud adoption, namely interoperability. None of us want to get locked back into a proprietary solution stack.
The second issue is obviously increased security concern in a cloud environment. You're operating in a multi-tenant environment, so by definition, your security risk level goes up. And then there are also regulatory requirements. We need to attest, for all of our Sarbanes-Oxley obligations, that all of our applications are running in a controlled environment. Internal audit likes to walk in and see an application running on a server, sitting in a Data Center, all very controlled and contained. And when you tell internal audit that the app is floating around in the cloud somewhere, it stresses them out a bit. So all these issues are limiters today to broader and broader deployment in the cloud environment. And that's what the Cloud 2015 Vision is all about: eliminating those limiters, addressing those limiters through standards.
For instance, a big focus is on the federated cloud: standards that will give you the confidence to move your application and your data from one cloud to another, from public to private, without concern about data integration issues or data security issues. On automation, we're continuing to drive up automation -- truly having the full solution automated, being able to burst not just within one private cloud or one public cloud but between clouds in an automated fashion, taking advantage of that burst capacity. And then the client-aware cloud: a cloud solution that is aware of the device that the service is being delivered to. What's the security level of the device, and therefore, what data should I push to the device versus hold back in the Data Center? What's the form factor, and how should I display the data? What's the battery capacity or the compute capacity, and how much of the cloud service can be done on the device locally versus in the Data Center? So these are the 3 big focus areas for us in driving a standards-based cloud environment, making it easier, more secure and more flexible to deploy your solutions in a cloud.
And of course, you don't do something like this alone. As we've learned at Intel over many, many years, driving standards across the industry requires strong partnership with the industry at large. There are many industry organizations and alliances that have been formed around this area. We are a technical advisor to the Open Data Center Alliance. We are a founding member of the Open Compute Project, and we are a governing member of the open bridge alliance [ph]. The goal here is to accelerate standards to address interoperability, security and management for faster adoption. These alliances are focused on the end user requirements, and through the definition of those usage models, we're able to map them back into technology solutions. You can see we have over 50 technology partners in the Intel Cloud Builders partnership that are developing solutions consistent with the usage models that the greater end user population demands.
And so, of course, in this world of ever-increasing demand on the Data Center and on the infrastructure at large, we believe the Xeon Processor E5 family perfectly addresses these needs and challenges -- not just on the Data Center side broadly, but specifically in the cloud buildout. Obviously, one of the core requirements is pure performance. I think Intel has an excellent track record of continuing to deliver greater and greater performance levels to the industry. Over just the past decade, we have delivered over 100x in interop [ph] performance. So we've done very well on that. But we acknowledge that it isn't all about performance. Along with performance, we have to keep the platform balanced, so we need to address I/O, memory and compute holistically. We need to address server, storage and network holistically. We need to continue to focus on energy efficiency as we have done year after year after year, thanks in big part to Moore's Law and ever-shrinking transistors giving us greater performance at lower power. And of course, the security challenges, as I said. We have an obligation to continue to embed greater and greater security into our hardware platforms.
And so that's what we have done with the introduction of the E5 Processor family. Generation over generation, we're delivering 80% performance gains. We are delivering some remarkable architectural innovations in I/O, the first to bring PCI Express* Gen 3 to market, the first to integrate I/O into the microprocessor, so dramatic, dramatic gains in I/O performance through those innovations. And we continue to build upon our security features of trusted execution technology and hardware encryption and decryption, and we continue to deliver the best performance per watt.
All this goodness is demonstrated by our partners. We want to say thank you to all of them for delivering some amazing world records across a very wide range of your standard benchmarks that you all know and love, demonstrating world-record performance, world-record energy efficiency, performance across all workloads, whether it's standard enterprise workloads or web-based workloads or high-performance computing workloads. Across the board, you see a total of 15 world records demonstrated on the E5 Processor family and a big thank you to our partners for taking our technology and demonstrating the results at a solutions level.
So let's talk about the really complex, tough scientific and engineering problems and the applications that solve them. With the E5 Processor family, we are launching new instructions -- Intel Advanced Vector Extensions, or AVX for short -- and these instructions double the number of floating point operations per clock. So that's a significant gain in floating point execution, targeting those really tough, highly computational workloads, whether they're technical computing, medical imaging or media processing. And the value of these instructions and the results they deliver can be clearly demonstrated by the fact that we already appear in 10 of the top 500 supercomputers, even though we're launching the product today. Last fall, we were already noted in 10 of the top 500, thanks to some early production unit shipments. So right out of the chute, we're demonstrating an outstanding level of performance in high-performance computing, even at the very highest supercomputer levels.
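The "double the floating point operations per clock" claim can be sanity-checked with simple peak-FLOPS arithmetic. A minimal sketch in Python, assuming illustrative socket/core/clock figures (they are placeholders, not official E5 specifications) and the usual SIMD accounting where doubling the vector width from 128-bit SSE to 256-bit AVX doubles per-clock throughput:

```python
# Back-of-the-envelope peak-FLOPS model. Core count, clock speed and
# per-clock throughput below are illustrative assumptions, not specs.
def peak_gflops(sockets, cores_per_socket, ghz, dp_flops_per_clock):
    """Theoretical peak double-precision GFLOPS for a server."""
    return sockets * cores_per_socket * ghz * dp_flops_per_clock

# 128-bit SSE: 4 double-precision results per clock per core.
sse = peak_gflops(sockets=2, cores_per_socket=8, ghz=2.2, dp_flops_per_clock=4)
# 256-bit AVX doubles the vector width, so per-clock throughput doubles.
avx = peak_gflops(sockets=2, cores_per_socket=8, ghz=2.2, dp_flops_per_clock=8)

print(f"SSE peak: {sse:.1f} GFLOPS, AVX peak: {avx:.1f} GFLOPS")
```

The absolute numbers depend entirely on the assumed configuration; the point is the fixed factor of 2 between the two results.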
We're very proud of these results. And you can see a couple of quotes here on some of the real-life usages: whether it's Visionsense, delivering better video processing so you have less intrusive surgery, always a good thing; or, on the more whimsical side of the house, face.com using the AVX instructions to let you even more rapidly identify your friends online -- something that I personally probably will not be utilizing, but there are lots of young people out there who I'm sure will value this application.
So with that -- when you think about animation and content creation and rendering and enormous farms of high-performance computing, it's hard not to think about DreamWorks. And I'm very happy to have Derek Chan here from DreamWorks to talk to you about his experience with the E5 processor. Derek actually runs all of operations for DreamWorks, including IT, so he has a very broad responsibility. I'd like to invite you up, Derek. Thank you so much. There you go. Thank you.
Thank you, Diane. Thank you, everybody. Welcome. It's my pleasure to be here this morning. As the Head of Digital Operations for DreamWorks Animation, I get the pleasure of leading our team in delivering the technology that goes into making our animated films, right? While we care about the overall compute infrastructure -- as we've talked about, the balance between servers, storage and networking -- one of the significant challenges we have is delivering enough processing power for our artists to take advantage of, to be able to continue to push the bar. So you must be sitting there asking yourself, "Why is a cartoon guy here talking to me about processing power? What does he care about processing power?" Well, it actually takes a ton of technology to make an animated feature film. It takes hundreds of artists working for many, many years. It takes over 100 terabytes of data; we turn over a terabyte of data a day. We use thousands of servers and hundreds of workstations, all to deliver memorable characters. And if I think about our latest film, Madagascar 3, we're going to use over 60 million CPU hours to process that film. On a nightly basis, we'll peak at over 15,000 cores, spread over many, many Data Centers, some of which are in the cloud, pushed outside of our walls.
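Those two quoted figures can be combined in one line of arithmetic. A rough sketch only: it assumes the nightly peak of 15,000 cores were sustained around the clock, which a real render farm would not do, so the true wall-clock time is longer:

```python
# Back-of-the-envelope render-farm math from the figures quoted above.
cpu_hours = 60_000_000   # quoted compute budget for the film
peak_cores = 15_000      # quoted nightly peak core count

# If the farm ran flat-out at its nightly peak, how long would the film take?
days_at_peak = cpu_hours / peak_cores / 24
print(f"{days_at_peak:.0f} days of continuous rendering at peak capacity")
```

Even under that generous assumption, the compute budget works out to well over five months of nonstop rendering.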
So obviously, for us, our partnership with Intel is critical to our success. Not only are they working with our teams to optimize our current generation of technology, our renderer, but they're also helping us deliver a new generation of technology that will literally transform how we make movies. As Diane mentioned, there are definitely advances coming, and we're seeing it with the E5 platform. If I take our existing renderer and move it onto the E5 platform, we're seeing an over 35% performance gain from that platform and that family. And if I take the AVX 256-bit instruction set and use it with our advanced shading algorithms, we're seeing an over 40% improvement in those technologies. So again, huge and fantastic results for us that we're very proud of.
If I look at all the things that we've done with Intel and the advances that we're making, we're so excited about the E5 platform and where it's going. It is truly the heart of our Data Center. As we approach our upcoming film, Madagascar 3: Europe's Most Wanted, it really is the opportunity: it's the first film that gets the advantage of those advancements. It's breakthrough technology and breakthrough entertainment. So rather than just talk about all these advancements, I'd rather show you what the outcome is. What I brought for you today is a never-before-seen clip built on what we call breakthrough technology. Why don't we go ahead and roll that.
So anyways, thank you, all, and thank you, Diane, for...
Diane M. Bryant
Thank you. That's wonderful. I can't wait for June. Okay. So as I mentioned earlier, we have had some remarkable innovations on the I/O side in the E5 Processor family. I/O performance is critical. As we continue to grow the number of cores per processor and to increase the performance of each of those cores, in addition to the move to 10 gig, the I/O needed a big revamp. And rather than me standing here -- although I do like to talk about 1s and 0s moving around -- we have with us the gentleman who led the E5 engineering team from start to finish, Nazin Nordin [ph], and I'd like to give him the opportunity to explain what he actually did. So Nazin? Here you go.
Thank you, Diane. Good morning. I'm really, really excited to show you the breakthrough innovations that we have done in I/O. This simple diagram shows how data flows through the system. In the historical system, the data starts at the network adapter and then flows through a discrete component on the board called the I/O hub. From there, it flows to the processor; from the processor, it flows to memory and then back through cache and works all the way back to the network adapter. This is how our previous-generation system worked.
What we have done is something called Intel Integrated I/O, which is a suite of features that offers dramatic improvements in the I/O. First, we have taken the I/O hub and integrated it into the processor. With this integration, we reduce the latency of the data traffic by 30% and get the data where it needs to go faster than ever before. As we integrated the I/O, we brought critical storage features -- such as non-transparent bridging, hardware RAID support and asynchronous DRAM refresh -- into the processor to improve both reliability and performance across the Data Center. We not only integrated the I/O, we also increased the performance of the I/O system. Intel is proud to be the first with a processor to support PCI Express* 3.0. PCI Express* 3.0 is the latest spec from the PCI-SIG, and it doubles the I/O bandwidth per port by speeding up the lanes and making architectural improvements over PCI Express* 2.0. And we're not just integrating the I/O and turbocharging it with PCI Express* 3.0; Intel has included a new technology called Intel Data Direct I/O, which lets Intel's Ethernet Controller talk directly to the processor cache.
Again, you can see our historical, previous-generation system. The data flows through multiple components: the I/O hub, the processor and memory. When data flows through all these components, the result is that it takes more time for the data to get where it needs to be, and it also keeps the memory active. With Data Direct I/O, we intelligently re-architected this data flow such that the processor cache is the primary destination for network traffic. This means the latency between the processor and the network adapter is very short, which can actually double the I/O capabilities of the Xeon E5 family depending on the usage. It also helps keep the memory in a low power state, because we are not using memory when it's not needed. All of these capabilities combine to create what we call Intel Integrated I/O. This is the breakthrough innovation that gets you to your data faster, to help you scale and meet the growing demands of users like Diane has talked about. So thank you.
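The two data paths described above can be written down as simple hop lists. This is only a schematic of the flow Nazin describes, not a performance model; no timings are claimed:

```python
# Schematic of the two I/O receive paths described above (hops only).
legacy_path = ["network adapter", "I/O hub", "processor", "memory",
               "cache", "network adapter"]                    # pre-E5 flow
ddio_path = ["network adapter", "processor cache", "network adapter"]  # DDIO

print("legacy hops:", len(legacy_path) - 1)   # every hop adds latency
print("DDIO hops:  ", len(ddio_path) - 1)
# With DDIO, main memory is not on the receive path at all, so DRAM can
# stay in a low-power state while the cache absorbs the network traffic.
print("memory on DDIO path:", "memory" in ddio_path)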
Diane M. Bryant
Thank you. Thanks, Nazin. That's great. It's very exciting -- really impressive innovations. And the allocation of the cache to that network traffic is done completely in hardware, completely seamlessly: no application involvement, no OS involvement at all. It just happens.
So year-over-year, IT's investment in security continues to grow. I know that firsthand, having been in the IT group for 4 years and watching the security budget very predictably grow year-over-year. The environment we are all living in just continues to increase in intensity. The number of attacks on our environments is doubling every year. The sophistication of those attacks is getting more intense. The sources of those attacks range from cyber criminals to nation-state cyber espionage to even insider threats -- it's very, very broad. And really, the job of the IT guy in trying to secure the environment and ensure that the corporation's IP and its customers' and employees' personally identifiable information is secure just becomes harder and harder.
As we move to technologies such as cloud computing and greater mobility -- more and more networks of differing levels of security, more and more devices of differing levels of security -- that just adds to the complexity and the risk in trying to maintain a secure infrastructure, a secure Data Center environment. So the solution is to bring the security closer and closer to the hardware. If you're going to have a truly secure solution, it's going to be built into the hardware platform. If you think about it, malware has gone from application mode to user mode to, now, rootkits in the kernel. And the only way to protect against that is to be one level lower, sitting in the hardware. And that's why the security features in the Xeon Processor family are so powerful.
With trusted execution technology, you're able to ensure that there is a root of trust, a known good software stack. This is incredibly important when you go into a virtualized environment, and you have multiple VMs running on a single server. With TXT, should one of your VMs become compromised, you can move all the VMs running on that machine off into a quarantined state, validate that the hypervisor hasn't been compromised by using TXT. And if the hypervisor hasn't been compromised, then you have confidence that only the one VM is at risk, and you can move the other VMs back on the machine and get them back to work immediately. Without TXT, you would need to bring down the entire system and all the applications that are running on that system to know that you've got a clean and secure environment. So dramatic improvement in the confidence of running in a virtualized environment.
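The recovery flow just described can be sketched as a short Python workflow. Everything below is a hypothetical illustration of the procedure -- the class names and the `measured_launch_ok` check are invented placeholders, not a real TXT API; the actual measurement and attestation happen in hardware at launch time:

```python
from dataclasses import dataclass

# Illustrative sketch of the TXT-based recovery flow described above.
# All names here are hypothetical placeholders, not a real TXT API.

@dataclass
class VM:
    name: str
    state: str = "running"

@dataclass
class Host:
    vms: list
    hypervisor_clean: bool = True  # stands in for the TXT measurement

    def measured_launch_ok(self):
        return self.hypervisor_clean

def handle_compromised_vm(host, bad_vm):
    for vm in host.vms:                 # 1. quarantine every VM on the host
        vm.state = "quarantined"
    if host.measured_launch_ok():       # 2. verify hypervisor via root of trust
        for vm in host.vms:             # 3a. clean: only the bad VM stays off
            vm.state = "isolated" if vm is bad_vm else "running"
    else:
        for vm in host.vms:             # 3b. dirty: whole system comes down
            vm.state = "stopped"

host = Host(vms=[VM("web"), VM("db"), VM("mail")])
handle_compromised_vm(host, host.vms[1])
print([(vm.name, vm.state) for vm in host.vms])
```

The design point is the branch in the middle: without a hardware-verified hypervisor measurement, the only safe option is the `else` path, taking down every workload on the machine.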
With Advanced Encryption Standard New Instructions (AES-NI), the Chief Information Security Officers of enterprise IT organizations finally get what they have been asking for: data encryption, in transit and at rest. For a very long time, they would have loved to see all data encrypted, but historically, it just wasn't feasible from a performance perspective. The guys in IT who own the infrastructure don't want to see the performance of their workloads decline thanks to encryption. With the E5 Processor family, using the AES new instructions, you can now encrypt and decrypt your data with no performance impact at all. It is now invisible: the performance of the processor is so outstanding, and with those instructions, encryption is hardware accelerated in the processor -- no issue. So with this new processor family, the Chief Information Security Officers finally get their way, and data can be encrypted both at rest and in transit.
And you can see some examples here of the usage of our security features. TXT increases confidence in virtualizing and automating your environments, even for high-security applications: the level of assurance is such that, as the quote here from DuPont notes, they can use TXT to confidently meet regulatory requirements around their applications and data. And on AES-NI, you can see the additional security that high-performing encryption gives: GNAX is confident storing medical records in a public cloud, which is quite progressive when you think about how security is always the first thing people point to when you talk about moving your apps and data into a public cloud. Knowing that your data is fully encrypted gives GNAX that confidence.
Because security is always top of mind when you talk about moving to the cloud, I thought it would be nice to bring a cloud service provider onto the stage to talk about the value they're getting from running their business on the new E5 Processor family. So we have with us Alex Rodriguez -- not the baseball player, as we learned last night. He's the Vice President of Systems Engineering and Product Development at Expedient Communications. Alex, if you'd come up and tell us what you're up to with the E5.
Thanks, Diane. Good morning, everyone. Expedient is a cloud service provider delivering services in the cloud to customers nationwide. And as part of that, we run into some pretty interesting challenges that we worked with Intel to help overcome as part of their new processor launch. The 2 key areas of focus for us are scale and security in the cloud. On scale, we're seeing tremendous, just unprecedented growth -- 167% year-over-year -- and that growth is continuing to accelerate. Additionally, with that growth, we've got customers bringing us more and more secured environments. They're looking to us to maintain a level of security and meet the ever-increasing regulatory needs of their business. So we took these 2 issues to Intel, and they helped us understand the platform better and actually execute on and address these challenges.
So let's first talk a little bit about scale. We've heard a lot about these new CPUs and what they can do. But as Diane mentioned, it's about a balanced approach, and one of the places where we saw a need to increase scale was our I/O. We had an issue: because of virtualization, utilization was going up, and the servers were getting so large that we were outstripping the bandwidth we had to those servers. So we looked for a new solution, and we decided we needed something that wasn't technically complex, because that complexity adds cost and could potentially cause outages. In addition, we needed to keep our cabling costs low, because Data Center infrastructure costs can be very, very high, especially when we looked at solutions like optical.
So we settled on Intel 10 Gigabit Ethernet as part of the platform for our next-generation cloud offering. And we saw some pretty dramatic benefits. We saw a 23% reduction in the number of switch ports and cables that we needed to deploy our solutions. We saw a 14% reduction in infrastructure costs out of the gate, and as the technology matures, we're expecting to see that further improve. The big thing was that we got a 150% increase in server bandwidth -- exactly what we were looking for to match the quality and speed of the new E5 chipsets. And this was all wrapped inside a simplified technical architecture, something our engineers already knew with Ethernet. Our servers went from looking like this to looking like this: better airflow, increased capabilities, and kept simple so that we could continue to deploy our new cloud environments.
I mentioned security as our other challenge. We have a massive need for encryption at Expedient. Our customers, again, are bringing us very sensitive datasets, and we want to make sure they're covered by encrypting them. That's also a big part of a lot of the regulatory standards we're seeing today. So Intel turned us on to a feature set inside the Xeon CPUs called AES-NI. As Diane mentioned, AES-NI is hardware-based acceleration. Previously, we had to go to specialized ASICs to make this work. But now we actually found that it was right inside the Intel Xeon CPU -- it has been there since the last release -- and that we could access it and get some huge benefits.
So let's talk about what those benefits are. We ran a test using AES-NI to do 256-bit encryption. What does 256-bit encryption mean? Well, even if I had a machine that could do 72 quadrillion key checks a second and set it to brute-force that piece of data to figure out the key and decrypt it, it would take an extraordinarily long time. So we did 4 separate tests. The first was with the previous-generation Intel processor, the 5500 series, which didn't offer AES-NI at all, and we got about 5.3 gigabits per second of 256-bit encryption through a dual-socket system. We ran the same test with the 5600 series, the current-generation Intel processor, and got about 11 gigabits per second, a pretty healthy increase. Then, on that same dual-socket system, all we did was enable the AES-NI instruction set, and the rate went up to 18 gigabits per second.
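To put the brute-force claim in perspective, here is a rough back-of-the-envelope calculation, assuming the hypothetical machine described above checks one 256-bit key per check:

```python
# Rough arithmetic behind the "72 quadrillion checks a second" claim:
# even at that rate, exhausting a 256-bit keyspace is hopeless.
checks_per_second = 72e15          # 72 quadrillion key checks/sec (from the talk)
keyspace = 2 ** 256                # number of possible 256-bit keys

seconds = keyspace / checks_per_second
years = seconds / (365.25 * 24 * 3600)

print(f"about {years:.1e} years")  # on the order of 10^52 years
```

Even covering half the keyspace on average, the expected time remains astronomically far beyond any practical horizon.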
But the real magic happens when we look at the E5. With the E5, we're close to 40 gigabits per second in a 2-socket box; we're encrypting 40 gigabits per second inside a standard 2-socket system using AES-NI. That's a 118% improvement over the previous generation. What does this mean? Well, with one dual-socket system, I can take that Madagascar DVD we talked about and encrypt it in less than a second. I can take the Library of Congress and, in the better part of a day, encrypt all of it. But most importantly to us, I could encrypt and decrypt every packet into and out of any one of our Data Centers with that single 2-socket box. That lets us address our customers' security concerns more reliably and with less overhead, and it's really helping us have an impact on the business and on cloud adoption. So those are the 2 areas we found to address by working with Intel, and we're very, very happy with our performance results.
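The quoted figures can be sanity-checked with simple arithmetic. This sketch uses the throughput numbers from the talk; the single-layer DVD capacity (~4.7 GB) is an assumption for illustration:

```python
GBPS = 1e9  # bits per second

# 256-bit AES throughput on dual-socket systems, as quoted in the talk
rates = {
    "5500 (software)": 5.3 * GBPS,
    "5600 (software)": 11 * GBPS,
    "5600 (AES-NI)":   18 * GBPS,
    "E5 (AES-NI)":     40 * GBPS,
}

# Generation-over-generation gain: E5 vs. the 18 Gbps 5600 + AES-NI result
improvement = (rates["E5 (AES-NI)"] / rates["5600 (AES-NI)"] - 1) * 100
print(f"E5 vs 5600 with AES-NI: {improvement:.0f}% faster")

# Time to encrypt a single-layer DVD (~4.7 GB) at the E5 rate
dvd_bits = 4.7e9 * 8
print(f"DVD encrypted in {dvd_bits / rates['E5 (AES-NI)']:.2f} s")
```

The round quoted rates give roughly 122%, in the same ballpark as the 118% figure from the talk (which presumably used the unrounded measurements), and the DVD indeed encrypts in under a second at 40 Gbps.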
Diane M. Bryant
Super compelling, thank you so much.
Thanks, Alex. Very nice, great data. So the other key attribute in delivering ever-improving results in the Data Center is energy efficiency. It's core, as we all know; power will continue to be a constraint in Data Center deployment and in the deployment of infrastructure inside the Data Center. Energy efficiency is one of our 3 core pillars at Intel; everything we do, we do with a focus on energy efficiency. With this generation of Xeon processors, the E5 family, we have reduced power consumption by 50% for a fixed unit of compute, a dramatic reduction. And thanks to the brilliance of Nazin's team, we've done it across a wide range of features and capabilities, from transistor-level features to block-level features to system-level features. In this generation of processors, we're announcing Turbo 2.0. You'll remember Turbo was launched in the prior generation; Turbo 2.0 allows us to increase the frequency of a single core. If you need single-core performance, you can increase the top frequency by 900 megahertz, a significant pop in performance thanks to Turbo 2.0.
We also have features built into the processor with the intelligence to track utilization of the CPU cores. If the cores are not being fully utilized, the processor will ratchet back the frequency of the interfaces: the interface to memory, the QPI chip-to-chip interconnect, the interconnect into the cache. It throttles back all those I/O interfaces, which tend to be higher power-consuming interfaces since you're going on and off chip, whenever core utilization shows you don't need them. So there's a lot of intelligence built into the processor that allows us to get this kind of dramatic improvement in energy efficiency while still delivering outstanding performance.
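The control idea described above can be sketched in a few lines. This is purely illustrative, not Intel's actual algorithm; the function name, frequency range, and linear mapping are all invented for the sketch:

```python
# Illustrative sketch of utilization-driven I/O throttling: pick an
# interface clock (memory, QPI, cache interconnect) from core
# utilization, ratcheting it back when cores are underused.
def interface_frequency(utilization: float,
                        f_min: float = 0.4, f_max: float = 1.0) -> float:
    """Return a normalized I/O frequency for a core utilization in [0, 1]."""
    utilization = min(max(utilization, 0.0), 1.0)   # clamp bad readings
    return f_min + (f_max - f_min) * utilization

for u in (0.1, 0.5, 0.95):
    print(f"util={u:.0%} -> I/O clock at {interface_frequency(u):.0%} of max")
```

A real implementation works in hardware on discrete frequency steps with hysteresis, but the principle is the same: off-chip interfaces burn the most power, so their clocks track demand.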
As an old IT guy, I am particularly excited about the manageability features coming out now, such as Intel Node Manager and Intel Data Center Manager. These enhancements are going to completely revolutionize the way we manage data centers in IT. Think about it: today, within IT, there's the Data Center facility guy, who worries intensely about the power and cooling of the Data Center facility and optimizing it. Then there are the IT infrastructure guys, whose passion is driving up utilization and getting the greatest performance they can out of their infrastructure. Those 2 groups are optimizing the Data Center independently today, and the results are not only suboptimal, they can actually be dangerous. We had a situation inside Intel IT where the infrastructure guys were driving up utilization, getting greater and greater performance out of the infrastructure and feeling quite good about themselves, when suddenly our power load increased by 10% and put the Data Center in grave danger of shutting down. It's a real-life example of what happens when you let these 2 worlds optimize independently.
With Node Manager and Data Center Manager, those 2 worlds come together. Node Manager gives you visibility into the utilization of the processor as well as the power consumption, at the server level and at the rack level. So through Data Center Manager, you can dynamically see what your utilization level is and what value you're getting out of your infrastructure, and at the same time see your power consumption, and optimize across the 2 worlds.
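The combined view described above amounts to joining per-server utilization and power readings and checking them against a rack-level cap. This is a hypothetical sketch of that idea; the class, function, server names, and wattage figures are all invented for illustration and are not the Node Manager API:

```python
# Hypothetical sketch of the Node Manager / Data Center Manager idea:
# combine per-server utilization and power readings, then compare total
# rack power against a cap to decide whether throttling is needed.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    utilization: float   # 0..1, as Node Manager would report
    power_watts: float   # measured at the server

def rack_status(servers, rack_cap_watts):
    total = sum(s.power_watts for s in servers)
    avg_util = sum(s.utilization for s in servers) / len(servers)
    return {
        "total_power_w": total,
        "avg_utilization": avg_util,
        "over_cap": total > rack_cap_watts,
    }

rack = [Server("node1", 0.85, 310), Server("node2", 0.90, 325),
        Server("node3", 0.40, 180)]
print(rack_status(rack, rack_cap_watts=800))  # 815 W against an 800 W cap
```

When `over_cap` comes back true, a management layer can cap power on the least-utilized servers first, which is exactly the cross-domain trade-off the facility and infrastructure teams could not make when they optimized independently.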
And I really want to applaud Dell for their leadership, innovation and engineering in taking Node Manager and Data Center Manager and incorporating them into their OpenManage Power Center, bringing this capability, the ability to effectively and efficiently manage your Data Center across power and utilization, to their customers with the launch of the E5. So thanks very much to Dell for that innovation.
So as I said, we need to think about the E5 across the entire Data Center. It is not just an outstanding server solution; it is an outstanding storage solution and an outstanding network solution. You can see that in the number of designs launching today with the E5 processor: over 400 system solution designs, 2x the number of the prior platform transition, the prior "tock" when we went through the last platform-level change. And those solutions are not just in the server space, as you might naturally associate the Xeon E5 processor with servers; they span servers, storage and networking. Of those 400 designs being launched, 100 are in communications infrastructure. The E5 supports the extended-life requirements of the comms industry, as well as its increased temperature requirements. So the E5 supports not only what you would think of as your traditional Data Center, but also your telco service provider infrastructure.
So to conclude, we're obviously very excited about the Xeon Processor E5 family. It brings an 80% performance improvement over the prior generation. It brings an amazing breakthrough in I/O performance, as Nazin talked about: a 30% reduction in latency and a 2x improvement in bandwidth, plus a reduction in power consumption through integration, a dramatic change in delivering balanced platform performance, bringing integrated I/O to the industry for the first time and bringing PCI Express Gen 3 to the industry for the first time. And, as I talked about, security, which is core as we continue to build out the Data Center and build out new usage models like cloud; we continue to build upon our security solutions, TXT technology as well as the hardware encryption and decryption with AES-NI that Alex explained so nicely. And we continue to drive the best-ever performance per watt. Energy-efficient performance is at our core.
So I want to translate all of this innovation into something visual. Will you roll the video, please? Okay, you can probably guess which is which, but if you're struggling, the one on your left is the old generation of microprocessors. If you're running a Data Center today, I guarantee you have lots and lots and lots of those servers running in it. What we have here is an emulation of streaming media down to 30,000 users; that's what the demonstration is showing you. And obviously, on the right, you see the new Xeon E5 processor family. I hope you'd agree the difference is dramatic; it's very clear, and I don't think I have to describe it to you. But I do want to describe how it is achieved. I know this is kind of small and kind of far away, but it will be up through the break, so you're welcome to come over and check it out.
These dramatic results come from the 4 big technologies integrated into the Xeon E5 processor. It starts with the direct data I/O that Nazin described: you can see the 5300 processor on the left, without it, running at 1 gig, versus the new E5 on the right with direct data I/O. Moving down to the encryption space: on the left, lacking the AES-NI instructions, we're doing it the old-fashioned way with software encryption, and you can see how slow it is and how long it takes to encrypt the stream, versus the picture on the right with the new AES-NI instructions and the incredible performance the E5 family brings. Over on the graphics display, you can see the power of the AVX instruction set. And at the top, Node Manager. The Node Manager display shows you the utilization of the processors at any time, and the power being consumed at the server level and the rack level against the power cap for that rack. So not only does it give you visibility into the actual usage of the processor, you can then cap the power and ensure you keep utilization within spec.
So with that, I want to say thank you, again, for joining us. It's a big day for us with the E5 launch, and I appreciate your time in coming out to see us and participating in the launch event with us. Thank you very much.