Seeking Alpha

Advanced Micro Devices, Inc. (AMD)

February 02, 2012 11:30 am ET

Executives

Unknown Executive -

Ruth Cotter - Corporate Vice President, Investor Relations

Rory P. Read - Chief Executive Officer, President and Director

Mark Papermaster - Chief Technology Officer and Senior Vice President

Lisa Su - Senior Vice President and General Manager of Global Business Unit

Thomas Seifert - Chief Financial Officer, Principal Accounting Officer and Senior Vice President

John Docherty - Head of Manufacturing Operations

Chekib Akrout - Senior Vice President of Research and Development

John Taylor -

Matt Skynner -

Phil Rogers -

Manju Hegde - Corporate Vice President of Fusion Experience Program

Analysts

Unknown Analyst

David M. Wong - Wells Fargo Securities, LLC, Research Division

JoAnne Feeney - Longbow Research LLC

Unknown Executive

Good morning, and welcome to AMD's 2012 Financial Analyst Day. Here's Ruth Cotter, Corporate Vice President, Investor Relations, AMD.

Ruth Cotter

Good morning, everybody, and welcome to AMD's 2012 Financial Analyst Day. It's great to see such a full room here in California with us, and we'd also like to welcome everybody on the webcast this morning. We have a number of new faces who are going to be on stage today walking you through AMD's vision and strategy in the long term and also in the near term. So we have a lot of exciting things to share with you this morning.

In addition, I'd like to introduce you to Rory Read, our CEO. In addition, we have Mark Papermaster, who joined the company just a couple of months ago; and Lisa Su, who's just been here all of 30 days, so we can't wait to hear what she's going to share with us today; and then Thomas, our veteran of the executive management team, on stage this morning. We also have 2 board members with us here today: Craig Conway and Paulett Eberhart, so a very warm welcome to them as well.

We'll take a brief look at the agenda here just to give you an outline of who will be presenting when. This will be followed by a question-and-answer session. So we'd really appreciate you holding your questions until that session, and we'll ensure that ample time is provided for that session. We will host a series of breakout deep dives this afternoon in this room after the lunch on topics that we think will be very engaging and interesting. We'll have a series of very dynamic product demos. We'll have a conversation about our supply chain. And last but not least, we'll talk about Heterogeneous System Architecture, something we're all very excited and animated about. So I encourage you to take note of those times and to make sure you don't miss those sessions.

We have a phenomenal demo area here at this event today. But before I get into those details, just in terms of a quick preview, Rory will outline at a very high level, and then the rest of the presenters will deep dive into the inflection point that we have in front of us and the opportunity that we believe AMD will capture as part of that. We'll also go into details regarding our execution ability and the importance of flexibility and agility as part of that. And then also our presenters will go into a series of other items for us this morning.

The demo area that I mentioned is very dynamic. It's down the hallway here for the folks in the room. So if you haven't been there yet, I definitely encourage you to do so over the lunch break. There's a lot of people there to help you walk through our product portfolio. And in fact, we've reenacted a home environment to show you very practical usage scenarios in terms of the user experience that the brilliance of AMD products provide for you.

As you take time here to carefully review our forward-looking statements regarding this event, just a few housekeeping items. We'd appreciate it if you just stick your phones on silent mode, please. As I mentioned, keep the questions to the question-and-answer session, and we'll address all your needs and requirements at that stage. And then we will have lunch followed by the breakout sessions in this auditorium.

Last but not least, and before we kick off the day, this is my Oscar moment. I promise I'll be brief. I want to thank all the people who've put so much tremendous effort into making today the success that it will be. The Investor Relations team are the backbone of this event through Diana, Elena [ph] and Armena [ph], so thanks to them. But they're also supported by a huge environment in the family that is AMD. So many thanks to our PR folks, our legal teams, our marketing teams, our facility teams, the list goes on and on and on. But suffice it to say, thank you, and I'd like you all to give them a round of applause as we now head into a video followed by Rory Read.

[Presentation]

Rory P. Read

Well, welcome, and I'd like to say thank you for joining us today at the Analyst Day. It's an exciting time for me personally after 5 months to be up here and share some of the thoughts that we want to take you through during the day. But I also heard that we have a room full of analysts, and I know there's a big game on Sunday. For a show of hands, I need a little interaction here from the crowd. Who is winning the game?

Unknown Analyst

Patriots.

Rory P. Read

All right, Patriots -- oh, no, it's a show of hands not a voice, Leslie [ph]. All right, show of hands, who says the Pate (sic) [Patriots] and the Niners are not playing? For those at California, I know you're depressed. Show of hands. Patriots? Okay, give me the Giants. Oh, it's pretty close, but I have to say the analysts slightly pick the Giants. My wife's going to be depressed.

All right, well, I'm delighted to be here. There's no doubt about it. In the first 5 months, we've made a lot of progress. We see an important inflection point occurring in the industry. It's one of the main reasons that I chose to come to AMD. Our company has a long history and track record of innovation. It has some of the best and brightest minds on the planet. And as I came here 5 months ago, I can tell you I'm more excited about the opportunity today than when I started 5 months ago. That inflection point is going to be driven around consumerization, the cloud and of course, convergence. These 3 Cs are occurring at a very important time in our industry, one where we'll see the breakdown of proprietary control points. Those control points that have dominated our industry for years and years. And when that occurs, shift happens. Shift is good. Share change will occur and the status quo will break down. And I think AMD is uniquely positioned to tackle this opportunity, this inflection point, to embrace it and to drive forward and take advantage of it.

Let's look at the next slide. What are we going to talk about today? I'll talk about how we've been getting AMD fit to fight over the past 5 months. We've got to execute. Clearly, to create value and to differentiate ourselves in the marketplace, we must execute better. It's been a key focus. We've made progress, but we have more work to do. We've got to first, execute with excellence. And as we do that, we build trust with our partners, our investors, our customers. And trust is the bedrock of growth. That's the future for us. That's the first lift of value we can create through better execution. That's what we're doing right now to reposition AMD. That's why we took the actions around the organization, the leadership, how we improved our cost position, how we're structuring the company. It's all around getting this company ready to compete and to compete in a big way as this inflection point occurs because that's the second lift of value creation and differentiation we can create.

We have to skate to where the puck is going. We have to go capture that inflection point and embrace it. That's the opportunity in front of us. We'll talk about those changes in the industry, those 3 Cs. Mark Papermaster, clearly, an industry veteran with a long track record of knowledge and background. He'll take us through what we're doing to create an ambidextrous agile architecture that will open up the business for us to create those solutions that make a difference for the customer.

It's about the experience that we create. How processors and this industry have moved in the past 25 years is changing. It's changing because of the marketplace, and Mark will begin to talk about the architecture. Then Lisa will come on and talk about a focus around the customer, creating a market-driven company, not one that's in an unhealthy duopoly, focused on the incumbent or on the most complex technology it can capture, but instead focused on where the market is going.

How do we create value? Make no mistake, Brazos is our most successful platform ever. It's a 73-square-millimeter chip, and 30 million of those APUs have sold. They're easy to manufacture, boom, boom, boom. Customers love them because they experience the graphics, the kind of work they're going to get from the cloud. Lisa will talk about our road map and how to capture that.

And then of course, Thomas, our veteran, veteran only 2 years, 2.5 years. He's our veteran. He's going to finish what he started here. We've been repositioning the balance sheet, getting fit to fight, creating a balance sheet that we can work from, delivering on our commitments to you and to our customers. That's key. He'll take you through that. And what we'll do throughout the day, I hope, is to share with you the ideas, the approach and how we see the opportunity to capture this next wave of success.

Let's move to the next slide. There's no doubt that as we look forward and we get fit to fight, I believe in my core that you need to improve execution. It's the first lift of value creation we can do in the company. We execute better, people can lean into the wind, they can make bigger bets on AMD, and they can go after it more.

How do we create that kind of execution? Well, you build it at the foundation. The foundation, you create a culture. We call it the AMD Way, one focused on ownership and commitment. I do what I say, and I own what I do. I make a commitment, I deliver on it. That culture's also got to be based on the market. And how do I drive that market? How do I understand where the market is going because it's about creating value. If you create value for your customers, that's how you unlock the value of your company. That's why we're here: the customer. That's the only reason we're here.

And then at the end of the day, it's about innovation. It's about leadership. I don't want to hear about being a fast follower or tracking this person or that track [ph]. It's about stepping out of the shadows and leading. These inflection points that we're seeing are our opportunity to grasp that movement. That inflection point creates the next wave of innovation. That wave is already beginning to emerge. We have to embrace it and capture it. You create that culture, that's the foundation, and then on top of it you lay in a strategy. A strategy that's straightforward, easy to understand. It aligns every person, all, what, 10,000-plus AMDers across the planet. That strategy needs to align us and drive us in that singular direction. That will drive us to get better execution across the portfolio.

We'll finish what we've started. We've made great progress in the balance sheet, at deleveraging our position, putting ourselves in a position where we can look at our portfolio. Where's the IP that we need in order to capture this inflection point? How do we add it? Where do we want to make our bets? We took action in terms of our cost structure back in November. That was in place to get ourselves fit to fight, but also to go after the investment areas we need.

Next chart. So let's talk about what's happening in the industry. There's no doubt, and we see it every day from consumerization, this idea that devices are going to blur between segments. And clearly, the next 1 billion customers, what, there's 4.5 billion people on the planet not connected today, without this technology. The next 1 billion customers, they're going to be in entry and mainstream price points for sure. They're going to be in the emerging high-growth market. Consumerization's going to drive the whole set of new products.

The cloud, the cloud is going to be how we deliver data and information to these different devices. And it's going to fundamentally change the way data centers are constructed. I was a CIO at IBM, in the PC group and in the global services organization. I have experience from that perspective of what old data centers look like. Today, they'll be tailored to huge amounts of data, high-scale transaction volume. And the processing and the kinds of power envelopes that are needed will be significantly different. That dense kind of cloud solution is going to be fundamental.

And think about convergence. Every device, every screen is going to become interconnected in the way applications and data move across those devices seamlessly. That's our opportunity. How do we link that material across those devices? Can we create an architecture, an SoC kind of strategy, that allows, from a 50-square-millimeter chip to a 300-square-millimeter chip, the ability to link and deliver that time-to-market, that kind of graphics experience, that low-power solution, and allows our customers to link across those devices? Those are the opportunities in front of us.

We see the new ecosystems emerging. The days of that proprietary control point dominating 95% or 98% of the volume of a particular set of clients are beginning to end. A new wave is beginning to emerge. Who would have thought 2 years ago we'd be talking about Win 8 on ARM devices? It's going to happen this year. Those devices and the way devices are used, from tablets to ultra-thins all the way through, this is going to occur, this convergence.

Next chart. So let's think about this. We've seen these inflection points. This is one of the real beauties of being 50 years old. There's not many, okay? But there are a few. There is the beauty of being 50 years old is you've seen inflection points before.

When I joined IBM in 1983, you know what the dominant proprietary control point was? It was MVS and SNA, Systems Network Architecture. It dominated the world. It had the gross margins, the market capital. And then this wave began to emerge. It stagnates first, and then a wave begins to emerge. It was client/server first, then it went into mobility, et cetera. We're seeing another inflection point.

One of the reasons I came to AMD is because I believed that inflection point was occurring now. We can debate whether it's overnight or over a 3- to 5-year period, but it will occur. And those proprietary control points and pricing will fundamentally shift. As it does, that'll open up opportunity for us to embrace that change, deliver an architecture and set of solutions that plays to our hand and creates value for the customer.

We're going to be able to fly under a pricing envelope that's created by the old model. They'll protect the status quo. We want to embrace that change whether it's in the server environment or the client space. This convergence, it's happened before. It's happening again. It's why I'm here.

Let's take a look at the next slide. And if you think about it in terms of inflection points, this is what I'm talking about. In 1983, when I joined IBM, I remember they took us into a room and they said, "By 1990, we're going to be a $100 billion company." And at that time, it seemed very possible. But things change. The inflection point changed, and market capital eroded. Now great companies, they bounce back. They jump the tracks, and they find the next wave to go on, and they've gone on to be an even better and fabulous company.

But as that inflection point occurred, new entrants emerged and began to catch that client/server wave. That inflection point redefined winners and losers. Shift happened. And as it did, market share changed. And that was the opportunity.

Now let's look at today in the next slide. Today, we're seeing the same thing. We begin to see in the marketplace some of the market leaders struggling in terms of their market capital, their flows, et cetera. They begin to flatten out in terms of the result. And a new set of players begin to emerge in this. And it's early in the cycle. There's no doubt about it. But we've got to be early in terms of seeing where the opportunity is and begin to change it.

Clearly, as I said in the beginning, and I don't want you to get the wrong focus here. In 2012 and even into 2013, it's about execution. We need to deliver and build the relationships with the customers. That's the first lift of value we can create. But because silicon takes so long to deliver, we need to predict and see where that inflection point is going. And in parallel, begin to change the architecture and the business models to capture where the puck is going.

It's 2 pieces: one lift is the first piece of value around execution and the current business; and the second lift is about where the market is going. This inflection point has already begun. People said that the mainframe was dead in 1985, 1986, 1990. It really wasn't. But the proprietary control point fundamentally eroded, and it took a period of time. We're going to see that same erosion here today.

Next chart. So let's think about it. There are people who suggest that AMD is caught in the middle between this large incumbent and this emerging set of players that are coming up from the bottom in terms of that ARM ecosystem. It's quite the opposite. Think about how we're uniquely positioned, with over 7,000 engineers across the planet and the experience and knowledge of the processor and graphics space. Our solutions and our IP are uniquely positioned to take advantage of these 3 Cs.

We play in the entry and mainstream price points. It's not hard for us to continue there and take opportunity and share in that segment. We know the cloud's going to drive graphics in the way data is viewed at the customer and the converged device level. We know that with that IP today, what, 50% of the traffic on the Internet is graphics or video-based. Within 2, 2.5, 3 years max, it'll be 80%, 85%. And with the cloud driving it, it's inevitable.

And if it's going to occur in the emerging markets, it's going to be in the entry and mainstream price points. That's exactly where we play. And if we capture with our IP around low power and around graphics, and you know the bandwidth in some of those regions is lighter, we can create the experience that's quite different. This is our opportunity to step out of those shadows and take advantage of this inflection point and lead.

I don't see us in the middle. I see an opportunity to embrace this change. The same thing in the server space. We've got momentum after, what, the past 2 quarters; we've started to see a turn in the Server business in the traditional space. And we'll continue to build on that. But at the same time, we'll make the bets right now in terms of how that data center is changing. And with Lisa's team, they are meeting every day with customers across the planet. I've probably met with, I don't know, 250, 300 major customers and partners and channel members over the past 5 months. They say the same thing. They see the same opportunity. And if we can create the execution and the trust level, and then we can create the architecture and the technical solutions, those are our 2 lifts in terms of creating value.

Next chart. So if we think about our focus in the past and where we've traditionally been focused, we've been focused on the PC and the server space. Clearly, a base around discrete graphics. We dabbled in the embedded, and it's been primarily focused on the mature markets, North America, Western Europe. Our focus has been on processors, chips, and it's all about pricing. Of the 4 Ps, we love to pull the price lever. We've got to think about creating value around the other Ps. How do we create the promotion, the right product that wins? And it's always been driven through the OEM. We've been in this model of this unhealthy duopoly, almost pulled along by the gravitational force. And our team focused internally on that ultracomplex technical solution and how to capture that space.

Remember I mentioned Brazos, 30 million units, the most successful APU platform we've ever launched? That solution has standard IP blocks. It's at 40 nanometers. It's N-1 technology. It's a 73-square-millimeter chip. Yes, we did outstanding with it because it created the experience that the customer wanted. We can build on that and get faster time-to-market. That's super important. We don't want to be on the bleeding edge of technology where we're leading with our chin and we don't execute cleanly, because that breaks down trust. And I came from an OEM. I know what it means to risk your business in terms of where you want to invest and where you want to put your bets. You've got to bet on the organization and team that can deliver. We've got great innovation and great technology. Now we need to back it up in terms of the ability to execute.

Let's move to the next slide. I talked about the 3 Cs in terms of the opportunities in front of us. And what do they mean? I want to highlight it again. Consumerization plays to our hand. It's going to be around the value pricing, the right experience. It's going to be in the entry and mainstream price points. That next 1 billion customers, right there. It's going to be low power, there's no doubt. And I don't care if you're talking about tablets or fanless, it doesn't matter. What matters is that we create those thinner devices, the ultra-thins. It's nothing new. It's thin and light. It's going to continue. What are we going to do in that space? And Lisa will share a bit of that strategy. We're already doing reference designs. And our next-generation Trinity APU has more design wins at this point of the cycle than our record-setting 2011 year with the Llano chips. That kind of focus on creating the experience, the right kind of pricing, the right kind of solution and thinness in terms of the graphic representation, that's the future.

Same thing on cloud. We can create the right kind of serving technology in the traditional server space where we play so strong today, around virtualization, around the database work, around high-performance computing. But if we embrace where this next wave of cloud serving is going to be, in a dense-type solution with an ambidextrous architecture, it should allow us to embrace that low power. That's what's holding back those data centers. It's about power. It's about space. It's about performance per watt and square footage.

And then convergence. Clearly, here's an opportunity if we create an SoC architecture end-to-end from 50-square-millimeter chips to 300. We position ourselves to put those products across devices, across and adjacent embedded spaces. And don't misunderstand me. I'm not suggesting that we miss our focus. Our focus in '12 is around execution, delivering on our commitments every day, quarter in and quarter out. Build that foundation, that bedrock of trust. That's the key to unlocking that growth. At the same time, invest in terms of where we think the business is going. And we'll begin to show where we're taking the business in terms of that inflection point. But we'll show it over time as we execute and deliver on that.

Next chart. And then where are we taking the business? What's next for our company? There should be a next chart. It really talks about -- I know where it is. It's got the skater shooting the puck. I love that chart. Anyway, from the standpoint where we're going to focus, we're going to double down on the client and the mobility space. The APU is the right architecture, the right strategy. It creates the right experience at the right price point and at the right value.

I'm not suggesting to dive into smartphones and heavy-crowded space of low margin with an execution engine that's not proper. I want to focus on client mobility, thin and light, building on the APUs. Our low power, taking the power envelope down every quarter, every generation. Then go attack the server space. We've seen 2 solid quarters in turning that business around. Now we want to continue to build on that into traditional space and skate to where the puck's going in terms of dense cloud solution. And embedded in -- we've already shown that we can play well embedded using that same architecture.

We create that ambidextrous architecture, that agile flexible SoC architecture. How hard is it for us to go to thin clients where we play great today? Or to game consoles? Or on to medical or communication? I'm not saying stretch the supply line so far, but strategically pick those areas that are very adjacent, that are easy for us to, well, create an opportunity that allows us to be accretive to the business. But the core business has to be where the core business is. And this year, the first lift is on execution, second lift around moving the ball forward.

How do we create value? How do we win? How do we differentiate? It's about a solution, an SoC, bringing together our IP. And Mark will go into this in detail to share how we leverage that. We must continue to drive lower and lower on power. It's not only because of thin and light, it's because of high growth in emerging markets. It's in the entry price points. Why can't we deliver ultra-thins in the $500, $600, $700? Why can't we help OEMs redistribute the profit pool with solutions like the Trinity APU? That's an exciting opportunity.

We want to build on our software in APU and around the heterogeneous architecture that we believe will be fundamental to this converged environment. And it is going to be a world of integrating ambidextrous architectures and ecosystems. We need to embrace that. And at the end of the day, it's about creating an environment where we get time-to-market. We can deliver and execute every time. Mark will take you through how we want to deliver that. That creates the ability to be more responsive. We can uniquely create silicon for specific purposes, and we can create deeper relationships with our customer set. Ultimately, we could even fundamentally begin to change the way processors are purchased down the road. Think about it. Almost like a game console model, this could occur.

Next chart. So from a standpoint, I believe that AMD is uniquely positioned to take advantage of this inflection point. I believe it's our time. Our opportunity is here right now. We have some of the world's best engineers, some of the greatest minds on the planet thinking about the technical solutions as we integrate across these architectures. How do we capture this inflection point? We're not afraid of it. We see it as the opportunity it is. That next wave of innovation is in front of us. It's our opportunity and our time to differentiate here around cloud, the consumerization and around convergence.

Our market is here in front of us. This is our time to step forward, marrying the needs of the market with our innovative capability to bring technology to market, and to do it in an agile, very efficient model. This is our opportunity right here, right now. We need to create a company that's built to win, to build a company to last. First, get them fit to fight in '12 and '13 creating -- and in parallel, the architecture that opens up this opportunity, 2 [ph] levels of value creation and the ability to compete effectively in the marketplace that is fundamentally changing. That's our opportunity. That's the new market. And that's the new AMD. Thank you for your time.

I got so fired up, I forgot to introduce Mark Papermaster, next.

Mark Papermaster

Rory, thanks very much. It's a pleasure to be here today, and it's an absolute thrill to be at AMD. We have phenomenal talent at AMD. There's phenomenal IP at AMD. And I've had the pleasure over the last 3 months of getting to know the team, understanding the depth of that talent, understanding where the IP is and really listening to our team as to what we can do to retool, to deliver that value more effectively to our customers. And it's an exciting story, and that's what I'm going to take you through here today.

So let's jump in. That IP that I was referring to is phenomenal compute, graphics IP that's been developed over years. It's been developed through -- AMD has developed through acquisition over time, and it's rooted in all of our products. And when you look at what we've done, we've been delivering that IP year-after-year with strength in relationships with our customers, deep delivery with our ODMs and our OEMs, combining the platform capability, combining the software enablement with that core IP. That's a deep relationship that we've been building that we need to leverage.

But what I'm going to talk to you today about is how we deliver this more quickly. Rory said it earlier. How we're going to bring more agility to the way in which we operate as a development team at AMD. And to do that, you actually do have to architect differently. You have to take that IP. You have to take those crown jewels and you architect it so it can pieced -- be pieced together in an agile SoC methodology. That's what we're doing. I'm going to show you what the result is of that and how that allows us to nail our customers' requirements more quickly and more effectively to the workloads that they care about.

Well, let's talk about what we mean here in terms of the trends that we said earlier. Consumerization, the cloud and this whole convergence. How does that play into that change that I talked about, that change of delivery in IP? Frankly, it's the driving force. It's that inflection point that says we have to change the way that we deliver technology. Why? Because the experience of our customers is different. That device that you have in your pocket, your client device, your smart TV that you're using, all of these, our expectation whether we're at home, whether we're at work, we want a consistent experience. We want a consistent compute capability. And frankly, we want to persistently get at that data wherever we are, whatever device we're on, which means we're tying into the cloud in a connected way. And we need a consistent way to get that in a highly effective manner very quickly. So that impacts the way in which you deliver technology across those segments.

You have to deliver that technology in a quick way, in a tailored way, to each of those segments. So that's what we're doing. That's why we're focusing on platform solutions. It is to the end customer and those markets that we have to be able to deliver that technology with competitive differentiation. At the end of the day, it is that IP war chest and how you enable it to deliver the platform that makes the difference. And in doing so, we'll leverage that partnership that we've had with our ODMs and OEMs. We're going to continue to build on that, but we're going to do it in a far more ambidextrous, flexible way to deliver the solutions. That's what's different about what we're doing here.

And it turns out that you change the execution model to deliver that flexibility, to deliver that speed. And that's the engineering that we put around it: how you architect each of these IP blocks, how you deliver the time-to-market, how you bring more quality as you change that methodology into basically a best-of-breed SoC methodology. We're bringing that right on top of the differentiated IP that we've been building for years at the AMD company.

So let's talk about that. What I love is the first example of unlocking the power of the IP that we've been working on, and it's something we're really excited about here at AMD. It's called Heterogeneous System Architecture. And when you look at how we unlock this value of the CPU and the GPU IP that we have here, it's about getting it to work seamlessly, making it easier to take advantage of these compute capabilities.

And it turns out that when you allow these engines, the CPU engine and the GPU engines, to work together more effectively, you actually get tremendous speedups in terms of how you solve problems. Actually, you can exceed Moore's Law in the rate at which you're providing this type of improvement.

We've got examples: unzipping files, face recognition, search algorithms that you're running. When you take these types of day-to-day needs that we all have, whether it's running on the cloud or on your client device, and you apply this combination of CPU technology working seamlessly with the GPU, it's phenomenal.

But you say, "Mark, that's not new. These types of engines have been around a while. What's different about what you're talking about? What's different about AMD's approach here?" Well, it's fundamentally different, because the reason that you haven't seen this catch a wave and take off in the industry is that it's been hard to program. It's been hard to unleash that value. And what we're doing is creating a construct, an architecture, that makes it very easy. It's going to make it as easy to take advantage of the CPU and GPU engines as it's been for years to write in the high-level languages that you see running all the applications that you have day-to-day on your smartphone, on your tablet, on your client device. That's what this does.

And it gives you massive speedup. We're seeing, on an OpenCL application, a 125% speedup. We're seeing applications that run at half the power with HSA, with these approaches, this architecture, applied versus conventional techniques. That's the beauty: more performance at lower power. And it's really going to unlock developers to take advantage of the full compute capability of APUs.
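The core idea here, dispatching each piece of work to the engine that suits it, can be sketched in a few lines. This is a toy model, not AMD's HSA runtime or the OpenCL API; the function names and the dispatch heuristic are invented purely for illustration.

```python
# Toy illustration (not AMD's HSA runtime): a single high-level program
# sends each task to whichever engine suits it -- serial, branchy work
# to the CPU; wide data-parallel work to the GPU.
def run_on_cpu(task, data):
    # Serial execution: good for latency-sensitive, branchy code.
    return [task(x) for x in data]

def run_on_gpu(task, data):
    # Stand-in for a data-parallel engine: every element is independent,
    # so a real GPU would execute these work-items concurrently.
    return list(map(task, data))  # semantically identical, massively parallelizable

def dispatch(task, data, parallel_fraction):
    """Pick an engine based on how data-parallel the workload is (assumed heuristic)."""
    engine = run_on_gpu if parallel_fraction > 0.5 else run_on_cpu
    return engine(task, data)

brighten = lambda px: min(px + 40, 255)   # per-pixel op: embarrassingly parallel
pixels = [0, 100, 200, 250]
assert dispatch(brighten, pixels, parallel_fraction=0.9) == [40, 140, 240, 255]
```

The point of HSA is that the programmer writes `dispatch`-style code once in a high-level language, and the runtime, not the programmer, handles which engine executes it.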

And the way that we're going to go about this is in an open way, with open standards. We have been huge supporters of open standards at AMD, and you're going to see this in the Heterogeneous System Architecture approach. We found that the way to really build momentum around a new technique like this is to bring the industry with us. In fact, our competitors have said, "Yes, we're open." But opening a narrow segment of a piece of code released out there is very, very different from an open standard like Linux, where it's open to the full community. Every aspect of the tool chain is open to the development community. That's what we're doing with HSA. It's an approach that will allow hardware developers, ISVs and OS developers to come together around a set of constructs and enablement that allows us to unleash this value.

We're creating a consortium. We've gone out in the industry and talked to developers, talked to OS developers, talked to competitors. And we've shared with them this proposal, this architecture, and we've gotten an overwhelming response. It's very positive. We're in the midst of this, and we're getting very positive responses as we roll it out. We're releasing specifications. We have a summit coming up in June, which we're getting tremendous excitement around. And this is what's going to unlock this value: making it easy to take advantage of the CPU and GPU.

Well, what does that do for us at AMD? You say, "Mark, that's a tide that raises all the boats." Exactly. That creates the ecosystem. That's what we want to do. We want this ecosystem, and then it lets our IP stand on its value and competitiveness. This is where we've invested. This is where we have competitive differentiation. We'll compete on that value with an open ecosystem that unlocks its capability to the developers, the software developers. Very powerful.

So this is a stepwise progression. We're on this journey. It's the APUs that Rory talked about, which we've been shipping very successfully, because that started the journey by bringing the CPU and the GPU onto the same silicon. It's got a physical memory which is shared. That gets you an immediate speedup as you're running applications that can take advantage of both, but that's just the first step.

The next step, which we're rolling out this year, is allowing much better management of the power. A workload might be on the CPU, and then the aspects of the application that are much more parallel can be taken advantage of by these parallel GPU compute engines, shifting back and forth and managing the power in 2012, and allowing the GPU to access the x86 memory space. That's physical memory space, right? Then in 2013, we actually go to virtual memory: 64-bit addressing with virtual memory shared between the GPU and CPU. And then in 2014, we have the full complement of support for this Heterogeneous System Architecture, where we have immediate context switching between the GPU and CPU and really have unlocked, in a very seamless way, with all the software constructs underneath this, how to get the full value and full savings of this combination of technologies. Two different ways of doing computing, on a seamless basis.
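Why shared virtual memory matters can be shown with a toy cost model. The numbers below are assumptions for illustration only, not measured AMD figures; the point is simply that when CPU and GPU share one address space, the staging copies that separate memories require disappear.

```python
# Toy cost model (assumed numbers, illustration only) for offloading a
# buffer of `mb` megabytes to the GPU and reading the result back.
COPY_NS_PER_MB = 100_000     # hypothetical host<->device copy cost
KERNEL_NS_PER_MB = 20_000    # hypothetical GPU compute cost

def offload_with_copy(mb):
    # Separate address spaces: copy in, compute, copy the result out.
    return COPY_NS_PER_MB * mb + KERNEL_NS_PER_MB * mb + COPY_NS_PER_MB * mb

def offload_shared_memory(mb):
    # Shared virtual memory: the CPU hands the GPU a pointer; no staging copies.
    return KERNEL_NS_PER_MB * mb

assert offload_shared_memory(64) < offload_with_copy(64)
```

Under these assumed costs the copies dominate, which is why small kernels that would never have been worth offloading become profitable once the copy terms drop out.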

So when you look at why this is critical for us: this is our crown jewel, the graphics performance that we've had. Year after year, we have been delivering graphics leadership to the market. Our AMD Radeon HD 7970, which we just announced and began shipping at the end of last year, again provided leadership capability. Its leadership performance is 4 TeraFLOPS, 4 TeraFLOPS, right, of compute capability. But more than that, it's done at significantly lower power. We're improving performance per watt at each turn of the crank as we deliver our products.
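As a back-of-envelope check on that figure: the HD 7970's published specifications (2048 stream processors at 925 MHz, with a fused multiply-add counted as 2 floating-point operations per cycle) put single-precision peak throughput just under 4 TFLOPS.

```python
# Peak-FLOPS arithmetic for the Radeon HD 7970, from its published specs.
stream_processors = 2048
clock_hz = 925e6
flops_per_cycle = 2  # one fused multiply-add = a multiply and an add

peak_flops = stream_processors * clock_hz * flops_per_cycle
assert round(peak_flops / 1e12, 2) == 3.79  # ~3.8 single-precision TFLOPS
```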

We deliver leading-edge standards in each release of our graphics IP technology: DisplayPort; Eyefinity, the support for multiple screens; the latest in DirectX, we support DirectX 11. And we push this out with leading-edge methodology and leading-edge technology.

We've got demos which I'm hoping that you'll have a chance to see over the course of the breaks and during the day here today to see this technology in action. We bring it out with our discrete GPUs, and we roll it right in through our SoC methodology into our APUs.

So let's talk a little bit about low power. I told you that that was a fundamental element of our strategy, because we're rolling out leading-edge low power across each and every aspect of our IP and our SoC design. But this is a journey we've already been on. We've been on it for several years.

So it started with AMD being first in bringing key elements on board the silicon to drive drastic reductions in power and improvements in performance. We brought the Northbridge on board. We brought the GPU on board with our first APUs, and Southbridge integration. So the first step was integration of function to bring down power and improve efficiency. Next, we went to onboard power management, which we've been shipping as well since 2010. Then we've gone to power gating: turn on the power where you need it, where you're computing at that moment in time. If there are no operations going on in that piece of the design, shut it down. Don't have it burn any power. And then improve that even further by making it very fine-grained. So you break it down to the precision of where you need to burn the energy at that moment in time and have the rest shut down.
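The difference between coarse and fine-grained gating can be sketched with a toy power model. The block names and wattage below are assumptions for illustration, not AMD silicon data.

```python
# Toy model of power gating (illustrative numbers, not AMD silicon data).
# Coarse gating can only cut power when the entire chip is idle;
# fine-grained gating shuts off each idle sub-block individually.
blocks = {"cpu0": True, "cpu1": False, "gpu": False, "video": False}  # True = active
POWER_PER_BLOCK = 5.0  # watts per block, assumed

def coarse_gated_power(blocks):
    # The whole chip stays powered if anything at all is running.
    return POWER_PER_BLOCK * len(blocks) if any(blocks.values()) else 0.0

def fine_gated_power(blocks):
    # Only the blocks doing work at this moment burn power.
    return POWER_PER_BLOCK * sum(blocks.values())

assert fine_gated_power(blocks) == 5.0    # one active block
assert coarse_gated_power(blocks) == 20.0  # everything stays on
```

With one core active out of four blocks, fine-grained gating spends a quarter of the power in this model, which is the "precision" the passage describes.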

What we're rolling out this year, in 2012, is more advanced power management. I talked about it a little bit with graphics, in terms of how we shift the energy from the CPU to the GPU based on where the operations are going on, and advanced thermal management. And then ultimately, as we complete our rollout of Heterogeneous System Architecture, this provides very fine-grained control of where the computing happens. It's the optimization of taking algorithms, taking the workloads that our customers are running, and ensuring that they're optimized for efficiency and for where the power is being burned.

End result: it's not about the cool technology or the features that we implement to enable these power savings. What matters is what it does for the customer. What matters is battery life. The APUs that we've been shipping this year represent a doubling of the battery life we had just 2 years prior. That's what Brazos is delivering. It's phenomenal; it's an end-user experience that changes. That's technology put into application. That's what it's about. It makes a huge difference in the data center, where you're thermally constrained and power constrained; there's limited power delivery into the square footage you have in the data center. This makes a fundamental difference to our customers: taking technology and innovation and applying it to make a difference for the customer, and being more agile in the way that we deliver that across our entire product line.

Let's go further on that IP. When I say we're applying it across all this IP, this is our treasure trove at AMD. We've been working on this for years. We have a deep set of IP for delivering that experience: the multimedia IP to deliver that graphics experience in our discrete graphics and our APUs, across each of the segments I described; power management across the board, brought right into the application, so when you're using our technology for displays, it's right through the codec, right through that whole translation of the data into the visual image; all of these features brought to bear across our cores, right into our memory features, how we're managing the memory.

So it's about taking the techniques that we have with low power, with Heterogeneous System Architecture, and bringing flexibility and agility to defining each of these IP blocks, such that they can be put together in a very agile and flexible way that's going to deliver a different experience to the customer and a different mode of delivery for the AMD company.

And you know what? We're going to give you an example of how we think about that. I call it a horizontal and a vertical mode of delivery of that IP, and of the technology and strategy that we're on. First is horizontal. That's about piecing together that IP I showed you. How do we bring that together? With the SoC methodology, we specify each of those IP blocks with a consistent, standard way in which they can be put together, with a consistent methodology. One methodology across the AMD company as to how that comes together, giving us the flexibility to mix and match all of that AMD IP I just showed you in a very facile way. That's a fundamental change. It's going to make us more agile.

That's the horizontal efficiency that we gain in putting these solutions together. But it's actually more than that, because you've seen what we're already doing with the CPU and GPU. With that flexibility, we can tailor solutions in a much more agile way to the demands of our customers. What that means is that as we architected this new SoC methodology with these consistent interfaces, it also plays with and supports standards that are out there for ecosystems that exist outside of our world, outside of the internal IP that we've been developing at AMD. That allows us to very easily pull in third-party IP. We don't want AMD engineers working on off-the-shelf, industry-available IP. If our customers need that, we're going to be able to pull that third-party IP directly into the SoC methodology that we're putting in place. Okay? It's a straightforward concept. It's very powerful. This is what we're executing.

Let's go to the left side. What do I mean by vertical, by this vertical integration? It's the strength that we've been building for years, developing those relationships with our customers, because they don't just want a piece of silicon from us. That's not what the AMD company is about. We deliver solutions. We deliver the total technology. With the SoC, we'll be able to target those engines to our customers' needs in a more facile way. But it's also about the software enablement that we bring with it. We have a massive investment in the software and enablement around each of the IP blocks that we develop. And it's about the platform. We build reference boards. We test this out. We deliver functioning platform solutions to our ODM and OEM customers. And we can do so in a specialized way, as we're doing with game consoles, and with services to help them test, to give our customers more efficiency in delivering the platform solutions and the software enablement that they need.

So it's about the horizontal, the flexibility in piecing together our IP as well as external IP, and it's about the vertical, nailing the solutions that our customers need and are asking us for, and doing that in a flexible and agile way.

So what I'm going to do now is talk to you about the perfect example. It's where we're focused on taking what I just described to you, how we're changing the way in which we deliver IP, the methodologies we use, how we enable this with Heterogeneous System Architecture, and applying it to an application that we're very excited about. And it's the data center. The data center is changing. Rory talked to you about this inflection point that we're going through in the data center.

I've seen this over many years working with our data center customers. And it's just very clear to us the value we can provide as we take this flexibility, what we call ambidexterity, to be able to mix and match different solutions from our IP. Look at what's going on in the data centers: for many of these cloud workloads, you have huge farms, and they're running tailored workloads. They're running Web servers. They're servicing search engines. Some are running more specialized, high-performance computing applications. And so we need the ability, and we have it with the approach we're taking, to provide flexible, tailored compute solutions for those types of applications, but to do it in such a way that we're leveraging the investment we've made, not only in the IP but in low power, so that we can take our designs optimized for low power and add IP that allows them to interconnect in a very dense fashion, very efficiently. Right? Integrate this together. And it's that aggregation: you put that compute capability together, tie it in, working with others, bring the I/O, bring the storage capabilities, bring the solution, the platform, together with our IP and the flexible methodology we've developed. We can provide the solutions that data center customers need, given the inflection point, given the challenges they're facing.

We understand those workloads. We've been working with them for years. And we're tailoring our solutions so that we can deliver our IP, our low power, our acceleration to the data center. It's going to be disruptive. In data centers, mega data centers in particular, the cost of running them is measured in power. It's measured in computing per square foot. It's about lowering that total cost of ownership, and that's what we're going to do.

And if you look at how we go do this, it's simply about leveraging the capabilities we are developing. It's about executing, bringing it together in plays centered on what the data center needs. So we take that flexibility, those IP blocks and the Heterogeneous System Architecture, bring that together in a set of solutions, and interconnect it in a very efficient way so you can tie hundreds, thousands of these compute elements together. This is what's fundamentally different. You bring efficiency to computing. And it's going to be a differentiation. And the feedback we have is that this is the right strategy for AMD. It's leveraging our IP. It's leveraging it to solve problems that our customers care about. This is a play we're running. We're bringing this technology, focusing on the data center. It's the immediate focus of that change in strategy that I've described to you, to hit the market inflection point that we see in front of us.

So when you look at how we do this, how do we tailor for custom workloads? It's pretty straightforward. It's about architecting. It's about taking all of that IP I showed you on the previous slide, which we've been developing for years, the crown jewels of our differentiation, and architecting it right up front so it can be flexible.

Take the x86 IP in which we have such a rich heritage: 64-bit computing for many years. We've been optimizing this. We have cores which are tailored for high performance per watt. We have cores that are tailored for low-power applications. We have 2 lines of cores. We've been architecting them such that we take all the x86 aspects and bring them right into that core IP block, so that the x86 ISA can just be matched up with all the rest of the IP that we have. So there's flexibility as we drop in that x86 and mix and match it with the rest of our IP. But again, that approach of changing how we architect and how we interconnect our IP opens up the ability to bring in third-party IP. If our customers have some unique IP that they need, we're going to work with them. We can create purpose-built silicon.

So it's about leveraging our offerings, our IP. Understanding the needs of our customers and leveraging it. Leveraging that, and leveraging the ecosystem out there that's creating IP complementary to what we have here at AMD, and bringing it together to solve the customer workloads. It's compelling. It's absolutely compelling, the feedback we have. I just spent 2 weeks, between CES and travel I've done, sharing the story with our customers, and the feedback is very positive.

This is what they want: to be more efficient, to solve their problems, and frankly, to differentiate themselves. So it's a very compelling story. But you've got to execute differently. We will be executing differently. We've already embarked on this journey with aspects of it like low power and our march to Heterogeneous System Architecture to further unlock our CPU and GPU capabilities. We've been on that march, but we're doing even more. We're changing our SoC methodology to be rigorously modular, to enable us, with a best-practice, consistent methodology across each and every one of our design teams, to put this together in a much more agile way and have true IP reuse. We're architecting our IP for the target segments that we know it will go into, and we've architected upfront the interfaces it needs to be able to drop into each of those applications.

Secondly, it's about how you bring those solutions together in a way that's quicker in time-to-market. And so we're taking a more parallel approach to our hardware and software development. This is also a fundamental change. It's really enabled by the advent of the highly capable emulation systems that are out there in the market today. And so what we're doing is defining the features for our software, for that enablement around the platform, and pulling all of this in concurrently with designing that SoC capability upfront. And so what does this mean? When you line up all that definition upfront and pull that development together in parallel, it's like a set of dominoes that fall to additively improve your time-to-market and additively improve your quality.

Why is that? Because when you line that up in parallel and you do this in a more cohesive way, you get the requirements in, working cohesively, validating in parallel. While that long-lead-time silicon is in development, you're in parallel honing how it runs in a customer application, ensuring you hit the performance goals, validating the features that you need right out of the gate when it comes back. So you end up with more parallel development; it's just a pure shrink of that cycle. But when you go to deliver this to the customer, that validation is assured. You've already run those features. And what this means is we're going to get our IP into the hands of our customers more rapidly.

And then thirdly, I talked about low power. As we drive low power, it's all of the techniques I talked about that we're putting into our silicon. It's about how we manage across the platform and the stack, and how we optimize performance per watt across each of our IP blocks, across the SoC as we integrate it, and how we manage that power. It's the benefit we have of having been in the platform business for many years: not to think as a pure silicon provider, but to think as the platform provider that we are, to end up with optimized solutions for our customers.

This is a change. We're leveraging what we have done. We're leveraging the experience we have, the deep talent that we have, but we're becoming much more agile. We've had that talent, and historically we've created the highest-performing capability in the markets that we've served. And so we've gone after that last 2% to 3% of performance, and historically that's meant a longer development cycle. It's been a several-year development cycle the way that we have been operating.

We've used the absolute latest technology node with a new architecture that we've been developing. And frankly, we've had some challenges as we stacked all of that together; we've had a couple of areas with challenges when we stacked risk upon risk. What our customers need, as evidenced by the Brazos example that you heard from Rory, is agility. They want that SoC solution. They want it tailored to the workloads, to the problem set that they're solving. And they want it more quickly. So that's what we're doing.

We're simply listening to our customers, taking that wonderful talent we've had and the wonderful IP that we've developed, and bringing it to our customers in a fundamentally different way. It's quicker. It's more agile. And it's tailored to the problems that they want solved.

So I talked to you about the play on the data center; that's where we're starting. But we're rolling this out across everything we do, the methodology, the approach. You'll see it implemented across all of our segments. And at the end of the day, in addition to speeding those cycles and getting our IP to market, it makes for a different AMD in terms of the ability to partner with us.

It's just going to be easier, right? Because it's easier for us to understand those needs and rapidly take our IP, bring it together to solve those customer needs. It's an exciting time at AMD. I am thrilled to be here, and we are running with this new play. The team's very excited about it.

Look forward to you hearing more detail over the course of the day. Thank you very much for your time. Appreciate it.

Unknown Executive

And our program continues. Here once again, Ruth Cotter.

Ruth Cotter

Great. Well, welcome back, everyone, as you start making your way back into the room, and welcome to everybody reengaging on the webcast. We're going to continue our program here this morning. And I'm delighted to welcome Lisa Su to give us a very solid overview on our product portfolio and the direction that's going to take. And she needs absolutely no introduction as I know many of you in this room are very familiar with her tenure. So, Lisa?

Lisa Su

Okay, good morning. It's great to be here today on behalf of AMD. As you've heard this morning, I'm the new kid on the block, so it's really an opportunity for me to talk about where we are going with our products. Rory talked about our vision for the company. Mark talked about the technology direction. I'm really going to focus on our products and how we take them to grow market share in this very exciting market that we're in today.

It's also been an opportunity for me to take a step back and look at how we stitch together this great product portfolio that we have into something that's truly differentiated in the market. So hopefully I can give you some of that as we walk through the presentation.

I'll start first with where we think the market is going; Rory talked about these inflection points, and it really is a very special time in the semiconductor industry today. Then I'll talk about our technology leadership, and then spend the bulk of the time on our product road map. And I know that many of you have been waiting for an update on our product road map, since it's been a while since the last public update. We'll go through a lot of the details of where we have made some changes and where we believe we're going to win over the next couple of years. And then I'll talk about what the future holds and the new markets that we're pursuing.

So let's start first with the market trends. This is an extremely exciting time to be in the semiconductor industry. We've all been here for many, many years, and we've seen how we've gone from the PC era to the mobile computing era to the cloud and the Internet devices. But where we are today is really the era of convergence. And what that means to me is imagine having your data anytime, anywhere, anyplace and in any format, all interconnected with devices that really have that convergence. That means tens of billions of devices are going to be sold every year, and that's both on the consumer side and on the infrastructure side. And that creates great opportunity for us going forward.

Now I know a lot of people have talked about, "Well, hey, the PC market is not such an interesting market anymore. It's being taken over by a lot of different areas" and that might be true if you're talking about just the very traditional piece of the desktop market. But when you look at the segments and where the segment growths are, there are so many places where we can grow in this industry.

If you look at notebooks, we'll talk about notebooks a lot and the ultrathin form factors and the light and slim versions and what we can do in the emerging countries. That's tremendous growth for us. When you talk about the server market, beyond traditional IT, all of the new workloads and the mega data center and the cloud and what's going to serve all of those Internet devices that are being connected, that's growth opportunity for us. When you talk about the embedded space, lots and lots of opportunities in the embedded space, and those are spaces that are very adjacent to the technology that we have today. So that's room for growth for us.

And then you talk about tablets. We know that tablets, if you look from a revenue standpoint, are a smaller sliver, but the unit volume is very, very high, and that connectivity and that technology go across the markets. And you see that all of these are tremendous growth opportunities using the same basic technology for compute and visualization that we're very well known for.

Rory also talked a little bit about this. But if you look at where the next 1 billion users are going to be, there's tremendous market opportunity in this area. The last decade was really about the growth of mobile phones and smart devices being connected to the Internet, driving growth in the mature regions. When you look at the emerging countries, just think about all of the opportunity for productivity, for education, for governments, for modernization of the infrastructure. All of that is tremendous growth opportunity for us.

And if you think about what really drives those areas, it's not about the nanometers or the pure bleeding-edge performance; it's about the user experience we can drive. And so you're going to hear us at AMD talk a lot more about the experience. What are we trying to drive? It's not just technology for technology's sake. It's how we take technology to something that means something to the consumer and to the marketplace.

So when you think about some of the user trends, think about our own usage patterns. A laptop today has more than enough compute performance for what we need to do in our PowerPoint, in our word processing. But when you talk about driving the next generation, it's really about the natural user interfaces: the touch, the sensing, everything that these new devices bring to us requires computing power in a different way.

When you think about our display technologies and the idea that you want to be as real as possible and that realism comes through in these immersive displays, that drives computing power that's different from just straight regular computing. When you think about social gaming, that's another aspect that drives tremendous capability that's different from what is in today's computers.

Now when you go into the infrastructure, we want to connect all of these devices in the cloud where you see data everywhere, all interconnected, being able to get access anywhere. And in the collaborative environments, we want to take these video systems that are tens of thousands of dollars today, put it into your $500 PC, and that is the way that we think about driving the user experience in the next wave of computing.

So we take all of these market trends, and I'm really trying to figure out how to put simply what AMD's product focus is. AMD's product focus is really about aligning to what the market needs. It is about creating an exceptional user experience. That is what's going to drive the consumer, what's going to drive the computing capability, the user experience across all device categories. So what you hold in your hand is the same as what you use at home, the same as what you use in your car and what you use in your business environment. All of that is one single architecture and the ability to share across those areas.

It's about taking leadership, compute and visualization. So the idea is compute is practically free now, if you really think about it. With the advancements of technology, you can really go very, very far. We want to bring that to lower power and lower cost points and that's what we can do with our technology.

And then you heard Mark talk about flexibility. I think the value proposition that we want to bring to OEMs is one where we can really bring flexible system-on-chips that go across all of the different platforms and be able to really get the time to market. And this is the value proposition that we want to bring to our customers and to the consumer.

So now let me talk a little bit about technology leadership, and I am a technologist at heart. So the fun thing about technology is really figuring out how we put it into action. And when you think about technology for a company like us, you don't have to be the best at everything, but you have to be the best at a few things. And that's the differentiation that we have to get into our products, into our mindset, into the marketplace.

So in terms of technology leadership, we talked about graphics. Graphics is definitely one of our crown jewels, and the graphics performance leadership that we've demonstrated over the last decade has been phenomenal. We talk about performance at the very, very highest end in terms of TeraFLOPS, so we'll talk about our 4 TeraFLOPS machines. We talk about graphics in terms of performance per watt, driving it throughout our capability. And we've also talked about graphics from the standpoint of giving you a different user experience.

So if you go into the demo labs later on today, you'll see our Eyefinity display, which goes across 6 screens, and you can really see and feel that it's different. It's different from what's available with today's capability. But more important than that, the graphics technology ends up being the centerpiece of the entire rest of our road map. Graphics is a basic building block in terms of parallel processing capability that allows us to accelerate many, many applications: desktop productivity, browsing, all of that capability. And this is really the secret sauce that goes into our APU line.

When you talk about APUs and application processing, I've really thought about heterogeneous computing for the last 10 years. I think heterogeneous computing is really what semiconductor folks think about as the best of both worlds. And if you think about it, it was first used in supercomputers then used in gaming machines. AMD really made a big bet bringing it into the mainstream. And the idea is that you want the right computing element to be used for the right application.
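The core idea -- route each task to the computing element best suited to it -- can be sketched in a few lines of Python. The threshold and engine names here are purely illustrative assumptions, not AMD's actual scheduling logic:

```python
def choose_engine(parallel_items: int, threshold: int = 10_000) -> str:
    """Illustrative heuristic: highly data-parallel work goes to the
    GPU-style engine; mostly serial work stays on the CPU cores."""
    return "gpu" if parallel_items >= threshold else "cpu"

# A large image filter is a natural GPU workload; parsing a small
# config file is better left on the CPU cores.
print(choose_engine(1_000_000))  # gpu
print(choose_engine(200))        # cpu
```

In a real heterogeneous system the decision also weighs data-transfer cost and what the hardware exposes, but the "right element for the right application" principle is the same.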

So in 2011, we launched the AMD APUs, and it's been fantastic. You heard it from Rory. We'll say it many, many times today: we were first to introduce heterogeneous computing in the marketplace. It had great customer reception. The performance of the Llano APU is 3x what a typical general-purpose processor would do, and that's the power of bringing the processor and the graphics capability together. When you look at the road map, and I'm going to talk to you about the road map for second-generation and third-generation APUs, we will take that in the mainstream to 1 TeraFLOP. And it's really just a couple of years away.

The question about APUs is where we think they can go in the market. And if you look at the market adoption for this technology, it's been fantastic. We've shipped over 30 million APU units to date. And if you look at it, we just started shipping in the fourth quarter of 2010, so it really has made tremendous progress. 11 of the top 12 OEMs are shipping AMD APUs, and we see that continuing to grow and the momentum continuing to build into 2012 and 2013.

Brazos, we've talked about. Very low power, very nice in emerging markets, really goes across a range of applications in both desktop and laptop. It's the fastest growing platform in our history, and we see the next generation growing even faster. So lots and lots of good customer reception around the APUs. And we continue to believe this is a strategy that's going to differentiate us going forward.

And I also want to spend some time on server. Because if you really think about where the growth for us is going to be, it's really the client side, when we look at the growth around mobility, and the server side, when we see growth around the data center. Server is a place where we are absolutely committed to be successful and win.

When you look at the architectural investment that we made with Bulldozer, Bulldozer was a bit ahead of its time in terms of really focusing on multi-threaded workloads for the server market. It was designed for server performance workloads. We've put a lot of effort into revolutionary power design. I'll show you what some of the customers are saying about the server architectures that we have. But once again, our focus in server is not on single-threaded performance alone. It's really on total system performance, as well as total system power, and that's where we're committed in the server market.

So now let me move to the product portfolio, and there are a lot of details in this, and I'll try to walk through each one of these to give you a sense of where we're going in the product portfolio. Starting first with our strategy. It was very important to pull together AMD's product strategy on one piece of paper. It's not a complicated strategy. It's actually a very simple strategy. It actually comes together across each of our market segments and includes why we think we're going to win.

Starting first on the graphics side. It's very, very clear that leadership graphics performance is absolutely critical to us, both in the discrete graphics market as well as in the capability to bring us through that visualization experience. This is going to be a very, very key part of our differentiation going forward. Our goal is to leverage that IP throughout our entire product portfolio. You're going to see that graphics IP go throughout our APU line and into every market that we serve, because we believe driving this is going to give us unique differentiation and give the consumer unique value in that capability.

On the client side, it's all about the user experience. It's all about taking that user experience to lower and lower powers. We want to hit the volume sweet spots of the industry that includes mainstream, entry and yes, that's going to include tablets in a big way because that is where we believe there will be growth in the market.

When we look at the server market, I talked about the importance of the infrastructure. We believe the infrastructure is changing. We believe that new workloads are where the growth is going to be. And we're going to take that leadership in performance and performance per watt, but really customize solutions, as Mark talked about, in the data center space so that we bring it all together.

And underlying all of this is execution. Rory talked about the first wave execution. Mark talked about execution. I'm going to talk about execution. At the end of the day, this product road map is a road map that our customers can count on, and it's a road map that we're going to build upon so that we have a very, very strong, very flexible SoC system that we can really get time-to-market from our customers' standpoints.

Okay. So now let's start with the client and graphics road map. As I said, there's a lot of information on these charts so I'll try to go through each box. And starting first at the top, on the graphics side, we're very, very pleased with the graphics road map that we have today. Our Southern Islands launch started just before Christmas. We got tremendous reviews on our Tahiti product line, and we believe that it has clear leadership in the industry with a top to bottom discrete road map. I'll show you a little bit more detail on the next chart.

Our second-generation Trinity APU. We've talked a little bit about it in the press, but I can give you more details today. We certainly believe this has a tremendous, tremendous value proposition, both in terms of the performance that it hits and in terms of the power envelopes and the systems that it can go into, and the design wins have also been spectacular.

And then in our low power -- and our ultra low power line. So there is a change to our road map where we are continuing with our very, very successful Brazos line going into Brazos 2.0. And this is actually very good value for our OEM customers because they're able to take all of the infrastructure that they built with Brazos and really take it into the next generation.

You'll see that we're extending our road map into ultra low power. Hondo will be our first tablet-based system, and we will see tablets with Windows 8 out there with AMD silicon. And those are the key elements that I'll talk about in the next couple of charts.

So with Southern Islands, I will say that it's really cool to have the fastest graphics cards out there. That's something that we're very, very proud of. We've gotten really tremendous reception from everyone in the market. When you look at the performance capability of Tahiti, as I said: 4 TeraFLOPS, 4 billion transistors, first to market with 28-nanometer technology, support for 6 displays. All of the key elements are there. This product has been very, very successful. We're shipping it right now. You will see before the end of the quarter the announcements of our performance and mainstream lines with Pitcairn and Cape Verde, and all of that gives you end-to-end leadership in discrete graphics capability. This is something that we are very, very committed to, and we'll continue to drive hard because we believe this is a key market and a key way for us to leverage our IP.
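As a rough sanity check on that 4-TeraFLOPS figure, peak single-precision throughput for a GPU is conventionally estimated as stream processors x clock x 2 FLOPs per clock (one fused multiply-add per cycle). The 2,048 stream processors at 925 MHz used below come from AMD's published Radeon HD 7970 (Tahiti) specifications:

```python
def peak_gflops(stream_processors: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision GFLOPS, assuming each stream
    processor retires one fused multiply-add (2 FLOPs) per clock."""
    return stream_processors * clock_ghz * 2

# Tahiti (Radeon HD 7970): 2,048 stream processors at 0.925 GHz
print(peak_gflops(2048, 0.925))  # 3788.8 GFLOPS, i.e. close to 4 TFLOPS
```

The conventional "4 TeraFLOPS" marketing figure rounds this 3.79 TFLOPS theoretical peak up.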

Going into Trinity. This is really our second-generation APU. This utilizes our Piledriver cores. So Piledriver is the next-generation x86 core, with up to 25% more performance than Llano. It also incorporates new graphics from our Northern Islands and Southern Islands capability. So we have the graphics core from Northern Islands. We have the video IP from Southern Islands. That gives us up to 50% graphics and compute uplift.

When you take a look at the numbers, you can really think about Trinity in 2 ways. You can either double the performance at the same power, which is great, or you can take the same performance at half the power. So we can go from 35 watts with Llano down to 17 watts with Trinity, and that gives us tremendous performance-per-watt leverage, which really takes this class of machine into a different space.
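The "same performance at half the power" framing can be made concrete with a quick calculation using hypothetical, normalized benchmark scores (the 100-point score here is an assumption for illustration; the 35 W and 17 W envelopes are the figures quoted above):

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Simple efficiency metric: benchmark score per watt of TDP."""
    return score / watts

# Assume a normalized score of 100 for both parts, per the
# "same performance at half the power" framing above.
llano_efficiency = perf_per_watt(100, 35)    # 35 W envelope
trinity_efficiency = perf_per_watt(100, 17)  # 17 W envelope
print(round(trinity_efficiency / llano_efficiency, 2))  # 2.06
```

In other words, holding performance constant while halving the power envelope roughly doubles performance per watt.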

From a battery life standpoint, over 12 hours. We feel really, really pleased with where we're positioned from a power standpoint. And the important thing here, as we noted earlier, is that the design wins are tracking today ahead of where Llano was at the same time. So the Trinity products have started shipping already into OEMs, and you'll see systems on shelves by the middle of this year. So this product is ramping very fast and will be very key for our 2012 execution.

Now at CES and in the industry, there's a lot of talk about ultra-thins and what we can do with ultra-thins. So we have a number of products that are out there in the showroom, but I wanted to bring one up here on stage because this is a reference design that's built off of the second-generation APU Trinity. And you can see that it's using a BGA form factor for the ultrathin. So it's a 17-watt form factor, but the thinness and the lightness of it is pretty incredible. So at 18 millimeters, it is really one of the reference designs from one of our partners at Compal that many OEMs are looking at and will be looking at putting into the marketplace.

We believe this will bring the ultrathin form factor, which is very, very sleek and very light, into the $600 to $800 price point. And that's the value of the APUs, because you really have the performance that you need at the power envelope that you need at the right price point. And we have tremendous, tremendous belief in this platform going forward. So you can see a number of platforms out there with the Trinity system. But once again, it's very, very good reception from our customer set.

Okay, now let me talk about the entry and the value lines. The Brazos 2.0 platform is a very important platform for us. We've talked about the success of Brazos in the APU. The fact that we can get platform continuity and execution and all of that for our customers was a very, very key element. When we think about how to really upgrade this platform, we've been able to turn on our Turbo Core technology, as well as optimize for Windows 8 and add some more features to it. But this gives a very nice platform continuity for something that has shipped very well in the 2011 time frame.

We're also seeing a very nice improvement in battery life, and that will be helpful as we go into the emerging markets. We've also been able to take a new SKU from this into the Hondo platform that, as I said, will go into the ultra low-power tablets. So Brazos 2.0 will really fill out the lower end of our platform and will really, really focus on the ultra low-power points there.

Okay, with that, we move on to 2013. So 2012 is a year of execution. It's a year of Southern Islands. It's a year of Trinity, and it's a year of Brazos and Hondo. As we move into 2013, you can see our whole lineup will be refreshed in 28 nanometer. So we don't usually talk a lot about our graphics roadmap at these Analyst Days, but our Southern Islands family will be moved over to Sea Islands. That will be a new graphics architecture that will increase the performance and the capability. Very, very importantly, it will also add new HSA, or Heterogeneous System Architecture, features. Mark talked about it. We are absolutely committed to taking this up and down our roadmap because we believe this is one of our key differentiations and value propositions.

So in the APU line, we'll go to -- our Trinity will go to Kaveri. Kaveri will be our third generation APU. It was that 1 teraFLOP point that you saw on the APU line. So we'll get significant performance improvement. It has a new x86 core in it that will get more IPC and power improvements with Steamroller. It will also have new graphics and all the new HSA features that I talked about.

As we move into the low-power and the ultra low-power lines, here is where we really drive for more and more performance at lower power. So we talked about the System-on-Chip architecture and why we think System-on-Chip is so important: it brings so much value. In Kabini and Temash, we'll have our first true System-on-Chip implementations, integrating the Southbridge, the fusion controller, onto Kabini and Temash. That gives us power improvements, that gives us performance improvements and that gives us real system bill-of-materials, space and cost savings. That will be off of our Bobcat core series, which will be upgraded with Jaguar, which will give us more performance at lower power. And it will also really give us an opportunity to scale these HSA features across our entire roadmap. So when we talk about our belief in APUs and HSA really being very, very critical for our roadmap, you see it in practice in 2013. And this roadmap is very much in execution, and it is very clear that our focus is on time to market in this area.

Okay. So now let me talk a little bit about servers. So we've talked about the opportunities for growth -- I think my job, as Rory defines it, is to take all this technology and grow market share. Server is a great opportunity for us. And it is clear that our server market share is not all that high today, but we view that we've made a very, very strategic investment, and there's a lot of opportunity for us to grow in this area. And the value proposition is simple. We want to provide something that differentiates us from the incumbent and really focus on value, price-performance and new workloads, workloads that really take advantage of our architectural investment.

When you look at what that means, these are some of the numbers in terms of where the server market is going to go over the next couple of years. The traditional IT space is very large and very important, but it's not the fastest-growing piece of the market. When you look at the high-performance computing workloads, the virtualization workloads, the cloud workloads, here is where we're seeing a lot of the growth and here is where we're also seeing a lot of the opportunity because customers are coming to us, asking for solutions. They're asking for solutions that will help them take the total cost of ownership down. They're asking for solutions that will really help broaden their ability to attack these markets in a disruptive way. And this is where we've really focused our architecture.

Late last year, we announced the Opteron 6200 series. This is the Interlagos platform family, a very, very significant change in our architecture. A very clear investment in Bulldozer and the architecture with the modular design. It is clear that we were focused on multi-threaded workloads with the dual integer units and the shared floating point unit, the world's first 16-core x86 processor. We've gotten tremendous feedback from the market and the customers about the architecture. It is fair to say, and we all know this, any new architecture takes time for the entire ecosystem to adapt to it. So we need software developers, we need the tool vendors to really adapt to the multi-threaded capability. But we're seeing progress. We're seeing real progress in the adoption rate. We've seen 2 straight quarters of server market share growth. Albeit we have a long way to go to reach our aspirations, we really believe that this architecture is what will take us there, and it's a focus not just on performance -- performance is very important -- but on power efficiency and how we really put features in for the data center. So things like our power capping capability that allows data centers to really decide where they want to set their overall power, these are the types of things that we think will bring differentiated value to our customer set.
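The power-capping idea can be illustrated conceptually: if the sum of per-node power budgets would exceed an administrator-set rack cap, scale each node's budget down proportionally. This is a hypothetical sketch of the concept only, not AMD's actual power-capping interface:

```python
def apply_power_cap(node_watts, cap_watts):
    """If the rack total exceeds the administrator-set cap, scale every
    node's power budget down proportionally; otherwise leave it alone."""
    total = sum(node_watts)
    if total <= cap_watts:
        return list(node_watts)  # under the cap: no throttling needed
    scale = cap_watts / total
    return [w * scale for w in node_watts]

# Three 200 W nodes under a 450 W rack cap get throttled to 150 W each.
print(apply_power_cap([200, 200, 200], 450))  # [150.0, 150.0, 150.0]
```

The appeal for data centers is that the operator, not the silicon, decides where the overall power ceiling sits, and provisioning can be done against that ceiling rather than against worst-case TDPs.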

Some of the things that we've seen already over the first couple of months. As I said, our customers have given us very, very strong feedback on it. One of the marquee wins was the Blue Waters win with Cray. When you think about supercomputing, this is going to be one of the largest supercomputer installations in the world. We'll have 50,000 Opterons in that installation. We hope, as it comes together, we'll be close to one of the top few in the world in terms of supercomputing capability. That's an example of utilizing our architecture in a very differentiated fashion. We've gotten very strong feedback on 2P performance -- that's the bulk of the workload -- a lot of good database feedback, a lot of good virtualization feedback. These are just the beginning because, as we said, it takes time for the entire infrastructure to really adapt to the new architecture.

When you look at our roadmap going forward, it is really focused on ensuring that we get a smooth customer transition overall. So this year, it's about Interlagos and Valencia, which will power the majority of the systems that are out there today. We will also be starting to ship Zurich later in the first quarter, which goes for the Microserver and the lower end workloads. As we move forward, we made a decision to really take the infrastructure that we have and move it forward without change for our customers. So Abu Dhabi will be a straight replacement in the same socket and the same infrastructure as Interlagos. We'll get higher performance with the Piledriver cores, which will give us more IPC and more performance. But it will also be a very, very smooth transition for our customer set as we go forward. We had, in the past, looked at whether we should add more cores or add new sockets. And at the end of the day, that wasn't the right answer for our customers. What they wanted was to start with Interlagos and move to Abu Dhabi, and similarly with Valencia and Seoul. So this is really where our focus is: taking advantage of the infrastructure investment that we've made together with our customer set to be successful in the server space.

It is also really, really important for us to reiterate the commitment to technology, and the server technology space is one where the treadmill is very fast, and you have to keep investing and keep going at a pace. The Bulldozer technology, we believe, has an extremely long life, and it has a lot of features that we can add and improve on with each subsequent generation. We've talked about Piledriver in 2013. Steamroller will also be a next-generation core that will give us more performance and more IPC capability. And as we go to Excavator, you're going to see some of the features that Mark talked about: a very modular, very low power design technique. We're actually starting to share design techniques between our low-power core and our high-performance core so that we get very, very power efficient in the overall architecture.

So from this standpoint, you'll see that the Opteron line gives a very long life for the investment in our architecture. These will of course, also go into our desktop devices as they flow into the client space. But it gives you an idea of where we're investing and where we believe the technology is going.

Okay. So now let me spend a few minutes on the future. So we have a lot of exciting products on the roadmap 2012, 2013, execution, execution, execution. A lot of conversations with customers. What they tell us is more products faster, please. So that is certainly what we're working on. But there are also a lot of opportunities for us, and I really look at what we have here at AMD as a land of opportunity. Because we have all of this great technology, we have a lot of new markets to grow in. We just need to marry those 2 things together.

So when we look at the future directions, Mark touched on our technology investments, but I want to reiterate from my perspective. Leadership IP: there are not too many people in the industry who can talk about the range of IP that we have, whether you're talking about graphics or multimedia or processing capability or all of that capability. It's really a unique treasure chest of IP. When you think about how that links into solutions, the Heterogeneous System Architecture really is something special. I'm a lifetime hardware person, but when you think about hardware, you can put all the transistors you want on a piece of paper; if you can't really program it, it doesn't help. And that's what the Heterogeneous System Architecture is about.

Low-power: a lot of people have said, AMD, do you get low power? I heard that before I joined AMD. I've heard it since I joined AMD. We absolutely get low power, absolutely get it, and we're going to put it into our products.

And then ambidextrous architectures just open up a whole new world for us. In other words, we don't have to be religious about any particular element of technology. At the end of the day, we're a solution company. We're trying to deliver solutions to the market, and there are some architectures that are good for a lot of things. There are other architectures that are good in different ecosystems, and our job is to choose the best of each one for the marketplace. So in terms of product growth vectors, I'll talk about each one of these and where I think we can grow.

Very clearly, from a mobile market standpoint, we have a mission to grow in the mobile space. We've made very, very good progress with our notebook line, our APUs with Brazos, as we go into Hondo, as we go into Temash, we'll continue to really bring that power point down.

From a growth standpoint, it's clear that Ultrathins and tablets will be 2 of the biggest growth vectors in the mobile space, and we're going to go after that with a vengeance. What we see there is taking all of the low-power techniques that we know about. We can absolutely get x86 to less than 2 watts. So if you think about that capability, you can take the full Windows 8 capability into an x86 processor in a tablet system that lasts for over 30 days of battery life. That is very much within the realm of our capability. Now when we leverage the APUs with more graphics and video processing and all that stuff, you get the differentiation for our product set. With flexible SoC designs, what you're going to see from us is more products faster in an ambidextrous way, so that we leverage the best of all ecosystems as you think about Windows or Android and other areas. Clearly, all of those ecosystems are important for growth in this market, and we will support them.

And then really focused on how do we differentiate in the market. So there are a lot of players who are out there talking about tablets. And frankly, the volume isn't so high except for the first one. When you think about how you differentiate though, it is differentiation on the user experience and that is where I think we are unique. It's not just about the processing technology, it's about the user experience. It's about how we can manage and really marry that software capability with the hardware capability. So you'll see a lot more focus on how do we bring those special features to the customer. Things like 3D content, things like biometrics and facial recognition and speech recognition, all of those are uniquely accelerated by the APUs. And we can do that at very low power with our capabilities.

The server market is also a place where we have tremendous opportunity. Here, I'd like you to think about it in a slightly different way. So we've talked about our baseline server roadmap, and we are absolutely committed to that roadmap going forward with Interlagos and Abu Dhabi. What we see is an opportunity to really change the game. And when you look at the server market, those top levels, that cloud computing, that mega data center market is really changing server dynamics because now, you're not just talking about racks and racks of servers that just care about power and performance, you're talking about specialty workloads. And it actually will fragment the server market a bit. You will see different solutions that will fit different server workloads. It's not a one-size-fits-all, and that plays into our strength because, once again, we can optimize for different workloads with our acceleration capability. There are some workloads that will be accelerated by the APU, and we're going to use the APUs in those workloads. There are some other workloads that may not be so accelerated by the APU but are better served by lots of little cores, and lots of little cores in parallel is another option that we have. And so our goal in the server space is to really attack those disruptive workloads with all of our capabilities, including the SoC infrastructure that Mark talked about and the ambidextrous solutions to look across various ecosystems. So this is where it's about system-level optimization, not single-threaded performance, and we'll really be able to take advantage of that in terms of growing our server market share and also from the standpoint of really growing our presence in this new and emerging market.

And then I'll spend a couple of minutes on embedded. The embedded market is one that I know very well, given some of my history. And the embedded market is really a market of lots of different subsegments. So when I think about embedded for AMD, I think about embedded as: let's take all of the technology that we have, the basic processor technology, the basic APU technology, the basic graphics acceleration technology, and really pick the subsegments that are going to benefit from that. The key is taking our technology into different markets. And if you think about where that is, there are a lot of APU-friendly segments, and I call them APU-friendly because they can really benefit from the graphics and the parallel processing acceleration capability. Things like digital signage, things like thin client, things like medical imaging when you think about how those will look. Gaming is a great place for our APU technology. There are some subsegments of communications and storage. This is not necessarily about new product SKUs; this is about taking our current product SKUs and getting really clear in terms of how we go to market with them, how we grow the embedded ecosystem around AMD. This is an area that we haven't spent time on, this is an area that we haven't invested in, and this is an area that we believe we can really add value because of the capabilities that we bring in terms of processing.

So with that, I feel like I'm now a veteran of AMD at 30 days and my first financial analyst day. But I hope what I've given you today is, one, all of the product details, because there were a lot of questions about what we're doing with our product roadmap. I hope what you take from it is that this product roadmap is really good. It's really good, it's really focused on execution. It's very clear what we have to do over the next couple of years, and that clarity is extremely important to ourselves, to the marketplace and to our customers. What I also hope you get is a little bit of a feeling of how much excitement we have, and how much excitement I personally have, because I look at AMD as an opportunity with tremendous technology potential. It's like every engineer's dream to have all of these bits and pieces to play with, but really putting it together into products that will grow market share, that will return sustainable value and that will really touch the consumers from a user experience standpoint. So that's what we're really looking forward to: really skating to where the puck is going. Thank you very much.

And with that, I get to introduce Thomas Seifert, our veteran of our crew who can take us through the financials.

Thomas Seifert

Thank you, Lisa. Well, it's a delight to be here today and talk to you. And who would've thought that 2 or 3 financial analyst days make me a veteran, so quite a surprise. I also have to tell you that from a CFO perspective, the last couple of months have been a sheer delight for me. I have new colleagues now who not only know and understand numbers, they actually enjoy working with numbers. So hopefully you feel the excitement of the presentations today. I feel the excitement because the numbers I look at are shared, and the value that they represent is appreciated by my colleagues. So we have a good time, and I'm excited about what is happening. And let's use this now, let's put some numbers around the things that you have heard.

I'm going to take you through a couple of things today. I would like to have a short recap of 2011 and talk about the business model we have put in place: what went well, where we have done well and where there is room for improvement. I'd like to talk to you about our capital structure. We have made significant improvements over the last year. We've taken tremendous steps forward in 2011 -- where is the room to do more goodness, what is the optimal capital structure for AMD, how do we look at that. And of course, I would like to take you through our 2012 operational model. You've heard a lot of things that are going to change, exciting things: how we adjust our business model, how we reposition the company and really make aggressive use of the Fabless model that we've ended up with, doing this within the confines of the financial parameters that we have set for ourselves and making sure that we keep the financial momentum that we have started to build. And then I'll also give you a glimpse into what is beyond 2012: what is the long-term financial model that we can imagine, where can we take this business model from an earnings perspective?

So let's talk about 2011 first. We executed well to our business model. It was a year where we launched APUs; we shipped 30 million. We also shipped 100 million DX11 devices on the graphics side. We launched the low-power APU devices, a very successful launch. We left some opportunities on the table too, especially in the server space. We picked up speed in the second, third and fourth quarters, but we left some opportunity on the table, and this is room for us for this year. I think we clearly stabilized the financial performance. We found a way to improve how we run the company around working capital. That had huge benefits, as you will see in a second, on our cash flow generating capability. When we started the transition into the Fabless model, we said the real strength of AMD in this new setup is going to be to transform ourselves into an IP generating and free cash flow generating engine. You've heard from Mark and Lisa and Rory this morning what we've embarked on in terms of making this IP generation come true. Over the last 2 years, we've made tremendous progress turning AMD into a free cash flow generating engine, and I'm going to talk about this. And of course, we have shown consistent profitability for the year.

So from a revenue perspective, last year was certainly a notebook year. We launched exciting products, and we dominated U.S. retail in the fourth quarter; I think 5 out of the 6 best-selling SKUs were AMD-based notebook SKUs. We dominated Black Friday: 70% or 80% of the revenue was on AMD-based systems. We've made tremendous progress in the emerging markets. We've grown market share in China retail for 3 consecutive quarters in double-digit numbers, tremendous progress. We left opportunity on the table on the server side, without any doubt. We've picked up speed with the rollout of the Bulldozer-based products in the second, third and fourth quarters, but there's opportunity. There's tremendous opportunity moving into this year and beyond in exploiting that, but good progress.

So how does this help us with our financial performance? Just to remind you where the targets were for 2011, we gave what we thought at that point were ambitious targets in terms of gross margin, operating expenses, operating income and also free cash flow. And I think we executed well to the targets we set. We came in within the range of the gross margin guidance we gave despite some of the challenges and despite the fact that we left opportunity on the table in the server segment. I think as a company, we did extremely well staying very disciplined in our approach to how we manage operating expenses. And for that very reason, we landed well within the OpEx corridors, in terms of R&D and SG&A, that we set for ourselves. We generated operating income, and most of all, we made tremendous progress on the free cash flow side: $528 million of free cash flow last year was a tremendous step forward, from $355 million to $528 million.

If you look at our free cash flow yield, I think this puts us at the top of our peer group. This is an important fact because it gives us, for this year and moving forward, tremendous operational flexibility to further work on our balance sheet, but also to make sure that we have the strategic flexibility to improve the business model. So with this in mind, what did we do with all this good free cash flow? How did we progress on the balance sheet side? What did we do well, and where is the room to move forward?

I missed the chart. So with all the changes you have seen and all the changes we have been indicating in terms of how we transform ourselves into an IP-generating engine, it's really important to understand that as a company, we are committed to real fundamental changes. You heard us talk a lot last year about starting to redesign the core processes in the company: how we go to market, how we run our supply chain and how we design our products. And why is this important? It's important for really 3 reasons. If we want to make this shift in our business model and become a much more aggressive player with our Fabless business model, we have to increase the core speed of the company. And in order to achieve that, you have to redesign the processes that determine speed within the company — that is, how we translate market needs into products and how we design products. You heard Mark talk a lot about the changes we are making there, and about how we run the supply chain: an SoC model is going to be a challenging model in terms of how you place inventory and how you manufacture, and we have to be prepared for making this change. We think there's about a $700 million productivity potential that we have attacked — in terms of lower working capital, in terms of additional revenue, in terms of faster time-to-market. It's goodness, and it's important work that prepares us for making the shift on the IP side. And you also have to look at the restructuring we did last year from that perspective. This was about making the organization faster and more agile. So we used this opportunity to increase the spans of control, to take layers out of the organization, to bring this agility into the organization.
And it's not so much about saving this money — it's about reinvesting it. About $160 million of the savings we have will go toward investments in the areas you heard Rory, Mark and Lisa talking about: consumerization, cloud applications and device convergence, making sure that we're well prepared for this.

So what did we do with all this good cash flow? What progress did we make on our balance sheet over the last 12 months? I think it's clear that we increased liquidity. We increased our liquidity by $125 million, to $1.9 billion now. We worked hard on further debt reduction: we took out $200 million of the debt we had. We brought down our leverage in a very significant way, and we maintained our credit ratings. You actually saw 2 days ago that we were put on positive credit watch by S&P. It's a very important step when I talk to you about optimal capital structure — good progress on the balance sheet side.

With this debt reduction process, we came from close to $4 billion not too long ago to about $2 billion now. And most important, not only did we reduce the debt, we brought down the maturities in very sizable chunks. It's our target to keep the towers in the range of $500 million, so they're always within striking distance of the annual free cash flow capability the company has. It's a very important step for us, and there is room to go. So what room do we have? We did well in 2011. There is $487 million of our 5.75% maturities coming due in the summer. If we were to take them out completely, it would help us tremendously on our leverage; it would take down interest expenses by about $28 million; and it would have earnings-per-share leverage of about $0.04. So there's significant leverage opportunity left on this balance sheet, and coupled with the free cash flow generation capability the company has now put in place, I think that is really good news from a balance sheet and also from an earnings perspective.
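The arithmetic behind those figures can be sanity-checked in a few lines. This is a rough sketch: the note principal and coupon come from the remarks above, but the share count used for the EPS figure is an assumption for illustration, not a number from the call.

```python
# Back-of-the-envelope check of the debt-retirement figures above.
principal = 487e6              # 5.75% notes coming due in the summer
coupon = 0.0575
interest_saved = principal * coupon        # annual interest avoided

# Share count is an ASSUMPTION for illustration (not from the call);
# roughly 700M shares gives the ~$0.04 EPS leverage mentioned.
shares_outstanding = 700e6
eps_impact = interest_saved / shares_outstanding

print(round(interest_saved / 1e6, 1))      # ~28.0 ($M)
print(round(eps_impact, 2))                # ~0.04 ($/share)
```

At the cited coupon, $487 million of principal costs about $28 million a year in interest, which lines up with the figure in the remarks.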

So if this is all true, where do we think the long-term optimal capital structure is for the company? Well, we all know the semiconductor environment is a volatile place, and there's a lot of uncertainty in the capital markets today. If you keep this in mind and put it all together, we think that maintaining a cash level of $1.5 billion is going to be a good neighborhood for us, a good range for us. We also want to make sure that we keep the debt towers below $500 million, so they are within striking distance of what we can generate as free cash flow on an annual basis. We will continue to deploy cash to reduce debt opportunistically. We have line of sight to becoming net cash positive — the near-term goal is to achieve that as fast as we can and get the debt-to-EBITDA leverage down to a factor of 1.0x. But we're also working hard on our credit rating. With the last move we saw from S&P, I think a good target for us is to reach investment-grade level. That's not going to happen overnight, but if we continue the progress we have been making and continue working hard to bring the debt down, I think this is a good target for us.
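As a rough sketch of the two balance-sheet goals described here — "net cash positive" and 1.0x debt-to-EBITDA — the cash and debt levels are taken from the remarks, while the EBITDA figure is purely an assumption for illustration:

```python
# "Net cash positive" means cash exceeding gross debt.
cash = 1.9e9        # liquidity level cited earlier
debt = 2.0e9        # roughly $2B of debt remaining
net_cash = cash - debt           # still slightly negative today

# Debt-to-EBITDA leverage; the stated goal is 1.0x.
ebitda = 1.5e9      # ASSUMPTION for illustration, not guidance
leverage = debt / ebitda

print(net_cash / 1e9)       # -0.1 ($B): a small gap to close
print(round(leverage, 2))   # 1.33x at the assumed EBITDA
```

At roughly $2 billion of debt, hitting the 1.0x goal requires EBITDA in the same neighborhood, which is why the debt towers are kept "within striking distance" of annual free cash flow.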

So let's talk about 2012. You heard Rory, Mark and Lisa talk about the changes we are going to implement — really taking aggressive advantage of this Fabless business model we have, taking this rich IP and putting it into the marketplace in a better way, in new products. But we have to make sure — this is my job — that this all happens within the confines of the goals we have set for ourselves, that we keep the financial parameters and the financial momentum we have been generating over the last 2 years and maintain this momentum. And I think with all the advance work we have done in terms of really restructuring the company and redesigning the processes, I hope you see that we're able to achieve that in a very good way.

So before we go into the details of the 2012 guidance, I would like to give you some color around key drivers that are good to keep in mind. If you look at this year for us from a revenue perspective, without any doubt there's a lot of tailwind we're going to get from bringing differentiated APU offerings to market. You heard Lisa's excitement when she talked about Trinity — the design win momentum is higher than we had on Llano a year ago. We feel good about the momentum we have been generating with Brazos in the low-power space. This is going to continue with Brazos 2.0, and I think it was clear that there's room left in the server space to improve. This would help us also in terms of product mix and ASP performance. There is some headwind, to be very honest, from the supply chain perspective — we all know the uncertainty around the hard disk drive space. This will be something that we have to deal with, especially in the first and second quarters. And while we have been making extremely good progress on our Wafer Supply Agreement, transitioning from one Wafer Supply Agreement to the next will also put some constraints around it. We're going to continue to work extremely hard on the earnings side to manage operating expenses well and stay disciplined, and I hope I can impress this upon you on the next chart.

So from a cash flow side, those things come together: we manage our inventory well, and we have designed our supply chain to make sure we can take working capital out of the system. We watch this very carefully, and we believe that we will continue to see very strong free cash flow momentum in 2012 and moving forward.

So how does this all add up? What is the take on 2012? We expect the market to grow about 5% this year — there are lots of numbers out there on what the impacts on the supply chain side are going to do, but 5% across the markets. And we plan to outgrow the overall market growth. We think we can take gross margins into a range of 44% to 48%. We started off well this year: we gave guidance of 45% for the first quarter, and we are going to build on this momentum. We see, as I said, tailwinds, especially as we go into the second half of the year, in terms of product mix and productivity on ramping the new products. Gaming revenue — you will see this in a second — is going to help us. And we're going to see some headwinds: the uncertainty, as I said, around the supply chain and the hard disk drive situation, and we will have yield challenges because there are a lot of 28-nanometer products, especially on the graphics side, ramping into volume. We're going to manage operating expenses very tightly. This will be the first year where we really give hard operating expense guidance — no ranges: $610 million per quarter on a yearly basis. We started off low this year; we gave guidance of $590 million for the first quarter. This gives you an indication of how this should be distributed over the year: we will see some climbing up toward the end of the year as we get a lot of good products ready for 2013 from an engineering and tape-out [ph] capability. But overall, we are going to manage operating expenses as tightly as we managed them last year. This will give us positive operating income. Taxes, you should model around $3 million per quarter. Capital expenditures in the range of $200 million — that is very much back-end related, making sure our packaging capabilities are where they need to be, especially with so much of our product moving into the low-power and ultra-low-power space.
We also have certain investments in our own data centers moving forward. So overall, I think it's good guidance for 2012. From a revenue perspective, as I said, we will continue to grow and gain market share. You heard Lisa — there is strong commitment, and the task she got from Rory: take this IP and turn it into market share gains. Strong growth on the notebook side. Server — lots of opportunity, and we feel good for this year with the momentum and the trajectory we took out of last year and the third and fourth quarters. And we will see good gaming revenue opportunity, especially toward the end of the year. So from an overall growth perspective, 2012 should be a good year for us.
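The operating-expense guidance above lends itself to a simple model: a $590 million first quarter climbing toward a $610 million quarterly average for the year. The quarter-by-quarter ramp below is an illustrative assumption — only the Q1 figure and the annual average were given on the call:

```python
# Illustrative OpEx ramp consistent with the stated guidance:
# Q1 at $590M, later quarters climbing so the year averages ~$610M.
opex_by_quarter = [590, 600, 620, 630]   # $M, ASSUMED shape per quarter
avg_opex = sum(opex_by_quarter) / len(opex_by_quarter)

annual_tax = 3 * 4        # ~$3M per quarter, as guided
capex = 200               # $M range given for capital expenditures

print(avg_opex)           # 610.0 — matches the $610M/quarter guidance
print(annual_tax)         # 12 ($M for the year)
```

Any ramp shape that starts at $590 million and averages $610 million is consistent with the guidance; the point is that spending climbs toward year-end as 2013 products reach engineering and tape-out.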

With all the changes and all the indications you have seen of how we are transforming ourselves into an agile, IP-generating, SoC-based company — taking really all the richness we have in our portfolio to the market — where do we see the long-range operating point? We think we have line of sight to a gross margin target beyond 50% moving out of 2012. The key drivers are growth acceleration — you heard Lisa talk about the market growth opportunities we have in '13 and '14 and '15 — and the enhanced product mix this is going to drive, also in adjacent market segments we have not been serving today, embedded for one, server for another, and also taking us down into the ultra-low-power space. And if one message should have come across today, it's about disciplined execution. We have seen a tremendous tick-up in our capability over the last 5 to 6 months. There is a long way to go, but we've been making huge progress, and this will have an impact on the earnings power of the company. So those will be the key drivers. You saw the key changes Lisa talked about: how we repositioned ourselves in the cloud space, what impact consumerization is going to have on our product portfolio and how big a role convergence is going to play for us. These are the key changes — line of sight to a gross margin beyond 50%. Marry that with the free cash flow capability we have already started to see, and I think this is a good and exciting outlook moving forward.

And with this, let me really summarize. We have seen that the financials are stable. There is significant opportunity for us to grow — Lisa has been very clear about this, and Rory has been very clear about this. It means that with this growth and the performance we have implemented, our cash flow generation ability will continue, and this gives us the flexibility we need from an operational perspective to continue to work on our balance sheet — to make sure that we really lift the potential it has in terms of earnings-per-share impact — but it also provides us the strategic flexibility we need in order to move forward and expand this business model. It allows us, of course, to significantly reduce debt, and I think it allows us to make this year a good year in terms of repositioning us: not only a cash flow generating engine, but transforming us into one of the fastest IP-generating engines [ph] today. And with this, my part is done, and I think we transition to the question-and-answer session. Thank you.

Question-and-Answer Session

Ruth Cotter

Okay. So now we're going to kick off the question-and-answer session. We have some roaming mics around the room, so we'd appreciate you raising your hands, which you're already doing, and we will start the Q&A session. I'd just ask before we start that you ensure you speak into the microphone with your question so the folks on the webcast can engage. We have our first question over here, please.

Unknown Analyst

Lots of energy here today. I was curious about some of the new programs that you're starting and, Lisa, your comments on doing more faster — it seems like your OpEx budget this next year is going to be roughly flat with what you had in calendar '11. How come there's no impact on the R&D side from doing more faster going forward? And then I was wondering if you could share — because I didn't catch the desktop commentary — your view on the desktop roadmap. Are you lowering R&D in that part of your roadmap? Is that why it's coming down? Just some clarity there.

Thomas Seifert

Yes. Let me take the operating expense question first. It's a very valid point, and that's why it was so important for us to start this project of reengineering the company and the core processes first. That allowed us to lift a significant productivity potential in the company — by being faster to market, by taking working capital down, by readjusting our core R&D processes for how we develop products moving forward. And then you have to keep in mind that the restructuring we have done — delayering the organization, making sure we have the right spans of control — not only leads to a faster decision-making, faster-reacting organization, but it also provided us with a $200 million spend envelope that is not going back into rebuilding SG&A. It's going to be reinvested in R&D. So year-over-year, if you include these investments and the productivity gains we have made, I think we are spending more money and putting more resources toward product development on the R&D side, but we've been able to contain that within a very tight budget. So I think that was a good achievement for the AMD organization worldwide.

Rory P. Read

Yes. No doubt, the productivity gains that we got from the project work and the process work are now flowing through the system. There's the work that Mark is doing on the architecture — the reuse, again getting more efficient, the common best-practice tools — and then the work that we're doing in SG&A around the right kinds of marketing investments. We must make sure the execution engine can deliver; if we spend additional marketing dollars too early, it won't produce the outcome that we want. We've also done some work around the coverage model to make sure we have redistributed our coverage structure to the high-growth markets, which allows us to capture some of that emerging growth — all with the focus of allowing us to invest in those attack areas that we see as the opportunity moving forward.

Lisa Su

Yes. Maybe I'll just comment on the desktop question. Certainly, we believe that APUs are going to be our growth vector, and that's where we're focusing the majority of our investment. The desktop high end will leverage off of the low-end server, and it will have a Piledriver version as well on top of the current roadmap.

David M. Wong - Wells Fargo Securities, LLC, Research Division

David Wong, Wells Fargo. Could you give us a sense of what the minimum threshold size for servers needs to be for you to justify investment in servers? What market share do you need to get to from where you are currently? Servers — server processors.

Lisa Su

Yes. I mean, clearly, we view server as a large opportunity for us from where we are today. Our aspiration is double-digit market share — and of course, as soon as possible, as my boss would say. But the key is consistent investment with our customers and consistent investment in the infrastructure. So that is where we're going with servers.

Rory P. Read

We've seen 2 solid quarters of improving velocity in the server business. We've got to build on that quarter in and quarter out, each quarter adding additional market share. It's about creating the workload focus, the relationships and the consistency of delivery — particularly on the roadmap as well as on the supply chain — that allow us to build that trust again for future growth. But definitely, double-digit is where we want to go over time.

Unknown Analyst

You guys really didn't talk much about manufacturing. I know that's been one of the challenges in 2011 is your dependence upon GLOBALFOUNDRIES and their ability or lack of ability to produce products. So I guess, what's the strategy going forward there? Where are you with other foundry partners on the processor side and can you help give us some confidence that you'll be able to turn out some of these Trinity APUs?

Thomas Seifert

Good question, and you probably saw that we have a breakout session prepared afterwards just on our supply chain approach moving forward, so you will get more granularity there. I would say we made tremendous progress in the third and especially in the fourth quarter, and we've been rather explicit about this on our earnings call as well: we put significant resources together as partners — IBM, GLOBALFOUNDRIES and us — and we have been increasing output on Llano 32-nanometer-based products by about 80% quarter-over-quarter. So the progress is good, and the status on Trinity is really good. You heard Lisa say that we already shipped products to customers, which is very encouraging. Partnerships are defined by working together not only in good times but also in difficult times, and I think we have brought the organizations of these 3 companies together in a very good way. So we feel good about the progress we have been making, and I think this goodness in terms of the development of the relationship is also what is driving the other wafer supply discussions. As I think we said on the earnings call, we are on the last mile, and we're confident that we will bring this to conclusion in the not-too-distant future.

Mark Papermaster

Let me add a comment as well on how we're approaching this from the development side, because having a sustaining, deep relationship with the foundries is essential. What we've done is put a very rigorous process in place. It starts early: it starts with marrying the design elements with the foundry capabilities, having very clear communication and partnership with the foundries, and mapping that forward — but checking it along the way on both the design and the foundry side to ensure that we're marching in unison. That's what we've put in place going forward, and it's proving very effective.

Rory P. Read

There's a deep focus on building a true partnership, and an intense amount of the time we focused over the past 5-plus months has gone into really creating that partnership, rebuilding it. And I can tell you firsthand that we are developing much better relationships with the foundry partners — GLOBALFOUNDRIES, Atech, the other ones across the planet. As Mark pointed out, it's about the teamwork to solve the problems. We win together, and we have a partnership, like Thomas said, in difficult times and in good times. And what we are seeing is the focus on execution: running the test chips through the line, the gathering of data, the movement of John Docherty's team in supply chain working with partners from IBM and GLOBALFOUNDRIES. We're seeing real focus on day-in, day-out executional improvement. And because of the work that we're doing at the partnership level, we're getting the right kind of uptick from their side of the organization as well. There's a commitment to this partnership. You can see we shipped 80% more 32-nanometer product in the fourth quarter. We made our customer commitments. I'd like to improve the skew — it was still a little too back-end loaded. We want to make sure we deliver every customer commitment, within the month, this quarter and the quarter after that, within the week that the commitment is made, week in and week out, so that those customers count on it and know that it's there. But you should know that that partnership is improving.

Unknown Analyst

So let me just start off with Thomas and probably Rory on pricing. What is your pricing philosophy going forward? Do you think there's enough differentiation in what you offer for you to move up the price stack? Or conversely, to ask it differently, do you think there's enough flexibility in your model today to absorb any type of price pressure that could come in the industry as you fight not just Intel but also the ARM camp?

Rory P. Read

I'll take the first part of that, from a business unit perspective, and perhaps Lisa will want to comment on it. We have to focus on the 4 Ps in the way we run the business, with the primary focus on the customer through the business units, driving the market analysis of where the market is going. What that means is: use all 4 Ps. Do the analysis on the product — how competitive it is, where does it sit? When you're talking about the fastest graphics processing capability on the planet with a 7970 graphics card, you are going to have a pricing strategy that's different. You might run it at a PFV of, I don't know, 1.05 or 1.08. What we want to do is create the analysis across all 4 Ps — product, placement, promotion, price — and make sure that we have the competitive analysis and then the discipline at the BU structure to run that; then, as sales executes within the quarter, they work within that framework. We also want to move the skew of the product set so that it's not as back-end loaded. We want to make sure that we're selling product and delivering it on a more consistent, linear basis, and we want to make sure that our promotions line up — in the emerging markets, with things like Chinese New Year, and with things like Black Friday. If we create that kind of discipline and structure in our execution models around true pricing analysis and competitive analysis, we'll get better results.

Lisa Su

Yes. Maybe I'll just add to that. I think it's clear that our success is about a differentiated value proposition. It can't just be about price — it's about bringing in the right feature set, bringing in low power in the right places, really tailoring to the solutions that are needed in the marketplace. And I think that's what we have with all of our IP: really putting that together. So you're going to see a lot more focus on market-centric value propositions, ensuring that we have the right SKUs for the right geographies as well, to really broaden that out.

Unknown Analyst

Can I just ask a follow-up? On the GLOBALFOUNDRIES answer you were providing earlier, is it possible for us to understand in a little more detail what happened between September and December? As of September, the conference call was not so good. By December, I think that improved dramatically. Is that something that is sustainable, and also, to what extent will that affect the Wafer Supply Agreement you have with GLOBALFOUNDRIES? Are you going to move back to buying wafers now, or are you still going to keep buying [indiscernible]?

Thomas Seifert

Well, we put in a lot of work together, and we certainly think that the efforts that resulted in this significant improvement are sustainable. The way we have started into the first quarter makes us believe that we are on a really good trend. Semiconductor manufacturing is not a trivial [ph] thing, you know this. But I think the improvement has come in a very systematic way — how we communicate, how we look at the problem, how we look at data, how we run the manufacturing line, how we communicate between the entities. It's a fundamental improvement, and for that very reason, it's a sustainable improvement; that's how we look at it. And of course, this significantly improves the emotional setting in a negotiation situation. We will talk about the specifics of the Wafer Supply Agreement when we finalize it; it's premature to do this today. As we said before, we've made significant, good progress. We have line of sight to close it in the not-too-distant future, and at that point, we will talk about the content — not before.

Rory P. Read

It's fundamental in terms of that focus on execution from back in August and September through December. It's rebuilding the relationship across all levels of the organization, getting to this idea of ownership and commitment. People think of it as a soft kind of concept, but the idea is that commitment and ownership are linked together. It means that when we make a commitment to our partners to deliver a certain set of supply, we have to deliver it. And it doesn't matter what part of the model or the process breaks down — we own it end-to-end. It's sort of like the movie Apollo 13. Those guys had that capsule, had the problem, and it blew up out there. They had to figure out how to get it home. It wasn't about "this team let us down, and I was good and the rest of them were bad." It's: no, we have a commitment to our customers. That commitment, I own. We have to own it across the organizations, across partners, and we have to deliver it. We have systematically implemented the changes to put in place: run the test chips through the line, gather the data, systematically do the Pareto breakdowns of each of the defects, do the analysis on the returns we expect. And now we're seeing a very consistent level of execution against our forecasted curves.

Mark Papermaster

I can't emphasize enough what Rory just said. I've got many years of deep fab relationships. It's very clear, the equation needed is exactly as Rory described. It's very candid, it's based on goal alignment and rigorous data and execution. And that's what we have in place at this time.

Unknown Analyst

I have a 2-part question from a list of many, so I'll keep it short. On SOI — I'd like to know what you're going to do with SOI, because it's so very connected to low power. And the other part is your manufacturing strategy between GLOBALFOUNDRIES and TSMC — whether or not there's going to be a mix consideration.

Mark Papermaster

Let me take the first part, on SOI. If you look at our product mix today, we have 40-nanometer on bulk, we have 32-nanometer on SOI, we're going to 28-nanometer bulk, and we're determining the next technology node beyond that. What we're focused on in each of these is optimizing our design for low power and performance per watt, as we described earlier. It's the whole combination of the elements of what we're doing — the design, how we're managing power, what we're doing across the platform — and of course, that core technology is very important. But it's about putting all that together, generation by generation, node after node, and picking the best combination. So our roadmap through 28 is clear, and we're looking beyond that at what is the best technology that marries with the rest of our design elements across the stack to get leadership performance per watt.

Thomas Seifert

As you know, we have 2 foundry providers today; our graphics parts are coming from TSMC. You saw from Lisa that we've been making good progress: we launched our 28-nanometer product successfully last year, and we're going to extend that product family across the whole product stack over the course of this quarter, actually. So very good progress. And we will have a balanced strategy moving forward, but we're not going to give details at this point in time on which product is going to be manufactured where.

Unknown Analyst

You talked a lot about SoCs during the first half of this session today. I guess I'm a little confused about where you actually see the company as a whole going along this route as we move out. Are you seeing yourselves eventually becoming an IP factory that's selling SoCs? Are you becoming an SoC integration factory that's just handling the integration and the interface with the foundry? Just where do you see this going, and how is that actually creating value for you and your customers? And finally, why do you think you're going to be better at doing SoCs than the Qualcomms, the TIs and the NVIDIAs — or frankly, even the Intels — over the timeframe that you're talking about?

Mark Papermaster

Great question. Why don't I take the first stab at this. Fundamentally, the capability we described to you here today provides a multitude of benefits. First and foremost, the reason we're going to this approach is that it speeds our delivery. So it's internally focused: we have a set of platforms that we've been targeting, and we want to speed the delivery of our solutions into the marketplace. So first and foremost, it's about execution efficiency, because as you architect that IP — I called it the treasure trove of IP that we have here at AMD — as you architect it into clearly defined IP blocks, with clear and rigorous interfaces for how those blocks work with the rest of our AMD IP, it speeds our time-to-market. So that's the first benefit. But it adds up from there, because it also opens up the flexibility for us to tailor a solution even more to our customer needs, and it gives us the flexibility to bring in third-party IP — third-party ecosystems are out there — so we can bring better solutions to the market. And then, to be honest with you, once you've done all that and you've sped that ability to meet market needs, you've frankly taken your IP and architected it such that it creates yet further monetization opportunity: if you chose, in certain cases, to license that IP, we would have that flexibility as well. So it's an additive benefit, but first and foremost, it's about allowing us to be flexible and quick in getting IP to market to meet those customer needs. And I look out at the competition — there's a whole range and degree of how this is implemented — but what we're doing is what we believe is the best-of-breed implementation. It's about rigorously applying a methodology and a set of interfaces that speeds that IP to market. That's what we're doing.

Rory P. Read

I want to build on that a bit before we go deeper into it. Think about what Lisa talked about on our core businesses, right? She talked about graphics, she talked about clients, she talked about servers. That's our core space. We build platform solutions that integrate software, graphics and CPUs into a solution that allows players to play in that space and to win, whether it's the OEM or the ultimate customer. What we want to do is focus and stick to our knitting in that segment. That's the core business; that's the tactical timeframe, 2012, '13, et cetera. But as we build out that architecture, we become more agile, and we create a standard architecture set from, say, 50-millimeter chips to 100, 125, 150, 250, whatever. With that architecture, we'd be able to deliver more efficiently, create more value on it, integrate more to get better low power and more efficiency. And then if there are adjacent spaces, from game consoles into perhaps medical or communications, that are very much aligned around the model that we're using, we can expand the TAM and also leverage that IP on a broader set of solutions. But make no mistake, the focus is on using this architecture to win in the markets where we already are. Lisa, did you want to add anything?

Lisa Su

Yes. Maybe I'll just add, because I think your question was why we think we can differentiate in this space, and the differentiation is the IP. I mean, the IP is tremendous. The SoC methodology is a way for us to get products to market faster. It's really a way to reuse all that IP in a more efficient manner. So it's not a trade-off of, hey, you have to have an SoC factory or an IP factory. It's really about marrying those 2 things together to get more products to market faster.

Unknown Analyst

Do you have the SoC expertise in-house yet, or is this what you're hiring for? Because it looked like, on your product roadmap, you won't really have the full capability to do this until 2014 and beyond, which is an eternity in this space. Is there a danger that, if it takes you that long to get to the point where your time-to-market is quicker than it is now, quicker than the competition, the market sort of moves out from under you in the meantime?

Lisa Su

Yes. We're actually doing this as we speak today. So if you think about our 2013 roadmap, the Kabini and Temash parts, those are full System-on-Chip, so single-chip solutions; we've integrated the I/O on there, and then we'll do more as we go into the future. So this is not a case of waiting until 2014. The SoC expertise is in-house. It's about re-architecting the solutions such that we get it into the products in a timely manner.

Mark Papermaster

And that's part of what drew us all toward AMD: that capability, the kind of engineering capability, the kinds of people that we have here who have this almost entrepreneurial spirit. They've got all the background, and as these inflection points occur, we are really trying to marry that capability with where the market's going. It's not that we're going out and hunting. Sure, we'll look for some pieces of IP to add to the portfolio that build out our capability to answer that solution. But that's what brought us all to AMD: the engineering capability, the talent, the piece parts are here. Now it's about capturing this wave and weaving them together in a model to win.

Unknown Analyst

Thomas, in your 2012 outlook, you highlighted ASPs as a driver for both gross margin and earnings. ASPs, from what I understand, have been fairly stable the last couple of years. Could you be more specific on how you see them trending in 2012?

Thomas Seifert

Yes. So for us, ASP performance this year is very much a matter of product mix improvement. I think it's clear, without any doubt, that we have room to grow in the server segment, and this will potentially be a significant tailwind for the overall ASP performance of the company. We think that the launch of the Trinity platforms, bringing this into the Ultrathin segment, is going to help us tremendously improve the product mix and, with that, ASP performance. And we also have opportunities towards the end of the year, when we think we can also see strength in the gaming segment.

Unknown Analyst

So would you say that ASPs have an upward bias [ph] in 2012?

Thomas Seifert

I would say there is opportunity for that, yes. Especially in the second half of the year.

Unknown Analyst

A couple of questions. Lisa, you said your goal is to get Ultrathins to about a $600 price point. I'm just curious, what needs to happen for the pricing to come down so much? Is it as simple as just offering a cheaper CPU, or is there anything else that you can do to lower the overall pricing, the limit [ph] pricing? And the second question, maybe for Thomas. Thomas, you said in your forecast you expect the GPU market to grow at about 5% this year. Given the optimism about the APUs and Ultrathins, what gives you confidence that market will even grow this year?

Lisa Su

Yes. So on the first part of that question: I think we see tremendous value in what Trinity is able to give us. If you look at the power-performance characteristics, the fact that we can double the performance per watt of Llano and get into a 17-watt power envelope. We've been working with some of the Taiwanese ODMs to really put together reference designs. So the one that I showed you was from Compal, but we've been working with a number of them to really get the optimized footprint, so the z-height, by going to a BGA solution, and then really getting the overall bill-of-materials cost optimized. I think there's a lot of anticipation and belief that once we can get Ultrathins down below these higher $1,000 price points, we can really drive volume in that area, and that's why Trinity has so much capability and why we're putting so much into the work with the ODMs and OEMs.

Mark Papermaster

I also spent a lot of time at an OEM over the past 5 years. And if you look at what drives the whole cost of the end solution, there are really only 5 parts, and it's disproportionately slanted toward 2 of them, the operating system and the processor. We believe that with the processing capability, the experience you can create with the Trinity APU based on the workloads, just like we did with Brazos, we'll be able to introduce that solution at a more effective, valuable price point. And what's important, what we want to share with the OEMs, is that you can use that processing capability to redistribute the profit pool from where it is today. And we think Trinity could actually help them differentiate, but also redistribute, in terms of those 5 major part sets within the device, where the profit is generated. And we think that doesn't always have to be at the lowest price cell. We can use Trinity across a series of price cells and help move around that profit pool. That's very attractive to OEMs.

Thomas Seifert

With respect to Mark's point, I think it's a good question. There's considerable debate on where the unit volume is going to go. We used an external reference, a market research reference, that we think makes sense at this time. I think you can put a lot of arguments around it either way: where are our attach rates going to take us, how is the market going to develop, what impact are APUs going to have at the street level. From the visibility we have today, I think that is a number that makes sense, and that's why we have used it.

JoAnne Feeney - Longbow Research LLC

This is JoAnne Feeney from Longbow Research. You spoke earlier about the old AMD way of trying to do extra design complexity to make up for a process lag. So how does your decision to, in some sense, abandon that, or at least lessen its importance, mesh with the need you have to continue to push for the lowest possible power and highest performance in the server space? And I'd imagine there would be spillovers from those innovations back down to the SoC space. But perhaps you could enlighten us on what kind of lag that would be and how much spillover you anticipate?

Mark Papermaster

I'll take the first part of that. When you look at what we're doing across low power, the technology, the semiconductor node, is certainly a piece of this, but what you're seeing is that we piece solutions together across the different elements of IP; we're driving performance higher by bringing these together. When you look at the very basic premise of the APU, bringing the CPU and GPU together, and you saw on the slides, for instance, what Lisa showed, the change in slope as we bring the different IP together, it's significant. When I talked about the elements we're putting into the design for power management, it turns out that those elements have dwarfed the power savings that we get from the technology node. The technology node remains very important. But if you look at, I'll say, the piece of the pie in terms of giving us the overall performance per watt, that overall experience that we can deliver to the customer, it turns out it's a smaller piece of the pie moving forward than it has been. It remains very important; we focus on it. We talked earlier about the partnerships that we have with our fabs to make sure that we're optimized around each fab, optimized around each transition, but it's about bringing the total solution together. That's what we've done successfully with Brazos, and that's what we're going to continue to do. Lisa, do you have any other comments?

Lisa Su

Yes. Specifically around the server market, I think it's very clear that you really have to choose where you're optimizing your portfolio. If you're going after frequency and single-threaded performance, yes, you've got to be on bleeding-edge process technology. If you're going after these new workloads, which are really much more tailored and can benefit from acceleration and from everything we can bring with heterogeneous computing, it's a lot more about architecture than it is about process technology. And so I think we want to marry, sort of, the drive for performance. We're continuing to invest in the server cores, and you saw our investments out to the Excavator generation. But we're really marrying that with a system-level view: how do we get the total cost of ownership, how do we get the new workloads, how do we use multithreading, how do we work with the software and tool vendors to make sure that we're taking advantage of all that to give us a server performance advantage.

Rory P. Read

And then if you look at the work that Chekib [ph] and his team are doing in the disruptive space, we are looking even further into that workload variation that's already occurring and going to accelerate, and the dense type of solution becomes a very different power envelope and target kind of solution in the way we stitch it together with the fabric. We think that's going to be disruptive to the old models that have been in place for some period of time, and that's why we're playing in that space. We're going to play in terms of leveraging the Cat cores and the dense kinds of servers, as well as the traditional evolutions of Interlagos into Abu Dhabi. We think those are the right game plans, to really target it at a workload level. That's really the focus. On the client, it's about the experience at the glass: how do we integrate it, what's the graphics performance? At the server, it's around the workload. We don't want one-size-fits-all. For us to win, and for us to create value for our customers, we have to really focus on creating that differentiated experience, like we did on Brazos, like we're doing with Interlagos, in the work that we're doing in some of the traditional workloads and where we're going in the disruptive space. Again, tailored to the unique experiences and workloads; we think there's a huge amount of scale, and we're at the beginning of the tipping point that moves in those directions. That's how we're trying to skate to the puck.

Unknown Analyst

Yes. The theme of tailoring has been very prominent today, and there are some concerns that that means you're going to be going for a higher-mix, lower-volume set of output. How do you plan to manage that risk?

Rory P. Read

Sure. Again, as I said in the beginning, there are 2 lifts to create value at AMD. The first lift is around execution, creating an execution engine that can deliver and really drive on its base. This business that we have is a good business. If we execute better on it and drive scale, we have a high-scale client mobility business, and we can continue to extend that. And we can build the foundation of trust across partners, suppliers, et cetera, that allows us to grow that base. As we move forward, what we really want to do is make sure -- where was I going on that one? I'm going to do a Texas thing. There were 3 things, but I'm only going to talk about 2 of them. What was the question?

Mark Papermaster

The purpose built.

Rory P. Read

Think about it from the standpoint of: you focus on that, you execute, you deliver it, and then purpose-built. What we want to do is create the architecture that allows us to -- we don't want to create huge numbers of SoCs. We want to create derivatives off of a base. So there's a 50-millimeter chip, a 75-millimeter, a 100-millimeter, a 125; use those, and then what you do in architecting the space is, maybe there's a section where there's an IP block. It's very much like what we do in a game console. Of the game console chips that we create, the vast majority is standard IP, standard AMD capability and IP advantage, and then what we do is wrap a certain set of segments with specific solutions, like Cray. And what we want to do with Lisa is drive those adjacent spaces, those towers that give us that lift in terms of TAM and return. We don't want to go spread this thing thin. I'm a volume guy. I've always been a volume guy. I was a volume guy where I came from. Volume is your friend. It's a good thing, especially to scale it on expense, and you can execute the heck out of it. It's a good thing. But we've got to expand, because this market is beginning to change, and we think that's how it opens up. Think about it one step further. What if you get a large player that plays across tablets, maybe even smartphones, across laptops, desktops, into smart televisions, into appliances, and you create an architecture using our chip solution with some of their specific IP that allows them to knit that solution together in this converged age? That might change the way people buy processors in the future. It would help them differentiate. That's in the future, but we need to begin that work now: fix the execution engine, get the scale, make sure you deliver on the cost model and scale that cost, then make sure that, if you can create that different model, if it's truly changing, which we all believe it is, that's the opportunity.
But don't spread your supply lines too thin and get yourself spread across a wide base. And remember the way we architected the organization: Mark drives all of the technology. He's driving across technology and engineering with his team. Where we make some investments in customer-facing and support levels, to open up some of those spaces, is in Lisa's business, around go-to-market, and then the sales coverage should be consistent across regions. But that's the one area where you invest, and you wouldn't spread it across. You want that synergy across that common base. That's the engine that creates the innovation. I hope that helps.

Thomas Seifert

One comment here. I cannot let this go. Of course, there will be a good business case around every decision. We will not [indiscernible]. It was important that we lay the groundwork, especially in the supply chain, in order to prepare ourselves for that. I think that is a much bigger challenge, managing your supply chain through such an approach. And we've done good work. The rest is a matter of business cases and trade-offs. But as we look at this space today, we think there is significant volume around customized products, around specific workloads or specific customers, that makes a lot of sense for this approach.

Unknown Analyst

Suji Seashell [ph] with Gartner. As we're looking at moving towards the SoC, obviously in consumer products you have those IPs; in servers and HPC, you have those IPs. Could you give us some color around the other sets of IP you have in the other markets? Given the nature of these markets, each market is very different in its IP requirements, and many IP owners today are actually acquiring processor cores to build their own SoCs. How would you compete in those markets?

Mark Papermaster

Yes. It's a good question. So first of all, it's about unlocking the value of our IP. As I showed you today, across our CPU, across our GPU, rich multimedia, rich interface technology, very high-speed interface technology that we've been leveraging in our graphics processors, it's been key to the type of leadership that we've been providing. And when you say, how do you compete? Other people are going to vertical integration. Every time you see one of those vertical integrations, what you're seeing is others getting locked out from being able to play. And there is a very rich ecosystem out there of IP. So we're going to continue developing differentiating IP. And we've enabled, with our approach, the ability to bring in what I'd call complementary IP. We see a rich opportunity to bring that in as appropriate. And then the third source is actually, in unique cases, a deep [ph] partnership with the customer that actually has their own IP that will fold into our ecosystem going forward. So there's a combination approach. It's going to be a very dynamic world out there. We're at a great starting point, because we have a treasure trove of IP, and we see great complementary IP out there that we can add to it.

Rory P. Read

Lisa, you've got a lot of background in this space.

Lisa Su

Yes, no. I mean, I think Mark covered it well. I think the IP that we have is tremendous. I think there are also a lot of people who are looking to partner and really marry that IP with the strong ecosystem that we have. So I think we see a lot of opportunities as we go into these targeted verticals, not just to go by ourselves, but with partners and with our customers.

Unknown Analyst

So, 2 questions. First, on the competitive landscape. You mentioned user experience many times as a differentiating factor. One of your big competitors is soon going to have 22-nanometer products. And I know that that by itself is not differentiation, but they will also have DX11 graphics support, which was missing before. So Rory, from the background that you have looking at AMD from the outside, if somebody comes to you with 32-nanometer products with a certain experience and somebody comes to you with 22-nanometer products with an experience, what are the decision criteria that you'll go through in selecting what you will use at the high end?

Rory P. Read

Sure. A business unit at an OEM is going to take the time to really go through the analysis of the experience. They're going to break it down in terms of the workload: how does it connect, how does it perform in terms of graphics activities, what's the product targeted for? And then based on that, they'll do a technical assessment that matches up. We do well in that; that's why you're seeing the strong design wins. Then they boil that down in terms of the market opportunity. They look at the landscape across their set of portfolios. I was at a global OEM, so we looked across 12 regions. We'd see how that was going to be positioned, then where we could put it in terms of pricing, what average pricing we could put it at. We'd look at that compared to the competition, and then we'd run through to see what the return on it would be. Again, remember, there are only 5 parts in the unit itself that really drive the whole cost. The way you make your choices on those 5 parts (hard drive, memory, processor, operating system, and glass), those are the 5 things that really matter. And you look at the competition, run it through the numbers, and you look for the return, because you're running on a couple, a few points of return; you want the scale, you want to beat the competition with a differentiated solution that gives you a return. And so that's how we would break it down. I think we stack up very well in terms of capability, and then when you add the value proposition in, it's a really powerful answer, assuming you execute with precision and can deliver on the commitments. If I'm the COO of a big OEM, I want to make sure that I'm going to deliver the volume in the quarter. I'm not going to risk 3, 4 points of volume if a company can't really execute. That's why our focus on execution is job one. Every day, meeting that commitment opens up so much better opportunity. Hope that helps.

Unknown Analyst

Just one follow-up, if I may, for Thomas: the trajectory of gross margins, and how we should think about that this year. I think you've kept the same 44% to 48% band, but you have a few events coming up: you have the wafer price negotiation, and then you will also have more 28-nanometer products later in the year, so some positive, some negative vectors. Just conceptually, how should we think about gross margins?

Thomas Seifert

How it's going to develop over the year? I think that's a very fair question. I tried to give you some color on it in terms of the headwinds and the tailwinds we have. I think it will be a development that's going to pick up towards the second half. We started off well with the guidance we gave for this quarter, and we'll take it from there. But it should accelerate as we move through the year.

Ruth Cotter

Great. Well, that concludes the question-and-answer session. We're going to wrap up now to hear from Rory, if the other speakers would like to leave the stage. And we'll just remove the chairs.

Rory P. Read

Okay. Let's move to the closing slide. I think what's interesting, and I think we saw it today as we went through, I thought you got a sense of how the organization and team are coming together. Here's a set of leaders. First, Thomas, taking us through what we've started and how we've continued to execute and deliver on it. Those numbers, the financial improvements, the way -- you can go to the next slide -- the way we've moved to the commitment, the way we've built that base around the financials: it's strong, it's consistent, and we're continuing to move forward. As he mentioned, we have already been identified on a credit rating watch for improvement. That base, that sound foundation, is something we have to continue, to finish what we started. Then we heard from Lisa, and we see this focus on the market. We move away from an environment where we've been only focused on an incumbent or an internally focused technology race. It's not about that, moving forward. There is a lot of processing capability already there. It's the experience that we can create, and how we can open up this market to the 4.5 billion people across the planet who aren't connected today. Lisa talked about a roadmap that delivers this year and next, but also opens up the opportunity to tackle the technology inflection points that are occurring in front of us. And then you heard from Mark. Mark talked about how we move to an architecture that lowers our cost. Because I'm definitely a scale guy. I like volume. It's our friend. And we need an architecture and a solution that allow us to deliver consistently, every day. Our job is around execution: doing what we say and owning what we do. It is about ownership and a commitment. And when you saw the team here today, I hope you got a sense of the alignment, the united kind of thinking. And it's not just one person's point of view; it's a synergy across the organization. We have some of the best and brightest capability in this company.
And they see these inflection points. And we're trying to unlock the capability so this company can move forward and differentiate our ability to deliver value to our customers. That's what it's about. It's about low power. It's about video and graphics. It's about that next 2 billion customers. And we're going to innovate. That's how we create sustainability: a company built to last. We've created a foundation with our focus on the financials, and we'll finish what we started there. We'll also create a culture that unites every AMDer across the planet in a singular focus to seize this opportunity that's in front of us. When we align that team and we create an organization that is committed and passionate across every commitment, every customer, every interaction of every single day, every commitment matters to this company. Every one of us owns that commitment, and we've got to live and breathe it, and we've got to drive that. If we capture that kind of drive, passion and energy, and think about AMD: it's got that entrepreneurial spirit, it's got that can-do, it wants to fight that battle. And if this inflection point begins to emerge as we are suggesting, which it's going to, and the proprietary control points that have been in this space begin to break down, we are uniquely positioned to take advantage of it. This is our opportunity. It is our time at AMD to step forward and capture this inflection point. We've seen them before and we've missed them. This is our opportunity, right here, right now, to change our execution, to become an execution model that delivers on every customer commitment, and then to use the innovation, the creativity of the minds that we have in this team, to unlock this future. This is our time. This is a different AMD. And this will be a different outcome as we move forward and write a new future for this company.
Every one of our 10,000-plus employees across the planet is coming together around this opportunity: first, starting with execution, then building on our innovation and capturing this inflection point. I think we're uniquely positioned to do this, and I want to finish the way I started. I want to thank you for taking time out of your day to come and spend it with us, in person and across the Internet.

This is a different company and a different time. And the industry is changing and we embrace that change and we want to facilitate that change. I hope you enjoyed the discussions we had today and the beginning of the conversations we'll have over the coming months and years as we embark on this journey together. I appreciate the time and I hope you enjoyed this time together with this AMD team. And remember, it is our time to seize this moment, today. Thank you.

John Docherty

Right at the point where we're about to get the revenue. So we've been really pretty focused this last year on trying to deliver that. So I'm going to talk about the fabless strategy today. I'm going to talk about our overall manufacturing capabilities, and I'm going to talk about how we fit with the technology teams and drive forward into that.

So you heard earlier, it's no longer just about speeds and feeds, though they're very important. Where does it start? It starts with our product roadmap. What does our product roadmap do? We need to understand what technology it takes to drive that. And from that, of course, we get to our overall manufacturing strategy and manufacturing partners. And that, of course, leads to foundry partners. Foundry partners are only part of it, but the foundry partner becomes the long pole in the tent. Everybody understands the cost of wafer fabs these days, and the importance of wafer fabs, the capabilities that they bring with their technologies and so on, are a very important aspect of this for us.

So I want to talk about the semiconductor supply chain in general. So here's a very generic view of the semiconductor supply chain, broken down into 3 major components.

The first part is the part where we get engaged, the start of the partnership. And by partnership, I mean there's a difference between a supplier and a partner. Partners have skin in the game; suppliers only supply products. It's very different. So I'm very serious when we talk about partners. So there's the foundry, our manufacturing partner selection process, and the fab process itself, and we'll break it down into fab front end of line, and assembly and test, back end of line. And bump and sort is kind of fungible. Those that know the manufacturing floors know that bump and sort can be part of the fab side of life, or it can be part of the assembly and test partner side of the house. And probably we should stop here just to think about fabless. Fabless works for AMD. I'm going to repeat what you heard again this morning: 30 million Brazos, 10 million Llano, albeit late in the cycle, but it was 10 million that we produced last year, and the world's leading GPU.

So fabless works. AMD also works for our fabless partners. What does AMD give our foundry or manufacturing partners? We give them access to world-class R&D resources. We've actually got people stationed at every major supplier's location. So we partner on a daily, weekly basis with all of the key partners we've got. On top of that, we've got access to the leading device analysis labs anywhere on the planet. We've got phenomenal test chip capabilities that we design and run through as a snowplow for fast learning for our foundry partners and technologies. So it's a two-way partnership. We get, and they get. And by the way, it's not always rosy. For those of you that think about this, it's not a date, it's a marriage. And anybody that understands that knows it's not always rosy.

So let's talk about the foundry selection criteria, and for that, you can think about any supplier selection criteria. For me, there are 5 critical elements, and it starts with technology capability. Does the foundry supplier have that capability today, or will it have it in its roadmap, such that it will meet our demands, such that it will meet our needs? If it doesn't have it, will it have it? That's the point in time, ahead of the process, where our R&D teams get together and start working very, very intimately with our potential supply base. Then you've got the next 3 aspects of this, which are commercial, performance and flexibility. There's no use having the technology if you can't get the price. There's no use having the price if you can't get the factory performance. There's no use having the factory performance if you can't get the capacity or the flexibility. So, commercial.

You want the best commercial arrangements over the life of that product, of that technology. And it doesn't -- I mean, obviously, you want the best price, but it doesn't always mean that. A lot of the leverage you get in this relationship comes out of yield or time-to-market. So you really have to structure your commercial agreements accordingly. And that's pretty important in all of this. And again, factory performance: it's no use having the best yield and the best price -- sorry, the best price and the best technology -- if you don't have consistent yield, consistent output.

Flexibility. There's no use having all of that if you don't have the capacity to deliver, the flexibility to move your schedules. I mean, many of you are involved in forecasting. And one thing that we all know about forecasting: it'll never be right; it's just a question of degree, of how wrong it'll be, and it's no different for us. Our forecasts, as much as we try to make them as accurate as possible, will never be perfect. So we want the right foundry partners to create that flexibility in our strategy, and the ability to move in new products, the ability to do revisions on cycles. So all of that comes together to make sure that we produce the products at the right time. All of that is important.

Then we come to relationship, the one that surrounds it all. And you heard today about the deep relationships we've got with our suppliers. They are deep, and they're not always pleasant. Everyday calls for a long period of time. Intimate, frequent, everyday drumbeats. We are on calls, our teams are on calls, or are involved in hands-on, face-to-face meetings with our supplier base every day, 7 days a week, and that's what it takes. And again, I repeat, they're not always pleasant meetings. Sometimes they are. In fact, let me assure you, they were more pleasant last quarter than they were the quarter before. And you can work that one out. So part of that partnership is risk sharing. I said earlier, skin in the game. We share long-range planning. We share technology roadmaps. We've got skin in the game, we've got people involved, we've got IP involved, we've got risk sharing. We have to lean into certain things to meet milestones. We share the risk, we share the dollar cost. There are many, many aspects of that in the ramp around this whole thing. So you can call this foundry selection, but it's the same type of selection criteria for most of our suppliers today. And foundry, of course, is the long pole in the tent because of the lead time during which we've got our technology teams engaged. But they're not engaged just at the start; the relationship and partnership goes all the way through, till we get the product there, till it's debugged, till it's working, till it's functional, till it's in the platform, till it's actually making revenue for AMD and our customers. So this is a very important aspect of what we do.

The front end, the easy part. That was a joke. It's the Scottish humor, I'm sorry. Technology gets more complex as the [indiscernible] is increased, very obviously. And many of you guys who know about wafer fabs will know that as technology density increases, or gets tighter, we introduce more layers. Introducing more layers means more complexity in lithography. Anybody want to guess what the latest immersion lithography tools plus tracks cost today? I'm guessing, but I know it's in the region of about $50 million per tool. So start to multiply that as you look across the wafer fab floor from the start to the end: front end of line, middle of line, back end of line. Every part of that fab is governed by lithography. And by the way, lithography is just camera work, it's just cameras. Extremely expensive cameras, but they are cameras nonetheless, and that's what this drives.
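The multiplication John invites the audience to do can be sketched in a few lines. The tool count below is a hypothetical illustration, not an AMD or foundry figure; only the roughly $50 million per tool comes from the talk.

```python
# Illustrative lithography CapEx arithmetic, hedged: only the ~$50M/tool
# figure comes from the talk; the fleet size is an assumed example.

def litho_capex(cost_per_tool_musd: float, num_tools: int) -> float:
    """Total lithography capital cost, in millions of USD."""
    return cost_per_tool_musd * num_tools

# A leading-edge fab might run a few dozen immersion scanners
# (hypothetical count). Even 30 tools is already $1.5 billion.
total = litho_capex(50.0, 30)
print(f"30 tools at $50M each: ${total:,.0f}M")
```

The point survives any reasonable tool count: the scanner fleet alone runs into the billions, which is why fabs schedule everything else around keeping lithography busy.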

Most fabs today are designed to have the lithography tools as their bottlenecks, because you really want to keep them feeding the rest of the fab and keep them fully occupied all the way through the process. So as you go through the process, front end of line, middle of line, back end of line, you actually start integrating the process flow. You develop the characteristics for the device itself, for the SoC: you've got the high-k metal gate at the front end, you create a lot of the performance characteristics in the middle of line, and then you've got the metallization process, very dense [indiscernible] in GPU, at the back end of the process. So all of that has to be integrated, and that's a challenge in the wafer fab. Wafer fabs today have thousands of processes. Wafer fabs have evolved really, really sophisticated automation, to the point where the wafers are actually flying around in overhead gantries. It's almost like air traffic control, how you move these parts from one part of the fab to the other. It's really an incredibly sophisticated activity. Now think about the cycle times, the cycle from the front end to the back end. You can see by the rough calculations there that it can be anything from 11 weeks to 13 weeks. Some even more complex technologies can be longer than that. When you think about that, can you imagine if you have a problem at the front end of the line? You don't find that out till 13 weeks later. Think about the damage you're doing in that fab to all of that inventory. You can have 3 months of inventory at risk. So guess what? No surprise, the fabs have developed and evolved very sophisticated capabilities to track in-line, to understand what's going on. Literally thousands of SPC charts, which are automated of course, and tremendously sophisticated camera inspection tools inspecting product as it progresses through the line.
So obviously, at every stage of that line, fabs have got checks and balances before processes step to the next phases of that activity. And of course, there are other techniques: short loop learning, test chip methodology, and AMD is involved in all of that with our partners, every step of the way. Then when the product gets out, there's the debug, the understanding of the design characteristics. Is the design good? Where can we tweak or move the design to make it more functional? Where can we start to do different things in here to help this whole flow and help [indiscernible], but obviously help AMD through this.
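The exposure John describes, a defect sitting undetected in 3 months of work in progress, is just wafer starts multiplied by cycle time. A minimal sketch, with a hypothetical start rate (the 13-week cycle time is from the talk; the 5,000 wafer starts per week is an assumed example):

```python
# Hedged sketch of in-line excursion exposure: with an 11-13 week
# front-end cycle time, a problem introduced at the start of the line
# can contaminate every wafer started since. Start rate is hypothetical.

def wafers_at_risk(starts_per_week: float, cycle_time_weeks: float) -> float:
    """Wafers in flight (WIP) that could carry an undetected excursion."""
    return starts_per_week * cycle_time_weeks

# At an assumed 5,000 wafer starts/week and a 13-week cycle:
print(wafers_at_risk(5000, 13))  # 65000.0 wafers of exposed WIP
```

That scale is why fabs invest in automated SPC charts and in-line inspection: catching the excursion at week 1 instead of week 13 shrinks the at-risk WIP by an order of magnitude.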

The back end of the line is more commoditized. So the front end of the line is longer in cycle time; the back end typically is 2 to 3 weeks, depending on how you want to play it. And this is where AMD truly differentiates. What we have in the back end of the line are our legacy internal factories, where we do assembly and test, coupled with the test and assembly partners that came with the ATI acquisition. We blended them. Virtually every product that AMD makes today is dual sourced. We have the capability to run in 2 sites, and we've got the capability to run on at least 2 test platforms. So incredible flexibility. Our own major manufacturing sites are in Penang and in China. And we've actually reconfigured the furniture to integrate them. One of them used to do only assembly, the other used to do only test. We've moved the furniture around to make them both capable of doing assembly and test. What does that give us? That gives us a phenomenal advantage. It gives us the potential of 4 days cycle time versus 3 weeks. And obviously, we stage inventory. So part of our postponement, part of our inventory management, is collaboration with our customers that allows us to manage the inventory in-line and at the end of the line so that we can pull for them. Now you can start to see, from die bank to the customer's dock, it can be as short as 4 or 5 days from the time we produce it. In die bank, we do pull, and for some commodity product, we push all the way to finished goods. So the light blue colors there are inventory activity points where we will postpone or we will pull accordingly, depending on the customer, depending on the customer differentiation that we have. Then we get the product to pack out. So you can start to see how this whole thing comes together between the fab side of things and the back end of the line.
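The pull-point idea John walks through can be sketched as a simple lookup: the further downstream inventory is staged, the shorter the delivery lead time. The day counts are approximations taken from the talk (front end roughly 13 weeks, back end roughly 3 weeks, die bank to dock 4 to 5 days); the finished-goods figure is an assumption for illustration.

```python
# Hedged sketch of lead time by inventory pull point. Stage names and
# the finished_goods figure are illustrative assumptions; the other
# numbers are the approximate figures quoted in the talk.

LEAD_TIME_DAYS = {
    "wafer_start": 13 * 7 + 21,  # full front end (~13 wks) + back end (~3 wks)
    "die_bank": 5,               # assemble and test on pull, ~4-5 days
    "finished_goods": 2,         # pick, pack, ship (assumed)
}

def lead_time(pull_point: str) -> int:
    """Days from customer pull to dock, by where inventory is staged."""
    return LEAD_TIME_DAYS[pull_point]

print(lead_time("wafer_start"), lead_time("die_bank"))  # 112 5
```

Staging at die bank rather than starting wafers on demand is the difference between roughly 16 weeks and under a week, which is the whole argument for postponement.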

But really, the other thing I should say is that this is also a tremendously good cost activity for us. Thomas talked earlier about the CapEx that we spend. We have spent virtually not a penny on any test platform in the last 3 years, and we're very proud of it. It's just efficiency and productivity, moving furniture around and changing test programs and test patterns, and that integration is giving us a real differentiating factor here.

So when we look at this end to end, excluding the foundry selection part, which we know can be quite far ahead, the bump and sort can be associated with the foundry or it can be on the ATMP side, then we've got die bank, then we've got assembly and test, and the whole thing end-to-end can be 16 weeks. But as I said earlier, strategic partnerships and strategic collaboration with our customers allow us to put material in play that we can move much, much faster. That's on a per-service-level-agreement basis, and it's all part of our collaborative approach.

So, characteristics. As Rory said, you've heard it time and again: execute, execute, execute, meet your commitments. This activity helps us do that. So from a customer perspective, we meet our commitments. We create the flexibility that allows us to accommodate hiccups. And we will have hiccups. Anybody that's ever been exposed to a fab environment or a test and assembly line knows that something happens, and it will happen, and this allows us to mitigate some of that: that capability in the back end, the positioning of strategic inventory. And by the way, if you've been watching our inventory, our inventory is in very good shape, so we manage it accordingly.

Internal. So we've got to manage our own capabilities. We manage what we're doing in terms of managing inventory. We've got consistent processes that allow flexibility; that's the key there. And the last one, of course, is back to the drumbeat. We don't only have drumbeat meetings with our manufacturing suppliers, we've got drumbeat meetings with our customers. We've got internal drumbeat meetings where we talk about different customer needs and where our pull points are, to stay ahead of that. So we've got the right set of metrics, and we're focused on them day by day, minute by minute, driving this stuff very, very hard.

So I'm at the end of this, and it's probably a good time to get some Q&A. We've got a balanced strategy. You've heard that we're partnering with the giants of the industry in terms of foundry. We also partner with the giants of the industry in terms of the back end. Our material suppliers are also the biggest guys out there. So we've got relevance with our supply base, and that matters a lot when you want to accelerate things, move things faster or create flexibility: push-pull, faster lots, new product introduction. And of course, at the end of the day, we've got to manage this to meet our strategic intent, making sure that we meet our customers' needs.

So bottom line is, for me, execute, execute, execute. You've heard that time and time again. And with that, I will stop. And are we going to do Q&A now? I'm happy to take questions, yes.

Unknown Analyst

Just an underlying concern about [indiscernible].

John Docherty

We're delighted with the Radeon experience. Straight out of the chute, our products came out.

Unknown Analyst

[indiscernible] the question, John.

John Docherty

Sorry, I need to -- okay. How do I feel about 28-nanometer -- I'll try and paraphrase, but I hope I get it right here: how do I feel about our 28-nanometer experience on Radeon, so graphics products? And my response to that is that I feel very good. We've come out of the chute running. Our yields are bang on where we predicted them to be, and we're supplying product to the market today.

Unknown Analyst

[indiscernible] You mentioned hiccups in -- during your talk. During the fourth quarter, AMD alluded to a problem with 45-nanometer yields that affected some of your deliveries. Can you give us a little bit of color as to what happened there and how you corrected it and why it's not going to happen again?

John Docherty

Yes. There were 2 things. There was a huge demand on 45. Normally, we would have had inventory, but the demand was very significant, very quickly, and for the channel business. And what happened was, there was a misprocessing event in our foundry partner's space. The good news is that's been corrected. The good news is that we've recovered fully and we are running back to normal. And because we now have a better view of the forecast, we have actually built some strategic inventory there to mitigate that in the future. But that was unprecedented demand, a spike on an older technology, which surprised us.

Unknown Analyst

[indiscernible] Explain -- could you shed a little bit of light as to what -- how they screwed up on the 45-nanometer?

John Docherty

If you guys know wafer fabs, you know that there are multiple opportunities to screw up. Multiple; every day there is something new. It was a misprocessing event. I'd rather not talk about the details, but it was a misprocessing event by our partner, and we don't like those. Our quality teams are all over it; we forced 8Ds to be driven down. We go back to detect, prevent, avoid, do all of that stuff. But the main thing for us is to learn from it. And that's, to some extent, the cost of learning. It did hurt us a lot in Q4, but there's been a full recovery already.

Unknown Analyst

Can you talk about, as your GLOBALFOUNDRIES and TSMC business all converge on CMOS, what constraints the different foundries' design rules and libraries put on your design capability? In other words, are they truly foundry neutral? As long as it's high-performance 20-nanometer, does it really put any design constraint on your APU design teams in how they design the different chips, whether for GLOBALFOUNDRIES or TSMC?

John Docherty

I'm not sure I'm really following your question.

Unknown Analyst

I was wondering, since the 20-nanometer processes between foundries aren't exactly the same, what constraint does that represent for your design team?

John Docherty

Okay. The constraint there is one of time. I've got my colleague Chekib here, and he can help us answer this. But the constraint really is one of time. This is part of the partnership selection process: you're upfront, you start connecting very quickly, and that's part of phase 1. Is the technology available, and when will it be available? And is it what I'm going to design to? And obviously, we want to know where the transistors are targeted, and so on. So we start that process significantly ahead. It's painful because the standards are different, as you say. It would be great if -- and I know, in older technologies, people consider the technology to be T-like [ph], and that's become an industry norm. In advanced technologies, that's just not possible. So we've got to structure our design resources accordingly.

Chekib Akrout

Yes, I think what I would add: every single supplier has its own specific process technology and libraries. So we cannot truly have one unique library with which we can develop all of our APUs and designs that will work naturally in all these different foundries. That's the nature of optimizing the design with the specific libraries. We don't call them constraints; we call them inputs: the process technology ground rules and libraries we're getting from these different suppliers.

John Docherty

Thank you, Chekib.

Unknown Analyst

Just to dovetail on that question a bit, more along the lines of assembly and test. And correct me if I'm wrong, you said that you guys had not made any spend on assembly and test equipment for the back end, was that correct...

John Docherty

I said no test platform. Where we do specific SLT activity, yes, we put a small amount of CapEx in for that, but it's very customer-specific.

Unknown Analyst

And so as you go forward in levering IP libraries from your foundry partners, do you run into any issues with design for test limitations? And do you see a potential for having to increase or to add some spend to accommodate for your foundry partners, maybe doing design for test on more advanced equipment than you guys are currently deploying on the back end?

John Docherty

On the contrary, I actually think that design for test will help us with CapEx in the back end. What you're asking about is what I'd love to see from a supply perspective: we actually put more features into the designs themselves. So I actually think that works to the advantage of my CapEx spend in the back end. Of course, it's everybody's intent, in the back end, to push the failures upstream. I mean, product is going to fail after you produce it in wafer form. We'd rather catch it in wafer form than package it and have it fail there. So you're right, the more DFT features you can put into that, design for manufacturing, the more gets caught early, and all of the downstream activity gets less. And what AMD doesn't do today is sort our own material. We leave that to either the foundry, in some cases, or the back end test and assembly guys, in other cases. Our product in that respect is very fungible.

Unknown Analyst

To follow up on that, forgive me, do you guys purchase wafers, or do you purchase finished goods?

John Docherty

We purchase finished wafers.

Unknown Analyst

[indiscernible]

Mark Papermaster

Yes, for sure. In fact, there are different models, different models for different ways of doing it. Right now, of course, as people have talked about, last year's WSA was actually for finished goods from GLOBALFOUNDRIES. And with a lot of suppliers, we actually buy wafers that are expected to yield a certain percentage, okay? Next question at the back.

Unknown Analyst

John, can you talk a little bit about GLOBALFOUNDRIES building a state-of-the-art facility in upstate New York and what are you planning and how that's going to affect the supply chain?

John Docherty

I really can't -- I haven't had the opportunity to go there yet, but GLOBALFOUNDRIES is a long-term partner of AMD. We will be producing some product there at some time in the future; as to what, we haven't sat down with them and discussed that. It's part of our ongoing discussion, as you've heard earlier today, but it's inevitable we will be in that Malta factory at some stage. So we're obviously keeping very close tabs on what they're up to, and we'll watch that closely, but we will be in there sometime in the not-too-distant future.

Unknown Analyst

This morning it was mentioned that there's going to be a BGA version of Trinity. And I'm wondering what impact, if any, that will have on your back end operations.

John Docherty

Actually, not a lot. We've already prepared for that, even to the point where we go to different package z-heights. We already understand there are 3 different packages coming out for Trinity. Our teams are prepared for all of them, down to the thinnest package we can produce. So we're actually very excited by that. The potential of bringing out these form factors is really strong for us. And our package partners, in this case, are in Japan, and we're working very closely with them as well.

Unknown Analyst

This might be a little bit out of what you normally deal with, but obviously, your OEM customers have to take the BGA chips and integrate them into their manufacturing flows much earlier in the production process than they do for socketed products. So how are they dealing with that?

John Docherty

I don't know the answer to that, at this point. But we do know that we've created samples already that are in our customers' hands. So, I mean, we can follow up on that question for you, but I don't know the specifics of that at this point.

Unknown Analyst

Can you talk about what your plans are to get to the 22-nanometer node? And I guess, for this year, there has been some talk on the blogs about 28-nanometer yields at TSMC not being so great; does that push the plans for 22 out by a certain degree?

John Docherty

I think you heard Mark Papermaster talk about that earlier as part of our strategy. I'll let Chekib talk about that in a second. But let me just reset to the question that was asked earlier: I'm very happy with my 28-nanometer GPU yields. We are supplying product to the marketplace. And so, sometimes blogs are accurate, sometimes they are not.

Chekib Akrout

So I can add a little bit about it. After 28, by the way, we are looking to 20; we are not looking to 22. That's the new kind of step in nodes; between 28 and 22, there's not enough of a step for us to justify a move to a 22-nanometer node. 28-nanometer, as a node, is actually a very good node in terms of performance and power. The challenge we have in 20 is not what's happening in 28; it's 20-nanometer per se, in terms of cost and in terms of the performance and density it's going to provide us. It does require double patterning, as you know, and that's costly. So we are looking very carefully at what kind of product could go to 20 and how we can leverage and take advantage of 20, to really synchronize ourselves on the right timing to jump to 20.

Unknown Analyst

2 questions. One, just on your GPUs on 28, is that a high-k process? And second question, how should we be thinking about CapEx spend over the next few years? $200 million the last few years seems like a lot of back end for a fabless company. Can you walk us through? Does it stay at those levels? Why is CapEx so high and not lower?

John Docherty

Sorry, I can't hear you, speak up a bit -- oh, sorry. Yes. So our CapEx is total company CapEx. It's not just [indiscernible], so it's including IT and other things. Thomas can tell you about the total spend. That's not back end; back end CapEx is very low. As [indiscernible] and in total to bring up new packages, I'm going to give you a number that won't be exactly right, but it's in the region: last year, about $60 million. And a lot of that came from customized, what we call, SLT builds. We build hardware to do secondary testing at the back end, and we also build hardware to bring in new packages. So that's where that comes from. That $200 million didn't come from the manufacturing side; there are other aspects to that. And I'm not going to quote our suppliers' technologies, sorry.

Unknown Analyst

This question kind of comes back to a bunch of earlier questions, in that it takes a long time for you guys to partner up with your various suppliers. And clearly TSMC is different from GLOBALFOUNDRIES on a number of levels and in a number of ways. To the degree that one partner isn't able to meet your needs, call it GLOBALFOUNDRIES not being able to make it below 32 nanometers, for instance, and you need to make a shift away from GLOBALFOUNDRIES, how long in advance of that transition do you need to engage with TSMC to transition designs that were originally architected at GLOBALFOUNDRIES over to TSMC, or vice versa for that matter? Because from what you're describing, it sounds like a very involved partnership, and it's very specific in the architecture. And it seems that it's maybe not as simple as many of the blogs have made it out to be, that somehow later this year we're going to have follow-ons to Interlagos coming out of TSMC.

John Docherty

Like I said, don't believe all the blogs. There is, in the front end of the process, a design gestation period where you're looking at the targets, the simulation targets, that the foundry creates that you want to match to. And that can be anything -- again, Chekib is the expert on the design side, not me, but I think it's anywhere around about a year's time frame. But that's something that you really don't want to do, because changing is expensive. At that point, you've committed an awful lot of effort, and that's why the partnership up front is so critical, to really understand where you're headed. And when I said it's not just about the technology, it's about all the downstream capability: the speed of the run, the capability of NPI, the actual involvement and drive of the test chip methodology. The test chip methodology acts as a snowplow, so you can actually see what we're doing. And we've engaged that with both our foundry partners today, and the test chip methodology is actually very much appreciated by our foundries. I don't know if you want to add some commentary on that, Chekib, about the time up front?

Chekib Akrout

At least for the development part, or the design part, it's roughly between 9 and 15 months. And the reason why it's a range is because it depends on the design style. If it's fully synthesizable, you just kind of drop in the new library; that can go very quickly, that's 9 months. And that's 9 months, by the way, just on the design side. There's another part, which John is referring to, in terms of getting the test site and getting the manufacturing ready, and it could be up to 15 months when you talk about a very highly customized unit, a microprocessor, for example. So that's really where the range comes from. But it takes a while. These are not simple things where you can just decide: tomorrow I'm going to another foundry. It's a deep partnership, it's a complicated process technology, it's a very heavy system-on-chip kind of product, so it's definitely a lot of effort and a lot of work.

Unknown Analyst

In moving to 2 watts and below in power, what process technology do you guys feel it's going to take for you to get there? And what percentage of the reduction in power is going to come from the process technology versus from what you're doing on the architecture side?

John Docherty

You're stepping outside my area of expertise. I'm going to ask Chekib to join me up here; that's going to make a lot more sense. But we'll try and answer your question, nonetheless.

Chekib Akrout

I think below 2 watts, you have to be a little bit more precise, in terms of which product, but if you're referring to where we are heading in tablets, going after 2 watts, we're definitely going to need much more appropriate process technologies, specifically to control the leakage, meaning transistors with higher VTs. For any process node, let's take 32 or 28, there are a lot of variants, different process technologies, within the same node. One will allow you very high performance, so you accept some more leakage in the transistor, but it gets you much higher performance. And then when you go to very, very low power, you really have to have much less leaky transistors, and you have a different flavor of that same technology on that node. In order to get to 2 watts, they call it the low power technology, LP or LPE, depending on which suppliers you are talking to. Are you sure you want me here?

John Docherty

No, no. You stay there, you're doing well. There's one more question down here.

Unknown Analyst

[indiscernible] You talked about inventory staging at various manufacturing stages. Can you talk about how you think of that in terms of percentage of run-rate business, how much you want to put in each bucket, what you're comfortable putting into each bucket, and how that may change as total demand goes up or down? Can you give some color on that?

John Docherty

Let's take a step back for a minute. Think about product life cycles, and the question we were asked on 45 earlier on. You've really got 3 cycles. There's the ramp cycle, in which case you might be in a risk situation trying to understand the real pull on your product: build more and you'll have overhang. You've got the steady state, where you can say, hey, this is pretty good, I can build inventory at die bank or finished goods depending on how commoditized it is. And then you've got end of life: where do you really want to go with that? So that's 3 cycles. But when you aggregate it all together, what we are really looking for is a total of 6 inventory turns a year; that's our healthy operating number today. So that's where we want to be, and we've got to manage within those 3 cycles. And obviously, it's very different product by product. Customer by customer, we actually have die bank, which is really [indiscernible], and we can pull from that. As I said, for key customers, we can do that in less than 5 days. And we'll keep product at finished goods, which is post test, ready to pull straight out: it can go into a box, or it can go straight to our distribution centers, or it can go to a customer-owned hub. So there are a number of different plays that we make, by product, by customer, and we differentiate the service levels for our customers that way also. You see that anywhere you go today, there are differentiated levels of service: coach, business, first class. And we have customers that fall into those categories.
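The "6 inventory turns a year" target John cites translates directly into days of inventory on hand. A back-of-envelope sketch (the 6-turns figure is from the talk; the conversion is standard arithmetic):

```python
# Hedged sketch: inventory turns = annual cost of goods sold / average
# inventory, so an annual turns rate implies an average days-on-hand.

def days_of_inventory(turns_per_year: float, days_per_year: int = 365) -> float:
    """Average days of inventory implied by an annual turns rate."""
    return days_per_year / turns_per_year

# 6 turns a year works out to roughly two months of inventory on hand.
print(round(days_of_inventory(6), 1))  # 60.8
```

So the target is consistent with the cycle times quoted earlier in the talk: roughly two months of stock spread across die bank and finished goods buffers the 16-week end-to-end pipeline.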

Unknown Analyst

I'm going to try and re-ask the question on the GPU at 28-nanometer, to see whether or not it's high-k or SiON, because TSMC actually supplies both. And then a second question: your roadmap this morning laid out 28-nanometer in 2013 across almost all your products. Will you lean toward more high-k, more SiON, or a mix?

Chekib Akrout

Okay, since you asked twice, you're going to get to the answer. All of them are in high-k metal gate. Across the board. That's it.

Ruth Cotter

[indiscernible]

John Docherty

The one over here.

Unknown Analyst

Curious in terms of your 2 foundries right now. How long -- or do you anticipate a day where the 2 will be completely interchangeable for all your products? And what are the roadblocks for that?

John Docherty

I don't ever see that happening. I think they're very different platforms, very different setups. GLOBALFOUNDRIES is a partner in the ISDA [ph] activity -- that's GLOBALFOUNDRIES along with Samsung, IBM, ST Micro and a few others. And then you've got the TSMC platform. I don't see that they'll ever reach that level of collaboration; they're competing against each other pretty seriously. I mean, it would be wonderful for us and the industry if they would. That would make our lives so much easier, but I don't think that's ever going to happen.

Ruth Cotter

[indiscernible]John, thank you very much. [indiscernible]

[Break]

John Taylor

Hello, and welcome, everyone, to the client and graphics breakout session here at the 2012 Financial Analyst Day. A big welcome to those of you here in the audience at One AMD, and also a great welcome and thank you to those of you joining via webcast. I'm John Taylor. I lead Product Marketing for AMD, working in Lisa Su's global business units. And I have a cohort on stage with me as we present the demo theater to you here for the next hour.

Matt Skynner

I'm Matt Skynner, the General Manager for the GPU business unit at AMD. As John says, this is an action-packed session. I think we've got 9 demos, which may be a record for a short session like this. So we're looking forward to that.

First thing is, I want to talk a little bit about the demo you saw as you came in.

The most interesting thing about it is that it's not a video. That's a demo that's rendered in real time, driven by the world's fastest GPU, the Radeon HD 7970, on Eyefinity, so it's driving 3 displays there. And what's interesting is that it was created by our in-house demo and research team. A few years ago, each frame of that would have taken minutes to render, and now we're doing it in real time. It's an example of the type of experience that we want to deliver to every screen with our graphics technology. And if we step into the demo mode, we'll take a closer look at a couple of other cool things about the demo. So as I said, it's real time. You can walk around in the demo. So we're just moving around through the scene; you went through the wall there, that's why...

John Taylor

Essentially, Matt, we're looking at objects, textures and characters that are as complex as what's used to make great CG films today, but rendering them instantaneously.

Matt Skynner

Absolutely. And you can also start the show, and you can walk around while the demo is running, which is pretty cool. It's a very complex scene. If you put it in wireframe mode, you'll see it's running about 60 million polygons a second, and every pixel has about 30 light sources being rendered on it at one time. Take off the wireframe mode. Lighting is very important in this. If you notice, our hero here, Leo, is setting up a shot -- the perfect shot -- with his puppets. He's painstakingly made all these puppets and the clouds and the castle, so lighting is very, very important. This demo uses compute to calculate those lights. If you turn on the lighting mode, these are all the lights in the scene, and in this scene we can do up to 2,000 separate lights. So obviously, very complicated -- a lot of compute going on before it renders. It uses the GPU for compute as well as for rendering. You also get bounced light: one of those light beams can hit a reflective surface, for example, and it will bounce and create some global illumination. Let's take off the lights. Another effect in here is called depth of field. Depth of field is a technique used in animation whereby you draw attention to something -- draw focus onto a character -- by blurring the background or other objects in the scene. So we can turn on depth of field. Oh, it was on; now you turned it off. Turn it on, and you see that the dragon in the background blurs out -- fairly complicated, yet subtle. It makes the scene more immersive, a better experience. And the final... go ahead.
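The lighting approach described here -- using compute to sort thousands of lights before shading -- can be sketched in miniature. This is an illustrative Python toy, not AMD's actual demo code: it mimics tile-based light culling, where the screen is split into small tiles and each tile keeps only the lights that can reach it (the tile size and light format are assumptions).

```python
import math

TILE = 16  # tile size in pixels (a common choice; an assumption here)

def cull_lights(lights, width, height):
    """Return, per screen tile, the indices of lights that can affect it."""
    tiles = {}
    for tx in range(0, width, TILE):
        for ty in range(0, height, TILE):
            cx, cy = tx + TILE / 2, ty + TILE / 2  # tile center
            hits = []
            for i, (lx, ly, radius) in enumerate(lights):
                # A light touches the tile if its radius reaches the tile
                # center, padded by half the tile diagonal.
                if math.hypot(lx - cx, ly - cy) <= radius + TILE * 0.71:
                    hits.append(i)
            tiles[(tx, ty)] = hits
    return tiles

# Three toy lights: (x, y, radius) in screen space.
lights = [(40, 40, 30), (100, 20, 10), (300, 200, 50)]
tiles = cull_lights(lights, 320, 240)
```

On the GPU, each tile would be handled by one compute work-group in parallel; the shading pass then loops over a short per-tile light list instead of all 2,000 lights for every pixel.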

John Taylor

I was just going to say, when Rory talked earlier today about the end-user experience, you can think about this as relevant to the next generation of content-creation capabilities and to the next generation of interactivity in games -- the kinds of characters and things that you'll be able to manipulate in that kind of world.

Matt Skynner

Yes, the other thing you see here is very complex materials. If we zoom in, for example, on this cloud here -- Leo made that, hand-stitched it, you can see, out of silk. And you can see the shimmer of the fabric. Take a look at the grass, made of felt and sort of a canvas. Oop, where's he going now? Oh, you're looking at the background there. And then zoom in here on the knight. The knight's very shiny, and if we get the right angle, we'll see the rest of the scene reflected in his armor. You can see it there on his helmet, on the shield. The rest of the scene is being reflected in there. So very, very complex stuff and, I hope, a little bit entertaining too. Thanks, Jason.

John Taylor

Okay. So this being demo theater, we could not resist starting out with a demo. We'll jump in now and give you a sense of exactly what Matt and I will cover over the next 45 or 50 minutes. We'll pick up some of the themes from earlier in the day but try to bring them to life for you in meaningful demo form, with a few additional data points that you perhaps didn't see in this morning's program with the executive staff. We'll start out talking about the major industry trends you heard Lisa Su speaking to, and then we'll showcase how our technology leadership can be parlayed into opportunity and differentiation for AMD against those developing trends. Then we'll really showcase -- Rory hammered the theme about execution, and he hammered the theme that it's about the end-user experience -- so we'll give you a sense of the next generation of experiences that we're enabling. And finally, we'll close with something that really serves as an on-ramp, if you will: we talked about accelerated applications, and we'll show you a few of those. Then the third and final breakout session today, with Phil Rogers and Manju Hegde, will really deep dive into how we will continue to generate new generations of accelerated applications through our GPU and APU architectures going forward.

So this should look familiar -- this is what Lisa spoke to earlier -- but we have demonstrations for each of these major trends. Let's start with natural user interface. Some of you might be questioning what natural UI has to do with all the work AMD has been putting into parallel processing and APUs. Take something familiar, like the Xbox 360 Kinect technology, which now has a lot of interesting developer activity happening around it. A Kinect pairs a depth camera with a traditional Web- or HD-type camera looking out into a room. If you've ever seen videos of how those are hacked to show what the world looks like to a computer through the eyes of a Kinect, it's thousands and thousands of points of light. And all of that active processing -- what's changed in this field, where are the faces, where's the arm, how far is the person from me, should the user interface be represented differently depending on whether they're 10 feet away or 2 feet away -- all of that lends itself to acceleration through parallel processing. Similarly, think about the multitouch devices we all use now: if you want to manipulate a 4-, 5- or 8-megapixel photo with multitouch, having a true graphics capability, even in those very thin form factors, gives you a better, more responsive multitouch UI.
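As a toy illustration of why depth-camera processing parallelizes so well -- every pixel gets the same independent test -- here is a stdlib-Python sketch. The frame values, ranges and UI heuristic are all invented for illustration; this is not Kinect SDK code.

```python
def nearest_subject_mm(depth_frame, max_range_mm=4000):
    """Closest valid depth reading in the frame, ignoring out-of-range pixels."""
    valid = [d for row in depth_frame for d in row if 0 < d <= max_range_mm]
    return min(valid) if valid else None

def ui_scale(distance_mm):
    """Toy heuristic: enlarge UI elements as the user steps farther away."""
    return 1.0 if distance_mm < 1500 else 2.0

# A tiny synthetic depth frame, in millimeters; 0 means "no reading".
frame = [
    [0,    3200, 3150],
    [2100, 2050, 3300],
    [2120, 5000, 3280],   # 5000 is beyond range and ignored
]
d = nearest_subject_mm(frame)   # nearest valid reading: 2050 mm
scale = ui_scale(d)             # user ~2 m away -> larger UI elements
```

Each pixel's test is independent of every other pixel's, which is exactly the shape of work a GPU's parallel ALUs accelerate.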

Matt Skynner

Another trend is the move to higher-definition content -- more pixels on the screen and more displays. You've seen that in consumer TVs, going quickly from standard definition to 720p, 1080i, 1080p. And you're seeing it from a PC point of view as well. If you looked at displays maybe 4 years ago, a 22-inch 16:10 monitor would have been about $300 to $380 in that time frame. Now you get a 24-inch 19x10 monitor for about half that price -- 20% more pixels, half the price. So we're driving more pixels, and we're seeing this trend continue. The other thing is multi-monitor displays. One of the things we think has driven that a little bit is our own technology, Eyefinity. Having more displays for productivity and other activities is becoming more and more important. And practically, if you buy a large display like a 30-inch, that's still over $1,000, but you can get a 3x1 setup of 19x10 monitors for about half that.
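Matt's monitor arithmetic can be checked with a quick back-of-the-envelope calculation. The resolutions are assumptions (1680x1050 for the 22-inch 16:10 panel, 1920x1080 for the "19x10" panel), since the transcript gives only shorthand and approximate prices.

```python
# Assumed resolutions and prices for the two panels Matt compares.
old_pixels = 1680 * 1050          # 1,764,000 pixels
new_pixels = 1920 * 1080          # 2,073,600 pixels
more = new_pixels / old_pixels - 1
# more ~= 0.18, roughly the "20% more pixels" quoted on stage

old_price, new_price = 380, 190   # the newer panel at about half the price
per_mpix_old = old_price / (old_pixels / 1e6)
per_mpix_new = new_price / (new_pixels / 1e6)
# Cost per megapixel falls from roughly $215 to roughly $92.
```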

John Taylor

So the third, working from the top row here: social games. Of course, 2011 was a big year for gaming, and we expect the same in 2012. We had the world's most successful game ever, in terms of sales, when Battlefield 3 was introduced -- that's an AMD Gaming Evolved partner title. But so many of the headlines went to the developers of these new kinds of social games -- think of the games where you connect with your friends on sites like Facebook or other social networks. We'll delve into this a little more and show you some demos. We've really seen only the first act of social games, and those games have been rendered by CPUs, not by GPUs -- and therefore, I would argue, not rendered to their fullest, not the kind of experience you could get, say, from a game console. So we'll talk a little today about how you can get the best of the social experience in these games that so many mainstream users have gravitated to, by taking the visual experience to the next level. And then when it comes to the cloud, again, we have demos related to that. From a cloud perspective, there are things that an APU or a better graphics capability can deliver in that system -- opportunities like compression and decompression. Or think about how much time you spend looking at streamed content on a cloud device, a tablet or convertible or hybrid product. With our APU technology, you can do things like dynamic post-processing of a streamed video feed from Dailymotion or Balfong [ph] or YouTube as you're watching it. So we can differentiate and make that experience even better, even in a very thin, client-type cloud form factor.

Matt Skynner

The final one is collaboration. We all want to work together more closely. There are many examples of this, and it relates to some of these other trends, like social games. But look at just video conferencing: I have a friend who lives in Munich, and every Sunday night he has dinner with his parents, who live in Switzerland, via videoconference. We all travel a lot, but what if we could have what's pictured here -- a 6-way videoconference with really great quality -- and save those air miles, save those travel dollars? That sort of thing is really something we can enable with our technology.

John Taylor

Definitely. So this gives you -- I talked a little bit about where applications today can be accelerated. This is a look, using IDC research, at the most popular applications that end users are engaging with today. You'll see some very familiar ones at the top that are kind of old school, like e-mail. And while we don't have that highlighted as something that can be accelerated by the APU -- or something like your personal finances -- you will see that all of the areas you can accelerate are places where you can really differentiate visually, where the end user or consumer might have problems or frustrations today with how long it takes to do certain tasks or how complex they are.

Matt Skynner

What do you use your computer for?

John Taylor

Hard-core gaming, Matt. That's it. And a little bit of YouTube.

Matt Skynner

Excellent. A man after my own heart. I use it for lots of things -- gaming is one of them, but also video. My daughter had an assignment at school: she took a bunch of videos with her cellphone, brought them home, and we stitched them together. It's amazing how quickly she was moving everything around. And that experience can get better. When you render the video, it takes longer than I'd like -- I want that to be instant. So that's an application we could accelerate.

John Taylor

Yes, absolutely. And video is one of the most dominant workloads going forward, whether you're looking at it from the cloud side or the client side. So we've talked a little bit about the trends; now we want to talk about technology leadership.

Matt Skynner

So it's important: we have these trends, and we want to capitalize on them. AMD has led from a technology point of view, both with GPU and with CPU technology. From a GPU point of view, we were first with an AGP graphics card, first with PCI Express. These are major inflection points in the industry, and when we lead at those inflection points, we win. In fact, in both of those situations, we went to #1 market share. From a display point of view, we were first with DVI, first with DisplayPort 1.2. From a process point of view, first to 90-nanometer, first to 80-nanometer, first to 65-, 55-, 40- and now 28-nanometer with our 7970.

John Taylor

Absolutely. And if you look back similarly from the CPU perspective, you can talk about AMD having the world's first 1-gigahertz CPU and being first to extend the x86 instruction set to 64 bits, then extending it to multi-core, adding HyperTransport for much faster I/O, and then things like integrating the memory controller for much faster speed and lower latency in accessing memory.

Matt Skynner

And speaking of memory, from a GPU point of view: first with DDR2 on a graphics card, first with DDR3, first with GDDR4 and first with GDDR5 on a graphics card. It's important to lead at these inflection points, and when we lead, we win. We were first to DX9 and first to DX11. We've shipped more than 100 million DX11 products -- that's a huge number.

John Taylor

It is a huge number, and it's important to know that the APU was a big part of that as well. Matt and I believe in teamwork, so about 1/3 of those are represented, as you heard earlier today, by our APUs. We wanted that first generation of APUs to support that DirectX 11 capability, and in 2012 we rolled out a new generation of DirectX 11 products. So we believed in it; it's defining the industry standard for the experience today, and for compute as well.

Matt Skynner

And we're not stopping there. We're not just first to DirectX 11 -- we've just launched the 7970. It's the first graphics card with DirectX 11.1, ready for Windows 8. It's the first 28-nanometer GPU, and it's the fastest GPU in the world. Performance is important. Take a look at this chart: we're looking at 1.3x, 1.4x, 1.5x, up to 1.6x performance across various games at high resolution. Pretty impressive performance. But it's not all about gaming performance; we're really starting to focus on compute as well. Our engineers have put a huge focus on compute, and with the 7970, we're at nearly 4 teraFLOPS of compute power.
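The "nearly 4 teraFLOPS" figure follows from the standard peak-throughput formula: shader count x clock x 2 FLOPs per clock, since a fused multiply-add counts as two operations. The 2048 stream processors and 925 MHz used below are the Radeon HD 7970's published launch specifications.

```python
# Peak single-precision throughput of the Radeon HD 7970.
stream_processors = 2048
clock_ghz = 0.925
flops_per_clock = 2          # one fused multiply-add = multiply + add

peak_gflops = stream_processors * clock_ghz * flops_per_clock
# peak_gflops ~= 3788.8 -> about 3.79 TFLOPS, i.e. "nearly 4 teraFLOPS"
```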

John Taylor

Now, picking up the core theme from earlier today about how quickly we can leverage that investment and that high-end graphics IP into a differentiated set of capabilities in an APU product we can take to the full mainstream and value segments: this chart shows how we've done that. The blue line at the bottom is your conventional x86 CPU GigaFLOPS performance. Right out of the gate, Llano, as a desktop product, delivered something on the order of nearly 3x the available compute. With Trinity, we stepped up another 50%, to more than 800 GigaFLOPS, and we think the third-generation APU, the Kaveri product, is looking more like the world's first 1-teraFLOP APU. You'll get a sense today of how many developers have already worked to unlock that unique capability in our products, and then Manju and Phil will take you through, after our session, how we're making it easier for more and more developers to access that level of compute. So speaking of compute, I think we've got a demo here that we might want to switch to -- what you can do with all those...

Matt Skynner

Yes, we have a demo. This is called ComputeMark -- put the demo up -- and it's actually a video of a side-by-side run of ComputeMark, a DX11 compute benchmark. We have AMD on the left side and our competition, with their fastest GPU, on the right side; our fastest GPU is on the left-hand side. And you see some pretty impressive performance for us. This benchmark has about 5 tests -- fluid, ray tracing, et cetera -- and it really taxes the GPU. We'll let it run a little bit and go through a couple of things. These are Mandelbrot renders, I believe.

John Taylor

And while Matt is demonstrating this from a discrete graphics perspective, I'd love to be able to do the same thing in the APU market. Unfortunately, as I understand it, our competitor's APU-like product is not able to run this benchmark. We'll see in 2012 -- I think they're going to be able to step into the arena, and we'll be able to deliver those kinds of comparisons for you later, to showcase the total available compute in our Trinity APU versus the competitive products.

Matt Skynner

Let's go to the benchmark results. There they are: our score was 2927; the competition's was 1756. I did the math -- we're 66% faster than the competition.
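Matt's math, reproduced from the two scores quoted on stage:

```python
# Relative speedup from the ComputeMark scores shown in the demo.
amd_score, competitor_score = 2927, 1756
speedup = amd_score / competitor_score      # ~1.667x
percent_faster = (speedup - 1) * 100        # ~66.7%, the "66% faster" quoted
```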

John Taylor

Good job, Matt.

Matt Skynner

Thank you.

John Taylor

You're welcome.

Matt Skynner

I didn't actually engineer that myself, but...

John Taylor

No, I meant the math.

Matt Skynner

Oh, the math. Thank you. Well, it's not all about performance though.

John Taylor

No, it's not.

Matt Skynner

It's also about power. I think you heard Lisa and Mark talk this morning about performance per watt. When we introduced the 7970, we also looked at our power technology, and we introduced a feature called ZeroCore Power. What that allows us to do, when the system goes into long idle, is shut down the core of the GPU, reducing the power at that time to less than 3 watts. If you look historically, even a few years ago, in 2008, in that situation our high-end GPU would be drawing 90 watts. We took it down significantly in 2009 and 2010, into the 20-watt range, but this is a massive drop, to less than 3 watts. The reviews for the 7970 were very, very good, and we're very pleased with them. But maybe it surprised us a little, the raves we got for the way we handled power with the ZeroCore Power technology.

John Taylor

Definitely. And similarly, from the APU perspective, we are equally focused on power. We're also very excited about what Trinity does for us, especially these low-power Trinity parts with the BGA package that we talked about during John Docherty's breakout on supply. So we've got a couple of different views here just to underscore what we're looking at from a power perspective for Trinity. First, a comparison to our Brazos technology. Brazos is really what put us on the map worldwide in terms of power, right? It brought us to TDPs we had not operated at before as a company and put us in a leadership position in battery life in the form factors where we introduced it, which we expect to continue in 2012. But Trinity, which represents a big step up in performance from Brazos, at 17 watts actually has lower TDP, lower silicon power and lower total system power. Now on the right, we're comparing to Llano, which is what Trinity replaces -- the A-Series APU in market today. What you're seeing is that when you compare performance per watt, Trinity represents a doubling over Llano. Stated otherwise, Trinity at 17 watts delivers all the performance of Llano at 35 watts as a dual-core chip.
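The doubling claim can be restated numerically: matching performance at 17 W instead of 35 W implies roughly a 2x performance-per-watt gain.

```python
# Equal performance at different power draws, per the Trinity/Llano comparison.
llano_watts, trinity_watts = 35, 17
perf = 1.0                                   # same workload performance

llano_perf_per_watt = perf / llano_watts
trinity_perf_per_watt = perf / trinity_watts
gain = trinity_perf_per_watt / llano_perf_per_watt   # 35/17 ~= 2.06x
```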

Matt Skynner

Let's talk about immersive experiences. My definition of an immersive experience is something that pulls you in, that draws you into whatever you're watching -- whether you're playing a game or watching a video -- so that you're really immersed in it and the frame of the screen sort of goes away. I think one of the biggest changes in immersion on the PC in the last several years has been Eyefinity. It's been a differentiator for us over the last 2 years, but we didn't stop there; we want to continue to improve that experience and draw you in even more. So with the 7970, we introduced a new technology called DDM, or discrete digital multipoint audio -- kind of a cool name. I call it DDM for short, because I can't always remember "discrete digital multipoint audio." It's a feature that allows us to take the sound and direct it to an individual screen. So picture, in this case, a videoconference going on -- I think that's the grandparents, a baby and some other grandkids. The baby coos, and the sound comes out of the middle screen. The grandparents say hi, and it comes out of the screen where they are. It makes for a much more natural experience.
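A hypothetical sketch of the routing idea behind DDM audio -- not AMD driver code; the endpoint names and single-resolution layout are invented for illustration:

```python
# Each display in an Eyefinity group gets its own audio endpoint, so a
# videoconference window's sound plays from the screen it appears on.
DISPLAY_ENDPOINTS = {0: "left-speaker", 1: "center-speaker", 2: "right-speaker"}

def display_for_window(window_x, display_width=1920):
    """Which display a window's x-coordinate falls on (layout is assumed)."""
    return window_x // display_width

def route_audio(participant_window_x):
    """Pick the audio endpoint matching the participant's on-screen position."""
    return DISPLAY_ENDPOINTS[display_for_window(participant_window_x)]

endpoint = route_audio(4000)   # a window on the third screen
```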

John Taylor

Yes, I'll just chime in. Those of you who work for a big company with these high-end HD videoconferencing solutions know that those are, what, I think six-figure-price-tag solutions to help you connect multiple campuses, or certain key, very important customers -- so that, without spending the money on a plane ticket, you get the sense that you're in the room and can read each other's body language better. That's essentially what this delivers. But you know what, a version of that... yes?

Matt Skynner

Why just talk about it?

John Taylor

That's a great idea.

Matt Skynner

We can demo it.

John Taylor

In demo theater. Yes.

Matt Skynner

Yes. All right. So what we've set up here is a 5x1 Eyefinity, driven by a single 7970 graphics card, and we have 2 guests here. You know, Eyefinity is not all about gaming -- I'll show you that later, because it's pretty cool. Here we're doing more of a productivity thing: you could have a videoconference going alongside several productivity applications -- you've got your e-mail going here, and you're looking at some pictures there. So we've got 2 guests. On the left-hand side here is Carl Wakeman. Do you want to introduce yourself, Carl?

Unknown Executive

Hi, I'm Carl Wakeman [ph], I'm a principal audio architect here at AMD.

Matt Skynner

And so, what does that mean? What have you worked on lately?

Unknown Executive

So I worked with the team to help define this feature, which we're now calling DDM audio. It takes a very simple concept, immersion, and extends it to audio. When you're having a videoconference, like you guys were saying, you're sitting at a table and the voices are coming from the faces of the people you're talking to. So why not do that with Eyefinity and have the voices of your videoconference participants come from the actual screen where they happen to be located? And not only that, we're going to make it very simple to set up.

Matt Skynner

I'll just stop you there for a second. I hope you noticed that the sound is coming from this side of the room. That's the graphics card directing the sound to that speaker. On the right-hand side...

Unknown Executive

Not only that, it happens automatically.

Matt Skynner

Thanks, Carl. Do you want to come out here. No, not -- on the right-hand side, we have Justin Hensley.

Unknown Executive

I'm Justin Hensley [ph], I'm a principal member of technical staff, in the office of the CTO and I work on graphics research and development.

Matt Skynner

So, Justin here worked on that demo you saw earlier. Why don't you tell us about the Leo demo?

Unknown Executive

So one of the really cool things about the Leo demo is that it allows forward rendering systems to use lots of lights. It allows game developers to leverage the awesome horsepower that's in our latest GPU to bring cinematic-quality rendering to game players, so they can be fully immersed in the game.

Matt Skynner

Awesome. And again, note the sound is now coming from where Justin is speaking, and naturally, you turn your head to look at Justin. And if Carl said something...

Unknown Executive

Okay, I'll say something.

Matt Skynner

I would turn and look at Carl. So it's a much...

Unknown Executive

We can talk at the same time and turn to whichever one you want to listen to.

Unknown Executive

Yes.

Unknown Executive

Yes.

Matt Skynner

I don't know what to do now. Anyway, thanks, guys. I really appreciate you helping out. Thank you. So a much more natural experience and it really draws you in.

John Taylor

Absolutely. So as you guys can see for yourselves from the slide here, we've got an announcement that we're sharing with you, which is that our upcoming Trinity platform...

Matt Skynner

Wait a second, you’re saying you can do Eyefinity on Trinity?

John Taylor

I was about to say just that. That's exactly right, Matt. As we've said throughout the day, it's about experience, it's about differentiation, and Eyefinity enables that for us. So we have our first onstage demo, which we'll need to track with the cameras in the back. It's actually a 2-in-1, or a bonus demo, if you will. What we're powering this with -- and I'll come over here and talk about this setup, from this laptop on down -- is actually the same reference-design Trinity Ultrathin, Trinity BGA, that Lisa held aloft during her presentation earlier. And what we're showcasing, of course, is that, yes, it has Eyefinity capabilities. For the sake of this demo, we're not showing a single large surface; similar to the demo Matt just did, we're using each display to show a different piece of content or application. What we're showing is that with a single cable out -- think a Mini DisplayPort cable -- you get the best of both worlds, or all worlds, depending on how greedy you are. You could have your Trinity, sub-3 pounds in that form factor, so thin that you slip it into your shoulder bag or backpack and you're not sure it's really there -- you've got to double-check. But then, when you get back to your home office or to work, you plug in one cable, attached to our new docking station solution, which we're working with our OEM and retail partners right now to bring to market in the second half of 2012.

Matt Skynner

So this, John, is an example of leveraging 2 of our technologies: DisplayPort 1.2 -- AMD was first to market with that -- and Eyefinity, also first to market. We're leveraging them both here.

John Taylor

Absolutely. It's all about how we draw that technology leadership through the rest of the product portfolio. So that one cable is giving you power, which is great -- we all want that when we need to recharge after a trip -- and from the cable, we go out to a docking solution. In this case, what we're showcasing on the Trinity laptop is USB 3.0-based file transfer -- it has native USB 3.0 support. We're also connected to a peripheral here, a Blu-ray optical drive, so we're playing back Blu-ray content on the second display. And on the third display, of course, we've got a PowerPoint presentation going, a Microsoft Office application with a little more detail about the overall capabilities of this setup. So you get Eyefinity, and with these docking solutions, the ability to plug in one cable, get power and light up an entire office. We think that's a pretty cool experience.

Matt Skynner

That's pretty cool.

John Taylor

So this gives you a little more detail on how that single cable would come out and what that experience could look like, supporting in this case up to 4 displays in a single large surface. Here, I'm just doing the 3 on stage.

Matt Skynner

So we want to continue to improve Eyefinity, as I said before. Another addition with the 7970 is Eyefinity 3D -- that's Eyefinity, 3 screens, in 3D. There's actually a demo in the demo area down there that you can go and see; I don't have enough glasses to do it here, otherwise I would. It looks pretty cool, and again, it just pulls you into that gaming experience.

John Taylor

Absolutely.

Matt Skynner

But we don't settle for just 3 screens with Eyefinity, we like 5, too. And playing Eyefinity on 5 screens is a really immersive experience. And instead of talking about it, John, maybe we should show it.

John Taylor

Yes. Let's do it.

Matt Skynner

So we're going to start with this single screen to give you an idea -- how do I get to play this? So this is a single-screen experience. Whoa. DiRT 3 is an excellent game, and that's pretty good, but let's add 2 more screens. You get the sense of your peripheral vision with the 2 extra screens, and it's a more immersive experience. The thing to note here is that we added pixels; we didn't just stretch that middle monitor over the 3 screens.

John Taylor

The conventional view would be just the middle screen; you don't get the rest shrunk down to one display. It's all bonus content that draws you in.

Matt Skynner

But because we don't settle for 3, I want 5.

John Taylor

I remember spending some time going around the world last year with a Llano-based laptop, showing many of you in this room how Llano could play this game and what that experience looked like on a single display. So I got used to this game from that perspective. I remember the first time I played it in an Eyefinity 3 or Eyefinity 5 setup -- it was a completely different experience, where you actually feel more of a sense of impending danger, and your heart rate goes up as you go into those hard corners. You can't get that feeling from a single display, but when your peripheral vision is engaged, it feels much more like you're actually operating one of these DiRT 3 vehicles. It completely changes it.

Matt Skynner

Anybody else want to play? You want to play? I'm going to crash now.

John Taylor

What you're seeing up here, playing across 5 of the 19x10 monitors, is about 10 million pixels on screen, refreshed 60 times a second -- pretty impressive, driven from one 7970 graphics card. All right. Thank you. Is that kind of cool? Yes.
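The on-screen numbers work out as follows, assuming five 1920x1080 panels at a 60 Hz refresh (the exact panel resolution is an assumption; the transcript only says "19x10"):

```python
# Pixel throughput for the 5x1 Eyefinity wall in the DiRT 3 demo.
panels = 5
pixels_per_panel = 1920 * 1080

total_pixels = panels * pixels_per_panel   # ~10.4 million pixels on screen
pixels_per_second = total_pixels * 60      # ~622 million pixels rendered per second
```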

Unknown Analyst

Are you playing a video? Is that just a video of it?

Matt Skynner

We are absolutely playing the game, my friend.

John Taylor

I'm so glad you asked that. It's a nice setup for the next section.

Matt Skynner

We can go back and give you the controller if you want.

John Taylor

So our final section before Q&A will be about accelerated applications. There are still a lot of demos packed into this grouping. Again, I don't want to steal any thunder from Phil and Manju and what they'll do in the final session, but we did want to bring some of the demos to life. This is an area where AMD was truly, I would say, a pioneer in general-purpose GPU computing, going back to our original ATI Stream initiatives, but 2011 is really the year it caught fire. Many of you joined us at our developer summit. You saw how it was sold out. You saw the announcements from Microsoft, OpenCL partners and tools providers -- a whole incredible set of energy and a growing ecosystem around leveraging special-purpose silicon and the GPU's parallel processing capabilities, all generally under the banner of Heterogeneous System Architecture.

So what's the proof? What's the evidence of that momentum? Well, as we introduced Brazos and then Llano in 2011 -- for instance, I know we have a fair number of the technical review press in the audience, and when we met with them and walked through that technology, we had only a handful of things we could showcase that dealt with GPU acceleration. By our count, we've now roughly tripled the number of applications we can use to showcase our GPU leadership, our APU leadership. If the letters are too small, you'll see "TBA" on here -- that's "to be announced." And of course, the way these things work is that software developers have their own roadmaps. You'll see some new applications come out between now and the introduction of Trinity, and some partners are timing new applications specifically to that Trinity launch at the midpoint of the year.

Matt Skynner

Before you leave that slide, I want to make one point. It's very important that we work closely with developers to help bring this kind of content to market. We can have the best hardware in the world. We can have technology leadership. We can have Eyefinity with 5 screens. But you need content. So it's key that we work closely with developers to bring games and other applications to market to complete that experience.

John Taylor

That content is the experience.

Matt Skynner

Absolutely.

John Taylor

A chip is not an experience; the content is. All right. Speaking of content, here we're really talking about a platform for all kinds of other developers to create incredible experiences. This is the new Adobe Flash 11 with its Stage 3D capabilities. So remember the key trends we talked about -- the idea of social games.

And I said that most of the social games -- I think something like 8 out of 10 of the most popular social games today -- are based on the previous Adobe Flash platform. And that world, much like the world of browsers pre-2011, was one in which when you wanted to do something visual, graphical that felt like an application, you used the CPU. You didn't use graphics to do it, and it looked like it. It paled in comparison to what game developers were doing who were accessing the GPU.

Well, Stage 3D and Flash 11 change all that. We've been working very closely with Adobe on this, and it's probably again better to show rather than tell. So we're going to pull up a Tanks 2 demo -- it's going to be on the main screen overhead -- that Adobe is using as a proof of concept of how the next generation of social games and Internet games will be able to deliver a game console-type experience, even in more of a combat game like this one, compared to what came before. For any of you who's played -- I won't name games -- some of those currently popular social games today have a pretty limited, 2D kind of an experience.

Matt Skynner

And the key here is Flash is -- it's almost a ubiquitous application.

John Taylor

Absolutely.

Matt Skynner

And every PC user -- I think it said on the slide, 98% of the PCs connected to the Internet have downloaded Flash. So it's really accelerating a mainstream application with the GPU.

John Taylor

Terrific. Okay. Thank you, guys.

Okay. So we just took a look at DiRT 3. That was the game we were playing -- a DirectX 11 game with Eyefinity across 5 displays. And you just got a little bit of data on how our Llano products currently perform versus the competitor.

Matt Skynner

Wait, so you're telling me that the APU can play DiRT 3?

John Taylor

Yes, Matt. You're a great straight man. Yes, I am telling you that Llano -- the A-Series APU -- can play DiRT 3 without any difficulty. But there's been a -- I'm going to paraphrase Bono for a second.

Matt Skynner

Bono?

John Taylor

Yes, Bono. I'm surprising you with this one.

Matt Skynner

Yes, you are.

John Taylor

So there's been a lot of talk about this next demo.

Matt Skynner

We didn't rehearse [ph] this, John.

John Taylor

Yes. No. So I'm going to show you guys an F1 game, which is built on the same engine, by the Codemasters guys who do DiRT 3. This demo for some reason got a lot of attention at CES, not from AMD but from another player in the industry. So this is the Trinity notebook that I have up on stage. And during the break, you can come and play a little bit more. It's at 13x7 (1366x768) -- those are the settings on this display. We've got the settings dialed up to max though. So you've got the maximum capabilities. So this is our Trinity product that we're launching in -- I'm not sure if I'm in the way here of the -- are you guys seeing things up there?

Matt Skynner

Yes, it's good.

John Taylor

It's time to drive, isn't it?

Matt Skynner

Yes, it is.

John Taylor

So I'm not quite as good at F1 as I am at DiRT 3. But I think you guys know I would not drive like that and record it for video. So that's a good example of what we're doing with the DirectX 11 capabilities that come with every Trinity APU that we'll ship a little bit later this year.

Matt Skynner

One thing we've proven here today is clearly I'm a better driver.

John Taylor

That's challenging. I'm standing up here, the game's down there. Let me do DiRT 3 again. Okay, so that was DirectX 11 and the DirectCompute capabilities that came with it in Windows 7. As Matt indicated, with Windows 8, we move to DirectX 11.1 and some additional capabilities. This is a very innovative company called Unlimited Realities. They're based in New Zealand, a fantastic set of software innovators. They're very interested in the user interface.

So they're using DirectX 11 capabilities. Think about something like a big all-in-one in your kitchen or your home office, or even a tablet device that you might be using. And what you see in the top visual is that using DirectX 11, when you're doing something like playing, in this case, a children's educational game on that type of device, you can have so much more brilliant effects and lighting and visuals out of the same content when it supports DX11 versus when it doesn't.

And on the bottom, from a compute and physics standpoint, think about multi-touch and all the ways we play games that are enabled or enhanced by touch. This is a very cool demonstration that shows when you try to manipulate an object on that screen, it physically reacts to you. It compresses the harder you press with your finger. You pull back, it snaps back into shape and wiggles like Jell-O. Pretty cool.

Matt Skynner

Very nice. Sony Vegas Pro. So Sony, I talked about video editing with my daughter earlier. Sony Vegas Pro is an enthusiast professional video editing application. It does some very cool things from editing 3D, stereoscopic video to other special effects. It's now got OpenCL acceleration for GPU.

And so what does that mean? It means that when you do a video preview, for example, it's 5x faster. So I talked earlier about doing this with my daughter, we got to wait for the thing to render -- bam -- 5x faster because we're using the power of the GPU and worked with Sony to make that happen.

John Taylor

That means getting technology out of the way of the video content creation experience. You want to make a change, you want to see the effect of a new filter or lighting. You can instantaneously see that rendered versus having to go away and come back and take a look at what you've achieved.

Matt Skynner

Yes, absolutely.

John Taylor

So we're going to do another onstage demo. So we'll be going to the camera up overhead. This is a company that we have been partnering with for, I'd say, going on about a year. They're a company we've invested in through our Fusion fund, which again Manju will talk a little bit more about. But here, we can cover a little bit of territory at once with this demonstration. So I've got a Z-Series APU of that Brazos architecture powering a Windows 8 tablet.

This is the Windows 8 developer build that Microsoft issued at the Build conference last year. So you can see I'm in a familiar -- now becoming familiar -- Windows 8 Metro user interface. We're very excited about Windows 8. We think of it as the world's first graphics-accelerated, or APU-accelerated, operating system. What's more, when you think about the universe of applications that will be built on top of the Metro UI and Microsoft's app strategy there, if you think about tools like HTML 5, those lend themselves again to graphics, to GPU acceleration. So we expect to deliver a really stellar experience on all of those different types of new generation and Metro apps that we'll see later this year.

But I'm talking about BlueStacks right now specifically. I'm sure many of you use Android smartphones. Some of you may use Android tablets. You have some applications that are your favorites. BlueStacks operates as an emulation layer between the Windows operating system and those Android applications. So again, kind of like the demo earlier, it's the best-of-both-worlds concept, where through their launcher, you can maintain all the different devices where you want to access Android applications and keep clear order for the most popular applications you want to use. So I'm going to start the BlueStacks launcher. I've moved into their launchpad. I'm going to click on one of the more popular Android applications called Pulse, which is a news reader, a news aggregator. It comes with photos. So boom, I'm now using a very popular Android application. You saw how responsive it was on top of Windows 8, responsive to multi-touch, et cetera. So that's a great example again of focusing on the experience, where we can get technology and ecosystem issues out of the way and focus on what the end user would really like to see.

Did you guys see that okay? Did I hold that steady enough? Good. Okay. So that must have been demo #8.

Matt Skynner

It was.

John Taylor

Because this is demo #9. So this is AMD Steady Video. Most of what we've shown so far have been software experiences delivered by independent software partners that we engage with. This next one is an example of what the thousand-plus AMD software engineers -- we have a very strong software engineering community at AMD -- did when they were inspired by the APU. So it is a technology that comes with our drivers for our APUs and our discrete graphics. You'll find it in both our Radeon graphics family and in our APUs. We will continue to iterate on this product and improve its capabilities. So let me show it to you, and then I'll describe it to you a little bit.

Again, it is a demo that's made possible through that GPU computing, that massive amount of parallelism. So the other day, I can tell you that a weird weather formation came over my house in Austin. And I was all alone, I wanted to capture it. I thought it was very cool and looked kind of creepy. So I held up my smartphone and I tried to shoot a video of it. I tried to hold my hand as absolutely steady as I could. And when I got back and dumped it into my PC, the whole weather formation looked like this the whole time. I defy you to try to shoot a steady video by holding a camera in your hand. Unfortunately, the Internet is filled with just these incredible moments that people have captured with these handheld devices, but we have to suffer through watching these things shake and stutter and do all these things that are very distracting. It can even give you a headache.

So what we're showcasing right now is how we, in real-time through the driver, as you are watching these videos, automatically sense the shake and stabilize it. So you have a much more enjoyable experience. And think about earlier when I talked about the cloud computing trend. This is an example of the kind of processing we can pack into form factors like tablets and convertibles to give you a better overall experience when you're just streaming that content.

Matt Skynner

And the key is -- when you're doing something like this, obviously the one on the left is not Steady Video. It's shaking.

John Taylor

Yes, it's very obvious here.

Matt Skynner

And on the right, it's automatic. And so it's something you don't even know was there, but you have a better experience.

John Taylor

So we've jumped back out now into I think it's Internet Explorer 9.

Matt Skynner

That's correct. You're talking too long.

John Taylor

Yes, so we jumped back out into Internet Explorer 9. We wanted you to see this wasn't a canned video we were working from -- we're actually watching, in this case, a YouTube video. Support for this you will find in Internet Explorer 9, Google Chrome and Mozilla Firefox, as well as for local content that you're playing back, so that you can get rid of that shaky experience, for example, when you've just loaded in video that you shot through the smartphone.

Okay, great. Thanks, guys.

Matt Skynner

All right.

John Taylor

Bring us home, Matt.

Matt Skynner

9 demos. It worked.

John Taylor

Yes, I'm happy.

Matt Skynner

It looked great. So we talked about trends. And we heard this morning, Rory talked about going to where the puck's going. And I always like when Rory does hockey analogies because I'm Canadian. But we need to look at those trends, and we need to take our technology and use that to capitalize on those trends.

We have technology leadership. We have technologies that we can use to take advantage of those trends and really shoot ahead of the puck.

We want to enable immersive experiences. We want to draw you in to whatever you're doing, whether it's a game, whether it's an application. We want you to forget that that video was shaky. We want you to just see it move, not even know that anything's going on. We want to immerse you in that. When we're doing video editing, we don't want you to worry about how long that's going to take to render. We want it to happen instantaneously. That's the type of experience that we want to deliver. And we have to have applications -- applications really put the bow on the whole present and bring the whole thing together. We have to work closely with our developer partners to make sure we bring these new applications to market, taking advantage of this technology and capitalizing on the trends.

John Taylor

Well said, Matt. So we still have a little less than 10 minutes before we transition to the third and final breakout session, which means that we have time for Q&A. I think we've got our microphones and our team set up to bring those through the audience. And while we're waiting for the team to get set up, I just want to thank the AMD demo team who are backstage, and Eric [ph] up here, off of, what is that, stage left. Those guys basically had 30 minutes to do all that setup, because none of those systems could be on stage before, and they made those transitions and did a fantastic job. So I wanted to thank them.

Matt Skynner

Nice job, guys.

Unknown Analyst

When I look at your roadmap down the future, this is going to be more of a general purpose computing platform, this GPU thing. And I was wondering, in what product generation would you be supporting things like error-correcting memory, those sorts of things that you would expect from a CPU engine or servers? You need those -- your green team across the street has sort of been doing that since the last generation.

John Taylor

There's a reference to green team, of course. We have high-end CPU capabilities, low-power CPU capabilities. So we have that. But since you said a green team, do you want to take that one, Matt?

Matt Skynner

Well, I think, obviously not talking about specific features, we're going to look at the requirements in the markets we're going into and add those features as required. You mentioned that it becomes more of a GPU thing. There's going to be a discussion later with Phil about HSA, and really, this heterogeneous compute is the architecture that we see going forward -- the GPU working closely with the CPU. And what will happen with that architecture going forward is they'll be sharing memory spaces and that sort of thing to make them work together more closely and really improve the experiences even further.

Unknown Analyst

So talking about natural user interfaces, you gave the example using cameras and Kinect, for example, and so forth. As we look to improvements in voice, as we look to increasingly mobile devices with tons of sensors going into them, there are lots of potential inputs to leverage, and ultimately, of course, the cloud to leverage as well to serve up the better experience. But how do you see either AMD's APU or specifically the GPU accelerating some of where we're going with user interfaces, or enabling them altogether? Because I think where we are today is a really good stage one for a lot of these new UIs, but it's got a long way to go. So it would be great to get some more color on how you see that evolving.

John Taylor

Yes, I'll take a first stab at that. I think what you just gave is the situation analysis for what we're looking at for the design point of our future APUs. So the elements Lisa Su closed with today, when she talked about the beyond-2013 horizon -- all of those are examples. Remember, she showed all those next-generation user interface-type capabilities and security and those types of things. Many, if not all, of what she represented there, we view as acceleratable by our technology. The UI element is a very obvious one. There's a number of things with security as well. So that really is governing our approach to designing the next generation, largely also of our low power and ultra low power products as well.

Matt Skynner

I think you said it well. I think the bottom line is the GPU is good at accelerating those types of activities. And so by putting those technologies together and working closely with the CPU, we'll get to that time when all those things like face recognition, biometrics, all that stuff will be out.

John Taylor

And I think one other thing to add -- you might remember from Mark Papermaster's presentation, where he showed the CPU and the GPU blocks and then talked about other custom AMD hardware acceleration; he even made some reference that there could be third-party hardware acceleration in there as well. So the CPU and the GPU may not always be the best answer to every one of those new types of workloads. You might be able to dedicate a fraction of the die area to some kind of special purpose hardware acceleration to get the job done even better. And you see us doing that today as well. We do that with video processing, and we'll do that with some additional types of hardware acceleration in these future APUs.

Right here, [indiscernible] next to him.

Unknown Analyst

Can you help us understand a little bit how the ultrathin form factor is positioned against the Ultrabook? I heard the $600 to $800 price point mentioned this morning. One of the biggest pushbacks with ultrabooks is the fact that the BOM cost is too high. The specifications are too strict. So how should we think about the ultrathin form factor as opposed to the Ultrabook?

John Taylor

That's a good question. I don't want to get into speaking to our competitor Intel's specs for their Ultrabook platform, but I can talk about what our strategy there looks like. And you guys might remember -- those of you who've come to a number of these Financial Analyst Days -- I think it was at a Financial Analyst Day in '09 perhaps, where we first talked about the big push we were going to make in ultrathin products. We did that. We started creating what we then termed Athlon VGA products. We won a best of CES award with HP for an ultrathin notebook. Now those were aimed more at that $400 to $500 price point. And we'll continue to service those opportunities, and we think we'll get more of them with the waterfall from some of those high-end premium ultrathins, servicing them with Brazos. Llano, as you guys know -- we did not do a VGA implementation of Llano. So participating in the more premium segment there has not been something that we've been in a position to do. But I think you got the sense from how much time we spent on it today: we think we have a fantastic value proposition for premium ultrathins with Trinity. To step into a form factor where you expect to hear great things about battery life and know that we're going to match or perhaps exceed what the other guy is doing with battery life; to leverage IP from Matt's business that comes from discrete graphics, special video processing capabilities; new kinds of innovative solutions where you can go from the best of portability to the best of productivity, if you will. So we love our positioning. And in terms of price points, yes, we think you'll see Trinity silicon that will be below $600. We think you'll see some designs that will certainly be above $800.
But we also know very well where the volumes are in the marketplace today, and the data -- data from many of you in this room -- tells us where consumers expect to buy a Windows PC, a Windows device, in retail or in e-tail. That is where we really need to target those designs: in that $600 to $800 window. As you know, above that, things really trail off in terms of the opportunity.

Other questions? No? Is there one back there? Sorry, I was in the blind spot.

Unknown Analyst

So in the past, discrete graphics was excellent, PC chipset-integrated graphics was mediocre at best, and then phones didn't have graphics. The way you've shown things today, it sounds like the distinction between what you can do in the mainstream products and discrete graphics is getting less and less going forward. And also, you guys talked about a 2-watt product down the road. What kind of graphics should we expect in those kinds of products going forward versus the past? Just help us frame the delta, whether it's getting smaller or not, please?

John Taylor

Yes, that's a great question. I'll take the first stab at that, but I think you should [indiscernible].

Matt Skynner

I think there's a question on the 2-watt, you may want to comment.

John Taylor

Well, on the 2-watt -- I mean, obviously I can't yet characterize something that was on Lisa Su's roadmap in terms of what its graphics capability will be. But I think what you could overall expect is that as we maintain this position of leadership in graphics, anything we do, even at ultralow power, graphics will still be core to how we differentiate. And then again, to set up the next session: the idea of heterogeneous systems architecture means that whatever portion of the die we devote to GPU capabilities, we'll have more and more applications that take advantage of it, because of our thought that it's not just about the CPU. So we'll get an application uplift benefit from it, and we'll get an overall visual experience uplift from it. But then at the high end, on the distinction between the APU and the discrete GPU -- you heard it loud and clear today, the roadmaps continue unabated for high-end graphics IP. Who knows how many displays Matt's going to come back with? [indiscernible]

Matt Skynner

And let me follow up -- build on that a little bit. When we talked about the gap, it is narrowing. And so there's 2 pieces to our graphics strategy. One is we want to provide the ultimate, the best experience, and we're going to continue to do that from a discrete graphics point of view. We've got the 5-way Eyefinity. We have Eyefinity 3D. And we're going to continue to push that. But we're also looking at it a little bit differently: how do we enhance that platform? And so one of the features we have is dual graphics, where we work the APU and the GPU together, and we get a performance boost when they're put in the same platform together of, say, 1.6x, 1.7x, 1.8x. So really, we're looking at how we enhance that platform with discrete graphics.

John Taylor

Absolutely. And in fact, we talked quite a bit about emerging markets today or high-growth markets. You think about the China market, that has been, I'd say, perhaps our #1 market for dual graphics and has really helped us differentiate there. Okay. I think we are about out of time. We need to make time for Manju and Phil to come up. So thank you, guys, very much. Thank you for the great questions.

Phil Rogers

Good afternoon. In this world of convergence, consumerization and cloud, the workloads are changing, and they are changing rapidly. We see the workloads being dominated increasingly by media: images, photos, HD video streams, multichannel audio, stereo 3D graphics. People are going to want to interact with their media, with their data, in new and exciting ways that lead to a fully immersive experience. This is the exciting opportunity ahead of us. This means an explosion in parallel processing and the need to run that parallel processing efficiently.

Today, far too much parallel processing gets executed on processors that were not specifically designed for that purpose. What this means is a waste of power, and wasting power today is unforgivable. We're at the tipping point in parallel processing. And the architecture I'm going to describe to you today, the heterogeneous systems architecture -- we also know it as HSA -- is a game changer. It's going to push us past that tipping point and into a new era of power-efficient platforms.

As we were designing the heterogeneous system architecture, we had several major goals. The primary one was to make the GPU, the parallel processors, in the system easier to program, make that unprecedented processing capability of the APU as accessible to programmers as the CPU is today. This means high-level languages and not having to write special code for the GPU.

As we do this, it dramatically expands the APU software ecosystem, and this is a virtuous cycle that in turn spurs new categories of applications to be developed -- consumer applications that are best experienced on this APU and are unleashed by the power efficiency of the new platform.

So I'm going to walk you through the start point of the APU today, the future of this heterogeneous platform, the heterogeneous system architecture, HSA, what it means, what its features are. We're going to take you year-by-year through the roadmap, how we deliver the features, what the different bundles of features mean and the value they bring to the platform as we bring them to market.

I'll talk about how we program HSA, how application developers will program it, and why it's so different from the way programs for the GPU and the CPU are written today. This leads to a new command and data flow that's so much more efficient. And then the software ecosystem that grows around this platform -- my colleague, Manju Hegde, will come up and describe that to you when I'm done.

So the APU, the Accelerated Processing Unit, has already arrived, and it's a great advance on previous platforms. You've already seen in the demand and the sales of our platforms over the last year and few months, with Llano and Brazos, that combining the scalar processing of the CPU with the parallel processing of the GPU, and bringing it all together with high-bandwidth memory, adds value to the platform and creates demand.

But how do we make it better going forward? Primarily, we have to make it easier to program. Once we make it easier to program, it also has to be easy for application developers to optimize and load balance their code, to get very rapidly close to the peak performance of the machine. And every year, of course, as we bring out new APUs, we'll make them higher performance, and we'll reduce the power they consume for a given workload.

As we look at bringing out a new architecture, it's informative to look at the history of how we got here. And what we see is there have been 3 major eras of microprocessor development. The first one, the single-core era, lasted decades. It was a golden era where every year, or every other year when we got a new silicon node, single-threaded performance just went faster. Things got better without having to change the software. But the free ride came to an end -- we say we hit the power wall. What this really means is that as we added transistors and micro-architecture improvements, the power limit in the system meant that we could no longer increase single-threaded performance. And if anything, this curve is rolling over, as the TDPs, the power operating points of the platforms, come down because consumers and everyone else want lower-power systems. They want them smaller, thinner, cooler, lighter. So single-threaded performance was no longer the answer.

That led us to the multi-core era, where with Moore's Law we were still getting more transistors on each new silicon node, and now we could use some for additional x86 processors and run multi-threaded workloads. And within a multi-core SoC, we ran the architecture that had been proven in multi-socket systems, namely the SMP architecture, symmetric multiprocessing. This is an era that was relatively short-lived. We rapidly hit the power wall again, where we could no longer increase performance by adding more cores and trying to scale threaded performance. There were also limitations in how rapidly parallel software came to multi-core processors, especially in the client space.

That leads us to today. Today, we're in the heterogeneous systems era. We're still getting more transistors on each new silicon node, and we want to bring them to bear in a way that makes a difference in the experience of end users. The key here is to exploit the abundant data parallelism in these media workloads. And we do it by using the power-efficient parallel processing units that are part of the GPU. And this is a high-leverage play, because the GPU has to be there for graphics and does a fantastic job at graphics, at the immersive graphics experience. But those same shaders, the parallel processors in the GPU, are also the ideal processor for parallel computation.
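[In code terms, the data parallelism being described can be sketched in a few lines. The following is an illustrative plain-C++ example, not AMD code: a per-pixel media operation where every element is independent, so the work splits across any number of lanes -- CPU threads here standing in for GPU shaders.]

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Brighten every pixel independently -- a classic data-parallel map.
// Each result depends only on its own element, so the loop can be
// split across any number of lanes with no coordination between them.
void brighten(std::vector<uint8_t>& pixels, int delta, unsigned lanes) {
    std::vector<std::thread> workers;
    size_t chunk = (pixels.size() + lanes - 1) / lanes;
    for (unsigned i = 0; i < lanes; ++i) {
        size_t begin = std::min(i * chunk, pixels.size());
        size_t end = std::min(begin + chunk, pixels.size());
        workers.emplace_back([&pixels, delta, begin, end] {
            for (size_t p = begin; p < end; ++p)
                pixels[p] = static_cast<uint8_t>(
                    std::min(255, pixels[p] + delta));  // saturate at white
        });
    }
    for (auto& w : workers) w.join();
}
```

[A GPU runs the same shape of computation across thousands of such lanes at once, which is why media workloads map onto it so efficiently.]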

Now you see in this era, we're still very much at the beginning of the era. There's plenty of room for growth, plenty of room for delivering higher value and more experiences. But we're temporarily constrained by today's programming models and the communications overhead that's involved in starting work on the GPU, transferring data back and forth and getting results back.

The good news is that HSA, the heterogeneous system architecture, blows away both of these barriers and lets us ride this curve into the future. It's also instructive to look at how software has been developed in each of these eras, and we see a common pattern from era to era. And the pattern is that at the beginning of an era, the initial programming method is most suited to experts and capable of getting the full peak performance of the machine, but very few people have the capability or are willing to spend the time to program in that model.

And then over time, the programming abstractions get better, which draws in more programmers, which builds an ecosystem, which draws in more programmers, and then we get another advance in abstraction, and it gets better still. So in the single-core CPU era, we went from assembly language programming to structured languages, object-oriented languages, managed languages. At each stage, you give up a little bit of performance for an enormous step function in productivity and a growth in the ecosystem.

Similarly, for multi-core programming or SMP programming, we went from programmers programming in threads, which is extremely difficult, to abstractions like OpenMP and task parallel run times.
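[To make that abstraction step concrete, here is a small illustrative C++ sketch -- our own example, not from the slides. Instead of hand-managing threads, the work is expressed as tasks handed to a runtime, the same step up that OpenMP and task-parallel runtimes provided over raw SMP thread programming.]

```cpp
#include <future>
#include <numeric>
#include <vector>

// Task-parallel style: each half of the reduction is a task, and the
// runtime decides where and when it executes. The programmer never
// touches a thread object directly.
long parallel_sum(const std::vector<int>& v) {
    size_t mid = v.size() / 2;
    auto lo = std::async(std::launch::async, [&] {
        return std::accumulate(v.begin(), v.begin() + mid, 0L);
    });
    long hi = std::accumulate(v.begin() + mid, v.end(), 0L);
    return lo.get() + hi;  // join happens implicitly via the future
}
```

[The payoff is the one described in the talk: a little scheduling control given up in exchange for code that far more programmers can write correctly.]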

In the heterogeneous systems era, we're still early in terms of the software abstractions. People started by programming shaders, and they went to proprietary APIs, then to standard APIs and as I'm going to show you in further slides, the abstractions will get better until people are programming directly in high-level languages, the same languages they use on the CPU.

So now if we look at the era of heterogeneous computing, we can break it down into 3 eras again. It started back in 2002, when we first put floating point shaders into the GPU. And some adventurous early programmers realized that they could get at those floating point engines, admittedly through a 3D API, and do things like matrix multiplication. They succeeded in that, and that started off the trend of using the GPU for general purpose programming. In this era, several companies produced proprietary APIs to enable programmers to get at compute without having to deal with 3D graphics terminology like triangles and textures and geometry. So this was a great start and a great proving ground, a proof of concept that parallel processing was going to move to the GPU. But as long as the interfaces were proprietary, the software ecosystem couldn't take off.

Next came the standard driver era. This is a much better stage because now there are APIs like OpenCL and DirectCompute where an application developer can create an application and run it across multiple vendors of hardware and they have access to a much bigger range of platforms that their application is going to run on.

But we have to recognize that even in this era, it really still takes an expert to program the GPU. The languages are subsets -- subsets of high-level languages like C and C++. There are multiple address spaces, separate address spaces for the CPU and the GPU, and data has to move back and forth, and perhaps the programmer has to decide when that data should move and when it should come back. Also, there are thick stacks of software between the application and actually running the code on the GPU, which leads to inefficiencies unless you have a very large batch of work to offload.

That leads us to the architected era. And this is heterogeneous system architecture. And in this era, we make the GPU into a peer processor to the CPU. This opens up the platform to mainstream programmers. We open up on the GPU the full capabilities of C++ and other high-level languages, no longer a subset. We introduce a unified coherent address space. This means that the CPU and the GPU are using the same addresses. Memory can be allocated on the CPU, a pointer can just be passed across to the GPU, and the GPU can access the same memory.
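
As a rough analogy for pointer passing in a unified address space, here two threads stand in for the CPU and GPU: the same buffer is touched through the same pointer, with no staging copies in either direction. This is only a sketch of the programming model, not HSA code.

```cpp
#include <cassert>
#include <thread>
#include <vector>

// "CPU" stage fills the buffer in place.
void cpu_stage(std::vector<int>* data) {
    for (size_t i = 0; i < data->size(); ++i)
        (*data)[i] = static_cast<int>(i);
}

// "GPU" stage consumes the very same pointer, no copy in between.
void gpu_stage(std::vector<int>* data) {
    for (int& x : *data) x = x * x;
}

void pipeline(std::vector<int>& data) {
    std::thread t1(cpu_stage, &data);   // pass the pointer, not a copy
    t1.join();
    std::thread t2(gpu_stage, &data);
    t2.join();
}
```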

In this infrastructure, task parallel run times, as we were talking about on multi-core processors, can now be extended across the whole platform and run queues on the GPU as well as on the CPUs. We introduce user mode dispatch. This means the application is able to take a task dispatch packet and hand it directly to the GPU without going through a driver stack or waiting for the operating system. And finally, as we're adding all of this capability to the GPU to act as a peer processor, it needs to be managed and time sliced just like the CPU is for a fully interactive experience, and that's achieved through preemption and context switching.
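
A toy model of user mode dispatch, assuming nothing about the real HSA queue format: the application enqueues task packets into a queue that lives entirely in user-space memory, and a worker (standing in for the GPU's command processor) drains it. No system call or driver stack sits between producer and consumer.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A user-space task queue: enqueue and drain never cross into the kernel.
class DispatchQueue {
public:
    void dispatch(std::function<void()> task) {   // "user mode dispatch"
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(task));
    }
    void drain() {                                // consumer empties the queue
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lk(m_);
                if (q_.empty()) return;
                task = std::move(q_.front());
                q_.pop();
            }
            task();                               // run outside the lock
        }
    }
private:
    std::queue<std::function<void()>> q_;
    std::mutex m_;
};
```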

So how do we bring all this to market? There's a lot of features here, and we delivered them over several years. Last year, we already delivered the first stage, which was the physical integration of the CPU and the GPU in silicon with a unified memory controller. And you're very familiar with that in the Llano and Brazos products.

This year is optimized platforms. A big part of that is bringing out the full C++ language support in the Southern Islands series of GPUs and GPU cores. We also have the ability to do user mode scheduling, where the application can dispatch directly to the GPU. And a very big feature is bi-directional power management between the CPU and GPU. This means we can actually shift power from one side of the chip to the other. When the workload is heavy on the CPU, we'll let the CPU cores take all of the TDP of the APU. And then as parallel work gets dispatched, that power flows to the GPU. As several of the CPU cores shut down, maybe one is left waiting for results, and now we can use all of the power available on the GPU. This dynamic motion of power within the chip is critical to finishing tasks early, shutting down and saving power.

2013 is a very exciting year for HSA because this is the year we round out the application features and complete the architectural integration. We unify the address space between the CPU and GPU, so they can share pointers or addresses with each other and take results from each other with no copies. The GPU in this era is able to use pageable system memory directly. We no longer have to pin memory down or have a special area of memory that the GPU operates in. It operates in all of physical memory. And we make the memory fully coherent between the CPU and GPU. This enables new software models like pipelines, where data can flow freely between the processors without any need to manually invalidate or flush caches.

And then in 2014, we round out the architecture with the system integration features. These are the features that the operating system needs to do the time slicing and the quality of service guarantees on the system. The GPU compute engine is now able to context switch, and we're able to preempt both graphics and compute.

This has been mentioned several times today: we're committed to open standards. We drive open standards and industry standards. We're very active on the standards bodies. We recognize that open standards are the basis for large ecosystems. And standards win out over proprietary systems over time, because that's what software developers desire, to have large markets for their products.

HSA is an open platform. As we were designing it, right from the start we were always committed to opening it up and publishing specifications. The specifications are currently out with partners, and we're taking feedback. There's a spec for the virtual ISA for parallel processing, one for the memory model, which controls how the threading works, and one for the system specification, how it all comes together.

We're inviting partners to join us in all areas, hardware, operating systems, tools and applications. And we're forming a foundation to guide the architecture and evolve it forward. This diagram compares how the software works in the driver model where calls have to go through multiple layers of software to reach the hardware.

In the HSA model, we still have a run time and a Kernel Mode Driver, but they're not used in the high-traffic data paths. Instead, the high-traffic data paths allow the app to go directly to the hardware or through a domain library.

There'll be many programming models that run on the HSA platform. These are 2 of the initial ones. Khronos OpenCL is the premier programming environment for heterogeneous computing today. We're a key contributor to OpenCL, and we're very active at Khronos in guiding its future direction. HSA has features in the architecture that simply make OpenCL more efficient. We eliminate the copies. We allow the passing of pointers. We minimize the dispatch overhead.

Microsoft C++ AMP is a very exciting development that Microsoft announced at our developer conference in June of last year. It's integrated in Visual Studio and available as part of the Windows 8 Metro build. It addresses the huge population of Visual Studio developers who develop for Windows today, and Microsoft has also declared that they're going to make this an open standard beyond Windows.

I'm afraid I'm getting over a cold, and this is giving my throat some trouble. I'll see if I can at least make it to where I hand over to Manju. A key part of C++ AMP is that it's a very elegant extension of C++. It introduces just 2 new keywords: restrict and array_view. And this makes it very simple for developers who already have large bodies of C++ code to just modify particular methods and target them to the GPU for parallel processing.
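
The pattern being described looks roughly like this. The real C++ AMP form (which requires Visual Studio and amp.h, so it is shown only in the comment) marks a lambda with restrict(amp) and wraps host data in an array_view; below it is a portable plain-C++ stand-in that applies the same lambda-over-a-method pattern on the CPU.

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// The C++ AMP version of this method would read approximately:
//
//   concurrency::array_view<float, 1> av(static_cast<int>(v.size()), v);
//   concurrency::parallel_for_each(av.extent,
//       [=](concurrency::index<1> i) restrict(amp) { av[i] *= 2.0f; });
//
// restrict(amp) marks the lambda as a candidate for the GPU, and array_view
// wraps the existing host data so the runtime can move it as needed.
// Portable stand-in, same shape, CPU only:
void double_all(std::vector<float>& v) {
    std::for_each(v.begin(), v.end(), [](float& x) { x *= 2.0f; });
}
```

The appeal Phil points to is that only the body of the method changes; the surrounding C++ codebase stays untouched.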

And HSA provides a natural roadmap for relaxing the restrict keyword and opening up all of C++ in future years. So this is the future of heterogeneous computing.

The architectural path forward is quite clear. We're going to take the programming patterns that have been established for SMP systems and simply migrate them to the heterogeneous world. In SMP systems, multi-core, multi-socket systems, data is shared every day between cores in coherent memory. We're extending that to GPU cores as well. We do it as an open architecture, with published specifications, and we open source the execution stack in order to jump-start the industry. This gives us a system with heterogeneous cores working together seamlessly in coherent memory. And this knocks down the barriers that were there before, which were slowing the movement of parallel processing from the CPU to the highly efficient parallel processing cores on the GPU. We have low latency dispatch, which means that it's very low overhead to kick off a task on the GPU, and that makes it possible to run smaller chunks of parallel processing there than has ever been possible before.

And then finally, no software fault lines. This is really important. Software developers put a lot of investment into their code. They don't want to be told they need to tear that all up and do something new to move to a new platform. For OpenCL developers, we're providing an easy path for them to move as new features and extensions get added to OpenCL. And even if they don't make use of the extensions, the existing program just runs faster. And for C++ programmers, it couldn't be simpler. They identify the parallel areas of their code. They use the new keywords, recompile, and they have an HSA program ready to go.

So that's the architecture. I'd like to invite Manju to come up now and talk about the ecosystem.

Manju Hegde

We'll do questions right after my part. Phil will come on. Okay, Phil, you want to...

Phil Rogers

[indiscernible]

Manju Hegde

Yes, because this is a one combined presentation. So you can ask him all the questions you want. Okay, thank you, Phil.

So before I go on to my presentation, I want to punctuate Phil's presentation a bit so that you don't miss the importance of what he just said.

HSA is a large idea. It's also a very audacious idea. Why so? Consider how software programming standards are typically done. And this is true for almost any common industry API, whether it's OpenCL, OpenGL, any of them. So this is how the process goes. It's always hardware first. People treat the hardware as immutable; you cannot change it. They look at all the platforms that are available. They come up with an API that can address all of them. And in that process, they usually put a few unnatural things into the API. Consider GPU computing today, whether it's CUDA or OpenCL, it doesn't matter. You have to manage 2 address spaces. You have to manage 2 memory subsystems. So what does that mean? You can't use a pointer. Every time you change your workloads depending on the nature of the job, you have to incur the penalty of memory copies, and that's slow. And so as a result, despite several years of GPU compute and many, many documented cases of 10x, 50x, 100x improvement, there are only between 100,000 to 200,000 GPU compute developers. And to put that in perspective, it's a fraction of the 20 million or so software developers worldwide.

And this is a source of much personal frustration to me. I actually joined AMD 1.5 years ago; I came from NVIDIA, where I was GM of CUDA. So for the last several years, I've been working to promote the GPU computing ecosystem. But frankly, I don't think it can go much further without a radical change. So this HSA paradigm actually upends that model. In HSA, what did we say? Software first. We're looking at the software developer and saying, keep doing what you do. Maintain your best practices. If you program in C++, continue doing that. Python, Java, keep doing that. We will change the hardware. We'll put coherent memory between the CPU and GPU. Now you don't have to make those copies; it's just like shared virtual memory. We'll make the address space unified. So a pointer is a pointer, and the GPU can reference the same pointer.

And one by one, we're just getting rid of all the aggravations, which are the obstacles to software developers embracing heterogeneous compute. So now you can see why this is a bold move. But bold as it is, that's not bold enough to call it audacious. Because that's the second part of what we are saying. Phil mentioned, we're going to open the specification. Let's think about that. We spent millions of dollars on this, multiple man years. And yet, we're going to open the specification, and we're going to go to partners, we're going to go to the entire ecosystem, we're going to go to competitors and exhort them to adopt this architecture.

That's what we're going to do. That's what we just announced. And now look at it from the viewpoint of those 20 million odd developers. So here it is, we are removing all those niche [ph], all those aggravations they've had. So we're giving them now real access to a very performant, very widely established platform, and in addition, we're making it easy for everybody to adopt it. What's not to like? And in fact, that's what we are hearing. We are talking today to ISVs, and I can tell you there's unanimous approval for this model.

And so that's why we think those partners, those competitors, we will be able to build an ecosystem where they're a part of this HSA foundation. And so that's why it's a bold idea. And that's why I think the promise of HSA is in its audacity.

So with that, I want to come to OpenCL. Why? OpenCL is here today. We have been engaged for the last 3 years in developing the OpenCL ecosystem. Granted, it's not perfect. I mean, no GPU compute model today is perfect. But it does appeal to a lot of applications that end users want. It's a little painful to do it, and that's the pain that we're going to get rid of. But it's here today. And I know many of you here today are financial analysts, and you have to caveat all your reports with that famous phrase, past performance is no indicator of future success. Luckily, I'm no financial analyst, and I'm not so constrained. So one reason why I'm presenting this is, I want you to judge me, to judge us, by the success we will have in the OpenCL ecosystem, because, as I mentioned, the HSA ecosystem is going to be much easier than this.

It's not going to be easy. Nothing desirable is easy, but it's going to be easier. So one of the reasons I'm presenting this is for you to see how you can build an ecosystem. And when you have an open industry standard, that's a card that's priceless.

So, OpenCL. You can go into the technical detail of OpenCL, but I think the most important thing about OpenCL is that it is multi-platform. And it's truly multi-platform. It's not just lip service. All those companies on the last line have announced support in the last year or so; there's been a spate of announcements. So if I'm a software developer, and I'm a reasonably expert software developer, I know I can write one code base and it will at least function on all these platforms. True, today there's some optimization that's platform-specific, but I know it will run, and that leverage has resonated with the industry. And since there are so many analysts here, I want to present to you recent data.

This is from a survey done by Evans Data Corporation a few months ago, June of 2011 to be precise. It's a fairly substantial survey. Hundreds of developers in each of 3 geographies: EMEA, North America and APAC.

And look at the results. We highlighted the OpenCL results. The numbers are not small. The question asked was, "If you are doing a new project which requires parallel programming, which parallel API would you use?" And that's what these percentages are, a fairly substantial sample size.

And if you look at it, it's not #1. It's #2, #2 and #3. But look at the APIs above it. The APIs above it are OpenMP and Threading Building Blocks, and these are not really heterogeneous APIs. These are for symmetric or multi-core processors. So truly, if you look at the heterogeneous APIs, OpenCL is #1, and it has done this in the last couple of years. I mean, OpenCL has existed only for about 3 years, since December 2008. And really, we have been the main company promoting it the longest, though there are definitely other partners and companies promoting it, NVIDIA among them. But if you track this, this is a couple of years' worth of effort.

If you look at actual applications in the market, these are applications that span the spectrum: consumer applications, some workstation applications, and all of these use OpenCL. And there are some pretty good brand names out here. There's Sony, with their latest Pro; I think you saw a demo earlier. There's ArcSoft, there's CyberLink, there's MAGIX, one of the most widely used editors in Europe. There's Dasso [ph], a workstation and CAD software application. And this is just all I could fit on a page. Today, I honestly don't know how many OpenCL applications there are; it's probably close to 100. But that's a good sign, because last year I could have counted them on 2 hands. And if you track when these applications actually came to market, it's the standard curve. Up until last April, it was flat, and then it took off, kind of coincident with the general perception that OpenCL is actually going to stick.

So, OpenCL and HSA. In HSA, as I said, we don't want programmers to have to do anything unnatural. So they can use whatever language they want, and the ones that we're going to come out first with are C++ AMP, Microsoft's contribution to heterogeneous compute, and of course, OpenCL.

And the good thing about OpenCL is that if you have an application today that you've optimized in OpenCL, say, some of the applications that I showed you on the previous slide, then this is one of the few times we can promise you a free lunch and deliver it. The advantage [ph] of this: the copies will go; we'll take care of that in the run time. The low latency dispatch will help all these applications. So these applications, without any change from the ISV, will benefit. And the beauty of it is, OpenCL itself is evolving in Khronos, and it's evolving to a place where it's becoming easier for the software developer to use. So the expert developers using it today, and the others that will come up, can use HSA really well. The hardware features that we are talking about are universal. So by no means should you take HSA as competitive with OpenCL. It's the underlying platform on which OpenCL will shine, and shine much brighter than it has to date.

So then what is the vision for HSA? People always ask, "What are the killer applications?" And so if you look at many things today, I think somebody asked the question in the previous session, "What about user interface?" New user interface, yes, it's there today. Microsoft, with its Kinect, lets you wave your hands and make things move on the screen, but it's hardly where it could be. And the beauty of it is, with HSA, it can be much better.

If you take all of these, biometric recognition, augmented reality, and you saw a snippet of what video content can be like: our engineering team has analyzed all of these, and they all will benefit from those features that Phil described as part of HSA. I'll give you a couple of examples.

Let's take gesture recognition. With gesture recognition, you can wave your hands, but there are several companies working on just moving your fingers. So a fine [indiscernible] in gesture recognition. You don't want big gestures; you want to just move things with your fingers, especially with, say, tablets. So if you look at it, the first thing is to get the map of the hand, and the camera will do that, and there are other technologies coming up. The next thing is to do the motion estimation, and motion estimation is a very good workload for HSA. But let me concentrate on the third thing. If you really want to pick out the fingers, then what you have to compute is what they call the convex hull. It's a geometric term. And that's not easy to speed up in heterogeneous compute, but it has been done. I mean, there are papers showing you can get 10x to 15x, but it's just not easy, so very few people do it. Why is it not easy? Because it's one of those workloads where sometimes it's good for the CPU, serial, and sometimes it's good for the GPU. And that's exactly the problem HSA solves with coherent memory. You don't have to worry about that. So it becomes an enabling technology for natural user interface, where you have to go to the granularity of fingers.
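
For concreteness, here is a standard convex hull (Andrew's monotone chain), the kind of step a fingertip-tracking pipeline would run on the point cloud of a hand. Note the mixed character Manju describes: the sort is very parallel-friendly, while the chain-building sweep is inherently serial.

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

struct Pt { long long x, y; };

// Cross product of (a - o) x (b - o): positive for a left turn.
static long long cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

std::vector<Pt> convex_hull(std::vector<Pt> pts) {
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    if (pts.size() < 3) return pts;
    std::vector<Pt> hull;
    // Lower hull, then upper hull: pop points that would make a right turn.
    for (int pass = 0; pass < 2; ++pass) {
        size_t start = hull.size();
        for (const Pt& p : pts) {
            while (hull.size() >= start + 2 &&
                   cross(hull[hull.size() - 2], hull.back(), p) <= 0)
                hull.pop_back();
            hull.push_back(p);
        }
        hull.pop_back();                       // endpoint repeats as next start
        std::reverse(pts.begin(), pts.end());  // sweep back for the upper hull
    }
    return hull;
}
```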

And as another example, biometric recognition. Face recognition: everybody has been dreaming of getting face recognition done well on these lower-power platforms. Now in face recognition, the standard algorithm is something called the Haar algorithm. It's a multistage algorithm. What it does is look at the whole scene, zooming out about 10% each time, over about 20 to 22 stages. The first few stages are really parallelizable, because you have a lot of data. So there's enough to keep the GPU busy.

As you get towards the end, you've whittled it down, so you can't keep the GPU busy. So now you're going to use the CPU. In today's face recognition, and we've worked with a few companies, having to transfer the workload between them is the killer. With HSA and the coherent memory subsystem, there's no overhead; you get it for free.
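
The shape of that workload can be sketched as a toy cascade: each stage is a cheap test that rejects most candidate windows, so the early stages see thousands of windows (wide, GPU-shaped work) while the late stages see a handful (narrow, CPU-shaped work). The stage tests below are arbitrary stand-ins, not real Haar features.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Run candidate windows through rejection stages in the Haar-cascade style.
// Each stage passes only its survivors on, so the set shrinks stage by stage.
std::vector<int> run_cascade(
        std::vector<int> windows,
        const std::vector<std::function<bool(int)>>& stages) {
    for (const auto& stage : stages) {
        std::vector<int> survivors;
        for (int w : windows)
            if (stage(w)) survivors.push_back(w);  // most windows rejected early
        windows.swap(survivors);                   // later stages see far fewer
    }
    return windows;
}
```

In a split CPU/GPU address space, handing the shrinking survivor set from the GPU-friendly stages to the CPU-friendly ones costs a copy per handoff; with coherent memory, both sides simply read the same buffer.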

So these are -- we've analyzed all the use cases in these categories of applications. So these are the applications which really become viable, which really become friendly to the user rather than just have a prototype, which is how I see the space today, and these will be what drive software developers and the entire ecosystem to HSA.

But that's going to be a couple of years out. We have Windows 8 coming out this year. And the reason I think Windows 8 is really good for HSA is that it's a strong statement from Microsoft that GPU graphics and heterogeneous compute are interesting. It's important to them. If you look at the Metro UI, it's natively accelerated by the hybrid graphics. If you look at C++ AMP, that's going to be available on Windows 8, and that's Microsoft's big play in heterogeneous computing. In fact, it's a similar idea. They are saying, do heterogeneous compute transparently to the developer, and hand the details off to the compiler.

And as Phil pointed out, our first compiler for the HSA architecture is going to be C++ AMP. Then HTML5, again, natively accelerated on the GPU. So having an industry player of the stature and the power of Microsoft supporting us this year is going to be a strong message, very well aligned with the direction we want to take the industry in.

And then last year, we had our first AMD Fusion Developer Summit. It was extremely successful. In fact, you may not see it, but Phil and I are still basking in the afterglow of that. I'm very much looking forward to this year's Fusion Summit. And I wanted to give you a sneak preview of some of the top names that are coming. We've got the chief software architect of Adobe, Tom Malloy. Our Phil is going to lead off the proceedings again, and he'll get into much more depth this year.

And we have the CTO of Cloudera, Amr Awadallah. There was a nice article on him, I think in the Wall Street Journal yesterday. Read it; he's a very interesting guy who came from Yahoo. And then our Mark Papermaster, who joined us recently and was on stage today.

There are going to be a few more keynotes. These are the ones where we've got the details about their talks, and over the next few days, maybe the next couple of weeks, we'll be announcing a few more keynote speakers. But as you can see, there is definitely a hunger and awareness in the industry that this is the direction that we all have to go.

And so to conclude, I want to reinforce my messages. It's game changing, it's audacious, it's large. GPU compute, even with OpenCL and CUDA, is reaching a tipping point, but this is the thing that we believe will really tip it over. It'll scale from the couple of hundred thousand developers to a few million. And that's the rising tide that'll lift all of us. It's applicable not only to the traditional PC space but to all spaces; I showed you some of the applications, even in the low-power space. And as you can see from the kind of membership in the Fusion Developer Summit, we are embarked upon winning the hearts and minds of developers, because in 2012 and beyond, if you want to be a viable hardware business, you'd better win the hearts and minds of the software developer community.

That's what I wanted to say. So now let me invite Phil onstage, and while Phil is coming up here, let me just make one comment before you ask your questions. My job can be characterized in many ways, but one way I'd like to characterize it, especially with respect to HSA, is very simple. My goal is to make Phil a rock star. And I think I've got good material. I mean, he's tall, he's slender, he's good-looking, and he is British.

Manju Hegde

You want to go back?

Unknown Analyst

Why?

Manju Hegde

You said you want slide...

Unknown Analyst

Well, maybe you guys can do this from memory but, Phil, do you remember you had 3 columns of boxes? Each column had 3 boxes, '11, '12, '13. And that's where you introduced the coherent memory model and the pointers and so forth.

Phil Rogers

Yes.

Unknown Analyst

So that suggests, though you never said the words, that there was going to be a load balancer somewhere, because otherwise, how does it work? So how do you set up these pageable systems and this coherent pointer and not do load balancing? Or is the programmer still going to have to explicitly assign what processor gets what stream?

Phil Rogers

Okay. So the expectation is still that the application developer knows their application better than our run time or our driver. So we're not looking to automatically load balance. We're not looking to create a system that can examine the characteristics of the program while it's running and move it from, say, a CPU to a GPU or from a GPU to a CPU. We are still looking for the developer to say, okay, this method has parallel work, we'd like to make it a candidate for the GPU, and we want it to execute there. So we actually think it would be a mistake for us to try and build a thick run time that does that automatically; we're still relying on the developers to do the right thing. We just want to make it extremely easy for them, so they effectively just mark the method and then we take care of the rest. We're making the programmer a guide for the compiler.

Manju Hegde

Yes, because today, if you look at it, we really make it difficult, and not only is it difficult, but it's very inefficient. So we're getting rid of those 2 aspects, but ultimately, the programmer knows his program better than the driver or the run time.

Unknown Analyst

I want to follow-up on the HSA foundation that you mentioned. And in particular, what's your intent there? Are you going to hand off -- is it going to be like a standards body that's going to be governing all this and AMD then will have to do whatever that foundation comes up with? Or is it going to be a Board of Directors that's going to guide AMD? Or how does that -- how do you envision that working?

Manju Hegde

Yes, if I wanted you to know that level of detail, I would've put it in my presentation. But my point is, we're not trying to invent anything new there. I mean, there are tons of foundations. In fact, we picked the word foundation because if you look at successes like the Apache Foundation [ph], these are thriving, very important software bodies that drive open standards. So we're going to use one of these models, and we're actually looking at a ton of them to make sure that it's very low overhead, because we want to have many companies join it. So it's going to be nothing unnatural. But today, we don't want to go into the details of that.

Phil Rogers

The important thing is, we want to make sure that the partners who join us in this have a voice in the architecture going forward, right? It would be foolish of people to join in on this architecture if we maintained full control of it. So we're getting the ball rolling. We're getting the initial specification together. We're taking a ton of input from partners. But once the architecture is complete, we want a body to take it forward where everybody has a voice.

Unknown Analyst

How do you define when it's complete so you can hand it off?

Phil Rogers

For HSA, we're looking at the feature set that you see on the screen as the 1.0 complete feature set. There will be compliance tests written to this, and this will be the 1.0 standard. And that's the natural place to hand it off. If you try to do any standard from a standards body from scratch, it can take you 5 years to get to 1.0, and it can be something unrecognizable at the end of it. By taking it with stewardship through to the 1.0 level, we think we get the best of both worlds. We get to market quickly, and we still take a lot of input. And then we set up a situation where everybody has a voice in how it goes forward.

Manju Hegde

And also, lest you think that this is a very peremptory set of features, we have taken input from a number of companies, all of which you probably know very well, so their inputs have been incorporated into the current version.

Unknown Analyst

To that end, I'll follow up. How are you working with some of your competitors? And does this HSA model extend beyond x86 to others, for example, ARM cores along with GPUs? But first of all, specifically, obviously, the elephant in the room is Intel. Is Intel necessarily going to be that interested in this if this actually gives you, in the end, a competitive advantage? And if they aren't, then doesn't that put a halt to it early on?

Manju Hegde

Yes. So if you want something to be an open standard, it would be foolish to ignore some of the ISAs that are popular. And the best argument, whether it's Intel or any company X, is the argument I made. In 2012 and beyond, if you want to succeed anywhere, you'd better get the developers excited. And the story, as I said, is very powerful. All the [indiscernible] that has prevented developers from accessing heterogeneous compute, we're resolving that. And by the way, this is not just our opinion. As we talk to partners, the most common response is, you've solved our problems for us, and of course, we're making it open. So therefore, I don't want to address Intel specifically, but any company that wants to really lead in this space will have to solve the developer problem.

Unknown Analyst

Well, but there's solving the developer problems, and then there are the business practicalities, and I guess that's what I'm trying to get at, if you could comment on that.

Manju Hegde

Yes. So when you take something into a standards body and make it open, what you're doing is saying that you will allow some innovation, but the more important thing is standardization. And that's exactly the model we'll follow. It's not going to be a standard like an RFC in the Internet space. There's going to be room for innovation, but fundamentally, we're going to make it easy for developers to access the power of the platform. And why would we do it? The answer is simple. The rising [indiscernible] is solved. We shipped, I don't know, maybe 80 million to 100 million GPUs last year, and most of them, except when people are playing games, are lying fallow, because there are only 100,000 people who can really do something with them. Let's take it to 5 million. That's a much better situation for us, because we know that in heterogeneous IP, the leadership that Mark talked about, that's a contest that we are very comfortable winning.

Phil Rogers

And if I could just add to that, I tried to make this clear in the slides: the way that the HSA specifications are set up, it is ISA-independent on both the CPU and GPU. So yes, it will work with another ISA. And yes, if I think through the list of companies we have dealt with already to take input on the specification, it includes many competitors. We're not in a position to disclose names of specific companies today, but competitors have provided input into the spec. We've modified the spec, and it contains their input now.

Unknown Analyst

Phil, what is the memory ordering model for HSA? Would you describe it as strong or weak? And how do you plan to resolve the difference between instruction sets that are weakly ordered and strongly ordered? Because even if it -- even if you can use any ISA, some may not produce the same results or correct results.

Phil Rogers

Okay. So the way the memory model for HSA works is that it's weakly ordered but with strong primitives for synchronization. So it's basically a load-acquire, store-release and barrier model, if you're familiar with those details. Now because it's weakly ordered with synchronization, any ISA that's either weakly or strongly ordered can be compliant. A strongly ordered memory model can always meet the constraints of a weakly ordered memory model, but not the other way around. And we've been very careful right from the beginning to completely specify the memory model, so that as platforms come out from different companies, or even the same company year-to-year, they are consistent. They are compliant with that memory model, and we'll have tests for that memory model. So we're confident that all platforms can address this. It hasn't been done in a way that favors one implementation or another.
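
The load-acquire/store-release pattern Phil names maps directly onto C++11 atomics, so a small sketch can make it concrete. The release store forces the data write to become visible before the flag, and the acquire load forces the reader to see the data after observing the flag; a correctly formed program like this gives the same answer on strongly and weakly ordered hardware.

```cpp
#include <cassert>
#include <atomic>
#include <thread>

int payload = 0;                   // plain, non-atomic data
std::atomic<bool> ready{false};    // publication flag

void producer() {
    payload = 42;                                   // data write first
    ready.store(true, std::memory_order_release);   // store-release publishes it
}

int consumer() {
    while (!ready.load(std::memory_order_acquire))  // load-acquire synchronizes
        ;                                           // spin until published
    return payload;                                 // guaranteed to observe 42
}
```

Dropping the acquire/release orderings (using relaxed atomics) is exactly the "not properly formed" case discussed below: a weakly ordered machine could then legally return a stale payload.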

Unknown Analyst

If it's weakly ordered, you can have different results. Like, you can get a different answer if you run it on a strongly ordered system like x86 versus a weakly ordered system like, say, ARM?

Phil Rogers

No, I don't believe so. A correctly formed program will give the right results either way. If the program is not properly formed, in other words, if a thread is trying to access results before a synchronization object has been signaled, then it's possible to get different results on different implementations, whether they're weakly or strongly ordered. But a correctly formed program will not give a wrong result on a different platform.

Unknown Analyst

Intel has used the concept of many small parallel x86 cores, not massively parallel but parallel, as a co-processor in the server market. Is that an alternative approach to HSA for heterogeneous computing?

Phil Rogers

I don't see it as an alternative approach to HSA. I think it's solving a different problem. You can almost look at it as a different approach to parallel computing in a legacy model, a different approach, for instance, than using a GPU. There is nothing that would prevent such an architecture from adding features and becoming compliant with HSA. And in fact, that would be an interesting direction.

Manju Hegde

I think we're ready with [indiscernible], MIC, Many Integrated Core.

Unknown Analyst

Two questions. First of all, I'm just trying to understand the AMD Fusion architecture here and how it compares with someone using a dedicated video codec or DSP or touch controller. Are you targeting a certain class of problems versus those SoCs that are out there? And is there a performance trade-off as you do that, versus using these probably more power-efficient codecs and so forth? Second question is, can you envision...

Phil Rogers

Can I take the first one before I forget? Because it's a bit of a complicated question. Okay. So what we're defining here is a platform and a memory system: how the different processors in the platform execute and how they share data and interoperate with each other. Within these slides, I dealt with CPU cores and GPU cores. However, the platform naturally extends to other forms of processors: DSPs, codecs, fixed-function units. They wouldn't necessarily use HSAIL, because they're not compiling through that parallel ISA. But in terms of sharing memory with common pointers, coherency and context switching, all of those apply, and the platform naturally extends to embrace those kinds of processors.

The other comment I would make is that as we go forward with SoCs in a very power-demanding environment, we're going to see more and more dark silicon. In other words, SoCs with a variety of processors, CPU cores, parallel cores and GPUs, fixed-function processors for particular codecs and other use cases, all of which interoperate in memory, and if they're not in use, they're power gated off; that's what makes it dark silicon. So it's a very natural way to go forward.

When you build a piece of silicon for one specific workload like a codec, the matched processor will always be the most power-efficient method. It becomes a trade-off for us, or for an OEM, of whether they want to spend silicon on one particular workload or whether they'd rather be able to run multiple different workloads across shared processors like the parallel processor in the GPU. Going forward, I expect it to be a mixture. So for instance, for the most popular codecs, you may have fixed function. And newly emerging codecs, or legacy codecs that are getting less use, will run on very power-efficient GPU cores. The next part of your question?

Unknown Analyst

Sure. And the second part was, do you envision systems that could use both x86 and ARM processors? I mean, there are systems that do that, but in terms of something more integrated going forward?

Phil Rogers

So your question, I assume, is whether I can imagine a system that has both x86 and ARM cores in it, working together?

Unknown Analyst

[indiscernible]

Phil Rogers

That's the question? The architecture supports that. I don't necessarily see a lot of business cases for having matched-capability processors with different ISAs ping-ponging together. But conceptually, it would work.

Unknown Analyst

Yes, my question was, would there be a value-add to doing that versus x86 or ARM separately?

Phil Rogers

I'm not immediately seeing the value of that. But as I said, the platform is architected to support it.

Unknown Analyst

You're not making assumptions about the bitness of CPUs that'll be supported within HSA, i.e., 32-bit versus 64-bit?

Phil Rogers

Yes, actually, we thought a lot about that and made some decisions on it. The platform will naturally excel with 64-bit processing. We are seeing burgeoning amounts of data, and even in mobile devices, smartphones and tablets, I see people hitting the 32-bit barrier very, very soon. It would be wonderful to make this 64-bit only and simplify the architecture. But the reality is, as we talk to partners, there is strong demand for 32-bit versions of HSA, and we will support it.

Unknown Analyst

Now, would you support mixed mode, or is there going to be a 32-bit version and a 64-bit version? I mean, if I'm an ISV and I want to distribute something that I've architected for HSA, am I going to have to release one version for 32-bit machines and a separate version for 64?

Phil Rogers

You're not going to have to do that. You could, for instance, release just a 32-bit version of your application, and that would run on both the 32-bit and the 64-bit platform. The likelihood is that application developers are going to want to take advantage of that 64-bit address space, and in that case, they may have to do 2 versions. Any last question? No? Thank you very much.

Copyright policy: All transcripts on this site are the copyright of Seeking Alpha. However, we view them as an important resource for bloggers and journalists, and are excited to contribute to the democratization of financial information on the Internet. (Until now investors have had to pay thousands of dollars in subscription fees for transcripts.) So our reproduction policy is as follows: You may quote up to 400 words of any transcript on the condition that you attribute the transcript to Seeking Alpha and either link to the original transcript or to www.SeekingAlpha.com. All other use is prohibited.

THE INFORMATION CONTAINED HERE IS A TEXTUAL REPRESENTATION OF THE APPLICABLE COMPANY'S CONFERENCE CALL, CONFERENCE PRESENTATION OR OTHER AUDIO PRESENTATION, AND WHILE EFFORTS ARE MADE TO PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS, OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE AUDIO PRESENTATIONS. IN NO WAY DOES SEEKING ALPHA ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN ANY TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S AUDIO PRESENTATION ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE MAKING ANY INVESTMENT OR OTHER DECISIONS.

If you have any additional questions about our online transcripts, please contact us at: transcripts@seekingalpha.com. Thank you!

Source: Advanced Micro Devices, Inc. - Analyst/Investor Day