Advanced Micro Devices, Inc.'s (AMD) Management Presents at Morgan Stanley Technology, Media and Telecom Conference (Transcript)

Advanced Micro Devices, Inc. (NASDAQ:AMD) Morgan Stanley Technology, Media and Telecom Conference March 2, 2021 12:30 PM ET
Company Participants
Mark Papermaster - Chief Technology Officer
Conference Call Participants
Joe Moore - Morgan Stanley
Joe Moore
I'm Joe Moore. I just need to read a quick Safe Harbor before we start. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative.
So, happy to introduce one of the real highlights of the conference for me, Mark Papermaster, who is the Chief Technology Officer at Advanced Micro Devices. Welcome, Mark.
Mark Papermaster
Hi, Joe. Thanks very much for having me here today.
Question-and-Answer Session
Q - Joe Moore
Sure. Thanks for coming. So, I'll just get right into questions. Obviously, AMD has had a really positive period of product development across both of its markets. Maybe we could just start with the importance of the breadth of AMD's product technologies, the importance of having both microprocessors and graphics under the same umbrella. You've emphasized you're the only company with both. How does that play out in a big picture standpoint? And so can you talk about some of the leverage points where that heterogeneous computing focus has helped you?
Mark Papermaster
Thanks, Joe. Great question. I mean, the first thing I'll do is just back up for a point of context and look at 2020. You look at it, and it was really such an inflection point for AMD, with higher-than-anticipated revenue growth. Our net income and our EPS more than doubled, and we had record annual cash flow. And we're looking ahead to another strong year in 2021.
And actually, it is based on the strength of that CPU and GPU, exactly where your first question is. And it's something that, from our standpoint, has been years in the making. We've been shipping CPUs along with graphics processors for many generations. Those embedded CPUs and GPUs work coherently, meaning that they operate on the same memory. And these APUs, as we call them, accelerated processing units, have been in our fully integrated offerings for PCs and embedded applications.
Now, we've been driving the industry, and really heterogeneous computing, to take the same approach that has been active and so successful in those markets for many years. It's now broadly recognized as a key driver of performance going forward in the data center.
And so that has us really excited, and the modular design approach that we implemented at AMD over the last five years puts us in a very, very strong position, because our IPs are flexible and reconfigurable in how we can put them together and optimize how the CPU and GPU work together. And we do this with our AMD Instinct GPUs.
You saw last year that we won highly competitive bids for the world's largest supercomputers. Again, those share data across the CPU and GPU; they optimize how the CPU and GPU come together. And so we're very excited to partner with HPE in delivering the world's largest exascale-class supercomputer later this year at Oak Ridge National Labs, and then later at Lawrence Livermore National Labs.
So, data centers are going to offer different configurations: CPU-only, CPU plus GPU, and other accelerators like FPGAs and ASICs. Heterogeneous computing is here to stay, Joe, and I think it's really key to accommodating the ever-growing data center compute needs.
Joe Moore
Great. It makes a lot of sense. I mean, talk about the design of Zen a little bit, and maybe with a little bit of context: five years or so ago, you guys first started talking about this. You said, we're going to take on Intel at the high end, and I thought, well, it seems like the right thing to do, but it seems hard to do.
And first-generation Zen outperformed them on some workloads, particularly multi-threaded ones. And it looks like with Zen 3, you're pulling ahead on single-threaded performance on really every workload. Can you just talk about that evolution? And how much planning was there? When you were first putting out Zen 1, how much were you already in the planning process for Zen 3 and Zen 4?
Mark Papermaster
Sure. And Joe, your question is spot on. It's a long design cycle for these types of leadership high-performance CPUs. In fact, it's a four- to five-year cycle. And really, one of the key aspects of AMD's resurgence in the industry is the change that we made around our CPU design.
Lisa Su and I made the decision years ago to consolidate our CPU efforts onto a single base, the Zen family of CPUs. Not just launching one design with Zen 1, as you said, but actually a road map from the very beginning.
Customers needed to bank on us as a trusted supplier. They needed to know that we could not only deliver performance and value with a new Zen microprocessor, but with, in fact, each and every generation. So, that's what we set out to do.
And we set out three generations of Zen processors from the very beginning. In fact, that original road map is now coming to fruition, as we started shipping Zen 3 at the end of last year with the Ryzen 5000. That's now in desktop, and just later this month, with our third-generation EPYC.
And as you said, Zen 3 is incredibly exciting because it's taking the performance crown not only in multi-threaded, where we already had a leadership position with our second-gen EPYC and our Zen 2-based products, but also in single thread. So, it's a very exciting time, and we don't stop. Zen 4 is in the completion phase and Zen 5 is in concept design. And so it's really all about staying focused on the road map, being there for our customers, listening to them, and folding that in. And that's why we have hundreds of millions of Zen cores in the market today and growing demand for our products.
Joe Moore
Yes. It has been an impressive evolution. Maybe you can talk a little bit about manufacturing. I don't think, when you started this process, you thought you'd have a lead on process technology. Your strategy is to be an early adopter of TSMC's process, but not necessarily the earliest, not on the bleeding edge, and yet you've emerged as being in the lead. How much of that is part of your strategy? And how are you thinking generally about process technology? And not just with regard to Intel, but also when you see Apple introducing ARM into a PC environment on an even more advanced process geometry. How are you thinking about the speed of adoption of process?
Mark Papermaster
Absolutely, Joe. A very, very key element of our strategy. In fact, there's really nothing more important to our customers than that consistent execution and delivery of our road map. It's how we earn the trust of our customer set, and it's what's driving our share gains. And process technology is such a key component of our execution track record.
So, what we do in really each of our CPU and GPU product plans is line up the technology node, the foundry node, that's going to deliver the best performance, along with the confidence to hit our yield and supply demands and the reliability that we need to provide. We put all of that together into our planning. It's not the same timeline as a smartphone chip. Given that our chips go into applications that customers are banking their business on, we need a bit of a delay so that the technology node has some maturity.
And the way we do that is to work with the leading-edge foundries in a very, very close partnership. If you look at what we've done with TSMC, it's what we call design-technology co-optimization. We started this mode over five years ago with them, and we aligned to bring a leadership x86 processor for server and high-performance desktop to 7-nanometer.
And we kept that going forward. 7-nanometer was huge for us. It allowed us to double our core density in the same power envelope that we had. It allowed us to bring very impressive gaming performance in our graphics, our Radeon road map. And 7-nanometer and its 6-nanometer derivative are what we call a long node in the industry, so they've had a lot of staying power. And we're right on track with 5-nanometer; we have called out that with Zen 4, we'll leverage 5-nanometer.
Joe Moore
Okay, great. And then in terms of supply and I got to tell you, we get questions from the webcast and this is the first time I've had dozens of questions on the same topic, which is when are we going to get more graphics cards? When are we going to get more supply? So obviously, we've been in a very tight supply environment for all foundry customers and certainly for AMD. What are the bottlenecks there? And is it foundry wafers? Is it substrates? And when do you sort of see supply catching up to demand for some of these products?
Mark Papermaster
Well, I'll tell you, it was unprecedented growth in demand that we had in 2020. We had planned for higher demand, and even that was outstripped by the real demand over the course of the year. So what we did was add incremental capacity throughout the year as our businesses grew faster; you've got to be very quick on your feet. And we saw pockets of supply tightness, largely at the end of the year, in our low-end PC and gaming spaces.
And what we've done, for us, it's the same formula. It's about a very, very close partnership and collaboration. That's what we've done with TSMC, as per my previous comments, but also with the rest of our industry manufacturing partners. And when you're launching as many new products as we are, it's important to think about the whole ecosystem. It is not just the wafer supply. That is the starting point and the fundamental base, but it indeed extends across the supply base, from the substrate to the embedded components on the packaged device.
So demand continues to be very high across the industry in 2021. So what are we doing? We're ramping our overall supply. We're very confident in our ability to grow despite the tight supply environment.
Joe Moore
Yes. Okay, great. Well, like I said, just based on the questions I'm getting on the webcast, there's a lot of demand for your products, particularly gaming products, once that supply catches up. Maybe we can talk a little bit about servers. Obviously, there are pretty high barriers in the server business, and you guys have done really well. It's been a journey, as you've said. Can you talk a little bit about that process of entering the server business and penetrating it, really starting with cloud service providers, where you've done very well? How are you seeing that business grow within the cloud service providers? And then ultimately, how does that turn into on-premise server-type business for you?
Mark Papermaster
Great question, Joe. And you're absolutely right. You can't just show up in the data center as a me-too and expect adoption. As we launched our first- and second-generation EPYC server processors, we took a very targeted approach to drive adoption with disruptive performance. It was higher core count, more memory, more I/O, and putting all of that together in a way that was immediately implementable in the x86 ecosystem.
And it's really this disruptive capability that drove our success, along with very strong customer support. And our momentum did indeed grow very quickly, first with cloud; that's where new technology is adopted fastest. And we spent a lot of time working with our cloud partners to optimize performance, particularly with Rome, our second generation, as we saw this expanded deployment.
And the thing that you see is that enterprise is a little slower to move, but you already saw that pivot with our second-generation Rome products being offered across a broad set of enterprise OEMs, in multiple product configurations. And the choice of EPYC configurations is growing again with the third-generation Milan, which will be launching later this month but has already been shipping to select accounts since the end of last year.
And again, it's this combination of not only multi-thread performance but single-thread performance leadership, dropping into the exact same electrical socket and server configuration that we had in our previous generation. So it's incredibly easy to adopt. And that's why you're seeing already with second-gen, and will see continue right into third-gen EPYC, this growth in adoption of our EPYC server platforms.
Joe Moore
And is it different when you talk about cloud versus enterprise? It seems like there's some reluctance to transition architectures in the enterprise business; it takes a little bit more time. And then even within the cloud business, I feel like there are sort of the internal workloads, maybe consumer-facing type workloads, versus the enterprise-as-a-service side.
Are those fundamentally different? And I know you're doing well in all of them, but is it a different selling process? And with this idea that you sell it to the cloud service provider and they're selling it to their enterprise customer, are you sort of selling it twice? Where are we in that process?
Mark Papermaster
Well, what's different is that certain internal workloads are actually quite large volume at cloud service providers, given obviously the breadth and depth of these companies. And so those typically do require additional tuning to those specific applications, simply to get the most out of them. But we do the same with enterprise.
Enterprises typically are leveraging very common applications, whether it be their online transaction processing workloads, running their business, the analytics they're running, et cetera. And so we optimize for both: the predominant internal applications and the external properties. The external properties in the cloud quite often mirror what you see on-prem. And in fact, Joe, you're seeing a lot of the cloud providers offering configurations that can actually be put on-prem, to even ease the on-prem-to-cloud transition. Most data centers are going hybrid, a combination of on-prem and cloud.
Joe Moore
Okay. Great. And then in terms of competition in this space, maybe within servers but broadly speaking, with Intel. We get a lot of questions on the transition within Intel and the new CEO. Obviously, you didn't expect to have the luxury of Ice Lake being two years late. So what's your expectation? What are the assumptions you guys make about Intel's innovation from this point forward, and about your ability to compete with them on an even playing field from the strong position that you've built?
Mark Papermaster
Our view of the competition really doesn't change year after year, because what we always do is assume that our competition will deliver to their plans. It's such a fiercely competitive market that we have for computing, across CPU and GPU, that we simply have to be relentless in our pursuit of high performance and our AMD differentiation.
So when you look at the adoption of AMD products across enterprise, cloud, and HPC, what you see is a really growing number of AMD instances and installations. And it's based on that road map delivery. You look at the second-gen road map that was designed to compete against Ice Lake, and here we are launching third-gen EPYC right now based on Zen 3, adding even more performance on top of Rome.
In fact, second gen and third gen will be in the market coincident. And we're on track with fourth-gen EPYC to go to market in 2022. As for our thinking about how to address the competition: there will always be strong competition, and if someone stumbles, they're going to come back. So we typically approach everything the same way: stay focused on the road map and have a very, very strong offense; that's the best defense against whatever the competition may bring. We're just going to stay focused, listen to our customers, and drive improvements very, very aggressively into our road map.
Joe Moore
I mean, it does seem, though, that their stumbles have created much more consideration of AMD from a different set of customers than might have been considering you early in your road map. Is that fair to say?
Mark Papermaster
Well, as we came in, and I'll say particularly in the data center, Joe, as I said earlier, you can't show up, be a me-too, and drive rapid adoption. You have to have considerable differentiation. And we planned that differentiation. We planned a very, very clear performance value proposition that we brought to the table. And in fact, it's been greater than we had anticipated, and I think that has driven the adoption of AMD in the data center.
And you've seen the same thing across other markets, let's say desktop, where we have a commanding performance lead. If you look at the mobile market, also with the new Ryzen 5000: just phenomenal battery life and performance. You can run AAA game titles on those PCs. So it's all about what you bring to the table and the differentiation that you can provide to customers.
Joe Moore
Great. Yes, some really impressive products. Maybe we can shift a little bit to talk about ARM in the compute space. I know you've had ARM products in your history. You elected to focus on x86 in the server business, which was obviously a decision that worked out well for you. But now there's a growing, or reborn, consideration of ARM in the data center with some of the custom products the cloud guys are coming out with, and now you've seen ARM being successful in the client business through Apple. What's your thinking architecturally on x86 versus ARM? And do you have to think about different architectures for different ecosystems over time?
Mark Papermaster
Well, at the highest level, Joe, it's a trend that I've been communicating for some time, and that is that Moore's Law is slowing. We just don't get out of each semiconductor technology node what we used to, yet the demand for computing growth is the same; it's that same exponential growth. So it is driving innovation across all of the markets.
And we've been a big part of that. We've been driving disruption with the products that we put out there. We have always believed that it's not a vanilla, one-size-fits-all solution. x86 homogeneous compute was the predominant data center architecture, and you're seeing data centers now move to heterogeneous compute farms.
We talked about that a little bit earlier in this dialogue, and it's just a necessity going forward. There will be more variation in compute solutions. The fact is, the tent of solutions has to grow bigger because the demand for computing is so high. It's going to take tailored solutions, and it's going to depend on the application and on the workload.
PCs have been x86-based, and you can see that with our Ryzen 5000, which I talked about a minute ago, with its commanding battery life and performance. Yet if you look at the Apple M1, it's a very tailored device. It's tailored for macOS, and it's bringing very, very strong performance. That's going to, in turn, drive innovation in the Windows ecosystem.
Now that's good for everyone. We partner very, very deeply with Microsoft. You can look at what we've done with game consoles, and it's going to drive even more collaboration in Windows to bring a broader set of experiences.
Again, everyone wins. And likewise in the data center: when we were originally designing our first Zen family, we had a dual x86-and-ARM approach, and we determined that x86 had a full ecosystem and could drive the most rapid adoption. We also determined that it was not about the instruction set architecture; it's about the overall performance and total cost of ownership that you can provide.
And so we're looking, and we have anticipated our competition. There have been several runs at ARM in the data center. And we do expect that for targeted workloads, you can tailor devices and provide advantage.
And again, our focus going forward is to leverage a very, very strong road map and to ensure that with x86, with our CPU and GPU combination, we continue generation after generation to provide that broad set of value to our customers. That's what will fuel our continued growth.
Joe Moore
Great. So shifting gears a little, maybe we could talk about CDNA and your data center GPU compute strategy. You're out with first-generation products now. And obviously, NVIDIA has gotten quite a bit of enthusiasm for its GPU products in the data center for areas like machine learning.
And AMD has gotten traction in data center GPU as well, but it's been more in areas like virtualization and gaming. But with CDNA, you're now in the mix in terms of data center compute. Can you talk about that and where you see it evolving over the coming generations of CDNA?
Mark Papermaster
You bet. Our data center GPU business is relatively small today, but in fact, it's very strategic to us in the long term. I talked earlier about the advantages we see when you can design from the outset how the CPU and GPU work together, how we leverage our Infinity Fabric to bring that kind of advanced performance and coherency across the two in our road map. And it takes that kind of optimized system to address the new computing demands going forward.
In this exascale era of computing, that's a billion billion floating point operations per second. It's starting with the national labs, as we talked about earlier, but really that level of computation is becoming much more widely required. You're seeing these high-performance compute instances popping up now at the cloud service providers.
And what you're seeing is that it's not driven only by traditional HPC workloads, but by AI. Artificial intelligence is driving such a huge compute demand, and it can leverage heterogeneous computing very, very efficiently to drive more performance for end customers.
So, we look at this, and we're very excited about these heterogeneous CPU-and-GPU workloads of the future, and we're partnering with the industry. We have a very, very strong CDNA road map. Our first launch was the AMD Instinct MI100 in 2020, and we're very excited about CDNA.
We're coming up on CDNA 2, which we'll be launching for exascale computing, and then our continued road map and investment there for the future. It's going to be a very strong growth vector for us as we continue to grow our installations and grow the whole software ecosystem around our GPUs in the data center.
Joe Moore
And on that software ecosystem point, your competitor certainly is enthused about CUDA, its proprietary middleware stack, and the other software products it has supporting that business. You guys have more of an open-source approach to data center GPU. Can you talk about the merits of those approaches?
Mark Papermaster
You bet. Yes, we are very committed to an open-source versus proprietary approach. Look at what we've done to enable our customers: we use an open-source compiler, LLVM, for both our CPUs and GPUs. We have created the Radeon Open eCosystem, ROCm. So all of the enablement capabilities are there, whether you're running high-performance compute applications like oil and gas or these national lab installations, or whether you're running AI with TensorFlow, PyTorch, or ONNX.
For all of these, we have created enablement that's open source, and that's gotten a very strong reception from our customers. Our first focus on HPC in the data center, and the preparation for these exascale computers, the Frontier system at Oak Ridge and El Capitan at Lawrence Livermore, has really accelerated that software ecosystem.
And likewise, we're seeing the same momentum in the development of our applications for AI. So it's a very strong development focus for us. We started years ago, and now we're in production releases; in fact, our third production release of the Radeon Open eCosystem.
Joe Moore
Great. So maybe we could just wrap up by tying all this together, as our final question. If I ask you about the future of data center compute, how do you see the technology landscape playing out? And who do you see as the winners and losers long term?
Mark Papermaster
Well, I think it's clear to all that computing has of course always been important, but now it's at the center of everything we do. The pandemic hit, and we've all been working remotely, relying on our collaboration tools running off of the cloud.
You look at businesses thinking about their continuity. They need not only the ability to run applications wherever they are, which typically means you run a digital transformation and become more cloud-based in those applications, but it has also driven growth in PCs.
The PC market, which had been declining, rose significantly in 2020 and is projected to grow even further in 2021. And looking at what else is coming going forward, many applications that run in the cloud today need to be run closer to the source of the data. So we're seeing an acceleration of this trend from cloud to edge.
You're seeing a growing requirement for the ability to move your workloads from on-premise to the cloud, or to the edge. And all of this means you want workloads that can run seamlessly across platforms.
And again, that's key to our investment in not only providing this differentiated compute capability, but also a software enablement which is open and can be deployed across these markets.
And so I think, overall, Joe, we've been unique in calling out the need for high-performance technology. We focused our road map to bring the highest performance across our CPUs and GPUs.
And yet, we've also called out that going forward, there is a need for diversity. It's not just about CPUs and GPUs, but marrying them with FPGAs, with SmartNICs, with ASICs. And we're also very supportive of standards that allow partners and competitors in the industry to play together.
So it's a big tent of compute solutions. The demand is growing exponentially; that's not going away. And so we believe you're going to see more heterogeneous, workload-optimized compute for customers. And that's what we are focused on providing at AMD.
Joe Moore
Great. Well, thank you so much. We're out of time, and I've got literally hundreds of questions from the webcast that we didn't have time to get to; I apologize to everyone for that. But Mark, congratulations on your success, and thanks very much for your time today.
Mark Papermaster
Thanks for having me here today, Joe.