AMD's Technology Roadmap, And How It Relates To Investors


Summary

AMD's current technology will not allow the company to stop losing money.

Any AMD investor is betting on future technology to reverse the steady losses the company is facing.

AMD's GPU market share will increase meaningfully in 2016, with many technologies intersecting to make AMD's GPUs attractive products.

Many investors in AMD (NYSE:AMD) realize the company has no chance of turning around its financials with the technology it is currently selling. The Bulldozer line is simply poor, despite Carrizo's improved low-power performance. The GPU line is not as poor, but it is still behind NVIDIA's architecture, particularly under the DX11 API. Consoles are now entering their third year on a CPU architecture that appears to be discontinued, with talk of the next generation of consoles already starting. There is no doubt that the future of the company rests firmly on the technology roadmap, because the company will not stop losing money with the current product lineup. That much it has proven over several quarters.

I want to preface the rest of the article by stating that this is speculative, and no one knows exactly how things will unfold. So view the article through that pane of glass, and base your decisions on whether or not you agree with my reasoning. If you do not, you may be right.

Let's first start where we are. AMD has essentially three technologies of significance, the best of which appears to be discontinued (Jaguar/Puma). There is a lot of commentary out there claiming AMD must make inferior parts because it does not have the R&D budgets of Intel (NASDAQ:INTC) and NVIDIA (NASDAQ:NVDA), and that the continuing cuts jeopardize product quality. If you see this, I would recommend taking any accompanying "facts" with a grain of salt, because it is not accurate. Developing a processor or a GPU does not take an infinite amount of cash, and more cash does not necessarily make for a better processor. Processors are expensive to design, but companies like AMD have more than enough to fund them, and would never compromise their most important products. The cuts come elsewhere, which we will explore.

If one needs any proof of this, consider Intel's badly designed but well-funded processors, such as the Pentium 4 and the current Atom line. In both cases, AMD's processors (Athlon XP/64, Jaguar/Puma) were better designs, despite AMD being considerably smaller than Intel.

Where AMD's cuts do show up is in less essential technologies, at least as the company judges them. Perhaps the most painful is the death of the Jaguar/Puma line, which by all appearances will not move beyond 28nm. This is arguably the most successful product family the company has released, and it is probably why the company is still in business. Not only has this processor family sold well into the PC market, it has also been used extensively in thin clients, networking devices, and, most importantly, the semi-custom chips used in the consoles. If Intel had technology this good, would it let it die? Clearly not, as it keeps moving the inferior Atom architecture forward.

We can also look at AMD's one-time shining star, the K12. A few years ago, this processor was all the talk at AMD, particularly given how happy the company was with the superior instruction set and the performance it would enable. Now it has been pushed back to 2017 and is rarely mentioned, the focus being on Zen. This is yet another example of the R&D budget not being large enough to handle everything.

Our last example relates to AMD's GPU lineup. While the Fury line has produced small improvements in market share, it is important to note that, at last glance, that share was still below 20%. This is unhealthy and unsustainable. AMD did not have the budget to both design a new 28nm architecture, as NVIDIA did, and put out a 14nm product in a timely manner. The company decided to leverage a technology it had already developed, HBM (high-bandwidth memory), along with software (Mantle), to disguise the fact that the architecture is very old, with some degree of success.

Zen and Polaris will not suffer consequences from the R&D cuts; those cuts have been paid for elsewhere. This does not guarantee their success, of course, but neither does it guarantee inferiority.

It's also important to note where AMD has been expanding R&D, not just maintaining it. This primarily involves software, an area where the company has historically been behind and has suffered serious consequences as a result. We will start with Mantle, which is by far the most significant of these developments, and the one that gives AMD the greatest benefit.

To understand Mantle, we need to look at the hardware it is intended to enhance. There is an old saying: computers are not smart, they are high-speed idiots. This is particularly true of GPUs, which lack the complexity and flexibility of a CPU. On the other hand, they handle parallel tasks extremely efficiently, far more so than CPUs, so in such workloads GPUs are far superior. AMD's focus on HSA and consoles took it in a different direction from NVIDIA, which focused primarily on DX11 performance while keeping die size and power use to a minimum. AMD made more complex and "intelligent" GPUs, capable of greater programmability and GPGPU performance. There was a cost, of course, and AMD GPUs looked highly inefficient when paired with a crippled API like DX11. But it was highly beneficial in winning the console contracts and in showing the benefits of HSA.
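To make the parallelism point concrete, here is a minimal, illustrative GPGPU sketch. It is written in CUDA purely because that is the most familiar GPGPU syntax; the same idea applies to OpenCL or HSA kernels on AMD hardware, and nothing here is specific to any one product. The CPU version walks through an array one element at a time; the GPU version launches roughly one lightweight thread per element:

```
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread handles exactly one array element.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // about one million elements
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // CPU version: one element after another, in a serial loop.
    for (int i = 0; i < n; ++i) hc[i] = ha[i] + hb[i];

    // Device (GPU) buffers.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // GPU version: ~one million threads each run the same tiny kernel.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

The point is not the arithmetic but the execution model: the GPU keeps thousands of such threads in flight to hide memory latency, which is exactly the throughput-oriented work a design like GCN is built for.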

Mantle was intended to facilitate the use of the features available in AMD's GCN architecture. Although Mantle itself has largely been deprecated, it spawned both DX12 and Vulkan, and both of those, unlike DX11, can fully utilize GCN. These newer APIs are also much closer to the console APIs, allowing for easier migration of code back and forth. Since the GPUs in the consoles are architecturally very similar to AMD's PC GPUs, this clearly favors AMD.
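One of the GCN capabilities the newer APIs expose is feeding the GPU from multiple independent queues (what DX12 and Vulkan call asynchronous compute, which GCN's hardware scheduling was designed for), rather than funneling everything through the single, driver-managed path DX11 effectively imposes. The sketch below illustrates only the queueing concept, again in CUDA for familiarity, since streams are CUDA's rough analogue of independent submission queues; it is not DX12 or Vulkan code, and the kernel names are purely illustrative:

```
#include <cstdio>
#include <cuda_runtime.h>

// Two independent workloads that do not depend on each other's results.
__global__ void graphicsLikeWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

__global__ void computeLikeWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Two queues: independent work submitted to each can overlap on the GPU,
    // instead of serializing behind one queue as older APIs effectively force.
    cudaStream_t q1, q2;
    cudaStreamCreate(&q1);
    cudaStreamCreate(&q2);

    int threads = 256, blocks = (n + threads - 1) / threads;
    graphicsLikeWork<<<blocks, threads, 0, q1>>>(a, n);
    computeLikeWork<<<blocks, threads, 0, q2>>>(b, n);

    // Wait for both queues to drain before using the results.
    cudaStreamSynchronize(q1);
    cudaStreamSynchronize(q2);

    printf("both queues finished\n");

    cudaStreamDestroy(q1);
    cudaStreamDestroy(q2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

Under DX11, two jobs like these would generally end up serialized by the driver; under DX12 or Vulkan, a GCN part can run them side by side, which is a large part of why the same silicon looks so much better under the newer APIs.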

With Microsoft (NASDAQ:MSFT) pushing Windows 10 so aggressively, DX11 has started its decline. AMD already has an architecture that is well utilized by DX12, while NVIDIA will be scrambling to produce its first GPU that is. I have read articles suggesting AMD will be lucky just to hold market share once both new generations are out; these are generally uninformed opinions based on DX11 performance. Given what we have covered, it should be clear that NVIDIA's advantage in DX11 will not help it in DX12. AMD's greater experience in developing the "smarter" GPUs that DX12 better exploits may give the company an edge, as this will be its second generation optimized for the newer APIs. It is also important to point out again that the consoles use AMD technology, so ports can benefit from that, and this matters even more with the newer "closer to the metal" APIs. I would expect mobile graphics processors to benefit the most, as their raw performance will be somewhat similar to the consoles'.
[Slide from AMD's Polaris presentation: "Console-caliber in a thin and light notebook."]
In fact, if we look at the slide above, we see it says "Console-caliber in a thin and light notebook." Clearly, getting console-level performance in a laptop is a useful goal, and considering the architecture is very similar to the consoles', it does give AMD some synergy. Although this chip, Polaris 10, might not be the card enthusiasts want, AMD was careful to mention there will also be a Polaris 11, which is very likely a powerful discrete GPU. However, given how favorable the power characteristics of the 14nm FinFET process are compared to 28nm planar, the biggest improvement will be in mobile scenarios, and it is little wonder AMD's demonstration showed a mid-range chip. Whichever company gets to FinFETs first should have a very easy time in mobile while the other scrambles to catch up. Right now, it looks like AMD will be first, with a release in mid-2016.

So, as investors, what does this all mean? AMD will almost certainly gain market share on the GPU side in 2016. Even with the same technology, AMD's software initiatives, particularly the work that led to DX12 and Vulkan, have already eroded whatever advantage NVIDIA has in GPU technology, and that erosion will only continue. My guess is that AMD's technology will be better than NVIDIA's, simply because the company has much more experience with these more "intelligent" GPU architectures and likely understands the bottlenecks better than a company coming from a DX11-based design. Add AMD's experience in developing APIs and the massive amount of console software, and meaningful market share gains look like part of the 2016 story.

AMD is also very vocal about Virtual Reality/Augmented Reality, believing this market will have a TAM of about $150 billion by 2020. To me, it's just too soon to judge how this market will evolve, and how much of it AMD will capture. We have all heard how the Internet of Things was supposed to explode, and it has not yet. Or how 3D TV was supposed to be the greatest thing since cheddar cheese, but that really has not panned out too well. So, while I cannot say this will not become a profitable market, it is way too soon to invest based on this particular segment. Hopefully that will change.

Does all of this mean AMD will pass NVIDIA in market share? Not at all. But I would be very surprised to see AMD's share below 30% by Q4 of this year, and I think 40% is the over/under from where we are now. Anything is possible, as always, but as investors, that should never stop us from choosing the most likely outcome as the basis for our investments.

Unfortunately, CPUs will not benefit from the new GPU architecture until 2017, but that will be the topic of a follow-up article later this week, where we'll discuss CPUs and semi-custom as they relate to a somewhat more distant time frame.

Disclosure: I am/we are long AMD.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.