AMD (NYSE:AMD) recently hosted the company's Core Innovation update to share details of future chips.
In this article, I would like to explain how offering both flavors of chips, x86 and ARM, greatly expands AMD's TAM; how the company has managed to do this while keeping expenses minimal; why my time frame on this isn't near term; and finally how this ties back to AMD's semi-custom business model to show how pricing power is on AMD's side. This article can be thought of as Part 2 of an article released several months ago.
Ambidextrous, SkyBridge, and Android
All slides taken from AMD's Core Innovation slide deck. Note that specific names for the x86 and ARM variants of the SkyBridge platforms have not been divulged, so I am using the SkyBridge term to describe both.
The short of AMD's ambidextrous strategy is that it will allow the company to address multiple markets in a way that no other chip vendor can, that is unless Intel (NASDAQ:INTC) decides to follow suit. AMD will initially offer ARM Holdings (NASDAQ:ARMH) based chips for servers in the back half of 2014, with SkyBridge ARM SoCs coming in 2015.
The importance of the ambidextrous approach is most apparent when viewing the available revenue for each of these markets. Above, you're seeing the x86 market crest and start to plateau as the revenues by the ARM Holdings players have been expanding. The majority of this growth is driven by the rise of smartphones and tablets.
AMD will be able to address both the x86 and the ARM market going forward. And, very importantly, AMD expressed an explicit interest in running Android natively. Prior to this, AMD's largest Android pushes have been via the BlueStacks emulator.
Project SkyBridge is AMD's first true Android platform and, importantly, is a platform that is HSA compatible. Other than Imagination Technologies (OTCPK:IGNMF), as far as I am aware no other hardware vendor has announced specific plans for HSA hardware support (source: AMD's APU2013 conference via EETimes). I will describe the immediate and potential benefits of HSA a little later. Given this, it's likely we'll see AMD first out of the gate with an HSA compatible chip in the Android space.
Supporting alternative operating systems is a huge departure for AMD. Rather than putting all eggs in the Windows basket, AMD is branching out and actively announcing support for a rival platform.
I think the downside here is easy to see when you look at the nearly non-existent inroads AMD has made in the tablet space. There are only two x86 players, so the pricing pressure between AMD and Intel isn't as high, which allows x86 chips in general to command a price premium. AMD is used to competing on price with both Intel and Nvidia, but the low power ARM client space is a crowded playing field.
The ARM cores have also been designed from the ground up to be small (and therefore low cost), as well as low power. Intel's contra revenues are aimed at offsetting BOM costs for OEMs, making up for the required platform differences between Intel's x86 chips and similar ARM chips. With OEMs surviving on razor thin margins, these small differences in platform costs represent large percentages of profit to the vendor.
In aggregate, I feel this move will expand AMD's TAM, but competition against the myriad of ARM players that are used to competing on razor thin margins will be somewhat uncharted territory.
The above chart depicts specific use examples in the embedded market for the SkyBridge chips.
The good news, if you're AMD, is that Wall St. has set the bar pretty low by expecting CS (Computing Solutions) revenues to contract faster than the overall market. In this sense, it's a win even if Android design wins are only enough to hold CS revenues roughly constant, provided AMD doesn't exceed OPEX targets during development.
AMD: Progress Despite Cutting Opex
By far, I have been most impressed with AMD's delivery and execution while simultaneously scaling back OPEX. AMD has dropped operating expenses to a target level of $420M to $450M each quarter, and has hit these targets while ramping both console chips and beginning development, from "blank sheets of paper," on two separate cores.
It was enlightening to hear Mr. Jim Keller finally speak. His explanation was that high performance computing is high performance computing, regardless of the architecture. In his words, "the ISA is actually important," but developing two chips from scratch doesn't necessarily require two times the resources. One AMD team, being led by Mr. Keller, is developing the cores in tandem.
From the sounds of the discussion, SkyBridge chips will be physically similar in layout, with the big difference being the actual design of the CPU core. For the SkyBridge platform, AMD will be reworking both the PUMA cores and the A57 ARM cores to fit in the same footprint on the same SoC. This will allow pin compatibility between the chips.
Mr. Paul Teich of Moor Insights has an excellent read on the importance of how and why SkyBridge makes sense. His article focuses on servers and OEM/ODM costs. A perfect quote from his article summarizes my statement that AMD has managed to cut OPEX while simultaneously pushing forward with innovation:
On Monday AMD clarified a lot of their direction for that strategy. AMD gave us an interesting look at forward-looking SoC and systems architecture and also at frugal use of design resources via core co-development, design element reuse, and common hardware and software interfaces. While these will reduce AMD's R&D costs, they will also have a beneficial impact on AMD's ability to rapidly customize products for specific markets and to leverage SoC peripheral software driver development between their x86 and ARM products.
To draw a parallel with Intel's Android efforts, one of the advantages of Bay Trail is that it allows OEMs and ODMs to design one platform capable of running both Windows and Android. However, the issue here is that Android doesn't run natively on x86 without some additional work. For the record, the information used in this article is from ARM Holdings, but it shows the overhead associated with getting Android running on x86. Despite Bay Trail launching last year, there are still very few, if any, Bay Trail Android tablets on the market, making it hard to verify these results. If these results hold true, Intel has to deliver a product that exceeds ARM's performance by enough to make up for the overhead of running code that has been translated for x86.
From AMD's standpoint, the company is developing an ARM processor that will be capable of running the code natively, negating the overhead that x86 faces. In this light, I completely understand why AMD hasn't tried to get Android running on the x86 cat core family.
So by utilizing one team to design two separate cores and developing a set of chips that are pin compatible, AMD is able to bring up two cores starting from blank pieces of paper, utilize commonalities between the chips to drive down developmental costs, and create a unique platform that is capable of running either x86 or ARM workloads natively by switching out the chip. The incentive here is that it saves OEMs and ODMs in developmental costs, while simultaneously allowing AMD to deliver a product capable of running workloads natively with no associated overhead.
Understanding the ARM Server Push
AMD demoed the first ARM server running the LAMP stack. LAMP can be thought of as a shorthand way of describing the software required to run a web server. LAMP stands for Linux (the operating system), Apache (the web server), MySQL (the database), and PHP (the scripting language).
All of this software works in concert to serve up web pages as we know them. AMD's demo featured hosting both a WordPress blog and a video stream.
One thing that should be understood is that there are inherent limitations in regards to executing software.
Amdahl's Law is a principle used to calculate the potential speedup of code when it is parallelized (source: Princeton.edu); the serial portion of a workload caps the benefit of adding more cores. So for big data, multiple smaller processors may not be well suited to the task, and it's Intel's lead in high performance computing at low wattages that has driven down AMD's server market share. For client devices, the difference in power consumption between AMD and Intel means you may have to recharge an AMD product an hour or two sooner. In the data center, it means much higher electricity costs and a more complicated power transmission infrastructure.
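To make the principle concrete, here's a minimal sketch of Amdahl's Law (my own illustration; the parallel fraction and core counts are hypothetical, not measurements of any real workload):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's Law: speedup = 1 / ((1 - p) + p / n).

    The serial portion (1 - p) caps the speedup no matter
    how many cores you throw at the problem.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A workload that is 90% parallel gains only ~6.4x from 16 small cores,
# and can never exceed 10x even with infinite cores.
print(amdahl_speedup(0.90, 16))
```

This is why a few fast cores can beat a sea of slow ones on workloads with meaningful serial sections, while embarrassingly parallel tasks, like serving many independent web requests, favor lots of cheap cores.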
AMD's demonstration from the Core Innovation conference showcased usages, blogging and video hosting, that are well served by cheaper processors. These workloads are more about passing data between locations than number crunching. The significance is that AMD demonstrated an end to end case of a website running on an ARM server, meaning the software is in place for this to occur. Intel's parts look to have a performance/watt advantage over the off the shelf A57 cores AMD will be using, but I do not believe these cores are meant to make meaningful inroads into the data center.
Earlier this year, Mr. Feldman delivered a keynote at the OCP (Open Compute Project) Summit, where he introduced the ARM Opteron chip. All slides in this section are taken from Mr. Feldman's keynote unless otherwise noted.
One of Mr. Feldman's favorite slides is the one above, depicting the difference in the number of smart connected devices between two separate Papal inaugurations. There has been a massive boom in smart connected devices over the past few years, and each of these devices relies on processors that are relatively weaker than those found in notebooks and desktops.
These devices are useful because they are fed data by more powerful processors in the cloud.
Source: AMD via Slideshare
Video is one of the fastest growing workloads in mobile. Although the above slide pertains to H.265, a CODEC designed for resolutions beyond HD, the graph usefully depicts the growth areas of mobile data. Smartphones aren't very smart if they're not fed by the cloud, and workloads like video are growing quickly. Photos and videos take up many times more storage space than text data, and as more devices come into use, the cloud will have to grow and expand to push data to them.
The rise of all this mobile data is a recent occurrence, spurred on by the advent of the tablet and smartphone. In Mr. Feldman's keynote, he asserts that dense servers will represent roughly 10% of the server market by 2017 and 25% by 2019. Also note that the keynote from January states that OEMs, ODMs, and large customers want ARM to win. I will come back to this point in the next section.
The types of workloads needed to support smart devices represent the chance for disruption in the server market. They do not necessarily need the highest performance.
Right now the ARM server infrastructure is in its infancy, so AMD is facing a chicken and egg problem. Rather than waiting to see which comes first, they've simply provided customers the egg and are working on the chicken.
In 2014, I doubt we'll see much revenue, if any, from ARM servers. The first chips AMD is releasing have very little in the way of customizations and optimizations; they seem mainly intended to be first to market and to demonstrate to customers the feasibility of ARM in the data center - the egg.
In 2015, we're likely to see more customized ARM cores that could possibly support AMD's Freedom Fabric technology for servers - the chick. Notice that AMD distinguishes between the 2014 and 2015 A57 cores with the low power moniker. Also, an article from EnterpriseTech explains that AMD decided to forego Freedom Fabric on the first generation ARM Opteron and that the second generation product will include it. This is why I feel the second generation ARM Opteron will be the one that attempts in earnest to enter the data center.
In 2016, AMD will be releasing the custom ARM K12 cores, as well as a new x86 core - the chicken. This will be a custom core that will likely focus on higher performance than the standard A57 cores.
During the Innovation conference, Jim Keller referred to AMD as having "high performance" in the company's DNA. Most of the other ARM players have experience in creating chips, which operate in small power envelopes designed for mobile devices. AMD, on the other hand, has much more experience in high frequency and higher power designs.
Based on this, I expect very little server growth in 2014, with a potential ramp in 2015, though that depends on whether AMD pushes the customizations on that chip. During the Core Innovation conference, AMD's CEO referred to this mentality as "catching a wave." Dr. Su added that AMD picked this specific time frame to announce these products because 64 bit ARM computing is becoming more commonplace. I view the initial 2014 ARM chip as more of a development platform, and I am taking a wait and see approach to 2015 and 2016.
Given AMD's attempt to spearhead this shift by providing the tools and ecosystem to push ARM into the data center, along with its prior experience in the server market, I think it's quite likely AMD could capture a much higher relative percentage of the ARM server market, provided ARM makes at least some inroads.
Taking an additional quote from Mr. Teich's article (linked to above) explains why AMD is better positioned to take advantage of any headway made by ARM chips than rival ARM vendors:
AMD has a long history of analyzing x86 server workload performance, and has assembled a competitively differentiating collection of test vectors over the last dozen years. There is not a similar body of server test vectors for ARM anywhere in the industry, which puts AMD in a unique position in the ARM server competitive arena - it will take years for their competitors to generate similar insight into server workload performance and then translate that insight into an optimized processor core architecture and overall SoC design.
Why Would Customers Want ARM To Win?
The lack of competition has been stifling the traditional server market. AMD owned approximately 25% of the server market in 2006. Since that time it has dwindled to around 3% (source: ZDNet). Mr. Feldman explained that the Bulldozer core responsible for this cost several people within AMD, including the CEO, their jobs. Intel's offerings have been extremely good and Intel has managed to essentially create a server monopoly.
There is a great article on the WSJ that details Intel's ability to dominate the server market while maintaining high prices. The article states that server chip ASPs have risen 47% since 2007, and Intel explains this by stating customers are choosing higher end chips. To quote the WSJ:
Forrest Norrod, vice president of server platforms at server maker Dell Inc., agreed that "customers are voting with their wallets" for high-end Xeon models. On the other hand, he said server buyers would like more choice of chip technology. "The lack of AMD is felt," he said.
Customers want more performance, but the lack of competition means they are paying higher prices for these chips. While Intel bulls typically rush in to throw cold water on rumors that Google and Amazon could be looking to design their own chips, the rumors are at least somewhat plausible on the basis of cost savings.
Because Intel's chips have been so good, they've essentially become the only ones that are really feasible on any grand scale. This has also let the ASP of server chips increase while consumer chips have decreased. For contrast, in 2007 Intel's Q6600 was one of the higher end consumer chips. Its suggested retail price was $881 at launch, and it was expected to drop to the mid $500s a few months later (source: TechSpot). Today you can buy an i7-4770k for $330. Server chip ASPs have drastically increased while consumer CPU prices have dropped by roughly the same magnitude.
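A quick back-of-the-envelope check of that divergence, using the figures quoted above (these are list prices rather than true ASPs, so treat this as illustrative only):

```python
# Figures quoted above: Q6600 launch MSRP (per TechSpot) and the
# current street price of an i7-4770k.
q6600_2007 = 881
i7_4770k_today = 330

consumer_decline = (q6600_2007 - i7_4770k_today) / q6600_2007 * 100
server_asp_rise = 47  # WSJ: server chip ASP increase since 2007

print(f"Consumer flagship price decline since 2007: {consumer_decline:.0f}%")
print(f"Server chip ASP increase since 2007:        {server_asp_rise}%")
```

The consumer flagship declined by roughly 60%, against a 47% rise in server ASPs, which is the divergence the paragraph above describes.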
For certain workloads, though not all, ARM CPUs could make a lot of sense. They offer customers a choice besides Intel, and initiatives like AMD's semi-custom strategy could allow these players to differentiate from each other by creating custom chips with specialized fixed function accelerators and DSPs to handle certain workloads.
How Does AMD's Semi-Custom Business Fit Into Servers?
Source: OCP Summit Keynote
The above slide details AMD's rather bold predictions for the future of the server market. As these are just predictions, rather than focusing on the numbers, let's simply focus on the "Custom ARM CPUs" data point.
The rumors of some of the bigger cloud players building their own server chips largely circle around details that are pulled from company hires. Most of these larger companies have hardware designers. Microsoft (NASDAQ:MSFT), for example, has hardware designers that worked alongside AMD's engineers on the Xbox One silicon (source: PCWorld). PCWorld calls the Xbox One APU a "massive, custom CPU." Another possibility would be these companies are attempting to design specific IP blocks to take care of routine workloads in a more efficient manner.
As of right now, AMD is the world leader in semi-custom silicon. That is not meant to be a sensationalistic statement, simply a fact. The Xbox One APU is quite possibly the most complex piece of silicon ever built for large scale deployment. It features cache coherent memories, several fixed function accelerators, a large amount of cache, ESRAM to boost memory bandwidth, a discrete level GPU, and an 8 core CPU, all on one massive, monolithic die. It was so complicated that forums across the internet spread rumors of horrible yields, claiming the chip would be too complex to build and the Xbox One would be horribly supply constrained. Fast forward a few months and we can see Microsoft wishing it were supply constrained. And the ramp of these massive chips, for both the PS4 and the Xbox One, went extremely smoothly.
The other piece of the semi-custom puzzle is the extremely aggressive pricing afforded by the business model. The customer pays for the NRE (non-recurring engineering) expense, and in return AMD gets a steady stream of revenue on a guaranteed design win; sticky revenue, as Rory Read defines it. Because of the nature of the design win, AMD is able to ramp the product with minimal overhead, allowing the company to operate within the target $420M-$450M OPEX.
The benefit to the customer is that it affords them a differentiated solution at an attractive price. The customer shares some of the upfront overhead with AMD, and the design win is guaranteed so there is minimal MG&A expense. In return, AMD aims for around 20% operating margin in these designs. Compare this to Intel's ~40% to 50% operating margin in its Data Center group and you can see the customer's viewpoint from a pricing perspective.
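To illustrate that pricing gap, here's a simplified sketch (my own numbers; a unit cost of $100 is hypothetical, and real operating margins fold in OPEX and much more, so this only conveys the direction of the effect):

```python
def price_for_margin(unit_cost, operating_margin):
    """Price needed so that (price - cost) / price equals the target margin."""
    return unit_cost / (1.0 - operating_margin)

unit_cost = 100.0  # hypothetical all-in cost per chip

amd_semi_custom = price_for_margin(unit_cost, 0.20)  # AMD's ~20% target
intel_dcg = price_for_margin(unit_cost, 0.45)        # midpoint of ~40%-50%

print(f"Price at 20% operating margin: ${amd_semi_custom:.2f}")  # $125.00
print(f"Price at 45% operating margin: ${intel_dcg:.2f}")        # $181.82
```

For the same underlying cost, the lower-margin model lets the customer pay meaningfully less, which is the pricing perspective the paragraph above describes.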
Based on AMD's ~3% market share in servers, there is no cannibalizing to be had from semi-custom eating into more lucrative off the shelf products, so any design win would be earnings accretive.
HSA: More Than Just a Buzzword
Given the lack of support from Google (NASDAQ:GOOG), Microsoft, or Apple (NASDAQ:AAPL), I am not as bullish as I was on HSA for the near term. However, in no way do I think the efforts have been fruitless.
To explain, the term heterogeneous literally means "consisting of dissimilar or diverse ingredients or constituents." If you look at a modern SoC (system on a chip), you can see many different and distinct pieces of hardware.
Above is an example from Qualcomm that shows the high level of integration that has allowed it to dominate the mobile market. The chip features some connectivity blocks, a DSP for the camera, CPU cores, and a GPU. HSA is an initiative that will make a standard and uniform way to interact with SoCs as the SoCs become more specialized and feature dissimilar hardware blocks, and Qualcomm is one of the supporting members of the HSA Foundation.
In the near-term, AMD is marketing HSA as a means to tap into GPU compute power. However, given that the first official HSA 1.0 spec still hasn't been ratified, we are likely a ways off from seeing this concept gain traction on any grand scale. It is in this regard, the potential near-term impact, that I see HSA called a simple buzzword.
To throw some cold water on this argument I will use some data points from Joel Hruska's article on AMD's Steamroller architecture along with comments made by Jim Keller.
Physically, the easiest way to imagine HSA is the glue that binds the different blocks together on an SoC.
HSA negates some unnecessary work within the chip to reduce overhead and should, eventually, simplify the programming model.
In the near term, as you can see by reading Joel's article, AMD's Steamroller chip actually performs very well in terms of on-chip bandwidth and data movement.
During the Q&A session from the Core Innovation conference, Mr. Keller was asked if it was difficult to get the GCN graphics cores working with the ARM architecture. As he explained it, it wasn't incredibly difficult given that you just give each part its own access to shared memory, allowing the CPU or the GPU to do work independent of what the other processor is doing.
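As a loose software analogy for that arrangement (my own illustration in Python, not AMD's hardware mechanism), two workers sharing one buffer can hand data back and forth without ever copying it:

```python
from multiprocessing import shared_memory

# One buffer both "processors" can see; handing off work means passing
# a reference to the same memory, not copying the data.
buf = shared_memory.SharedMemory(create=True, size=8)
try:
    buf.buf[0] = 42        # producer ("CPU") writes into shared memory
    result = buf.buf[0]    # consumer ("GPU") reads the same bytes, no copy
    print(result)          # 42
finally:
    buf.close()
    buf.unlink()           # release the shared segment
```

In the actual SoC, shared memory access plays this role in hardware, which is why each block can work independently of what the other is doing.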
Tying this back to semi-custom, the fabric that connects everything together on a single SoC, the physical part of HSA, is already complete and works well, at least on Kaveri. Taking this one step further, HSA aims to extend to various processor cores and accelerators, not just CPUs and GPUs, meaning specialized hardware can be glued in beside the standard cores AMD has already supplied, with the glue being the HSA fabric.
Although I do not see HSA having any meaningful and direct impact on AMD's financials near term, HSA is one of the many tools that is allowing AMD to do more customization work without ballooning OPEX. Back to Mr. Keller's answer regarding AMD's graphics cores and ARM chips, HSA has made the process somewhat seamless. As HSA has been developed over a number of years, this represents a sunk cost into reusable IP that can help AMD integrate many different types of cores and accelerators into traditional CPUs and GPUs. It is a high level approach that can be applied in many scenarios, not one low level piece that applies only to Kaveri.
What a New x86 Core May Mean for Upcoming Products
Mr. Feldman described Bulldozer as an "unmitigated failure." While I don't take quite as extreme a view of the performance disparity between Intel and AMD as some do, that doesn't mean I don't feel it's important for AMD to at least somewhat close the gap with Intel.
Jim Keller used the term "blank pages" several times when describing how his teams began designing these new cores. There was very little information given regarding the upcoming x86 core, other than the company was trying to take the best of Jaguar and Bulldozer, while at the same time designing a new core.
Although not as bad as in the server space, AMD's client market share has come under pressure from Haswell's efficiency and performance, Bay Trail's contra revenue, and the erosion of the low end of the PC market by tablets.
While I am not suggesting AMD should compete head to head against Intel, AMD and Intel are locked in a duopoly in the traditional PC market so some competition is unavoidable. Also, AMD's semi-custom business model is feasible because the traditional business segments fund the R&D to develop AMD's portions of the associated IP. AMD has to sell chips to make the money back on the R&D, and it's harder to sell chips that aren't competitive.
Author Joel Hruska explains why he felt AMD should dump Bulldozer. In essence, he demonstrates that Kabini is able to get as much work done as Kaveri if you normalize for clock speeds and core counts. This is significant given the size difference between the cores. He concludes his article by asking, "AMD already has a solution to this problem. Question is, will they use it?" In 2016 or so we'll see how much Jaguar DNA the new core receives.
Launching a potentially more competitive x86 core alongside a custom ARM core represents a pretty substantial growth engine for 2016.
Why These Announcements Should Be Looked At From A Long-term Point Of View
Notice the common thread of these announcements is that they aren't near-term catalysts. Rory Read stated during this conference that this is AMD's two-year plan. And of all the product announcements made, for the reasons given above I do not feel any will contribute appreciably to 2014 revenues.
And these products aren't without hurdles. Mobile is a massive market, but to be truly mobile a device needs connectivity IP, and AMD has been largely absent from this market. While third party IP from companies like Qualcomm can be substituted in, this raises the BOM, and less integrated solutions are not as attractive. IDC has already suggested tablet sales are slowing due to pressure from phablets. And despite seemingly competent solutions in both Temash and now Mullins, AMD has been largely absent from tablets. For these reasons, along with the increased competition in the mobile space, I am not as optimistic about AMD in tablets. However, for notebook and AIO form factors where AMD already has a presence, ODMs developing one solution that can run various workloads natively represents an incentive to use AMD.
Servers become interesting beginning in 2015, depending on whether we start to see integrated Freedom Fabric and a push to smaller process nodes.
ExtremeTech has several slides from ARM that show how performance is expected to improve between 28nm and 20nm.
Performance per watt is one of the most important metrics for data center workloads. Intel will be releasing 14 nm chips in short order while other vendors are attempting to ramp 20nm designs. A57 cores see greater than a 50% performance increase over 32 bit A15 cores, but the cores really shine at smaller nodes.
The 2014 ARM Opteron seems more like it will be used to seed the market. By 2015 we should see newer server solutions, and based on the reasons outlined above I feel these solutions are when we could finally see traction in the data center.
You'll notice the common theme of my article is that most of the major growth drivers seem to be more of a story for 2015 rather than 2014.
I'm pretty sure I read somewhere, possibly in a fortune cookie, that things will happen until they don't.
After bottoming, AMD traded in a range roughly centered on $2.50. After the console design wins were announced, AMD bounced back up and has been trading in a range roughly centered around $3.75. AMD has also been in an uptrend since November 2012, although it has been meeting some resistance around $4.25. You'll notice the lows have been getting progressively higher over the past year. This is likely the result of Mr. Rory Read's turnaround play at work.
But turnarounds don't happen overnight, and AMD has even been kind enough to lay out the company's two-year plan for us: ARM and Android platforms, along with pin compatible x86 counterparts, in 2015, then two from-scratch cores in 2016, along with a more earnest push to reclaim some server market share.
The most bearish analysts have a price target of around $3.00, and AMD's share price action over the past year suggests ~$3.00 could be a floor for the stock, representing a reasonable ~25% downside from current levels.
However, with the launch of the AM1 platform, mobile Kaveri, and Beema/Mullins APUs, there are plenty of positive catalysts for CS revenues going into the back half of the year. CSO Mr. John Byrne explained that there will be additional commercial PC design wins coming to market in the back half of 2014, and console volumes typically ramp in the back half of the year as well.
To make sure my meaning is understood, I'm not personally expecting an explosion in CS revenues. Rather, I'm taking a more measured approach. If bearish analysts are expecting CS revenues to contract, what happens if CS revenues don't fall? Flat CS revenue is a positive if the bar is set for falling revenues.
All the catalysts I mention specifically for 2014 are minor or are already baked in, in my opinion. The real growth drivers, if they materialize, will likely be dense server and the new cores that will appear in 2016.
Hoping to make a quick, large profit from an earnings pop may or may not pan out. Either way, it is almost guaranteed to be a headache, given the volatility and the swings between paper losses and gains.
For those with a longer-term view, a time frame of 2016 gives management a chance to complete the turnaround strategy, finish diversifying away from the PC space, and attempt to correct Bulldozer. There is the chance for some downside if AMD misses expectations. The company has been punished on seemingly good news during recent quarters, with the last quarter finally breaking the trend.
The other risk is the opportunity cost for having a position in AMD and waiting for the turnaround to continue. Back to my fortune cookie saying, AMD will likely continue trading in a range until real growth materializes or a large design win is announced.
But for those with a longer-term point of view, patience, and a strong understanding of the risks associated with AMD, monitoring the health of AMD's turnaround strategy and defining walk away points will lead to better investment decisions and fewer headaches from watching the volatility.
Disclosure: I am long AMD, INTC. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Additional disclosure: I own both shares and options in AMD, and actively trade my AMD position. I may add or liquidate shares or options at anytime. I may also trade short term options, both calls and puts, in AMD. I may add to or liquidate my position in INTC at anytime.