Nvidia: Why Oppenheimer's Downgrade For AMD Misses The Big Picture For Nvidia

Dec. 20, 2013 | About: Nvidia Corporation (NVDA)

This week Oppenheimer "discovered" that the PC market is under pressure and decided to downgrade Advanced Micro Devices (NASDAQ:AMD), citing Intel's (NASDAQ:INTC) interest in the low end of the PC space. Taking this one step further, the downgrade stated these pressures would be felt throughout both the PC and graphics portions of AMD's business.

But the real kickers were these parts of the downgrade (source: Benzinga, Barron's):

"Meanwhile, Nvidia's (NASDAQ:NVDA) competitive graphics position remains as dominant as ever."

"That, combined with low margins from its chips for Sony's (NYSE:SNE) PlayStation 4 and Microsoft's (NASDAQ:MSFT) Xbox One, promise to bring down overall corporate gross profit."

So, taking Oppenheimer's assessment at face value, let's pick apart the inconsistencies and the factually wrong information. Somehow, Intel's interest in the low-end space will carry over to GPU sales, but this will affect only AMD, not Nvidia. This is likely incorrect, since most notebooks with Intel CPUs are paired with discrete GPUs from Nvidia, not AMD.

The PlayStation 4 is incremental revenue for AMD, and, per CFO comments, its roughly 15% operating margin is actually higher than AMD's corporate average at the operating level. Until the most recent quarters, AMD's CS (Computing Solutions) and GVS (Graphics and Visual Solutions) segments typically teetered between a slight loss and a slight profit, implying near-zero operating margins, with console profits actually driving the company back into the black.

Xbox 360 and PS3 sales peaked several years ago and have been in decline. AMD replaced that revenue stream with design wins in all three next-generation consoles, giving it a steady stream of revenue with positive operating margins.

Nvidia went a different route, focusing on other growth opportunities to try to drive revenue and bottom-line growth.

Recently, I have seen several bullish articles regarding Nvidia's future, and they have all painted an extremely rosy picture. But these articles seem based more on opinion and a superficial glance at Nvidia's businesses than on a deep dive into how current trends could actually impact Nvidia's future.

Setting the Stage and Reviewing Financials

I'm going to "randomly" pick the year 2011 to start this section. In 2011, Intel introduced the Sandy Bridge architecture, and AMD released both the Llano and Brazos platforms.

And based on design timelines, 2011 is roughly when AMD would have begun working on the next-generation console chips. Several articles have been published about Nvidia's stance on those consoles, but most of them trace back to one specific quote. Here it is as posted on DailyTech:

"I'm sure there was a negotiation that went on and we came to the conclusion that we didn't want to do the business at the price those guys were willing to pay. Having been through the original Xbox and PS3, we understand the economics of the development and the trade-offs.

If we say, did a console, what other piece of our business would we put on hold to chase after that? In the end, you only have so many engineers and so much capability, and if you're going to go off and do chips for Sony or Microsoft, then that's probably a chip that you're not doing for some other portion of your business."

Lastly, in 2011 and 2012 the mobile landscape looked a little different. Tegra 2 had very good success at launch, and Tegra 3 showed much promise. Taking a quote from the Q2 FY2012 press release (keeping in mind that Nvidia's fiscal years do not line up with calendar years):

"The future of computing is mobile and visual. With Tegra's momentum and our growing GPU businesses, we are ideally positioned to lead the industry forward," he said.

Based on this statement, I believe Nvidia decided not to vie for the consoles at all, choosing instead to focus on mobile.

I have provided screenshots of Nvidia's financial data during recent history (note Nvidia's fiscal years do not align with calendar years):

[Screenshots: Nvidia financial data for Q2 2012, Q3 2012, FY2013 (total), Q4 2013, Q1 2014, Q2 2014, and Q3 2014]

See Nvidia's quarterly reports for a detailed breakdown of the various operating segments. For the purposes of this article:

  • GPU = Notebook + desktop GPUs + Licensing from Intel
  • PSB = Professional Cards (Tesla, Quadro)
  • CPB = Tegra + Consoles


Nvidia recently underwent an accounting change that broke out Tegra as a separate segment, rolled professional and consumer GPUs into GPU, and moved licensing from Intel into "All Other." GPU currently still includes console royalties. Also, from this point onward I will be using calendar dates instead of FY dates.

So in 2011, you have both Intel and AMD drastically beefing up their integrated GPUs, AMD winning all three next-generation consoles, and Nvidia bowing out of the console designs to focus on other areas. Based on CEO comments, I believe one area of specific focus was mobile.

So How Did This Work Out?

Well, let's look. In 2011, Nvidia was having good success in mobile. In 2012, Nvidia's Tegra revenue was up almost 30% compared to 2011. So far in 2013, Tegra revenue totals ~$267M.

To put this in perspective, even if Nvidia generated Tegra revenue in Q4 equal to the sum of the first three quarters of 2013, the segment would still suffer a roughly 30% YoY decline.
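For readers who want to check the arithmetic, here is a minimal sketch. The ~$760M figure for calendar-2012 Tegra revenue is my own approximation backed out of Nvidia's FY2013 reporting, so treat it as a rough assumption rather than a reported number.

# Rough check of the Tegra trajectory; all figures approximate.
tegra_2012 = 760e6            # assumed calendar-2012 Tegra revenue (my approximation)
tegra_2013_first_3q = 267e6   # Tegra revenue through the first three quarters of 2013
optimistic_q4 = tegra_2013_first_3q   # generous case: Q4 matches the first three quarters combined

tegra_2013_est = tegra_2013_first_3q + optimistic_q4
decline = 1 - tegra_2013_est / tegra_2012
print(f"2013 estimate ~${tegra_2013_est / 1e6:.0f}M, a {decline:.0%} YoY decline")
# -> 2013 estimate ~$534M, a 30% YoY decline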

But to appreciate the gravity of Nvidia's position, you must also look at the actual financials behind Tegra. Summing the operating profit/loss from Tegra over the previous three quarters (taken from the 10-Qs) gives an operating loss of ~$400M thus far in 2013 (FY2014).

So Oppenheimer dinged AMD for low profitability in consoles. These are the same consoles that Nvidia chose to pass on in order to focus on other areas. AMD's console wins had the first-order effect of generating positive operating margins and helping propel AMD back into the black. The second-order effect comes from the successful ramp demonstrating AMD's abilities in the semi-custom space, as well as creating synergies for GPUs in desktops and notebooks via projects like "Mantle."

Meanwhile, Nvidia is approaching a half-billion-dollar loss in its Tegra division for FY2014. A measly 15% operating margin doesn't seem too bad in this context.

Digging deeper into the most recent 10-Q, it appears that Tegra gross margins are actually a little lower than the corporate average. Assuming ~50% GM on Tegra revenue and doing a little back-calculation, an operating loss of $132M on $111M in revenue puts operating expenses for the Tegra division somewhere around $175M-$190M. Nvidia likely needs to generate roughly $350M or more each quarter in Tegra revenue just for the chip to break even. These numbers may be slightly off, but they are at least in the right ballpark. Granted, the breakeven point comes down if Nvidia is able to stem the R&D spending; but in weighing the likelihood of that, you must take into account Qualcomm (NASDAQ:QCOM), the current king in mobile, and Intel, which is aggressively targeting the mobile market. Enough said about Tegra.
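Here is a rough sketch of that back-calculation. The gross-margin values are assumptions on my part; at a 50% GM the implied opex lands slightly above $175M, but the breakeven point stays in the same $350M-plus ballpark.

# Back-calculating Tegra's implied opex and quarterly breakeven revenue (illustrative only).
tegra_revenue = 111e6     # most recent quarter's Tegra revenue (~$111M)
operating_loss = 132e6    # Tegra operating loss for the quarter (~$132M)

for gross_margin in (0.45, 0.50):                 # assumed Tegra gross margins
    gross_profit = tegra_revenue * gross_margin
    implied_opex = gross_profit + operating_loss  # opex = gross profit + operating loss
    breakeven_revenue = implied_opex / gross_margin
    print(f"GM {gross_margin:.0%}: opex ~${implied_opex / 1e6:.0f}M, "
          f"breakeven ~${breakeven_revenue / 1e6:.0f}M per quarter")
# GM 45%: opex ~$182M, breakeven ~$404M per quarter
# GM 50%: opex ~$188M, breakeven ~$375M per quarter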

Regarding GPUs, according to Nvidia CFO commentary exiting 2012:

"With the introduction of Kepler generation GPUs in fiscal 2013, our desktop and notebook revenue grew by 5.9 and 26.4 percent, respectively."

You can see from prior financial reports that consumer-class GPUs make up a larger percentage of GPU revenue than professional-class GPUs. Note that Nvidia has had good growth in these segments during 2012 and thus far in 2013, with Tesla revenue growing ~37% during 2012, Quadro declining only ~5%, and growth in both notebook and desktop GPUs, with notebook GPUs specifically growing ~26%.

To start 2013, Nvidia experienced lower notebook GPU revenue, which the company attributed to OEMs managing inventory ahead of Haswell's launch. Well, Haswell launched in Q2 and was shipping throughout Q3, but Nvidia still has not seen a return to growth in notebook GPUs. The company made the following statement in its Q3 CFO commentary:

"Within our GeForce GPU revenue, we continue to see strong demand for desktop and notebook gaming platforms while entry notebook GPUs continue to show decline."

To sum up the GPU situation: professional GPUs, high ASPs, and good success in desktop and notebook consumer GPUs have been stable enough to drive growth for Nvidia, but I believe we are now seeing the beginning of a trend where integrated graphics are truly good enough to start encroaching higher up the performance chain. AMD's release of the new Hawaii GPUs has also forced Nvidia into a round of price cuts that are likely to pull the company's margins off their record highs.

To get an idea of the impact of integrated graphics, look at various data points from Jon Peddie Research. Going back to 2007, total GPU shipments were around 80M per quarter (a rough ballpark for gross trend analysis). Intel already had the largest market share at the time (~38%), providing solutions capable of surfing the internet or running Office-style applications, but those graphics chipsets weren't really useful for much else. These integrated chips satisfied the needs of a good number of users, but most users still required more GPU power.

By the end of 2012, PC shipments were in overall decline, and Intel held roughly 60% of the graphics market thanks to the increasing power of its integrated graphics.

Source: JPR


As a final data point, per the most recent press release, total unit shipments have fallen from ~120M a year ago to ~111M units in Q3 2013, and Intel now sits at ~63% market share.


Interestingly, JPR notes that as of the report linked above, PCs contain an average of 1.4 GPUs each, meaning that many computers with integrated graphics also carry a discrete card. Nvidia also managed to grow overall GPU shipments by 2.3%, the net of 8.2% growth in desktop GPU sales and a 3.3% drop in notebook shipments.

Using this data and some rough math, we can see that during the most recent quarter Nvidia shipped ~18M total GPUs, with a mix of roughly 8.5M from notebooks and 9.5M from desktops (solving a small system of equations for the notebook and desktop volumes, based on how much each segment's growth contributed to the overall change in shipments; a sketch of the math follows the next paragraph).

To double-check this math, the most recent add-in-board report puts Nvidia at a 64.5% share of 14.5M units, or ~9.4M desktop boards. This same report also states that the attach rate of add-in boards for desktops is down from a high of 63% in 2008 to around 43% currently.
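Here is a sketch of that system of equations, using the JPR growth rates quoted above. The inputs are the approximate figures already cited; the output lands close to the rough 9.5M/8.5M split and the ~9.4M add-in-board cross-check.

# Splitting Nvidia's ~18M quarterly GPU shipments into desktop and notebook (approximate).
total_now = 18.0          # millions of GPUs shipped in the most recent quarter
overall_growth = 0.023    # +2.3% total shipments quarter over quarter
desktop_growth = 0.082    # +8.2% desktop shipments
notebook_growth = -0.033  # -3.3% notebook shipments

# With d, n as last quarter's desktop and notebook shipments:
#   (1 + desktop_growth) * d + (1 + notebook_growth) * n = (1 + overall_growth) * (d + n)
#   (1 + desktop_growth) * d + (1 + notebook_growth) * n = total_now
n_over_d = (desktop_growth - overall_growth) / (overall_growth - notebook_growth)
d_prev = total_now / ((1 + desktop_growth) + (1 + notebook_growth) * n_over_d)
n_prev = n_over_d * d_prev

desktop_now = (1 + desktop_growth) * d_prev
notebook_now = (1 + notebook_growth) * n_prev
print(f"Desktop ~{desktop_now:.1f}M, notebook ~{notebook_now:.1f}M")
# -> Desktop ~9.3M, notebook ~8.7M; the AIB report's 64.5% of 14.5M (~9.4M) is a close match.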

Notebook GPUs Are More Important for Nvidia Than for AMD

Getting out of the weeds and into the big picture: roughly half of Nvidia's GPU shipments come from the notebook sector, where the shift to thin-and-light form factors, 2-in-1s, and tablets means there is substantial downside. AMD is set to release Kaveri for mobile in 2014, and Intel will release Broadwell, obsoleting many more notebook sockets in 2014. SoC stands for system-on-a-chip, and a major function of these chips is integrated graphics. The shift in form factors, along with the increasing power of integrated graphics, is going to continue to squeeze notebook GPU sockets.

I believe (if my math is correct) that just under half of Nvidia's unit shipments come from notebooks. Even if this number is slightly off, it demonstrates why I feel there is significant potential for downside in Nvidia's GPU sockets.

Despite Nvidia having a much larger share of the discrete market, AMD actually has a higher overall graphics market share, and Intel is king of the hill thanks to the substantially increased capabilities of its integrated graphics. Nvidia ranks lowest because the company has no x86 APU in the traditional PC race.

A quick look at Newegg shows how many more notebook GPU design wins Nvidia has than AMD. It also shows that most of AMD's notebook GPU wins are paired with AMD APUs. On Newegg, there are ~900 laptop designs with integrated GPUs and ~250 with dedicated GPUs. Looking closer at the selection with dedicated GPUs, there are only 6 Intel-powered laptops with consumer-grade 8000-series GPUs from AMD, whereas there are over 100 Intel-powered laptops with Nvidia's 700-series GPUs.

Specifically for discrete GPUs, the majority of AMD's GPU revenue is derived from desktop cards, with a small fraction coming from notebook GPUs. Notebook GPUs, by contrast, represent just under half of all Nvidia GPU shipments.

The Low End Space Is Where Intel Is Gunning For Market Share

To begin, let me reiterate that Nvidia has no x86 horse in this race. If Intel truly wants market share at the low end, a discrete GPU is excluded from that equation.

Intel's push for the low end exerts direct pressure on budget discrete GPUs in notebooks.

Keeping in mind that Richland is based on an aging architecture soon to be replaced by Kaveri, a mobile Richland still bests every integrated graphics solution from Intel except the much more expensive Iris Pro iGPU.

Source: AnandTech


AnandTech also has a fantastic write-up on the evolution of Intel's integrated graphics.


In his write-up, Anand uses GRID 2 as a test platform because it scales reasonably well with Intel's GPUs. The interesting point to bring up here is the large jump from HD 3000 to HD 4000. That increase came largely from the transition from 32nm to 22nm, which allowed a larger thermal budget for the integrated GPU. Physics constrains what we can do with silicon, but as transistors get smaller and consume less power, the extra power budget can be used to punch up performance.

But the picture looks a little more drastic when we consider the pricier Iris Pro solution.

Source: AnandTech


Depending on the game and settings, Iris Pro approaches the performance of a GT 750M, moving from a budget platform to more of a mainstream mobile platform. Integrated GPUs aren't just for office work anymore.

Anecdotally, look at the sockets Nvidia lost in the latest MacBook Pro and iMac refreshes. Discrete chips are now available only on the most expensive MacBook Pro models, and the cheapest iMac ships with integrated graphics.

And Intel's Broadwell iGPU is rumored to be about 40% faster than the HD 5000. While I typically do not subscribe to rumors easily, this one is easier to swallow given that Intel has increased iGPU performance substantially with each CPU iteration, and the gains from HD 3000 to HD 4000 were around 30%.

As Intel fights for integrated GPU dominance, the battle escalates from low-end GPUs up the product stack toward mainstream performance points.

Kaveri was demoed at APU 2013, and the most detailed video seems to be this one.


At 720p and default high settings (starting at the 2:30 mark), the iGPU was squeezing out ~30 fps in the opening hallway scene.

AMD's focus on integrated graphics will also put pressure on discrete GPUs further up the product stack. And although this will cannibalize some of AMD's own GPU sales, more powerful integrated graphics will make the company's APUs more competitive. Since Nvidia has no x86 APU for traditional PCs, this can only be read as a negative for Nvidia.

What Nvidia Bulls Don't Discuss: Economies of Scale

Investopedia defines the term "economies of scale" as:

The cost advantage that arises with increased output of a product. Economies of scale arise because of the inverse relationship between the quantity produced and per-unit fixed costs; i.e. the greater the quantity of a good produced, the lower the per-unit fixed cost because these costs are shared over a larger number of goods. Economies of scale may also reduce variable costs per unit because of operational efficiencies and synergies.

The last sentence is the portion I would like to focus on.

To begin, a quick discussion of how Nvidia builds GPUs is warranted.

Source: AnandTech, Nvidia

The basic building block of a current-generation Kepler-based GPU is called an SMX (streaming multiprocessor) unit.


Each SMX unit consists of control logic plus a cluster of 192 GPU cores.


These SMX units are then clustered together, and the necessary supporting hardware is added to make a functional GPU. Think of the GPU cores as a car's engine, and the other hardware as everything else that actually makes it a car.

To create different GPUs, Nvidia changes the total number of SMX units, the capabilities of those units, or simply the clock frequencies; everything from the highest-end Tesla card to the lowest-end Kepler consumer GPU uses SMXs, tweaked for the specific application.

Because there is so much R&D overlap between the low-end and high-end parts via shared IP blocks such as the SMX unit, R&D costs can be amortized across a higher number of units, which lets each unit be sold profitably at a lower ASP.

Put another way, regardless of the ASPs and margins of specific products, higher sales volume helps absorb some of the R&D spending, which boosts the operating income of the GPU segment. Nvidia will not magically be untouched if volumes start shrinking; one of three things will happen:

  1. Profitability would erode
  2. ASPs of the remaining products would have to be adjusted to protect profitability
  3. Expenditures would have to be cut

There would be a slight offsetting reduction in cost of sales, but R&D is a more fixed cost than manufacturing: if Nvidia sells fewer products, it can simply manufacture fewer products to offset that cost. But a certain level of R&D is required to build IP blocks such as the SMX units if Nvidia is to remain competitive.

AMD bears often focus on how trimming OPEX will affect the R&D budget. And like Nvidia, AMD fights this by reusing as much IP as possible, allowing the higher volume of units shipped to absorb some of the R&D cost.

As an illustrative example (meaning these numbers are nothing more than best guesses used to demonstrate a point), Nvidia's R&D budget during the most recent quarter was ~$340M. Let's say that ~$250M of that was for GPUs.

In scenario #1, assume that even these lower margin, higher volume products are profitable on a per unit basis at the operating level. Each unit lost means fewer dollars go to operating income. The lost volume actually impacts the bottom line in this scenario. But this is not all.

In scenario #2, assume the cheaper GPUs sell at breakeven at the operating level. To keep earnings from eroding further, the ASPs of the products that are still selling would have to rise enough to cover the R&D spending, or opex would have to be cut accordingly.

Just because the cheaper GPUs carry lower margins than the rest of the GPU product stack doesn't mean they are unimportant or that Nvidia will not be impacted by losing them. The argument may make sense superficially, but the reality becomes apparent when you consider the volumes of products sold at the low end vs. the high end.
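To make the volume argument concrete, here is an illustrative sketch using the hypothetical ~$250M quarterly GPU R&D figure from the example above and the ~18M-unit shipment estimate derived earlier; the smaller volumes are arbitrary what-if cases, not forecasts.

# Illustrative only: fixed R&D burden per unit as volume shrinks.
gpu_rnd_per_quarter = 250e6            # hypothetical quarterly GPU R&D (from the example above)

for units in (18e6, 14e6, 10e6):       # hypothetical quarterly unit volumes
    rnd_per_unit = gpu_rnd_per_quarter / units
    print(f"{units / 1e6:.0f}M units -> ~${rnd_per_unit:.2f} of R&D to recover per unit")
# 18M units -> ~$13.89 per unit
# 14M units -> ~$17.86 per unit
# 10M units -> ~$25.00 per unit
# Against a blended ASP near $50 (see the estimate in the conclusion), each lost unit pushes
# more fixed R&D onto the remaining ones: hence eroded profitability, higher ASPs, or opex cuts.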

Low-end discrete GPUs are the ones most under pressure from the increasing graphics performance of integrated GPUs. Discrete notebook GPUs likely represent just under half of Nvidia's GPU unit sales. In no reality I can imagine is this a nonchalant situation for Nvidia.

Conclusion

Most of the analysis I see regarding Nvidia seems to be more opinion-based than fact-driven, with an emphasis on growth areas like GRID and Shield, while simultaneously downplaying the significance of integrated GPUs to Nvidia's future. These claims rest on loose analysis with no discussion of TAM size.

Looking at Shield: sure, it's a positive because of low development costs, but how much impact could it actually have on the bottom line? At $275, the Shield is about 70% of the price of a PlayStation 4, and it is capable of playing smartphone games. There are cool features like streaming from a PC to the handheld device, but how many consumers will this appeal to?

According to AllThingsD, Nvidia is declining to share sales numbers at this time. This Seeking Alpha article referred to sales as "great"; "great" is the term used by an Nvidia spokesperson. Further digging reveals that "great" means 20k units sold within the first two weeks of launch (source: PortableGamingRegion). Terms like "great" don't mean anything to investors without a frame of reference: console sales are measured in millions of units, while Shield sales are measured in thousands or tens of thousands.

GRID is another favorite. According to the WSJ, Nvidia CEO Jen-Hsun Huang describes GRID as a potential $5B market. But realistically, what is the time frame for that market? Does it seem realistic given that AMD's and Nvidia's combined GPU revenues for 2012 (including both professional and discrete GPUs) were less than $5B? AMD is also throwing its name into the GaaS (gaming-as-a-service) hat.

According to the data I shared above, Nvidia shipped ~18M GPUs during the most recent quarter and generated ~$875M in GPU revenue, which equates to a blended ASP of ~$50. This weighting implies the majority of the volume Nvidia ships is lower-tier cards. Not acknowledging the risk to this business flies in the face of logic.
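For transparency, here is the quick arithmetic behind that ~$50 figure; both inputs are the approximate estimates used earlier.

# Blended ASP from estimated units and approximate GPU revenue.
gpu_revenue = 875e6    # ~$875M GPU revenue in the most recent quarter
units_shipped = 18e6   # ~18M GPUs shipped (JPR-based estimate from earlier)

blended_asp = gpu_revenue / units_shipped
print(f"Blended ASP ~${blended_asp:.0f} per GPU")   # -> ~$49, i.e. roughly $50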

The argument that "good enough" doesn't make sense holds little water either. When Intel really began ramping up integrated graphics, we see them start taking market share quickly. From 2007 to 2013, Intel's graphics market share has risen from 38% to 60%+, and Nvidia is starting to lose sockets higher up the product stack (a la MacBook Pro). Intel's market share appears to grow proportional to graphics power, and next year we should see another large step forward in this department for Intel.

Nvidia is much more dependent on the notebook PC market than rival AMD, and this is where Nvidia will likely feel the most pressure from integrated graphics. This is a huge market for Nvidia, unlike Shield or GRID. Although GRID does have the potential for a decent future, the threat from integrated GPUs will arrive much sooner.

In the consumer GPU space, AMD's recent release of the new Hawaii GPUs forced Nvidia into a round of price cuts. Note that when the next market share figures are released, the recent boom in litecoin mining will make it harder to gauge how AMD's new cards would have sold, and what GPU market share would have looked like, without that added boost. But the R9 290X and 290 seemed to be selling well prior to the boom.

The most interesting thing I feel Nvidia has going for it right now is actually G-Sync. If G-Sync is successful, it will represent something of a moat for Nvidia in the consumer desktop space.

Tegra will need substantial success just for the company's mobile division to get back into the black. Tegra 5 looks pretty impressive, but it isn't out yet. When Nvidia demonstrated the chip, the company compared it against Apple's A6X. The A7 has a much more powerful GPU than the A6X, which demonstrates that the competition isn't sitting still waiting for Tegra 5's release.

Finally, Nvidia's PR people have put out contradictory information. In one breath they compare the 2W Tegra 5 chip to a 250W desktop GPU.

Source: UberGizmo


In the next breath, Nvidia uses physics and a wattage comparison to explain why consoles cannot be as powerful as PCs. The GTX Titan uses 14 SMX units, whereas mobile Kepler will use one. Normalizing for wattage, each SMX in the desktop Titan is supplied with roughly 17-18W. So when you scale this architecture down to 1/14th the number of SMX units and roughly 1/8th the wattage per SMX, you wind up with something that delivers a small fraction of the performance of a desktop chip. AMD does something similar with its GCN architecture: it builds the integrated GPU in the 4W Temash chips from two GCN compute units (analogous to Nvidia's SMX units) with a hefty underclock compared to desktop GPUs. If you would like a good idea of what to expect from mobile Kepler, look at AMD's low-wattage mobile chips.
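Here is a minimal sketch of that wattage normalization. The board-power and SMX-count figures are the approximate values cited above; the exact ratio depends on rounding (roughly 1/8th to 1/9th the power per SMX), but the conclusion is the same.

# Normalizing power per SMX from a desktop GTX Titan down to a one-SMX mobile Kepler part.
titan_power_w = 250       # approximate GTX Titan board power
titan_smx_count = 14      # SMX units in the Titan configuration
mobile_smx_count = 1      # mobile Kepler uses a single SMX
mobile_power_w = 2        # ~2W envelope claimed for the mobile chip

desktop_w_per_smx = titan_power_w / titan_smx_count
mobile_w_per_smx = mobile_power_w / mobile_smx_count
print(f"Desktop: ~{desktop_w_per_smx:.1f}W per SMX; mobile: ~{mobile_w_per_smx:.1f}W per SMX, "
      f"roughly 1/{desktop_w_per_smx / mobile_w_per_smx:.0f} the power per SMX")
# -> ~17.9W per SMX on the desktop vs ~2.0W for the mobile SMX,
#    on top of the mobile part having 1/14th the SMX count.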

In both the near and long term, I see significant headwinds building for Nvidia, and I doubt potential catalysts like GRID and Shield are big enough to make up for the potential losses. Tegra 4 and 4i may stem the losses, but competitors are continually improving their mobile offerings as well, and mobile is extremely competitive. Intel's ambitious plans will only add more pressure to this space.

Disclosure: I am long AMD, INTC. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: I actively trade my AMD position, and own both shares and options. I may add/liquidate shares/options in AMD at anytime, or initiate a small hedge via puts at anytime.