Yesterday, Nvidia (NASDAQ:NVDA) turned in respectable, if not stellar, numbers for its fiscal Q1, primarily on the strength of its computer graphics business. Total revenue was up 3.2% year over year to $954.7 million, with GAAP net income of $77.9 million, up 29% y/y.
GPU Segment Prospers, for the Time Being
GPU (graphics processing unit) segment revenue was up 8.1% y/y to $785.6 million as the GTX line of graphics cards sells well and Nvidia continues to execute its strategy of expanding the role of its GPUs in general-purpose computing. Nvidia Tesla GPUs accelerate supercomputers, and its GRID GPUs perform remote graphics processing for servers available from Cisco (NASDAQ:CSCO), Dell (NASDAQ:DELL), HP (NYSE:HPQ), and IBM (NYSE:IBM).
Given the declines in the PC market in the first quarter of the year, Nvidia is doing well to show some y/y growth in its GPU segment, even though revenue declined 5.6% sequentially from fiscal Q4. The longer-term trends are not so favorable, however. As AMD (NYSE:AMD) and Intel (NASDAQ:INTC) continue to turn their x86 processors into SoCs (systems on chip) by incorporating on-chip GPUs, the need (and market) for discrete GPUs shrinks.
For the time being, this trend is mostly confined to processors intended for traditional mobile PC applications such as ultrabooks, but built-in GPUs are increasingly finding their way into desktop systems as well. While Intel built-in graphics for Sandy Bridge and Ivy Bridge processors have been mostly utilitarian, the next generation of Haswell (4th Gen Core) processors should once again boost graphics performance.
During the conference call, Nvidia management offered the opinion that the forthcoming introduction of Haswell would benefit its GPU sales. Discrete mobile GPUs have often been paired with Intel Sandy Bridge or Ivy Bridge processors in notebooks and desktops as a way of boosting graphics performance. But it's not clear that Haswell's graphics will be as crippled as its predecessors', so Nvidia's expectations may not be realistic.
AMD has gone further than Intel in incorporating GPUs from its Radeon line of graphics cards into its Intel-compatible CPUs, which it now calls APUs (accelerated processing units). These offer graphics performance equivalent to a mid-grade discrete GPU, well beyond the current-generation Intel Ivy Bridge processors, as a recent AnandTech review of a desktop APU (the A10-5800K) confirmed. The combination of performance and economy led both Sony and Microsoft to adopt very similar AMD APUs for their next-generation gaming consoles.
The consolidation of functions into a single silicon SoC that began with mobile computing is clearly permeating all aspects of computing, even desktops, so it's not clear how much life the discrete GPU really has left. One possible outcome is that Nvidia becomes a licensor of GPU designs in the mode of Imagination Technologies. Nvidia already licenses some patented IP to Intel, for which it received a royalty payment of $66 million last quarter.
It was strength in high-end GPUs that drove gross margin higher, to 54.3%, up 4.2 percentage points from the year-ago quarter. This seemed to cheer investors, as Nvidia shares rose about a percent in after-hours trading following the earnings call, but I regard this as shortsighted. The longer-term trend is that the discrete GPU is a dead end on mainstream Intel platforms, and I doubt that the specialty applications Nvidia is exploiting to good effect will offset the decline in the PC discrete GPU market.
For its Tegra line of ARM-based SoCs (systems on chip), the news wasn't so good. Tegra segment revenue fell from $208.4 million in fiscal Q4 to $103.1 million in Q1, a greater-than-50% sequential decline and a 22% decline from the year-ago quarter.
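As a quick sanity check on the reported figures (this is my own back-of-the-envelope arithmetic, not anything from Nvidia's disclosure), the sequential change follows directly from the segment revenues, and the stated 22% y/y decline implies a year-ago Tegra quarter of roughly $132 million:

```python
def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Tegra segment revenue in $ millions, as reported
tegra_fq4, tegra_fq1 = 208.4, 103.1

seq = pct_change(tegra_fq1, tegra_fq4)
print(f"Tegra sequential change: {seq:.1f}%")  # about -50.5%, i.e. a >50% decline

# A 22% y/y decline implies year-ago Tegra revenue of roughly:
year_ago = tegra_fq1 / (1 - 0.22)
print(f"Implied year-ago quarter: ${year_ago:.0f} million")  # about $132 million
```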
Nvidia management offered up a rather ludicrous "we meant to do that" defense, claiming that the Tegra 4, the faster, more powerful and not-yet-in-production successor to the quad-core Tegra 3, had been deliberately delayed so that the schedule for the similar Tegra 4i could be moved up. The Tegra 4i should really be called the 3i, since it uses a CPU core design closer to the Tegra 3's, but it integrates a 4G LTE modem on the SoC, which makes it more attractive as a processor for high-end Android phones.
There was no explanation of why the 4i had to be accelerated while the 4 was delayed. The best explanation I can come up with is that the market was much stronger for the 4i. Whatever the reason, this left Nvidia with only the 18-month-old Tegra 3 to offer. With the exception of the Microsoft Surface RT, which sold only 200,000 units in calendar Q1 according to IDC, most of the products that used the Tegra 3 have been superseded by newer devices. A year and a half is an eternity in the mobile world.
Failing to advance the Tegra 4 and 4i in parallel suggests that Nvidia is resource constrained, and it underscores the difficulties of trying to be a commodity ARM SoC producer. In my previous post, "ARM vs. Intel: Why it Doesn't End Badly for Either", I argued that the problems of commodity producers such as Nvidia are a systemic result of a paradigm shift away from the Intel model of the commodity CPU maker toward a new model based on custom SoCs that device makers design themselves.
I can't offer absolute proof of my thesis yet, but there's a lot of circumstantial evidence. In addition to Nvidia's problems, there were the withdrawals of Texas Instruments and Freescale from the ARM SoC market. But the best evidence is that the two most successful mobile device makers, Apple and Samsung, design their own custom SoCs. Nvidia management conceded during the conference call that it can't really compete with Apple and Samsung and is thus forced into a second tier of products.
Given the trends for both its GPU and Tegra segments, I believe Nvidia badly needs a partner. The most logical acquirer would probably be Intel, just as AMD bought ATI. Intel could then begin a process similar to AMD's of incorporating Nvidia GPU designs into its processors. Nvidia's GPU ventures into servers are interesting, but they would be all the more compelling in the form of Intel/Nvidia SoCs.
Nvidia's GPUs would also benefit from Intel's excess foundry capacity. In addition to packaging Nvidia GPUs inside future (post-Haswell) processors, Intel could also offer motherboards with discrete Nvidia GPUs included.
I don't look for this in the near future. Nvidia, and its investors along with it, will probably have to suffer a good deal more before relief arrives in the form of a suitor.