Nvidia: Gaining Share In A Declining Market

About: NVIDIA Corporation (NVDA), Includes: AMD, INTC, MLNX, TSM
by: Mark Hibben
Summary

Nvidia graphics card market share over 80%, according to Jon Peddie Research.

Explanations for the graphics card market decline.

Motivating the Mellanox acquisition.

Rethink Technology business briefs for March 14, 2019.

Nvidia graphics card market share over 80%, according to Jon Peddie Research

Jon Peddie Research's results for PC graphics card market share show Nvidia (NVDA) taking a commanding lead in calendar Q4 2018, with an 81.2% share of the worldwide market. At the same time, Peddie's data shows that the overall market for graphics cards, which Peddie refers to as GPU add-in boards, continues to decline, falling 40.2% y/y.

Jon Peddie Research does market research and analysis primarily on the GPU (graphics processing unit) business and issues quarterly reports on the overall GPU market, which include GPUs that are part of Intel (INTC) and AMD (AMD) chips. Since so many consumer-oriented PC processors include built-in graphics, Intel and AMD tend to dominate the overall GPU market.

Peddie also issues a separate press release on the “GPU add-in board” market focused on graphics cards for PCs. These are typically used for enhanced graphics performance for professional visualization and rendering, or for gaming. By far, this part of the market is dominated by gaming in terms of unit volume, so Peddie's add-in board report serves as a reasonable proxy for the PC gaming business.

Starting in Q3 2018, Nvidia began gaining large chunks of share in the market, even as total add-in board shipments declined, as shown in the two charts below.

Data source: Jon Peddie Research.

Explanations for the graphics card market decline

The overall decline in the PC graphics card market is usually attributed to the oversupply of low-priced graphics cards that developed starting in Q3 2018 as a result of declining crypto-mining demand. While this has been a factor, it doesn't fully explain the data, since I wouldn't expect crypto alone to produce such a large market share shift in Nvidia's favor.

Undoubtedly, a factor in the market shift is simply the lack of new GPU product from AMD. While Nvidia introduced a whole new GPU architecture, Turing, specifically for PC gamers last year, AMD fans have had to wait for AMD's next all-new architecture, Navi, expected later this year.

AMD's Radeon VII, while billed as the first 7 nm GPU because it's fabricated on TSMC's (TSM) 7 nm process, isn't really a new architecture, since it's based on the existing Vega series. The chip is thought to be identical to the one AMD uses in its Radeon Instinct MI50 GPU accelerators for the datacenter market.

Clearly, the chip, which carries 16 GB of HBM2 memory, isn't really optimal (in terms of price) for the PC gaming market. There were reports that Radeon VII was in very limited supply at launch, suggesting that the card was released primarily to sell off chip inventory that wasn't moving as the MI50.

Even without the supply constraints, Radeon VII probably wasn't going to move the sales needle very much for AMD. Gaming performance, as typically measured in frames per second, was not impressive. Tom's Hardware tested it against its usual gaming suite at 2K and 4K resolution and summarized the results by taking the geometric mean of all the results at each of the two resolutions, shown below:
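For readers unfamiliar with the metric, the geometric mean is simply the n-th root of the product of the individual results. A minimal sketch of the calculation is below; the FPS values are hypothetical placeholders, not Tom's Hardware's actual data.

```python
# Minimal sketch of the geometric-mean summary used for benchmark suites.
# The FPS values are hypothetical placeholders, not Tom's Hardware's data.
from math import prod

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-game frames-per-second results at one resolution.
fps_results = [92.0, 61.0, 74.0, 55.0, 88.0]

print(f"Geometric mean FPS: {geometric_mean(fps_results):.1f}")
```

Compared with a simple arithmetic average, this keeps one unusually fast or slow title from dominating the summary number, which is why reviewers favor it for benchmark suites.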

Not only did the Radeon VII fail to beat the current-generation Nvidia Turing RTX 2080, which it was supposedly targeting, it didn't even beat the previous-generation GTX 1080 Ti. Even more remarkable, despite the move to 7 nm, Radeon VII continues to lag in energy efficiency. The graph below shows real-time power consumption of about 300 W for Radeon VII and 220 W for the RTX 2080 during gameplay of Metro: Last Light.

And this is despite substantially underperforming in FPS compared to the 2080:
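To put the efficiency gap in concrete terms, performance per watt is just frames per second divided by power draw. The quick calculation below uses the roughly 300 W and 220 W figures cited above; the FPS numbers are hypothetical placeholders chosen only to illustrate the arithmetic, not measured results.

```python
# Rough performance-per-watt comparison. Power figures are the approximate
# values cited above; the FPS numbers are hypothetical placeholders.
cards = {
    "Radeon VII": {"watts": 300, "fps": 70.0},  # hypothetical FPS
    "RTX 2080": {"watts": 220, "fps": 80.0},    # hypothetical FPS
}

for name, card in cards.items():
    print(f"{name}: {card['fps'] / card['watts']:.3f} FPS per watt")
```

Even with the Radeon VII generously assumed to be only modestly slower, the RTX 2080 comes out well ahead on this metric.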

On the Nvidia side, the situation has been only slightly better. Nvidia introduced the new Turing GPU architecture, which promised a heightened level of realism as a result of its implementation of ray tracing. Ray tracing is a technique widely used in computer graphics to generate photo-realistic images and animation. Computer-generated special effects in movies use ray tracing.
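At its core, ray tracing means firing rays from the camera through each pixel and testing what they hit in the scene; lighting is then computed from those intersections. The toy sketch below shows the most basic building block, a ray-sphere intersection test. It's only a conceptual illustration of the technique, not how Nvidia's RTX hardware or any production renderer is implemented.

```python
# Toy illustration of the core ray-tracing primitive: casting a ray and testing
# whether it hits a sphere. This is only a conceptual sketch, not RTX.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Vector from the ray origin to the sphere center.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c          # discriminant of the quadratic
    if disc < 0:
        return None                   # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray fired straight down the z-axis toward a sphere 5 units away.
print(ray_sphere_hit(origin=(0, 0, 0), direction=(0, 0, 1),
                     center=(0, 0, 5), radius=1.0))  # -> 4.0
```

A real renderer repeats tests like this against far more complex geometry, many millions of times per frame, which is why dedicated ray-tracing hardware matters.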

But the first generation of Turing consumer cards made a number of compromises. Frame rate performance in games suffered considerably when ray tracing was enabled. There's perhaps too much emphasis on frame rate, but it has become the de facto standard by which graphics cards are judged.

When Nvidia introduced another Turing technology to improve frame rates, called Deep Learning Super Sampling (DLSS), there were numerous complaints that image quality suffered. Only the very top of the line RTX 2080 Ti seemed to offer enough performance to achieve ray tracing at acceptable frame rates, and at $1200 for the Founders Edition, there weren't many takers.
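For context, the idea behind DLSS is to render fewer pixels than the display resolution and then upscale the result, with a trained neural network doing the upscaling. The sketch below substitutes a trivial nearest-neighbor filter for the network purely to show the render-low/upscale flow; it is not Nvidia's algorithm, and the crudeness of the stand-in filter hints at why a poorly tuned upscaler can cost image quality.

```python
# Conceptual sketch of the DLSS idea: render fewer pixels, then upscale to the
# display resolution. DLSS uses a trained neural network for the upscaling step;
# a trivial nearest-neighbor filter stands in for it here, purely to show the flow.

def render_low_res(width, height):
    """Pretend renderer: returns a small grid of 'shaded' pixel values."""
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def upscale_nearest(image, scale):
    """Stand-in for DLSS's learned upscaler: nearest-neighbor enlargement."""
    return [[image[y // scale][x // scale]
             for x in range(len(image[0]) * scale)]
            for y in range(len(image) * scale)]

low = render_low_res(960, 540)         # render at half of 1920x1080 in each dimension
final = upscale_nearest(low, 2)        # upscale 2x to the display resolution
print(len(final[0]), "x", len(final))  # -> 1920 x 1080
```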

If all of these problems weren't enough, the lack of gaming content all but doomed the Turing launch. At the beginning of the year, only one game, Battlefield V, implemented ray tracing, and the game was widely panned by reviewers as being incomplete and offering poor ray tracing performance.

By the time that Nvidia held its fiscal Q4 conference call in February, management admitted that the Turing launch wasn't going well. Said CFO Colette Kress,

. . . sales of certain high end GPUs using our new Turing architecture, including the GeForce RTX 2080 and 2070 were lower than we expected for the launch of a new architecture. These products deliver a revolutionary leap in performance and innovation with real time ray-tracing and AI, but some customers may have delayed their purchase while waiting for lower price points or further demonstrations of the RTX technology in actual games.

I agree that Turing is revolutionary, and I'm quite certain that Nvidia's ray tracing technology will be an important long-term advantage in PC gaming, but I think they totally screwed up the launch. It was simply premature to release Turing to consumers. The content wasn't there. The performance wasn't there. And the value wasn't there.

But I believe that Nvidia managed to capture the imaginations of a lot of gamers. Everyone, even AMD's CEO Lisa Su, acknowledged the value of ray tracing, even as many gamers were unconvinced by the value of Nvidia's first implementation of it. The logical thing to do was to wait for a better, more cost-effective ray tracing solution.

And that, I think, is what the market is doing, which explains both the depressed graphics card demand and the market share shift to Nvidia. Nvidia has shown everyone the future of PC gaming, and the few who can't wait will buy a Turing card.

Those who are waiting may be waiting for an AMD equivalent to Turing, but that's going to be a long wait, since Navi is not thought to have any ray tracing capability. More likely, Nvidia already is hard at work on Turing 2, which will be fabricated on 7 nm and offer the performance and value that Turing 1 does not. Turing 2 could arrive later this year, but the specific timing will be determined by Turing 1 inventory.

Motivating the Mellanox acquisition

I expect Turing 2 to restore Nvidia's gaming business to growth, but at this stage, there's room for debate about whether gaming revenue will ever return to the levels we saw in the first half of fiscal 2019. Nvidia doesn't seem inclined to wait for the gaming business to resume growing, and perhaps its faith in gaming as a growth driver has been shaken by the Turing launch. This may in part explain the acquisition of Mellanox (MLNX), an Israeli company that specializes in high-speed interconnects for datacenters and high-performance computing. On Monday, March 11, Nvidia announced an agreement to acquire Mellanox for $6.9 billion in cash:

The acquisition will unite two of the world’s leading companies in high-performance computing (HPC). Together, Nvidia’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker. . .

Datacenters in the future will be architected as giant compute engines with tens of thousands of compute nodes, designed holistically with their interconnects for optimal performance.

An early innovator in high-performance interconnect technology, Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters. . .

With Mellanox, Nvidia will optimize datacenter-scale workloads across the entire computing, networking and storage stack to achieve higher performance, greater utilization and lower operating cost for customers. . .

The companies have a long history of collaboration and joint innovation, reflected in their recent contributions in building the world’s two fastest supercomputers, Sierra and Summit, operated by the U.S. Department of Energy. Many of the world’s top cloud service providers also use both Nvidia GPUs and Mellanox interconnects. Nvidia and Mellanox share a common performance-centric culture that will enable seamless integration.

Interestingly, Intel was the competing bidder. What makes this interesting is that Intel has claimed for some time that its “silicon photonics” would outperform InfiniBand. Intel's silicon photonics involves putting laser transmitters and receivers directly on the silicon die that hosts the CPU.

This has been an active area of research for some time, since the usual photonic materials are ternary III-V alloys such as indium gallium arsenide (InGaAs) that are incompatible with silicon fabrication. The conventional approach is to fabricate optical transceivers separately (usually in the optical interconnect housing) and then connect them to processors through normal high-speed electrical buses.

Although innovative, I doubt that silicon photonics was ever cost-effective. Hailed as a breakthrough in 2016, silicon photonics today amounts to Intel offering conventional optical transceivers in the QSFP28 form factor.

The hype surrounding silicon photonics became grist for the mill of the Intel bulls. In late 2015, Diane Bryant, then head of Intel's Datacenter Group, presented this chart at the Intel Investor Meeting, extolling the virtues of Intel's silicon photonics and its so-called Omni-Path optical communications architecture:

I guess it's reasonable to infer from Intel's bid for Mellanox that the TAMs claimed for Intel's optical comm solutions failed to materialize. In February 2016, I wrote a rebuttal to some of the claims being made by Intel bulls on behalf of Omni-Path and silicon photonics. Here was my investor takeaway:

Intel's silicon photonics research, as interesting as it is, has served as a kind of stalking horse to mask what is really a much more conventional move into data center networking. Here, Intel's push into the data center is reminiscent of its acquisition of Infineon and subsequent push into wireless modems. In 2012, Intel acquired QLogic's (QLGC) InfiniBand business and the Aries Interconnect team of Cray (CRAY). The merging of these organizations and intellectual property serves as the basis of Omni-Path.

Intel claims that the optical connection will migrate ever closer to the processor, but that may or may not happen. Its silicon photonics work may never provide the competitive discriminator assumed by Intel's supporters. In the meantime, INTC is left with trying to enter a market against a powerful incumbent, Mellanox. Intel is, in effect, trying to set up a proprietary network fabric product line in competition with an industry standard. Whether the data center industry will adopt Intel's proprietary standard is debatable.

Intel's Omni-Path, as currently implemented (and probably implemented for the next five years), would not preclude the use of competing processor technology based on ARM Holdings. Any system that provides a PCIE interface will be able to employ Intel Omni-Path. Far from being a lock on the data center, Omni-Path could be a high-risk venture with little real return. The conservatism of data centers that works so well in Intel's favor in processor technology works against it in network fabrics.

One does have to wonder what the ultimate objective of the acquisition is for Nvidia. Despite considerable resistance, ARM processors are making some headway in the datacenter. Last year, Amazon (AMZN) announced its custom-designed 64-bit ARM Graviton processor for internal use. The latest entrant in the fight for ARM acceptance in the datacenter is an ARM server CPU from Huawei.

With Intel processors relegated to a supervisory role in HPC when used with Nvidia's GPUs, it's not inconceivable that Nvidia could build a GPU-heavy SoC with a built-in ARM CPU to take over the supervisory role. Oh wait, it already has. The chip is called Xavier.

Source: Nvidia.

Xavier is, of course, not a server processor, being intended for Nvidia's Drive systems for advanced driver assistance and autonomous vehicles. However, Xavier is a very powerful processor whose design could serve as a precursor to a future datacenter product. Any such product is probably some years away.

In the meantime, I think the acquisition is somewhat defensive, given Intel's interest. Nvidia may have been concerned that, had Intel acquired Mellanox, Intel would have gained more leverage to exclude Nvidia's GPUs in favor of its own forthcoming GPUs. Intel tried to get similar leverage in CPUs with its own optical comm solutions, but without much success.

Mellanox also adds some bulk, increasing revenue and earnings and diversifying the product line, which Nvidia probably needs. More than anything else, the acquisition shows that Nvidia is serious about the datacenter and about protecting its role as a long-term growth driver. I remain long Nvidia and rate it a buy.

Disclosure: I am/we are long NVDA, TSM. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.