No Nonsense Nvidia: A Rebuttal


Summary

Another contributor has commented on Nvidia's deep learning positioning and the prospects of GPUs vs. ASICs in the deep learning market.

His article contains a number of misconceptions that seem plausible from a layman's perspective.

This article aims to clarify the technical side of the matter.

Nvidia (NASDAQ:NVDA) has the hardware lead in deep learning, full stop. I explained why in an article I published last May. Since then, Nvidia investors have enjoyed outsized gains, which has recently prompted a number of articles speculating about an imminent reversal. This article is a rebuttal to a recent piece about Nvidia's AI prospects and the possible threat from specialty deep learning hardware. Speaking as a deep learning researcher, I find that the piece contains a number of technical inaccuracies.

ASICs vs. GPUs

I will go through the article statement by statement. It starts off with the claim that GPUs are not really meant for deep learning:

Being the most accessible and state-of-art hardware for deep learning does not change the fact that GPU is still far from optimal for such applications. After all, GPU has its DNA optimized for video gaming, not for deep learning. There are substantial improvements that can be gained with dedicated deep-learning designs.

Nvidia was essentially the first company to recognize the hardware side of deep learning. That is how it successfully created the dominant hardware/software ecosystem around deep learning (cf. cuDNN, its widely used neural network library). It is also why a lot of deep learning software currently does not even bother supporting anything other than CUDA. So, never mind that Nvidia already sells dedicated deep learning hardware for autonomous driving; the author does not seem to be aware that game shaders and neural network training rely on exactly the same arithmetic operations, primarily matrix multiplications.
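To make that point concrete, here is a minimal sketch in plain NumPy (made-up sizes, CPU code rather than an actual GPU shader or CUDA kernel) showing that a neural network layer and a 3D graphics transform both boil down to the same matrix-multiply primitive:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fully connected layer: every input vector is multiplied by a weight matrix.
# Hypothetical sizes: a batch of 64 inputs with 256 features, projected to 128 units.
x = rng.standard_normal((64, 256)).astype(np.float32)   # activations
W = rng.standard_normal((256, 128)).astype(np.float32)  # layer weights
b = np.zeros(128, dtype=np.float32)                      # bias

# Forward pass: one matrix multiplication plus an elementwise nonlinearity (ReLU).
hidden = np.maximum(x @ W + b, 0.0)

# A graphics vertex transform is the same primitive applied to different data:
# a 4x4 model-view-projection matrix applied to homogeneous vertex coordinates.
vertices = rng.standard_normal((1000, 4)).astype(np.float32)
mvp = np.eye(4, dtype=np.float32)  # identity stands in for a real camera transform
transformed = vertices @ mvp.T

print(hidden.shape, transformed.shape)  # (64, 128) (1000, 4)
```

Hardware that is fast at dense matrix multiplication is therefore fast at both workloads, which is precisely why gaming GPUs turned out to be such a natural fit for neural networks.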

Below is another statement that is very misleading:

Precision: Research has shown that DNN performances actually degrade little by lowering numeric precision to 16-bit or even 8-bit if compensated with larger networks. This indicates the high-precision arithmetic logic in GPU is an overkill. The equivalent amount of data bandwidth could be utilized more wisely.

The reason this is so misleading is that it sounds plausible from a layman's perspective. Quantization of neural network weights is primarily done for inference, i.e., on already trained networks.

The whole point of quantization is to require less memory and provide a more compact representation of a trained machine learning model (e.g., for deployment on mobile devices). A larger network is not used to compensate for smaller weights. Research results on quantization at training time have not made it into practice at any scale because the gradients tend to be too unstable. Put briefly, for most applications, lower-precision weights cannot capture the small, gradual updates made during training. However, once a network is fully trained and its weights are fixed, those weights can be converted to a lower-precision representation with little to no performance loss. That is quantization.
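As an illustration, here is a toy NumPy sketch of symmetric 8-bit post-training quantization (made-up numbers, not any particular library's scheme), along with why the same rounding destroys the tiny updates that training relies on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the trained float32 weights of one layer.
w = rng.standard_normal(1000).astype(np.float32) * 0.1

# Simple symmetric post-training quantization to signed 8-bit integers.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# At inference time the int8 weights are used (directly or dequantized);
# the reconstruction error is tiny relative to the weights themselves.
w_dequant = w_int8.astype(np.float32) * scale
print("max quantization error:", np.abs(w - w_dequant).max())

# Why this is much harder during training: a typical gradient update is
# orders of magnitude smaller than the quantization step, so it vanishes
# when the result is rounded back onto the 8-bit grid.
update = 1e-4 * rng.standard_normal(1000).astype(np.float32)  # hypothetical SGD step
w_after = np.clip(np.round((w_dequant - update) / scale), -127, 127).astype(np.int8)
print("weights actually changed:", np.count_nonzero(w_after != w_int8), "of", w.size)
```

The first print shows an error far below the typical weight magnitude; the second shows that essentially none of the quantized weights move after a realistic-sized update, which is the instability problem in a nutshell.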

Hence, in the current state of the market, application-specific integrated circuits (ASICs) for deep learning target inference performance (using a trained model in production to predict something, e.g., recognizing a face). ASICs like Google's (NASDAQ:GOOGL) (NASDAQ:GOOG) Tensor Processing Unit are, of course, more energy efficient, but they are not used for training.

The future of deep learning ASICs

Intel's (NASDAQ:INTC) acquisition, Nervana Systems, plans to release an ASIC that can also be used for training. It is a good step for a company that has fallen behind in this area, but suggesting Nvidia will lose market share to deep learning ASICs anytime soon is flat-out ridiculous. Pretty much every serious deep learning application today is built around Nvidia GPUs, on software infrastructure that interfaces with Nvidia GPUs, running on cloud platforms that offer Nvidia GPUs. An ecosystem like that is not easily penetrated.

Of course, this does not mean ASICs will not have their place. My guess is that Intel will want to capture the other end of the deep learning market, i.e., low-powered/mobile deep learning hardware. This is indeed an emerging market, simply because running machine learning software on ordinary mobile CPUs/GPUs is too energy-intensive. Smartphones and tablets will, soon enough, include deep-learning-specific hardware. Nonetheless, Nvidia provides the 'workhorses' of the deep learning revolution, i.e., the large-scale compute power to train commercially relevant deep learning models.

Another false technical claim in the quoted article:

These ASIC alternatives do not require decades of GPU expertise to design but have the potential to deliver significantly higher performance and with much lower power consumption and at a better price.

This makes it sound like ASICs are a magic bullet that Nvidia, for some unknown reason, is simply not aware of. This just in: Nvidia knows what an ASIC is. Nvidia makes ASICs as part of its hardware offerings. In fact, its deep learning systems for autonomous cars can be viewed as deep learning ASICs.

The reason Nvidia designs its GPUs the way it does is that they provide the best compromise between flexibility and performance for almost all applications. In his praise of ASICs, the author failed to stress the 'application-specific' part: it means the chip's logic is fixed in hardware and not programmable.

One thing the author might not have appreciated is just how quickly the field of deep learning is moving. New neural network architectures, training methods and frameworks emerge all the time. GPUs need to be programmable to accommodate this. That is also the reason Intel plans to integrate Nervana hardware into its CPUs instead of selling these ASICs outright.

Bottom line

Nvidia is not currently threatened by ASICs. Nvidia might be threatened by Intel at some point in the future if Intel can establish a deep learning ecosystem. It should also be noted that none of this has any bearing on Nvidia's short-term price movements. The quoted article has tried to conjure a short-term danger that is simply not there.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
