Intel Proclaims Machine Learning Nervana



Intel announces the Nervana Neural Network Processor.

The evolutionary branch in processors.

Intel's scattershot approach to machine learning.

Rethink Technology business briefs for October 17, 2017.

Intel announces the Nervana Neural Network Processor

Source: Intel

In a blog post today, Intel (NASDAQ:INTC) CEO Brian Krzanich announced the Nervana Neural Network Processor (NNP). He wrote:

The Intel Nervana NNP promises to revolutionize AI computing across myriad industries. Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights – transforming their businesses...

We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.

The NNP is the brainchild of Nervana Systems, a startup that Intel acquired for about $400 million in 2016. At the time, Nervana had developed an open source deep learning framework called neon. From neon, Nervana developed Nervana Cloud, which was optimized to run on Nvidia (NASDAQ:NVDA) Titan X GPUs.

Also at the time of the acquisition, Nervana was developing a custom ASIC called the Nervana Engine, which the company claimed would outperform Nvidia's Maxwell-generation GPUs by a factor of 10.

Presumably, the NNP is based on the Nervana Engine work, but whether it achieved its performance goal is unclear. A 2016 blog post by Carey Kloss, VP of Hardware at Nervana, described some aspects of the processor. It used high-bandwidth memory (HBM), now fairly common in high-end GPUs. Based on the official Nervana image shown above, it appears that some form of HBM is still being used.

The processor also featured built-in networking via six bi-directional data links, which sounds an awful lot like Nvidia's NVLink. The Nvidia Tesla V100 (Volta) GPU likewise supports six NVLink connections.

The evolutionary branch in processors

Neither Kloss's post nor today's post by Naveen Rao of Nervana on the NNP provided any performance specs or comparisons to hardware solutions that are actually shipping, such as the GV100 in the Nvidia Tesla V100 accelerator. That omission is probably significant: machine learning performance has been a rapidly moving target.

Most of the performance comparisons that I've seen have been against a Maxwell or even a Kepler GPU. In most cases, any performance advantages over those older GPUs have already been eclipsed by Nvidia's Pascal and Volta architectures.

Most ASIC approaches to machine learning acceleration focus on certain matrix multiplication operations that turn out to be extremely useful. Google's Tensor Processing Unit is one example. However, Nvidia anticipated this with the Volta GV100 processor by incorporating specialized Tensor Core processors that perform these matrix calculations.
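To make concrete why accelerators target matrix multiplication, here's a minimal sketch (illustrative only; the dimensions and NumPy usage are my own, not from Intel or Nvidia) showing that the forward pass of a dense neural-network layer reduces to a matmul, which dominates the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for illustration.
batch, in_dim, out_dim = 64, 1024, 512
x = rng.standard_normal((batch, in_dim)).astype(np.float32)    # activations
W = rng.standard_normal((in_dim, out_dim)).astype(np.float32)  # weights
b = np.zeros(out_dim, dtype=np.float32)                        # bias

# Forward pass of one fully connected layer: matmul + bias + ReLU.
y = np.maximum(x @ W + b, 0.0)

# The matmul costs 2 * batch * in_dim * out_dim FLOPs (multiply + add),
# versus only batch * out_dim each for the bias add and the ReLU.
matmul_flops = 2 * batch * in_dim * out_dim
elementwise_flops = 2 * batch * out_dim
print(matmul_flops, elementwise_flops)
```

The matmul here accounts for over 99.9% of the floating-point work, which is why a fixed-function unit for it (a TPU systolic array, or Volta's Tensor Cores) can accelerate the whole network.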

It's this kind of co-opting of specialized functions that has served to keep microprocessors relevant to modern computing. This, along with the economic advantages of the microprocessor, is what I believe will protect general purpose (programmable) processors from the threat of ASICs.

At the same time, I don't doubt that we're in the midst of an evolutionary branch of programmable processors based on the pressure of adapting to the needs of machine learning. The branch favors massively parallel processing and high memory bandwidth, along with some other features such as the aforementioned matrix math.
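Why high memory bandwidth matters can be seen with a rough roofline-style estimate (a sketch with my own illustrative formula, not vendor specs): whether a matrix multiply is compute-bound or bandwidth-bound depends on its arithmetic intensity, the FLOPs performed per byte moved to and from memory.

```python
def arithmetic_intensity(m: int, n: int, k: int, bytes_per_elem: int = 4) -> float:
    """FLOPs per byte for C[m,n] = A[m,k] @ B[k,n], assuming each
    matrix is moved through memory exactly once (an idealization)."""
    flops = 2 * m * n * k                                  # multiply-accumulates
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)  # read A, B; write C
    return flops / bytes_moved

# Small matrices move a byte for every ~10 FLOPs (bandwidth-bound);
# large ones do hundreds of FLOPs per byte (compute-bound).
print(arithmetic_intensity(64, 64, 64))
print(arithmetic_intensity(4096, 4096, 4096))
```

The low-intensity regime is where HBM-class bandwidth pays off, which is consistent with both the Nervana design and recent Nvidia GPUs adopting it.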

I doubt that there's any processing architecture currently available or contemplated for the near future that could be considered optimal. Thus, evolution on all fronts will continue.

Intel's scattershot approach to machine learning

Intel has pursued virtually every conceivable option for machine learning except the GPU. This omission appears to be the result of Intel's failure to develop its own GPU in its Larrabee project. That Intel couldn't make a competitive GPU based on x86 cores has had consequences that have propagated to this day.

Intel introduced FPGAs in its Deep Learning Inference Accelerator (DLIA) in November 2016. It still offers FPGAs for datacenter acceleration, but the DLIA appears to have fallen by the wayside.

A Xeon Phi coprocessor (a PCIe add-in card) was also recently discontinued, although Intel is soldiering on with Xeon Phi itself. Xeon Phi is the company's attempt to salvage something of value from Larrabee, and it has found a place in High Performance Computing. It also was billed for a time as a machine learning platform, at least until Nervana came along.

Intel also offers the Movidius Myriad X, which has a dedicated hardware accelerator for deep neural network inference. Movidius is another startup that Intel acquired in 2016. It specializes in so-called vision processing units, essentially standalone image signal processors, using its proprietary SHAVE (Streaming Hybrid Architecture Vector Engine). Movidius found that SHAVE processors can also be useful for machine learning.

With this capability, the Myriad X can perform “AI at the edge”, an area that Nvidia is investigating as well. This involves performing tasks such as image object recognition close to the sensor, rather than in a cloud server.

So, Intel has a total of five different machine intelligence platforms: FPGAs, Xeon Phi, Myriad X, Nervana NNP, and yes, the humble Core processor, which still runs a lot of machine learning workloads.

The company doesn't seem to know which architecture to bet on, so it's betting on all of them, except GPUs. That's not unreasonable. None of us can claim to know where this process is going to lead.

We all are just placing bets on our favorite horse. I'm betting on the GPU as the source of the branch point that will evolve into ever more capable platforms for machine intelligence. And I'm betting on Nvidia as the most capable company in the machine learning space.

Nvidia is part of the Rethink Technology Portfolio and is a recommended Buy.

Disclosure: I am/we are long NVDA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
