What Did Intel Just Buy?

About: Intel Corporation (INTC)

Summary

Intel just acquired Nervana Systems, a deep learning startup, for $350-400 million.

I take a look at Nervana Systems' products and its expertise in this niche.

Intel is likely concerned about custom deep learning hardware from big tech companies on one hand and GPUs on the other.

M&A in the machine learning market keeps happening at a strong pace. Just last week, I commented on Apple's (NASDAQ:AAPL) acquisition of Turi for $200 million. This week, Intel (NASDAQ:INTC) bought Nervana Systems for reportedly over $350 million. Once again, the target is a private company, which is typical for the deep learning market: most companies simply never make it to an IPO before being acquired, reflecting the explosive demand for expertise in this field. What did Intel just buy?

Nervana Systems: Focus on performance

Software. On the surface, Nervana Systems might seem quite similar to Apple's recent acquisition Turi. Founded in 2014 and grown to just short of 50 employees since, it is a typical startup success story. Its CEO, Naveen Rao, previously worked as a researcher at Qualcomm on neuromorphic hardware. The company markets a number of what I would, in 2016, call standard deep learning services via its software platform, Nervana Neon.

As a disclaimer, I use deep learning frameworks for research, but I have not used Neon before. Even though it reports great performance numbers, it is not as established as other frameworks. The mainstream right now consists of Google's (NASDAQ:GOOG) (NASDAQ:GOOGL) TensorFlow, Theano, Caffe and Torch, often used through a wrapper library like Keras. The truth is that absolute benchmark performance does not matter as much as software developers would like it to. Commercializing deep learning is more about providing the right tools and infrastructure around a framework, enabling developers to smoothly incorporate it into their software stacks and applications. For instance, deeplearning4j is much slower than most of its competitors but runs on the Java Virtual Machine, thus integrating nicely with most open source data processing frameworks (Hadoop/Spark). For most use cases, this matters more to decision makers and software architects than pure performance.
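The wrapper idea is easy to illustrate. Here is a minimal, hypothetical sketch (my own illustration — none of these class names are real Keras or Neon identifiers) of one front-end API over interchangeable backends; in a real framework, the backends would dispatch to compiled GPU kernels:

```python
# Hypothetical sketch of the "wrapper library" pattern: one front-end API
# over interchangeable compute backends, as Keras does over Theano/TensorFlow.

class ReferenceBackend:
    """Slow but portable pure-Python backend."""
    def matmul(self, a, b):
        # naive matrix multiply standing in for a compiled kernel
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

class TunedBackend(ReferenceBackend):
    """A tuned backend would override hot operations with fast kernels;
    the front-end code using it does not change at all."""
    pass

class Dense:
    """Front-end layer: identical regardless of which backend is plugged in."""
    def __init__(self, weights, backend):
        self.w, self.be = weights, backend
    def forward(self, x):
        return self.be.matmul(x, self.w)

layer = Dense([[1, 0], [0, 1]], ReferenceBackend())
out = layer.forward([[2, 3]])  # identity weights, so output equals input
```

This separation is exactly why integration beats raw speed for adoption: swapping in a faster backend requires no changes to application code.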

Neon is open source, though, so I had a look at the code. The gist of it is that Neon provides a number of highly specialized back-end implementations that allow for high performance while sacrificing a bit of usability. Essentially, this goes against the trend of providing a general-purpose distributed computing platform like TensorFlow, which can also be used for things other than deep learning.
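To make the tradeoff concrete, here is a minimal sketch (my own illustration, not actual Neon code) of what "specialized for performance" can mean: a layer that fixes its batch size up front so it can reuse one preallocated output buffer, versus a general-purpose layer that works for any input shape but allocates on every call:

```python
import numpy as np

class GeneralDense:
    """General-purpose: accepts any batch size, allocates output each call."""
    def __init__(self, w):
        self.w = w
    def forward(self, x):
        return x @ self.w  # fresh allocation every call

class SpecializedDense:
    """Specialized: batch size fixed at construction, output buffer reused."""
    def __init__(self, w, batch_size):
        self.w = w
        self.out = np.empty((batch_size, w.shape[1]), dtype=w.dtype)
    def forward(self, x):
        np.matmul(x, self.w, out=self.out)  # writes in place, no allocation
        return self.out

w = np.eye(3)
x = np.arange(6, dtype=float).reshape(2, 3)
general = GeneralDense(w).forward(x)
specialized = SpecializedDense(w, batch_size=2).forward(x)
```

Both produce identical results, but the specialized version only works for one batch size — the kind of usability sacrifice I mean.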

Note that this has a completely different focus than Apple's recent acquisition: Turi is based on GraphLab, a general-purpose distributed computing platform that can provide the backbone for pretty much any type of machine learning, not just deep learning. It is sometimes easy to forget, but there is a world of machine learning beyond deep learning. Nervana's Neon is not really a computing platform; it is more a collection of tuned neural network implementations bundled with a Python front-end. So what Intel bought is not so much the software as the expertise in tuning deep learning implementations for GPUs. This fits the press statement on the acquisition, which mentions plans to extend Intel's Math Kernel Library for deep learning. However, that is not all: there is also hardware.

Hardware. Nervana also apparently had plans to release a custom machine learning chip, the Nervana Engine, in 2017.

From the horse's mouth:

The Nervana Engine design includes memory and computational elements relevant to deep learning and nothing else. For example, the Nervana Engine does not have a managed cache hierarchy; memory management is performed by software. This is an effective strategy in deep learning because operations and memory accesses are fully prescribed before execution. This allows more efficient use of die area by eliminating cache controllers and coherency logic. In addition, software management of on-chip memory ensures that high-priority data (e.g. model weights) is not evicted.

The result of this deep learning-optimized design is that the Nervana Engine achieves unprecedented compute density at an order of magnitude more computing power than today's state-of-the-art GPUs. Nervana achieves this feat with an ASIC built using a commodity 28nm manufacturing process which affords Nervana room for further improvements by shrinking to a 16nm process in the future.

This is quite interesting. I have previously written about Google's machine learning ASIC, the Tensor Processing Unit. The short story is that GPUs, mostly from Nvidia (NASDAQ:NVDA), are the workhorses behind deep learning today and will remain so for the coming years. However, it has also turned out that general-purpose GPUs cannot keep up with the progress in deep learning research to the extent that companies like Google want (in this particular instance, support for 8-bit integer precision in matrix calculations).
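As a rough sketch of what reduced precision means in practice (my own illustration, not Google's actual TPU pipeline): float matrices are rescaled into 8-bit integers, multiplied with wider accumulation, and rescaled back, trading a small amount of accuracy for much cheaper arithmetic:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats to int8 plus a scale."""
    m = np.abs(x).max()
    scale = m / 127.0 if m > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Approximate a float matmul using int8 inputs, int32 accumulation."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # int32 avoids overflow
    return acc.astype(np.float64) * (sa * sb)

rng = np.random.RandomState(0)
a, b = rng.uniform(-1, 1, (4, 4)), rng.uniform(-1, 1, (4, 4))
approx, exact = int8_matmul(a, b), a @ b  # close, but not bit-identical
```

Neural network inference tolerates this quantization error well, which is why hardware built around 8-bit integer units can be so much denser than general-purpose floating-point hardware.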

This puts Intel in a somewhat precarious position, because it rightly suspects that market share in the big data processing world is slipping away to proprietary hardware at the big tech companies on one hand and to GPUs on the other.

Nervana's ASIC seems a bit less specialized than Google's Tensor Processing Unit. By its own account, it is supposed to be a GPU stripped of everything that deep learning algorithms do not care about. Of course, these are all just marketing statements. When someone promises "ludicrous speed", as this chip does, I cannot help but be reminded of another tech CEO known for over-promising and under-delivering. Nevertheless, this deep learning ASIC, or at least the engineers with the expertise to design one, is exactly what Intel needs to figure out how to compete in deep learning going forward. Right now, Nvidia outright owns this segment, and with big tech companies building their own ASICs, Intel needs to act fast. The acquisition seems well put together, as the expertise exactly matches what Intel lacks and what the market demands.

Summary

Intel reportedly paid up to $400 million for a deep learning startup with fewer than 50 employees and might have strategically overpaid a little. If you compare this with what Twitter (NYSE:TWTR) paid for a team of 13 PhDs a few weeks back ($150 million), the math kind of works out. The market value per machine learning researcher in acquihire situations right now is around $5-10 million, a consequence of every technology company trying to reposition itself for a world driven by AI. Like Apple, Intel has recognized that it cannot keep dominating its segment without adapting to machine learning. The acquisition is a positive signal, but it remains to be seen how Intel acts with regard to custom deep learning hardware. Expect more rapid acquisitions in this space.
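The back-of-the-envelope math, using the figures reported in the press (a headcount of 48 is my own stand-in for "just short of 50"; it also counts all employees, not only researchers):

```python
nervana_price = 400e6    # reported upper estimate for the Nervana deal
nervana_headcount = 48   # "just short of 50 employees"; exact number assumed

twitter_price = 150e6    # Twitter's reported price for a 13-PhD team
twitter_phds = 13

per_head_nervana = nervana_price / nervana_headcount  # roughly $8.3M each
per_head_twitter = twitter_price / twitter_phds       # roughly $11.5M each
```

Both figures land in or near the $5-10 million per-researcher range, which is why the premium looks less extreme than the headline number suggests.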

If you enjoyed this article, scroll up and click on the "follow" button next to my name to see updates on my future articles on software, machine intelligence and cloud computing in your feed.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
