Intel: Another Threat Emerges (And It's Not Zen)

About: Intel Corporation (INTC)

Summary

  • On top of 3 other threats, Intel now faces a new threat in the server room.
  • This threat is GPU compute.
  • This article also addresses what Intel is doing to respond to this threat.

Intel (NASDAQ:INTC) is increasingly looking like a potential short-sale opportunity. In my article titled "Intel Faces Tremendous Near-Term Threats," I already described 3 emerging threats to its business:

  • Intel losing its production process lead, which gave it cost, power-usage and performance advantages.
  • Advanced Micro Devices (NASDAQ:AMD) regaining share due to its new Zen-based processors, something that I detailed further in my article titled "Either AMD Or Intel Will Be A Short."
  • ARM-based solutions gaining share. The most immediate threat here is that Apple (NASDAQ:AAPL) might switch its Mac line to home-grown ARM CPUs.

Today's article adds a further threat. The server marketplace, where in practice only Intel and IBM (NYSE:IBM) competed, with Intel massively dominating through its Xeon lines, is seeing a new revolution. The revolution is the shift of workloads towards GPU compute: the use of GPUs (Graphics Processing Units) to handle computing tasks previously handled by regular CPUs.

GPUs are extraordinarily adept at handling massively parallel tasks. Instead of a small number of general-purpose cores (10-24 on typical server CPUs), they carry a much larger number of specialized cores (3,584 CUDA cores on the Nvidia (NASDAQ:NVDA) Tesla P100, for instance). Thus, for problems that can be broken down into numerous independent tasks, a GPU can work on all of them simultaneously and handle the overall problem in a fraction of the time needed by a typical CPU.
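
To make this concrete, here is a toy Python sketch (my own illustration, not production GPU code) of splitting an embarrassingly parallel job across workers. A GPU applies the same principle, just across thousands of cores at once:

    # Toy illustration of data parallelism: the same independent
    # calculation is applied to many elements, so the work can be
    # split across workers with no coordination between them.
    from multiprocessing import Pool

    def work(x):
        # Stand-in for a per-element calculation (a pixel shade,
        # a matrix cell, a hash attempt...)
        return x * x + 1

    if __name__ == "__main__":
        data = range(1_000_000)
        with Pool() as pool:                 # one worker per CPU core
            results = pool.map(work, data)   # chunks run concurrently
        print(results[:5])

A server CPU can run a few dozen such workers at once; a P100 runs thousands.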

In today's data centers, more workloads are appearing that lend themselves well to GPU compute. Such is the case with most big data or AI workloads, which typically apply similar calculations to extreme amounts of tabular data. For these, GPUs act as a kind of accelerator, delivering a 1-2 order of magnitude (10-100x) increase in performance. Moreover, GPUs can deliver this at a competitive cost. The result is that big data and machine learning are quickly migrating towards GPUs instead of CPUs.
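
The GPU-friendly pattern is easy to recognize: one formula applied to every row of a large table. A short Python sketch (the columns are made up for illustration):

    # One formula applied to every row of a large table - the shape of
    # workload GPUs accelerate. NumPy runs this vectorized on the CPU;
    # GPU libraries such as CuPy, which mirrors the NumPy API, run the
    # same line across thousands of GPU cores instead.
    import numpy as np

    prices = np.random.rand(10_000_000) + 1.0   # hypothetical columns
    costs = np.random.rand(10_000_000)

    margins = (prices - costs) / prices         # same math, every row
    print(margins.mean())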

Of course, for Intel this is a threat: while Intel is the prime supplier of CPUs, it's not even a factor when it comes to discrete GPUs. There, Nvidia and AMD dominate (though it remains to be seen how long Imagination Technologies, Qualcomm (NASDAQ:QCOM) and ARM (NASDAQ:ARMH) will stay out of the GPU compute market). A migration of server workloads towards GPU compute thus represents less demand for Intel CPUs.

To add to this, a major stumbling block for GPU compute has been the difficulty of programming for high parallelism. As more and more programmers get comfortable with the concept, the range of tasks that can be handed to GPU compute will likely broaden, amplifying the impact of this threat.
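
Why is it hard? Because not every loop splits cleanly. A small Python illustration of the difference:

    # Elementwise work is independent per item and parallelizes
    # trivially; a recurrence needs step i-1 before step i, so it
    # resists naive splitting and must be restructured first.
    import numpy as np

    x = np.random.rand(1_000_000)

    # Embarrassingly parallel: each output depends on one input.
    y = np.sqrt(x)

    # Sequential dependency: each output depends on the previous one.
    z = np.empty_like(x)
    z[0] = x[0]
    for i in range(1, len(x)):
        z[i] = 0.9 * z[i - 1] + x[i]   # can't be split without rework

Spotting, and restructuring, the second kind of loop is the skill programmers have had to learn before GPU compute could pay off.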

Intel's Response

As with other threats, we cannot assume Intel will stand still. When it comes to GPU compute, Intel's response is already obvious: the acquisition of Altera, a prominent supplier of FPGAs (Field-Programmable Gate Arrays).

Here, it pays to know what happened with Bitcoin mining. I once published a blog post titled "Bitcoin Is An Amazing Show Of Capitalism Strength." In search of faster and more efficient Bitcoin mining, the market went through the following phases:

  • Initially, Bitcoin was mined using traditional CPUs (the search loop in question is sketched in code after this list).
  • Then, Bitcoin miners adapted the algorithms to run on GPUs (GPU compute), taking advantage of their proficiency at solving highly-parallel tasks. This delivered an order of magnitude improvement in speed and efficiency.
  • But alas, Bitcoin miners did not stop here. Next, they built FPGA-based computers for the purpose of mining Bitcoins. How do FPGA solutions differ from GPU solutions? GPU compute still means running software on top of general-purpose GPU computing cores. FPGAs, on the other hand, allow the chip to be re-programmed to provide a hardware-based solution to the computing task. This delivers another order of magnitude improvement, as the task runs at the hardware level (it's no longer software running on top of hardware - the hardware itself is reconfigured to solve the problem directly).
  • Amazingly, Bitcoin miners did not stop there either. They then moved to ASICs (Application-Specific Integrated Circuits). ASICs, like FPGAs, handle a given task at the hardware level. However, unlike FPGAs, ASICs cannot be re-programmed after manufacture. Removing that flexibility, though, removes another layer of inefficiency - thus allowing for yet another jump in performance and efficiency.
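
For reference, this is the task miners kept re-implementing further and further down the hardware stack. A simplified Python sketch (real mining hashes an 80-byte block header against a network-set target; the header and difficulty below are made up):

    # Bitcoin proof-of-work, simplified: double-SHA256 the header plus
    # a nonce until the result falls below a target. Every nonce
    # attempt is independent of every other - which is why the search
    # migrated so naturally from CPUs to GPUs, then FPGAs, then ASICs.
    import hashlib

    def mine(header: bytes, difficulty_bits: int) -> int:
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            attempt = header + nonce.to_bytes(8, "little")
            digest = hashlib.sha256(
                hashlib.sha256(attempt).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce   # found a hash below the target
            nonce += 1

    print(mine(b"example-block-header", 20))   # ~1M tries on average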

Seeing this progression, it's obvious what Intel is doing. It's not competitive at the GPU level, so it will try to bypass GPUs altogether by providing FPGA solutions geared towards the server room. FPGAs remain re-programmable, much like CPUs and GPUs are, so they can take on the highly-parallel tasks at which GPUs excel while keeping the flexibility to handle different tasks over time.
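
The software-versus-hardware distinction behind that progression can be sketched even in plain Python (a conceptual toy, not a real hardware description language):

    # Software view: one adder, reused over and over in sequence.
    def software_sum(values):
        total = 0
        for v in values:       # one operation per step
            total += v
        return total

    # Hardware view: describe a tree of adders once; in silicon, every
    # pair at a given level is summed simultaneously, so each pass of
    # this while-loop corresponds to a single clock cycle.
    def hardware_style_tree_sum(values):
        level = list(values)
        while len(level) > 1:
            if len(level) % 2:
                level.append(0)   # pad odd-length levels
            level = [level[i] + level[i + 1]
                     for i in range(0, len(level), 2)]
        return level[0]

    vals = list(range(16))
    assert software_sum(vals) == hardware_style_tree_sum(vals) == 120

An FPGA lets you lay down that adder tree as actual circuitry and still change it later; an ASIC freezes it into silicon permanently.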

It's doubtful that the server room will ever make the jump towards ASICs except for highly-specialized, mission-critical and unchanging tasks. ASICs lose the flexibility: once they're manufactured, they can only solve the task they were built to solve. Thus, Intel had only two choices:

  • Become competitive in GPUs, which is an uncertain endeavor at best.
  • Bypass GPUs and go straight towards FPGAs.

It should be said that moving to FPGAs is not risk-free. While massively parallel programming was a roadblock to GPU compute adoption for a long while, the FPGA problem is even harder. FPGAs don't just carry the massively-parallel programming burden; they also impose specific limits on what can be expressed as hardware. It's like marrying the complications of programming massively-parallel tasks with the complications of designing hardware that actually works.

Conclusion

On top of the 3 existing threats I have already spoken about, Intel now faces the increasing popularity of GPU compute in the data center. GPU compute is going to take away workloads from traditional CPU-based servers, and that will mean less demand for Intel hardware from data centers.

Intel's response will be to deliver competitive solutions using FPGAs. However, this response is likely to take a long time to fully materialize, and it will face its own adoption roadblocks. Those roadblocks are harder than the ones GPU compute had to overcome, and GPU compute itself took years to clear them.

As a side note, while this trend will be positive for Nvidia (which currently dominates GPU compute) and AMD, I am not writing this to validate a long thesis on either. The reason is simple: both are extremely expensive stocks - and it's not certain AMD will be able to dislodge significant GPU compute market share from Nvidia.

Disclosure: I/we have no positions in any stocks mentioned, but may initiate a short position in INTC over the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
