Intel: Apocalypse... Not


Summary

Intel weathers the security storm.

Krzanich assures CES audience that security is job one.

The data revolution is coming, but will Intel own it?

Rethink Technology business briefs for January 12, 2018.

Intel weathers the security storm

Intel's CEO Brian Krzanich at CES 2018. Source: Intel.

Following the news of the Meltdown and Spectre security vulnerabilities, some have chosen to see this as a disaster for Intel (INTC). SA contributor EnerTuition wrote recently:

In our view, the security problem is a much bigger problem than Intel is acknowledging, and Intel investors will be in for a very rough ride for the next couple of years. While Intel may not have much of a problem on the consumer side from this security issue, in our view, Intel’s data center business is at a serious risk.

Now that the dust has settled (somewhat), a clearer and certainly less dire picture is emerging. Of the three vulnerabilities that researchers have identified, only one of them, called Meltdown, is not a problem for processors made by Advanced Micro Devices (AMD). In a statement released yesterday, AMD acknowledged that its processors are susceptible to two variants of the exploits labeled Spectre.

Mitigating these vulnerabilities generally entails some performance impact on any processor, whether made by Intel or AMD, or even the ARM-architecture processors designed by Qualcomm (QCOM) and Apple (AAPL). Microsoft has a good blog piece on the impact of the vulnerabilities.

Microsoft's limited testing on newer Windows 10 machines showed a "single-digit" performance impact, while server workloads showed a greater one. Generally, I/O-intensive workloads, such as those with frequent disk access, suffer the most, and disk access benchmarks show correspondingly large slowdowns.

So there is an impact, but it's not the Intel Apocalypse that the wishful thinking of AMD fans would have it be. Undoubtedly, the class action lawsuits will continue. But how much damage can one show when the vulnerabilities have yet to be exploited and the performance hit for most users is negligible? I doubt that the cost of the lawsuits will be a great burden on Intel.

More concerning is the impact in the datacenter. But, once again, with AMD processors vulnerable to Spectre, and Qualcomm's processors vulnerable to Meltdown and Spectre, it's unlikely that security vulnerabilities will become a positive discriminator for any processor offering. Also, as I discuss below, there is a paradigm shift underway in the datacenter that renders the security concerns somewhat moot.

Krzanich assures CES audience that security is job one

It is against this backdrop that CEO Brian Krzanich's keynote at CES this year needs to be understood. Intel is not the dominant force it once was, but it's not on its last legs either. Krzanich began by addressing the security issues, assuring the audience that Intel is fully committed to improving the security of its processors.

Intel's competitors should take note. Intel is certainly in a position to modify the design of its processors and put those changes into place in a timely manner. Intel probably has a crash program underway to devise a workaround to the Meltdown vulnerability that minimizes performance impact; this would only make sense. It will probably take a year to bring to production and will likely be folded into Intel's next generation of 10 nm high-performance desktop and server processors.

This may entail some opportunity for AMD, but it will be a limited one. As I keep pointing out, Intel has a tremendous stake in its datacenter supremacy, and I expect that it will maintain a lock on the datacenter, even if it has to surrender its ample margins to do so. I don't believe it is AMD that is going to disrupt Intel in the datacenter.

The data revolution is coming, but will Intel own it?

Following Krzanich's pious pronouncements regarding security, much of the presentation was a regurgitation of a longstanding Intel theme: the explosion of computer- and device-generated data. Intel predicts that by 2020 the average person will generate 1.5 GB of data per day, while an autonomous vehicle (AV) will generate 4 TB per day.

The audience is always left with the obvious but unspoken implication that, because Intel supplies most of the big iron for datacenters, Intel will own the data revolution. At CES this year, Krzanich even went so far as to compare the social impact of the data revolution to that of the internal combustion engine and the integrated circuit.

There are a number of problems with this scenario, not the least of which is whether all that data ever actually makes it into a datacenter. For instance, the AV data is mostly video captured by the vehicle's various cameras, along with potential LIDAR and radar data.

If those 4 TB are generated by a car driven 4 hours per day, data is being produced at a rate of about 2.22 Gbits/s. Even contemplated 5G networks will typically not be able to handle that kind of upload rate in real time. That means storing the data and uploading it later, such as over a home Wi-Fi network.
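
As a sanity check, here is the arithmetic behind that rate as a minimal Python sketch (assuming decimal units, i.e., 1 TB = 10^12 bytes):

```python
# Sustained data rate implied by Intel's AV figure: 4 TB of sensor
# data per day, generated over 4 hours of driving (decimal units assumed).
data_per_day_bits = 4e12 * 8      # 4 TB expressed in bits
driving_seconds = 4 * 3600        # 4 hours of driving per day

rate_gbps = data_per_day_bits / driving_seconds / 1e9
print(f"Sustained rate: {rate_gbps:.2f} Gbit/s")  # ~2.22 Gbit/s
```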

But if you look at what is probably the most credible AV processor nearing commercial availability, Nvidia's (NVDA) Drive Xavier, there's no provision for storing such a massive amount of data, since that would only add cost and complexity to the hardware.

Nvidia's Drive Xavier. Source: Nvidia.

The big data revolution that has Intel salivating may well be a myth, for a number of reasons. As mobile processors become increasingly powerful and capable of hosting local AI, users may simply prefer to keep their data on the device rather than uploading it to the cloud. These powerful ARM-based SoCs may in fact reverse the past few years' trend of increasing dependence on the cloud for personal computing.

And the infrastructure needs extend beyond the extreme case of the autonomous vehicle. While home internet download speeds in the US now average over 60 Mbits/s, upload speeds are typically about a third of that.

In order to support the data revolution, we're probably looking at another decade's worth of infrastructure improvement so that fixed internet connections (homes and businesses) can have roughly 1 Gbit/s uploads.
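
For perspective, here is a rough Python calculation of how long one day of AV data (the 4 TB figure above) would take to upload; the specific link speeds are illustrative assumptions, not measured rates:

```python
# Time to upload one day of AV data (4 TB, decimal units) at
# assumed link speeds; the rates below are illustrative only.
data_bits = 4e12 * 8  # 4 TB in bits

for label, mbps in [("20 Mbit/s home upload", 20),
                    ("60 Mbit/s home download", 60),
                    ("1 Gbit/s fixed link", 1000)]:
    hours = data_bits / (mbps * 1e6) / 3600
    print(f"{label}: {hours:,.1f} hours")
```

Even at a full 1 Gbit/s, one day's worth of data takes roughly 9 hours to upload; at today's typical home upload speeds, it takes weeks.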

But the biggest factor weighing against Intel's dominance of the data revolution is that there is already a disruption of Intel in the datacenter underway, and it's not by AMD. It's by Nvidia. The fundamental paradigm of computation in the datacenter is changing.

The old server paradigm was a large number of x86 processors serving as hosts for an even larger collection of virtualized x86 processors. In the new server paradigm, high-performance GPUs shoulder most of the computational burden, while conventional CPUs serve a largely supervisory function. The ratio of GPUs to CPUs is typically 3:1 or 4:1.

The new server paradigm has already taken over advanced supercomputing, as I describe in my article on the Summit supercomputer being built at Oak Ridge National Laboratory. Summit consists of 4,600 processing nodes, with each node essentially an IBM server containing two Power9 processors and six Nvidia Tesla V100 GPUs.
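
A quick tally of what that node configuration implies at full scale (a simple calculation from the figures above):

```python
# System-wide totals implied by the Summit node configuration cited above.
nodes = 4600
power9_per_node = 2   # IBM Power9 CPUs per node
v100_per_node = 6     # Nvidia Tesla V100 GPUs per node

print(f"CPUs: {nodes * power9_per_node:,}")  # 9,200
print(f"GPUs: {nodes * v100_per_node:,}")    # 27,600
print(f"GPU:CPU ratio: {v100_per_node // power9_per_node}:1")  # 3:1
```

Note the 3:1 GPU-to-CPU ratio per node, consistent with the new paradigm described above.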

This new server paradigm is making rapid inroads into the datacenter in order to host increasingly important AI functions. As is the case in supercomputing, the motivation for this is simply energy efficiency and cost effectiveness.

Nvidia's CEO Jensen Huang made this point very well at his CES press conference. He compared the hardware and power requirements of a basic AI image recognition task running on multiple racks of traditional CPU servers versus a single server equipped with 8 Tesla GPUs.

The traditional CPU approach with 160 servers.

The Nvidia GPU approach with a single server.

This trend is only accelerating as more and more cloud services feature various forms of machine learning assistance for their customers. Nvidia claims, with ample justification, to be the platform of choice for cloud-based AI.

Nvidia is the real, perhaps only, disruptive threat to Intel in the datacenter.

Disclosure: I am/we are long NVDA, QCOM, AAPL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
