Intel Is Changing The Definition Of The PC


Summary

With 3D XPoint coming, Intel's job cuts should come as no surprise.

Intel is about to change the computing landscape.

While I predicted 3D XPoint, few people are listening to my predictions of its impact.

This is yet another break from my three-part series (parts one and two) on Micron's (NASDAQ:MU) hybrid memory cube technology ("HMC"). The final article is taking some time because there is so much change coming as a result of HMC and the 3D XPoint memory technology that sits inside of it.

But Intel (NASDAQ:INTC) represents a bigger component of that change.


This change can be summarized as follows: when you combine capacious storage-class memory and a processor in a single, high-bandwidth package, performance will increase to levels that change the computing landscape. The PC itself will disappear because your phone will have more than enough computing power and energy efficiency.

Microsoft (NASDAQ:MSFT) has already developed Continuum so that your display and input can flow across a variety of devices. Apple's (NASDAQ:AAPL) iPhone is going to be displaced when businesses and schools provide Windows "Phone Computers" (PCs?) to their employees and students, respectively. Or is a MacPhone coming in order to fend off this threat?


The only use case that isn't better served by this model is gaming, which is already being folded into Intel's Xeon line - the Core desktop processors and Xeon server processors are already very similar. With 3D XPoint memory, the differences become even smaller because memory error correction moves from the processor into the memory. Although the mobile group might borrow a Xeon core here and there, there's no reason to maintain a desktop computing group anymore.

Just a couple of weeks ago, Intel announced the following organizational structure:

  • Client Computing Group
  • Data Center Group
  • Internet of Things Group
  • Non-Volatile Memory Solutions Group
  • Intel Security Group
  • Programmable Solutions Group
  • All Other
    • New Technology Group

What I'm getting at is that, going forward, the Client Computing Group no longer consists of separate desktop and mobile components - it is just one group that produces processors with storage-class memory, filling the needs of 90 percent of the personal computing market with very little diversity (i.e., two, four or eight cores with between 64GB and 2TB of storage-class memory - snap the Legos together and be on with your day). The other 10 percent will be gaming and CAD workstations served by Xeon parts from the Data Center Group.

What stands out from the new structure is the Non-Volatile Memory Solutions Group. Intel specifically characterized the function as "non-volatile," which means the group isn't targeting DRAM at all - and that tells me the performance of 3D XPoint is as high as I believed it to be. From an Intel patent application:

NVRAM has the following characteristics:

[...] very high read speed (faster than flash and near or equivalent to DRAM read speeds)

Given the cost structure, if 3D XPoint read speeds are "near or equivalent to DRAM read speeds," and all writes go into a speedy eDRAM cache, then the need for standalone DRAM has just vanished. I realize that I am repeating myself, but I still feel like nobody is listening. Even though this comes straight from Intel and Micron patents, people are still doubtful. So here's Intel's Rick Coulson further explaining the situation in real life:

We did a little demo a while back where we took a PC and restricted it to 256 megabytes of memory. And we ran a side-by-side demo with an 8 gig[abytes of memory] system. You couldn't tell the difference. And the little meter on the paging - 200,000 pages a second - page misses a second. And that was okay if you're fast enough.

What he's talking about here is virtual memory paging - the mechanism computers use to make more efficient use of DRAM by swapping unused "pages" of memory out to disk. Paging has historically been a bad thing because disks - including SSDs - are so much slower than DRAM, but it has always been a necessity in order to make efficient use of a scarce resource.

The dialer app on my phone is "always running," for example. When I don't use it for a while, its memory pages will be moved into storage (NAND flash, in this example) in order to make room in the DRAM for the apps that I am running instead. If I then attempt to open the dialer, there is a huge delay before it opens. This delay happens as a result of the CPU getting the dialer app's "pages" from NAND and putting them back in memory so it can run.
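To get a feel for the numbers Coulson is describing, here is a minimal back-of-the-envelope sketch in Python. The per-page service times are my own illustrative assumptions, not measured figures; the point is only how the cost of 200,000 page misses per second collapses when the backing store gets dramatically faster.

    # Rough arithmetic behind the demo: what does servicing 200,000 page
    # misses per second cost as the backing store changes? The per-page
    # service times below are illustrative assumptions, not measured values.

    PAGE_FAULTS_PER_SEC = 200_000

    backing_stores_us = {
        "NAND SSD (~100 us per 4KB page)": 100.0,
        "3D XPoint-class (~1 us per 4KB page)": 1.0,
    }

    for name, service_us in backing_stores_us.items():
        busy_seconds = PAGE_FAULTS_PER_SEC * service_us * 1e-6
        print(f"{name}: {busy_seconds:.1f} s of paging work per wall-clock second")

    # NAND would need ~20 s of paging work every second - hopeless thrashing.
    # An XPoint-class store needs ~0.2 s, which can hide behind normal
    # execution - which is why the 256MB system felt the same as the 8GB one.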

Everybody hates paging, but Rick's point is that, if the secondary storage is fast enough (3D XPoint in this case), you won't be able to tell the difference and paging won't be a bad thing. Processors will come with a little chunk of eDRAM either on-chip or very nearby, and the rest of the memory system complexity can be discarded - a huge chunk of the system cost.
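To show why a small eDRAM cache in front of 3D XPoint could plausibly stand in for a full bank of DRAM, here is a minimal average-access-time sketch. The three latency figures are assumptions I chose for illustration (Intel has published none of them); the takeaway is that once the cache absorbs most accesses, the blended latency lands in DRAM territory.

    # Minimal sketch of average memory access time when a small eDRAM cache
    # fronts 3D XPoint. All latencies are illustrative assumptions.

    EDRAM_NS = 50     # assumed on-package eDRAM access time
    XPOINT_NS = 250   # assumed 3D XPoint read ("near DRAM" per the patent language)
    DRAM_NS = 100     # assumed conventional DDR DRAM access, for comparison

    def average_access_ns(hit_rate: float) -> float:
        """Blended access time: hits served by eDRAM, misses by 3D XPoint."""
        return hit_rate * EDRAM_NS + (1.0 - hit_rate) * XPOINT_NS

    for hit_rate in (0.80, 0.90, 0.95, 0.99):
        avg = average_access_ns(hit_rate)
        print(f"hit rate {hit_rate:.0%}: {avg:5.1f} ns "
              f"({avg / DRAM_NS:.2f}x a plain DRAM access)")

With these assumed figures, even an 80 percent hit rate puts the blended access time at or below a plain DRAM access, which is the whole argument for dropping the external DIMMs.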

This was a crude demo that served as a quick proof of concept for Intel: as long as the secondary storage is fast enough, you can't tell the difference between a system with a lot of DRAM and one with barely any. Just before that section, he mentions that there is an energy concern. Intel has many patents that discuss the problem, and it has since solved it with a technique called "pinning," which simply forces the most frequently swapped pages to stay in the cache.
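As an illustration of the concept only (the class name and structure are mine, not Intel's), here is a toy LRU page cache with pinning: pages marked as pinned are never chosen as eviction victims, so the most frequently swapped pages stay resident instead of bouncing back and forth to the non-volatile tier.

    from collections import OrderedDict

    class PinnedPageCache:
        """Toy LRU page cache with pinning - an illustration of the idea,
        not Intel's implementation. Pinned pages are never evicted."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.pages = OrderedDict()   # page_id -> data, ordered by recency
            self.pinned = set()

        def pin(self, page_id) -> None:
            """Mark a hot page so it is never selected as an eviction victim."""
            self.pinned.add(page_id)

        def access(self, page_id, load_page):
            """Return a page, loading it from the backing store on a miss."""
            if page_id in self.pages:
                self.pages.move_to_end(page_id)      # hit: refresh recency
                return self.pages[page_id]
            data = load_page(page_id)                # miss: fetch from 3D XPoint/NAND
            self.pages[page_id] = data
            if len(self.pages) > self.capacity:
                # Evict the least recently used page that is not pinned.
                victim = next((p for p in self.pages if p not in self.pinned), None)
                if victim is not None:
                    del self.pages[victim]
            return data

A real implementation would also have to decide which pages to pin (the article's description suggests tracking how frequently pages are swapped), but the mechanism itself is that simple.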

Summary

Now you know how Intel is replacing RAM with SSD, as I wrote about in my previous article. And now that it is coming from an Intel employee, you might actually believe it. Intel's new Client Computing Group is going to produce products that are remarkably less complex and higher performing than today's CPUs. All of the external memory complexity, and its associated cost, will vanish.

In a different presentation, Rick Coulson goes into detail about why demonstrating 3D XPoint over the NVMe bus is fundamentally the wrong way to show its performance: the bus adds more than 10,000 nanoseconds of latency to a device whose own latency is in the double digits of nanoseconds. Even though the technology is 1,000 times faster than NAND, Intel could only demonstrate a 7.3x improvement because of the inefficiency of putting memory on a PCIe bus instead of a memory bus.
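The arithmetic behind that is simple enough to sketch. The numbers below are illustrative assumptions chosen to reproduce the shape of the argument, not Intel's measured 7.3x figure: a fixed PCIe/NVMe and software overhead sits in front of both devices, so it swamps the 3D XPoint media latency and caps the observable speedup.

    # Illustrative latency math for 3D XPoint behind an NVMe/PCIe bus.
    # All figures are assumptions for the sketch, not Intel's measurements.

    BUS_AND_STACK_US = 10.0   # assumed PCIe/NVMe controller + driver overhead per I/O
    NAND_MEDIA_US = 70.0      # assumed NAND read latency at the media
    XPOINT_MEDIA_US = 0.07    # assumed 3D XPoint read latency at the media (1000x NAND)

    nand_total = NAND_MEDIA_US + BUS_AND_STACK_US
    xpoint_total = XPOINT_MEDIA_US + BUS_AND_STACK_US

    print(f"Raw media speedup:                  {NAND_MEDIA_US / XPOINT_MEDIA_US:.0f}x")
    print(f"Observed speedup behind the bus:    {nand_total / xpoint_total:.1f}x")
    print(f"Ceiling with infinitely fast media: {nand_total / BUS_AND_STACK_US:.1f}x")

With these assumed numbers, a 1,000x media advantage shows up as roughly 8x at the application, which is why even infinitely fast media would show "less than 8x" in this configuration.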


He also mentions that, even if 3D XPoint had infinite performance, you'd only see "less than 8x" of a performance increase versus flash in this sort of configuration. Intel is obviously hiding the performance. They could easily demonstrate a memory-bus device if they wanted to show the true numbers. Why are they sandbagging? There are only a handful of reasons I can think of:

  • They need to buy Micron in order to lock out competitors.
  • They want to keep the specs from their competitors. (But why announce so early?)
  • The manufacturer of the first-to-ship 3D XPoint product wants the thunder for themselves. Microsoft? Apple?

Unfortunately for many employees, the layoffs were another step in this transformation.

Disclosure: I am/we are long INTC, MU.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.