Seeking Alpha

Micron Conspires With The Rest Of The Industry

About: Intel Corporation (INTC), MU, Includes: AMD, AMZN, HPE, IBM, NVDA, SSNLF
by: Stephen Breezy
Summary

Intel is working furiously to maintain their grip at the center of computing.

But Micron called its option on IMFT and appears to be conspiring against this grip.

Micron's 3DXP intellectual property seals the deal.

In my previous article, Micron's X100 Will Drop Like A Bomb On The Intel Data Center, I postulated that Micron (MU) is about to unleash a remarkable storage class memory architecture which will cut across the grain of Intel's (INTC) entire ethos: high-powered, high-margin CPUs at the center of the computing industry. Because the amount of information in that article became unwieldy from a bite-sized, non-technical investor perspective, I had to break it up into several articles, this one being the second. If you haven't read the first, it is a prerequisite for this one.

Micron X100 3D XPoint memory (Source: Micron)

To the point:

A year ago, Micron and Intel announced that their IMFT joint venture would cease production of 3D NAND flash memory in order to focus exclusively on 3D XPoint ("3DXP"). Because flash is known to be closing in on its practical scaling limits in the not-so-distant future, both companies saw fit to increase focus on its successor. So they decided to go their own way on 3D NAND and put it on hospice within cheaper fabs, for the sake of putting the expensive IMFT fab to work on the sexy new 3DXP that they had been perfecting for well over a decade.

In 2009, during the earlier years of development, Intel told us that their research reinforced their expectations that 3DXP - known generically as "phase change memory" - would scale past DRAM and flash "for future random access memory and solid state storage applications", respectively.

Since both DRAM and flash store data in the form of electrons, the physics of which become increasingly unwieldy as circuit sizes fall below the 20 nanometer level, both technologies were expected to reach their scaling limits somewhere above the 10 nanometer range. This is outlined in the following articles:

Deeper Dive (for those who wish to explore the advanced details):

Back in 2014, IMFT released a traditional 16 nanometer "planar / 2D" NAND and, even though Micron made it work in their own commercial products, the error rates and durability were so poor that Intel opted to stay with the older 20 nanometer NAND for their product offerings. Micron had discovered planar NAND's scaling limit, so the industry now knew it was time to start stacking the memory cells vertically ("3D").

But, in order to scale 2D NAND into vertical 3D NAND structures, they needed to make the memory cells much larger in those first two dimensions. So the industry lost some density on the 2D aspect but gained the capability to stack potentially hundreds of layers. This kicked the can on bit cost scaling for some years.
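
To put rough numbers on that trade-off, here is a back-of-envelope sketch in Python. The cell pitches and layer counts below are my own illustrative assumptions, not Micron's figures; the point is only the shape of the math:

    # Back-of-envelope density comparison: planar (2D) NAND vs. stacked (3D) NAND.
    # All numbers are illustrative assumptions, not vendor specifications.

    def bits_per_um2(cell_pitch_nm, layers=1):
        """Relative bits per square micron for an idealized memory array."""
        cell_area_nm2 = cell_pitch_nm ** 2         # idealized square cell footprint
        cells_per_um2 = 1_000_000 / cell_area_nm2  # 1 um^2 = 1,000,000 nm^2
        return cells_per_um2 * layers

    planar_16nm = bits_per_um2(cell_pitch_nm=16, layers=1)   # the last planar node
    stacked_3d = bits_per_um2(cell_pitch_nm=40, layers=64)   # relaxed cell, 64 layers

    print(f"16nm planar: {planar_16nm:,.0f} bits/um^2")
    print(f"40nm x 64 layers: {stacked_3d:,.0f} bits/um^2")
    print(f"3D advantage: {stacked_3d / planar_16nm:.1f}x")

Even with the roughly 6x footprint penalty assumed here, 64 layers works out to about 10x the bits per unit area - exactly the kind of can-kicking described above.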

Deeper Dive:

Today, we're in a situation where it is predicted that 3D NAND cost scaling will stop somewhere around 200 layers in the not-so-distant future. There is industry consensus on this but IBM summarized it nicely last August at the Flash Memory Summit.

DRAM is in a similar situation, and the industry has adopted node labels like "10 nanometer class" and "1x, 1y, and 1z"* because they are anticipating the end of the line but don't know exactly where that will be, aside from the fact that it will have a 1-handle on it (i.e. something between 10nm and 19nm).

(*) Note - Seeking Alpha's text editor will not allow me to post the 1-alpha, 1-beta, or 1-gamma Greek symbols or I would discuss those as well. Just know that they all have a 1-handle as well.

This is just a clever way of hiding the inevitable from investors. I'll go on record to say that they won't get under 12 nanometers on DRAM but I'd be happy to see 14 without sacrificing any speed or energy efficiency. The issue here is described in an Intel patent:

  • US Patent 9,001,608; Coordinating power mode switching and refresh operations in a memory device; Assignee: Intel Corp.; Date: 12/2013

As DRAM devices become faster and denser, they consume more energy, even when the memory system is not servicing any requests. The increase in device speed leads to higher background power dissipation by the peripheral circuitry, and the increase in device density results in higher refresh energy. For instance, it is projected that refresh operations utilize a substantial amount of DRAM power while simultaneously degrading DRAM throughput in future 64 Gb devices. These trends have caused the memory subsystem to become a major contributor of energy consumption in current and future computing platforms.
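
A rough way to quantify the refresh problem described in that patent: the fraction of time a DRAM device is unavailable is roughly the refresh cycle time (tRFC) divided by the average refresh interval (tREFI, typically ~7.8 microseconds), and tRFC grows with density. The tRFC values below are ballpark illustrations of that trend, not measured figures:

    # Rough DRAM refresh overhead: share of time the device is busy refreshing.
    # tREFI is the average interval between refresh commands; tRFC is the time
    # each refresh takes and grows with density. Values are illustrative only.

    T_REFI_NS = 7_800  # ~7.8 microseconds, in nanoseconds

    devices = {
        "4 Gb": 260,
        "8 Gb": 350,
        "16 Gb": 550,
        "64 Gb (projected)": 1_000,
    }

    for density, t_rfc_ns in devices.items():
        overhead = t_rfc_ns / T_REFI_NS
        print(f"{density}: tRFC {t_rfc_ns} ns -> ~{overhead:.1%} of time spent refreshing")

Every step up in density pushes tRFC higher, so the refresh tax on both power and throughput keeps climbing - which is the "major contributor" problem Intel describes.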

Intel's entire strategy hinges on a "two-level" system memory architecture ("2LM", using their abbreviation, which makes searching related patent apps very easy), which I outlined back in 2015 in my Purple Swan article, predating, by a month, the hastily-prepared 3DXP announcement by Intel and Micron. That architecture was predicated on the fact that DRAM would no longer be suitable for the bulk of system memory.
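
For readers who want to see what "2LM" means mechanically: DRAM becomes a small, fast "near memory" cache sitting in front of a much larger pool of slower "far memory" (3DXP). A minimal latency sketch, with purely illustrative latencies and hit rates of my own choosing:

    # Minimal two-level memory (2LM) model: DRAM as "near memory" cache,
    # 3DXP as "far memory" backing store. Latencies/hit rates are illustrative.

    def avg_latency_ns(hit_rate, near_ns=100, far_ns=350):
        """Average load latency when a near-memory miss must go out to far memory."""
        return hit_rate * near_ns + (1 - hit_rate) * (near_ns + far_ns)

    for hit_rate in (0.99, 0.95, 0.80):
        print(f"near-memory hit rate {hit_rate:.0%}: ~{avg_latency_ns(hit_rate):.0f} ns average")

As long as the hit rate stays high, the system behaves almost like all-DRAM while the bulk of the capacity is denser, cheaper, non-volatile 3DXP - sold by Intel, naturally.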

Source: Intel; Prepare for the Next Generation of Memory

Two-Level Strategy

I have suggested that Intel is directly responsible for the tremendous delay in advancing PCI Express beyond the v3 standard, where we have been stuck for nearly 8 years. While AMD (AMD) is just now releasing processors with PCIe v4, understand that we should be well into PCIe v5 had there been no shenanigans taking place behind the scenes.
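
To quantify what being stuck on v3 costs, here is the per-direction bandwidth of a x16 slot by generation. The transfer rates and line encodings are from the published PCIe specifications; the arithmetic is just mine:

    # Per-direction bandwidth of a PCIe x16 slot, by generation.
    # Each generation roughly doubles the raw transfer rate per lane.

    GENS = {
        # generation: (GT/s per lane, encoding efficiency)
        "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
        "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
        "PCIe 4.0": (16.0, 128 / 130),
        "PCIe 5.0": (32.0, 128 / 130),
    }

    LANES = 16
    for gen, (gt_per_s, efficiency) in GENS.items():
        gb_per_s = gt_per_s * efficiency * LANES / 8  # payload Gb/s -> GB/s
        print(f"{gen}: ~{gb_per_s:.1f} GB/s per direction (x16)")

That works out to roughly 8, 16, 32, and 63 GB/s respectively - every skipped generation is a doubling of the pipe feeding an add-in accelerator.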

What shenanigans?

For Intel, the crux began back in 2007 when Nvidia (NVDA) released a technology called Compute Unified Device Architecture ("CUDA"), which allows their graphics processors to be used for all-around general purpose processing - not just graphics.

Nvidia's CUDA facilitated the acceleration of certain types of computation well in excess of what traditional CPUs like those of Intel and AMD could offer. At the time, it was badly hindered by the low speed and high latency of the PCIe v2 bus to which it was confined. So CUDA remained relegated to academics and scientists, a few of whom clearly saw its potential to change the face of computing.

By 2013, PCIe v3 was starting to take over, facilitating much lower latency and higher bandwidth for add-in cards. Simultaneously, Amazon's (AMZN) cloud computing - known as Amazon Web Services ("AWS") - was just beginning to catch on like wildfire. When AWS adopted Nvidia's GPUs in order to accelerate those previously academic and scientific tasks, the face of the industry changed forever.

With Nvidia's CUDA now on a grand scale and just an Internet connection away from the entire planet (excepting the rural US), little nuggets of future computing began to work themselves into our daily lives. "Voice wreck ignition" turned into "voice recognition". Our phones not only began to understand us but also to answer us in a voice that seems nearly human. Cars began to drive themselves. And technology started to suck just a little bit less with every passing day (mind you, we still have a long way to go before anyone would characterize it as "good").

For Intel, the problem was clear: Amazon, via AWS, was becoming their largest and fastest-growing customer. But AWS married its Intel CPUs with Nvidia's accelerators. Intel and Nvidia were now unlikely allies - a relationship facilitated by the PCI Express connection between their respective products.

For Intel, the threat was equally clear: if they were to release PCIe v4 on their processors, then Nvidia's GPU accelerators would steal even more relevance within the computing industry. Intel's own CPU chips were indicative of this trend, as they became dominated by their on-board GPU, as pictured below (where the "Sunny Cove" cores are the actual CPU cores, dwarfed by the GPU):

Source: WikiChip; Intel Ice Lake CPU die

Without a competing Nvidia-like high-power, discrete GPU accelerator ("dGPU"), it appears that Intel made the decision to simply stay on PCIe v3 until they had such a product. Since the rest of the industry has always waited for Intel's lead on new standards like PCIe, this caused a massive delay in progress. Intel's dGPU will finally be out this year, so they'll be free to release new versions of PCIe in an attempt to push Nvidia off their perch at the top.

If it weren't for you meddling kids.

Once Intel's proprietary, Intel-only vision of computing became apparent, the rest of the industry began working on the inverse: an Intel-free computing model. While Nvidia and IBM (IBM) did advance the interconnect between CPU and GPU with NVLink, this, too, was proprietary - Nvidia was following in Intel's footsteps in order to protect their livelihood.

After the announcements of 3DXP and Omni-Path in 2015, it finally became apparent that Intel's 2LM architecture would render their system memory proprietary as well - no longer would just any DDR memory plug into an Intel processor. Everything would need to be Intel-free because Intel would be free from everything else.

In late 2016, industry forces HP Enterprise (HPE), Dell, Samsung (OTC:SSNLF), and Huawei - among others - announced the Gen-Z consortium, which was developed explicitly to counter the threat from Intel by removing the "central" aspect of Intel's CPU ("central processing unit") model. Instead of a high-powered processor at the center, a centralized interconnect fabric would democratize computing.

Disaggregated Gen-Z model (Source: Gen-Z Technology: Enabling Memory Centric Architecture; 8/2017)

Gen-Z is just one of many decentralized, open computing architectures (OpenCAPI and CCIX being the other two major ones). If one of them gains enough traction against Intel, we may wind up with a Rambus-like debacle in the near future (two decades ago, Intel tried to make their system memory proprietary but failed).

OpenFabrics Alliance membership (Source: OpenFabrics Alliance Workshop; 3/2017)

Micron's Meddling

In October of 2018, Micron announced intent to exercise their rights to acquire Intel's half of the IMFT joint venture - at cost - and they completed that deal last October. Now, Micron owns all of IMFT, which we know has been focused exclusively on 3DXP.

In a past article (Fishy), I posited that 3DXP gen1 was simply a mocked-up test mule chip that was designed primarily to work out the bugs in the technology. This is evident in the cross-section image below. It appears that they are using Micron's DRAM process for the front end of line ("FEOL") and most of the back-end wiring. But the 3DXP layer appears to come from an Intel back end of line ("BEOL") process:

3D XPoint cross-section (Source: TechInsights ©2017, with annotations by Stephen Breezy)

The conclusion is easy to draw from this image: given a proper, high-speed CMOS design on the front-end, 3DXP has a lot of extra performance to be realized. Read the whole Fishy article for the full details.

If you listened to the last Micron earnings call as closely as I did, you may have picked up on a hint as to their plans on 3DXP, now that they own the whole fab.

Micron suggested late Wednesday that it will mothball - or at least substantially downsize - its Lehi, Utah fabrication plant, which it took sole ownership of after buying out the IMFT joint venture with Intel in a $1.5 billion deal that closed in October.

"We plan on relocating equipment and certain manufacturing employees to other Micron sites ", the chipmaker's CEO Sanjay Mehrotra said on an earnings call. "Redeploying equipment will also help us optimize Micron's front-end equipment capex," he added, hinting at issues ramping up 3D XPoint production.

The Lehi plant was dedicated to creating 3D XPoint memory products - an emerging memory category based on phase-change chalcogenide materials.

The move will help "right-size" the fab, the CEO said.

Source: Computer Business Review; 12/2019

Conclusion

With respect to 3DXP, I believe that CBR drew the wrong conclusion from Micron's earnings call - there are no issues ramping up 3DXP production and Micron's quick draw on their option to buy out IMFT is indicative of the value there. Rather, Micron is installing a CMOS front-end for their next iteration of 3DXP.

When Micron and Intel released 3D NAND, industry watchers wondered why they had chosen the expensive "floating gate" model instead of the cheaper "charge trap" model that the rest of the industry had adopted. The floating gate model requires extra steps (b, c, d, and e), as pictured below:

Intel/Micron floating gate 3D NAND process (Source: Electronic Engineering Journal; 2/2016)

This image really caused the rubber in my head to start burning because it looked so familiar. Why would Micron and Intel spend so much extra money developing 3D NAND, with Intel losing money on the deal? A picture is worth a thousand words:

Source: Micron US Patent 9,728,584; Three dimensional memory array with select device; Application: 6/2013

When Kioxia (formerly Toshiba Memory) recently commented that cross point memory technologies like the existing 3DXP gen1 don't scale economically for bit cost savings, they were correct. The drawing above is that of a Micron-owned design to bring 3D NAND-like vertical scaling to 3DXP - all while retaining byte addressability and tremendous lithographic process scaling advantages.
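
Why a cross point array stalls on cost while a vertical, 3D NAND-like structure keeps scaling can be shown with a crude mask-count model of my own (not Kioxia's or Micron's math): each additional cross point deck needs its own set of critical patterning steps, whereas a vertical stack adds layers with cheap film depositions and a shared etch:

    # Crude cost-per-bit sketch: cross point cost grows ~linearly with decks
    # (each deck needs its own critical lithography), while a vertical stack
    # adds layers far more cheaply. Cost units are arbitrary and illustrative.

    BASE_COST = 100.0  # fixed wafer cost: CMOS under-array, metallization, etc.

    def cross_point_cost_per_bit(decks, cost_per_deck=40.0):
        return (BASE_COST + decks * cost_per_deck) / decks

    def vertical_cost_per_bit(layers, cost_per_layer=2.0):
        return (BASE_COST + layers * cost_per_layer) / layers

    for decks in (2, 4, 8):
        print(f"cross point, {decks} decks: {cross_point_cost_per_bit(decks):.1f} cost/bit")
    for layers in (32, 64, 128):
        print(f"vertical, {layers} layers: {vertical_cost_per_bit(layers):.1f} cost/bit")

Under these toy assumptions, the cross point curve flattens out at the per-deck lithography cost while the vertical curve keeps falling toward the per-layer film cost - which is precisely the value of the Micron patent pictured above.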

Keep your eyes peeled for the next generations of 3DXP, now largely owned by Micron.

Disclosure: I/we have no positions in any stocks mentioned, but may initiate a long position in MU over the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Editor's Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.