- New rumor on Intel getting into the memory business.
- This would be a massive surprise to the industry.
- How would Intel use a new memory technology?
Back in July I wrote a couple of articles (1, 2) speculating about Intel (NASDAQ:INTC) re-entering the memory business in some manner, whether embedded DRAM or some kind of NVM (Non-Volatile Memory) to be used in SSDs (Solid State Drives).
The nicks and bruises were just beginning to heal when I received the following email through Seeking Alpha from another author and very straight up guy:
I've been corresponding with a guy who commented the following regarding Intel's memory efforts:
"Regarding Intel, I just traded mail with a guy who is considered the "father" of the xxxxxx business in Taiwan. He is presently inside of UMC. He thinks Intel is on the verge of announcing $4B in CapEx for their new memory and thinks it will be built in a converted logic fab. No mention of MU at all.
He asked that his name not be used..."
So, get your sticks and stones out because I'm going to ramble around the Intel-and-memory subject again.
By the way, the above rumor is really shaky since it seems to be twice removed from the original source, but sometimes that's as good as it gets.
The first question to be asked is, "Why in the world would Intel want to be in the memory business?" After all, they started the company making DRAM and quit that. They moved on to flash memory and sold that off, ultimately, to Micron (NASDAQ:MU).
First of all, those exits from the memory business might as well have happened 100 years ago. Intel exited the DRAM business before the term "Memory Wall" was ever coined. The node at that time was 10µm vs. 20nm today. A DRAM cell today is 0.000004 times the size of that first DRAM cell. A 4Gb DRAM of today contains four million times the bits of that early 1Kb chip, and the new chip costs half as much as the early one did. That's Moore's Law in action over 45 years.
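The scaling claims above check out as back-of-the-envelope arithmetic. The sketch below uses the figures from the text (a ~10µm node then, ~20nm now, 1Kb then, 4Gb now); the cell-area ratio follows from squaring the feature-size ratio.

```python
# Back-of-the-envelope check of the DRAM scaling claims.
# Node sizes and chip densities are the figures from the text;
# the rest is simple arithmetic.

node_then_nm = 10_000   # ~10 µm process, era of Intel's first DRAM
node_now_nm = 20        # ~20 nm DRAM node today

# Cell area scales roughly with the square of the feature size.
area_ratio = (node_now_nm / node_then_nm) ** 2
print(f"Cell size ratio: {area_ratio:.6f}")      # 0.000004

bits_then = 1 * 1024          # 1 Kb chip
bits_now = 4 * 1024 ** 3      # 4 Gb chip
print(f"Bits ratio: {bits_now // bits_then:,}")  # 4,194,304, i.e. ~four million
```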
Fast forward to today and we see that microprocessors have become so fast that memory is now one of the bottlenecks to increased performance, whether in terms of raw speed or lower power consumption. To say that Intel, as the leading high-performance microprocessor manufacturer, has an interest in memory technology, cost and availability is an understatement of galactic proportions.
From here on I will stop using the terms DRAM and NAND when discussing future technologies, since we don't know what is around the corner in memory technology.
We do know that the performance of computing devices has become so high, and the power requirements so stingy, that anything that helps those two trends will be of interest to Intel.
We also know that the closer, physically, system memory can be located to the processor, the better that processor will perform in both speed and power consumption.
We also know that Intel has a toe in embedded system memory through its Crystal Well initiative, and we know that they are in the SSD business using Micron-made parts.
We also know that there is a large body of research going on in alternative memory technologies. That is because both DRAM and NAND memory are reaching the end of their ability to scale (shrink), and some of the shortcomings of those two technologies are becoming more serious.
The Holy Grail for memory would be infinitely durable, non-volatile (even as system RAM), very fast, zero-power when not being read or written, and much smaller (cheaper) than either DRAM or NAND.
One of the candidates for that universal memory is ReRAM, sometimes called RRAM. This is resistive memory, where the cells can switch between a low-resistance state and a high-resistance state that can be decoded as a one or a zero.
If you Google either of the terms above you can get hours of reading about ReRAM.
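The read-out idea described above can be sketched in a few lines. All the resistance values and the decode threshold below are made-up illustrative numbers, not figures from any real ReRAM device.

```python
# Illustrative sketch of resistive-memory read-out: a cell in a
# low-resistance state (LRS) decodes as one logic value, a cell in a
# high-resistance state (HRS) as the other. All numbers are invented
# for illustration; real devices vary widely.

LRS_OHMS = 10_000        # assumed low-resistance state
HRS_OHMS = 1_000_000     # assumed high-resistance state
THRESHOLD = 100_000      # assumed decode boundary between the states

def decode_cell(resistance_ohms: float) -> int:
    """Map a measured resistance to a stored bit (1 = LRS, 0 = HRS)."""
    return 1 if resistance_ohms < THRESHOLD else 0

# Read a word of eight cells (measured resistances, with some noise).
measured = [9_800, 1_050_000, 11_200, 980_000, 10_300, 9_500, 1_100_000, 12_000]
bits = [decode_cell(r) for r in measured]
print(bits)  # [1, 0, 1, 0, 1, 1, 0, 1]
```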
OK, where would the use of this "universal memory" be of most value? Well, everywhere, I suppose, but mobile might be the first use case due to the never-ending quest for lower power.
A UM (Universal Memory, from here on) that could serve as RAM and storage embedded in an application processor package would change the mobile business substantially.
PCs? I suppose, but UM would have to be really cheap because DRAM, NAND and HDDs are really cheap.
High performance computing? Intel is going to crazy levels of cost and technology, called HMC (Hybrid Memory Cube), to get large chunks of memory as close to the Knights Landing chips as possible.
Data Center? Well, ya. Data centers have to power the RAM and storage and then air-condition that same energy, as heat, back out of the building: a double whammy.
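To put a rough number on that double whammy: data-center operators fold the cooling overhead into a metric called PUE (Power Usage Effectiveness). The sketch below uses purely illustrative assumptions (module power, fleet size, PUE), not measured data.

```python
# Rough illustration of the 'double whammy': every watt the memory
# draws must also be removed by the cooling plant. PUE captures that
# overhead as a multiplier on IT power. All figures are assumptions
# for illustration only.

dram_watts_per_module = 3.0    # assumed active power per DRAM module
modules = 100_000              # assumed fleet size
pue = 1.8                      # assumed facility PUE (cooling, etc.)

it_power_kw = dram_watts_per_module * modules / 1000
facility_power_kw = it_power_kw * pue

print(f"DRAM power: {it_power_kw:.0f} kW")                     # 300 kW
print(f"Total with cooling overhead: {facility_power_kw:.0f} kW")  # 540 kW
```

Memory that draws near-zero power at rest attacks both terms at once, which is why the data center is an obvious home for it.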
How about the IoT (Internet of Things)? The chips that will be developed for the IoT will be physically small with lots of features and will have RAM and storage memory on-chip. That is nothing new; all single-chip microcontrollers have on-chip memory. Even the old Intel 8048s and 8051s (the SoCs of their day) had on-chip RAM and program memory. It would be nice if that memory were of only one technology, for fabrication simplicity.
So, to summarize, Intel has reason to develop and deploy a new memory technology that could be embedded in virtually all its compute products, to a greater or lesser extent.
How does Intel keep a memory technology from becoming commodity priced? They have never been able to do it before.
In every one of the use cases described above, the memory would come securely bolted to the compute element, either on-chip for IoT or in-package for some of the other applications.
If you actually did Google ReRAM, you would find that the different permutations of the technology have "warts": maybe too slow, maybe not durable enough to be used as RAM, and always bumping into the fact that DRAM and NAND are very cheap and manufactured in very high volume.
A true UM will probably require a materials breakthrough, as strained silicon and HKMG (High-K Metal Gate) did for advanced logic nodes. Interestingly, hafnium oxides (the HK in HKMG) are now mentioned as a material that could solve some of the ReRAM problems. Intel, of course, has more hafnium experience than anyone else.
Bits and pieces:
Intel revised the joint venture with Micron, shrank it, but included "emerging memory technologies."
At the November 2013 Investor Meeting both Diane Bryant, Senior Vice President of the Data Center Group, and Stacy Smith, Executive Vice President and CFO, made unusually strong comments about "Non-volatile Memory," no mention of NAND, just storage or NVM.
Obviously the four legs of the data center business, from Diane's slide 21, are CPU (leader), fabric (working on it), cabinet-to-cabinet communications or Intel Silicon Photonics (to be released next year, according to Brian Krzanich at the Citi presentation yesterday), and "Non-Volatile Memory" (again, no mention of NAND).
Stacy mentioned it on slide 32 of his presentation as the "Non-Volatile Memory Solutions Group," not NAND and not even SSDs. He felt NVM was a growing and profitable business and was "one of his personal favorites."
The electronics world had a near miss with disaster in the near-demise of Elpida. If Micron had not saved Elpida, the DRAM business, particularly mobile DRAM, would have been in the hands of two Korean companies, Samsung and Hynix, which can best be described as "cousins."
Yesterday, at the Citi presentation, Brian Krzanich basically pre-introduced the Grantley server chip (which he said will be released next week) and mentioned that it will be compatible with DDR3, DDR4 and two "other" memory technologies. What was that about?
What would a new memory technology mean to the rest of the world? Not good if you are in the traditional memory business. Great, if you are Intel.
Looking back at the rumor. What does "on the verge" mean? Next month, next year, next decade, never? Maybe next week at the Intel Developers Forum.
Wouldn't that be sweet for Intel shareholders?