A few short years ago, the thought of Intel (NASDAQ:INTC) re-entering the memory business was laughable. The concept was so far off the road of rational thought that, if mentioned out loud, the response from industry insiders would be public ridicule.
Time and technology advances change things.
Until the ARM (NASDAQ:ARMH) based mobile business exploded on the scene, virtually all memory, both DRAM and NAND, was used with Intel or AMD x86 microprocessors. It would be fair to say that the Intel x86 microprocessor family spawned the memory business as we know it today.
Since bits (ones and zeros) are the ultimate commodity, it didn't take long for memory to establish a boom-bust business cycle with the bust cycles generating negative gross margins. In that environment, Intel wisely exited the memory business.
Memory, however, is essential to microprocessor based systems of all types.
Microprocessor technology and performance have outrun the ability of memory to keep up, and microprocessors have run into what is called the "memory wall": the point where further performance progress is impossible without fundamental change in how memory interfaces with the processor and in the physical proximity of the memory to the processor.
One of the ways to blast through the memory wall would be to integrate the DRAM on the CPU chip. The problem with that is the chip would become so large as to be impossible to manufacture in commercial volumes. So, we continue with memory modules mounted on a motherboard some distance from the CPU chip.
A few years ago, microprocessors consumed as much power as a 100-watt light bulb. Getting rid of that heat was the problem of the time. At about that time, the mainstream DRAM chip was 1Gb. It took 32 of those chips to get the standard 4GB used in PCs. Also at that time, the mainstream NAND memory was about 32Gb. It would have taken 32 of those chips to make a 128GB solid state drive, and the cost was astronomical.
So, we have continued with the PC structure of today: a separate CPU in a package, DRAM memory modules external to the CPU, and mass storage still in power-hungry, fragile hard disk drives. But we are nearing a tipping point.
Today we find that the Intel Haswell CPU chip only consumes about five watts of power. For the same number of transistors, the CPU chip would be 1/8 the size of the chip of, say, 2007. The chip didn't actually shrink that much since the smaller transistors allowed for many more of them in the name of increased function and performance. For example, the Ivy Bridge didn't shrink as much as it could have, moving to 22nm, because Intel added over 400 million transistors to the graphics section.
DRAM memory chips have gone through the same shrinking process and now contain 4Gb on the same size and cost chip of the 1Gb chip of 2007. Now it takes eight chips to make that 4GB of PC memory. NAND, of course, has not been left behind. The density is up to 128Gb (yes, that is 128 billion bits), so that a 128GB solid state drive can be built from 8 chips.
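The chip counts above are just bits-to-bytes conversion (8 bits per byte). A quick sketch, using the densities quoted in this article:

```python
# Chips needed for a given capacity:
#   capacity (GB) * 8 bits/byte / chip density (Gb per chip)
# Densities are the figures quoted in the article.

def chips_needed(target_gigabytes: int, chip_gigabits: int) -> int:
    """Number of memory chips needed to reach target_gigabytes of capacity."""
    return (target_gigabytes * 8) // chip_gigabits

print(chips_needed(4, 1))      # 2007-era 1Gb DRAM chips for 4GB -> 32
print(chips_needed(4, 4))      # today's 4Gb DRAM chips for 4GB -> 8
print(chips_needed(128, 128))  # 128Gb NAND chips for a 128GB SSD -> 8
```
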
Solid state drives have become a reality. Most of the Apple (NASDAQ:AAPL) Mac product line uses SSDs as standard equipment. The benefits of SSDs are many: low power, light weight, fast boot, high reliability (no head crashes), and much, much higher performance.
Okay, so far we have the progress in low power x86 processors (code-named Haswell), we have the predictable reduction in size and cost of both DRAM and NAND memory, and we have the acceptance and volume use of solid state drives. What else is required for Intel to capture the semiconductor revenue involved in the DRAM and SSD? The obvious barrier is that both the DRAM and SSD would have to reside in the CPU package in order to achieve the performance and power benefits discussed above. All of the memory devices laid out in chip form would be nearly 4 sq. in. of silicon, far too big for inclusion in the CPU package.
The answer to the problem is what is called chip stacking. It sounds easy enough, but in reality chip stacking is a difficult technology. The leading methodology is TSV (Through Silicon Via), in which the chips are designed with holes in precisely the same places for vertical alignment and interconnection. With today's technology, stacking eight NAND chips would produce a 128GB solid state drive in the footprint of a single NAND chip, or about .3 sq. in. In the case of DRAM, an eight-chip stack would result in a 4GB DRAM module with a footprint of about .1 sq. in. The Haswell processor itself would require another .3 sq. in., so the total area required for processor, SSD, and DRAM would be .7 sq. in. Many CPU packages are 1 sq. in. or more in area, so chip stacking would make it possible for Intel to make all the major silicon components of a PC and package them in the CPU package, safely hidden away from the crazy competition that has characterized the memory industry throughout its history.
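The footprint budget above is simple addition; a minimal check using the article's approximate square-inch figures:

```python
# Package-area budget (sq. in.), using the article's approximate figures.
nand_stack = 0.3  # eight TSV-stacked NAND chips: a 128GB SSD in one chip's footprint
dram_stack = 0.1  # eight TSV-stacked DRAM chips: a 4GB module footprint
cpu_die = 0.3     # Haswell processor die

total = nand_stack + dram_stack + cpu_die
print(round(total, 2))  # 0.7 sq. in.
print(total < 1.0)      # True: fits in a typical 1 sq. in. CPU package
```
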
In the real world, an eight-layer IC stack is a big problem. However, both DRAM and NAND are likely to go through one more shrink, which would double the density and bring both stacks down to four levels. Four layers would be much easier to build, would have higher yield, and would therefore be less expensive. Hynix (OTC:HXSCF) has already introduced an 8Gb DRAM chip, and anything Hynix can do, Intel can do better. NAND technology will also extend to a 256Gb chip that would enable a four-chip-stack 128GB SSD the size of my thumbnail.
So the predictable advance of technology has moved Intel re-entering the memory business from "no way" to "OK, I can begin to see the business rationale from here." The trip from a technology "no way" to "OK" has taken about six years of evolutionary improvement.
The key here is the TSV technology. So, where does Intel stand on TSV technology? Twenty-one months ago, Intel demonstrated a Hybrid Memory Cube that was a joint development of Intel and Micron (NASDAQ:MU) and based on TSV chip stacking. The HMC is a commercial product. To demonstrate the power of chip stacking, the HMC exhibits 15 times the bandwidth (speed) of current DRAM modules, thus smashing through the "memory wall", and it does this at 30% lower power per bit.
So, all of the technology and pieces exist today for Intel to produce a "Compute Module" that would triple the Intel dollar take on each PC.
The big question is, "Can they do it profitably?"
The in-package 128GB SSD could add no more than $100 to the PC price. Using depreciated 22nm fabs, an Intel 300mm wafer would cost $2000 to produce and have to sell for $5000 in order to maintain the 60% gross margins that we have come to expect from Intel. With the last shrink of NAND, mentioned above, the total silicon area of a 128GB SSD (four 256Gb chips) would be about 1.25 sq. in. About 70 such SSDs could be made from one 300mm wafer. 70 times $100 equals $7000 per wafer, far more than the $5000 target, thus leaving room for added packaging cost and yield loss on a difficult technology.
The DRAM stack would require about .37 sq. in of silicon. Given the huge improvement in power and performance, that integrated memory stack should be worth $35 vs. the $25 of memory modules of today. One wafer should produce about 230 of those memory stacks for a wafer revenue of $8000, again far above the $5000 target for 60% gross margins.
So, the technology has progressed to the point of feasibility, and it looks like Intel could make serious money on the memory demand that is driven by its processors.
So where is the capacity? Intel is currently operating nine wafer fab facilities and is running at less than 50% of capacity. The nine fabs do not include the two huge fabs that are still under construction and are targeted for the 14nm transition and 450mm wafers. Later this year, Fab 42 and D1X will ramp 14nm technology. Those fabs are so large that they could handle all of the PC, server, and mobile SoCs that will be used on earth for the foreseeable future. When those new fabs come on line, Intel will have about 6 million wafers' worth of unutilized, depreciated fab capacity. That is almost exactly the capacity required to manufacture the amount of memory discussed above.
Of course, I can't claim that this is the future Intel business plan, but all the technology and profit pieces are in place if it should make the decision to re-enter a captive memory business.
It is ironic that the original Intel business strategy, which was to use the proprietary x86 microprocessor to "persuade" customers to give it the high volume memory business in a package deal, could return. This time there would be no persuading; the processor would simply come with the memory "installed," kind of like an engine and transmission "installed" in a new car.
When I can see a full business strategy cycle over 40 years, and it makes sense, I know I am getting too old.
Disclosure: I am long INTC, MU. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.