
Intel (NASDAQ:INTC) and Micron (NASDAQ:MU) have come together to create the future of exascale computing. In 2012, Micron put Scott Graham in charge of its Hybrid Memory Cube technology. At the MemCon 2012 conference, Graham delivered a keynote on the technology, helping investors understand how it redefines traditional DRAM memory architecture.

When I look at my smartphone and navigate through the home screens, I want everything to be faster. I suspect that is a wish shared by just about everyone. The need for speed is driven by hordes of speed-hungry applications, systems and mobile computing devices. Hybrid Memory Cube (HMC) is a technology that dramatically increases memory performance and reduces operating costs. HMC delivers performance improvements far ahead of contemporary memory technology: a single HMC delivers roughly fifteen times the speed of a traditional DDR3 module, and it needs less board space, fewer active signals, fewer active pins and less operating power, as Graham showed at the MemCon conference cited above. That translates into lower costs.

My assumption is that the growth of mobile devices will fuel demand for DRAM shipments. According to research firm iSuppli, mobile handsets and tablets are expected to account for 26% of the DRAM market. DRAM shipments are projected to rise to more than 300 billion by 2019 (more on that figure below). In 2013, this market will effectively become an oligopoly dominated by Micron, Hynix and Samsung (OTC:SSNLF).

What are the growth drivers for Hybrid Memory Cube technology?

Speed is just one of the highlights of HMC. The technology is also ruthlessly efficient with energy: HMC's per-bit energy consumption is about 70% lower than that of DDR3 DRAM. Another aspect that contributes to its appeal is its form factor. The per-bit density is higher, which allows more memory inside a machine without increasing the physical size of the memory; as a result, HMC requires roughly 90% less physical space than RDIMMs. (A short sketch after the list below works these ratios through against a DDR3 baseline.)

  • Reduced latency - The HMC's logic layer responds very quickly, work queues drain faster, and more banks are available at any given time, all of which cuts effective latency.
  • Improved bandwidth - One HMC delivers the performance of roughly 15 DDR3 modules; the logic-layer interface dramatically improves signaling speed and, with it, usable bandwidth.
  • Reduced power consumption - HMC uses about 70% less energy per bit than DDR3.
  • Small and powerful - HMC requires about 90% less board space than RDIMMs.
  • Flexibility - The logic layer allows HMC to be integrated into a wide range of platforms and applications.
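To make those ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The 15x, 70% and 90% figures are Micron's claims as reported above; the DDR3 baseline values (a DDR3-1600 module at its 12.8 GB/s peak and an assumed ~20 pJ per bit) are my own reference assumptions, not Micron's numbers.

```python
# Back-of-the-envelope sketch: what Micron's headline HMC ratios imply
# against an assumed DDR3 baseline. Baseline figures are illustrative.

DDR3_BANDWIDTH_GBPS = 12.8      # peak GB/s of a DDR3-1600 module (64 bits x 1600 MT/s)
DDR3_ENERGY_PJ_PER_BIT = 20.0   # assumed ballpark energy per bit for DDR3
DDR3_BOARD_AREA = 1.0           # RDIMM footprint, normalized to 1

hmc_bandwidth = 15 * DDR3_BANDWIDTH_GBPS           # "15x a DDR3 module"
hmc_energy = DDR3_ENERGY_PJ_PER_BIT * (1 - 0.70)   # "70% less energy per bit"
hmc_area = DDR3_BOARD_AREA * (1 - 0.90)            # "90% less space than RDIMMs"

print(f"Implied HMC bandwidth:  {hmc_bandwidth:.0f} GB/s")
print(f"Implied HMC energy/bit: {hmc_energy:.0f} pJ")
print(f"Implied HMC footprint:  {hmc_area:.2f}x an RDIMM")
```

Under those assumptions the headline claims translate into roughly 190 GB/s of bandwidth, a handful of picojoules per bit, and a tenth of the board space.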

Market Potential and Competitor Landscape

Historically, as faster memory came to market, it replaced the existing, slower memory. I was just reading a decade-old article that sadly announced the demise of SDR SDRAM as it was replaced by DDR1/2/3 SDRAM sticks (DIMMs). Now that we have the new Hybrid Memory Cube technology, it remains to be seen whether it will slowly replace the aging DIMMs.

In order to get some idea of the market potential of HMC, I spent hours researching. I could find nothing on the HMCC page, nothing in Micron's IR pages or earnings calls, and no published article or study on the subject. Finally I called Micron's IR and spoke to a very helpful person who promised to talk to Scott Graham and send me material. He sent me the following graphics from a study by Yole, and told me it "has been cited at several industry presentations by other 3D IC speakers as reference for the size of the opportunity, and Micron concurs with the data" (via email).

So here's Image 1:


As we can see here, the 3D TSV penetration rate (HMC is a 3D TSV device) will be only 1% this year, but will reach roughly 9% by 2017 - about $38 billion out of a total semiconductor market of $445 billion.

Next Image:


This gives a breakdown by industry. As you can see, logic system-in-package (SiP) and system-on-chip (SoC) products are slated to grow exponentially in the 3D TSV area. This is a huge investment opportunity.

Meanwhile, to get an indirect idea of the market potential for HMC, let us look at the product it is going to replace - the DRAM. According to a Forbes article by Trefis, "DRAM shipments (will increase) from 29.3 billion in 2012 to 334 billion by 2019." A major chunk of this will be driven by mobile, where faster, smaller, more efficient memory is at a premium. This, broadly, is what is at stake, and it is what investors betting on this sector should watch.

In the global DRAM market, Micron's HMC competes with Hynix and Samsung. Micron's main focus has always been DRAM chips, although the company also manufactures NAND flash. In Q1 this year, Micron grew revenue by 13% in DRAM and 6% in NAND; during the same period last year, revenue had declined 8%. It looks like there is an upward trend in the memory market this year.

Samsung and Micron have locked horns this year over TLC NAND chips. Hynix has reportedly posted a profit and is on Micron's heels. Micron's recent purchase of Elpida Memory gives it additional economies of scale.

So what are the risks for investors in this technology? I would say there are very few. Of course, the potential of HMC will depend on sales of tablets, smartphones and ultrabooks. However, this is a technology in which two tech giants, Intel and Micron, have both invested. My research shows how much better it can be than existing volatile memory. Moreover, demand from the mobile segment is on the rise, so I think this is a very big investment opportunity.

How did HMC come about? The background

In 2007, the Exaflop Feasibility Study Group identified a need based on its research, and a question was raised as part of that research: how can a high performance computing system perform a quintillion floating point operations (FLOPs) per second within a practical power budget?

Simply scaling up existing designs was not the answer, because such an exaflop machine would need more than 1.5 gigawatts of power - roughly 0.1 percent of the capacity of the entire U.S. power grid. Power consumption on that scale made the brute-force option a non-starter.

The research group's next task was determining total power requirements based on the energy required to execute a FLOP. About 70 picojoules of energy were needed per FLOP, and research scientists and engineers felt they could pare that down to 5 to 10 picojoules. But the energy needed to move data from memory to the execution unit and back again was enormous - on the order of 1,000 picojoules per operation. Data movement, not computation, was the problem, and HMC emerged as a solution.
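A quick back-of-the-envelope check of those figures (my own arithmetic, using the numbers quoted above; an exaflop is 10^18 operations per second) shows why data movement, not arithmetic, is the wall:

```python
# Rough sanity check on the exascale power figures cited above.
OPS_PER_SECOND = 1e18                # one exaflop

ENERGY_PER_FLOP_J = 70e-12           # ~70 pJ per FLOP, as cited
ENERGY_PER_FLOP_TARGET_J = 7.5e-12   # midpoint of the 5-10 pJ engineering target
ENERGY_PER_DATA_MOVE_J = 1000e-12    # ~1,000 pJ to move the data per operation

print(f"Compute power at 70 pJ/FLOP:     {OPS_PER_SECOND * ENERGY_PER_FLOP_J / 1e6:.0f} MW")
print(f"Compute power at 5-10 pJ/FLOP:   {OPS_PER_SECOND * ENERGY_PER_FLOP_TARGET_J / 1e6:.1f} MW")
print(f"Data-movement power at 1,000 pJ: {OPS_PER_SECOND * ENERGY_PER_DATA_MOVE_J / 1e9:.0f} GW")
```

Even if the arithmetic itself is trimmed to a few megawatts, moving the operands costs on the order of a gigawatt, which is the scale the study warned about.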

So what is the Hybrid Memory Cube?

Specially engineered DRAMs constitute the fundamental building blocks of HMC. Each DRAM die can store 1 gigabit of data. These dies are stacked one on top of another over a logic interface device that functions as a common communication tier. The DRAM dies connect to this logic device using through-silicon vias (TSVs) - vertical electrical connections that run straight through the silicon dies. The TSVs give the DRAM dies a dense, parallel interconnection with the logic die at the base of the stack.
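One intuition for why that parallel organization matters: requests no longer funnel through a single shared path. The toy simulation below is my own illustration (not Micron's specification or real HMC timing); it simply compares average queuing delay when the same stream of requests is served by one channel versus sixteen independently controlled partitions.

```python
# Toy model: queuing delay for one shared channel vs. many independent partitions.
import random

random.seed(1)
NUM_REQUESTS = 10_000
SERVICE_TIME = 1.0          # time units needed to service one request
WINDOW = 11_000.0           # arrival window: ~90% load on a single channel
arrivals = sorted(random.uniform(0, WINDOW) for _ in range(NUM_REQUESTS))

def average_wait(num_partitions: int) -> float:
    """Round-robin requests across independent partitions; return mean queuing delay."""
    free_at = [0.0] * num_partitions
    total_wait = 0.0
    for i, t in enumerate(arrivals):
        p = i % num_partitions           # pick a partition
        start = max(t, free_at[p])       # wait until that partition is free
        total_wait += start - t
        free_at[p] = start + SERVICE_TIME
    return total_wait / NUM_REQUESTS

print(f"1 shared channel: average wait {average_wait(1):.2f} time units")
print(f"16 partitions:    average wait {average_wait(16):.2f} time units")
```

The numbers are arbitrary; the point is that, at the same total load, many independent controllers keep queues close to empty, which is the effect behind the reduced-latency and bank-availability claims above.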


Image source: brightsideofnews.com

The vertically stacked DRAM dies sit on the logic layer at the base of the stack, and it is this logic layer that handles all DRAM activity. The structure forms a cube-like arrangement, hence the name Hybrid Memory Cube. System designers have the option of mounting the HMC near the processor or at a distance; the near-memory approach, i.e. placing the HMC close to the processor, provides the best system performance.

The stacked approach that HMC follows differs from the conventional DRAM approach, which places memory dies side by side along a module. In HMC, the memory dies are stacked vertically, so the connections between them are much shorter, and data is transferred between dies faster while using less energy.

There is another reason why HMC enables such fast data rates. It removes the logic transistors from each DRAM die and consolidates them in one location, typically at the base of the stack. This differs from conventional DRAM architecture, where each memory die has its own logic circuits, and each of those circuits consumes power and adds complexity to the I/O process.

In the HMC, a single logic die handles the circuitry for all of the stacked memory dies. This centralized approach allows data rates in the range of 320 GB per second while consuming 70% less energy per bit.
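For scale (my arithmetic, assuming a DDR3-1600 module at its 12.8 GB/s theoretical peak): fifteen such modules add up to roughly 15 x 12.8, or about 192 GB/s, while the 320 GB/s figure cited here is about 25 times a single module. The exact multiple depends on which DDR3 speed grade and which HMC configuration are being compared.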

Is the Hybrid Memory Cube a silver lining for Micron?

The HMC was named Product of the Year at the EE Times and EDN ACE Awards on April 24, 2013. The award recognizes a new way of designing memory architecture to support the growing demands for bandwidth, memory density and power efficiency in desktop, mobile and network computing.

Although Micron's revenue fell 14.6 percent this quarter compared with the same quarter last year, this award will surely keep the company in the reckoning, especially in the DRAM area. Micron is still awaiting early traction for its Hybrid Memory Cube technology, and that will depend on many factors. One of them is the adoption of ultrathin computing devices: Micron hopes the market will see enough new notebooks and Windows 8 smartphones to justify market-focused HMC manufacturing, and it does not foresee enough demand to start full-fledged production of HMC right now.

The Future of Hybrid Memory Cube technology

HMC could soon form the backbone of network systems, and network system performance is set to transform for the better in the coming decade. With the onset of new-generation 100G and 400G network infrastructure, HMC will become a necessity. HMC will also find its way into supercomputers and other high performance computing systems and devices.

If DDR4 is evolutionary, HMC is revolutionary. It does not merely improve on contemporary memory architectures; it has the potential to cause a complete paradigm shift. It is a genuine attempt to redefine memory technology with an entirely new standard for memory architecture.

Source: Hybrid Memory Cube: Making Super-Fast Computing A Reality