Seeking Alpha

The new Advanced Micro Devices (NYSE:AMD) is truly Geeks Gone Wild. AMD has spent the last 7 years and billions in R&D reinventing itself, and the new AMD is going after a whole new market: the Computer Fabric market. AMD used to compete with Intel (NASDAQ:INTC), making chips to put in boxes, but AMD has realized that Computer Fabric is the future and chips in boxes are the past.

HSA and APUs and Freedom Fabric, Oh My!

AMD founded the Heterogeneous System Architecture (HSA) Foundation, invented the Accelerated Processing Unit (APU), and bought SeaMicro to sell their Freedom Fabric. What does it all mean?

Single-chip (technically 'single-threaded') performance doubled roughly every 18 months through the 1980s and 1990s, but the pace has dropped off substantially since around 2006, as can be seen in the below figure.

The HSA Foundation was founded by AMD and consists of Qualcomm (NASDAQ:QCOM), Samsung (OTC:SSNLF), ARM Holdings (NASDAQ:ARMH), Oracle (NYSE:ORCL), Texas Instruments (NASDAQ:TXN) and many others. Heterogeneous architecture is their answer to this problem. AMD has cleverly elected to not play this game of diminishing returns with Intel anymore. Instead of designing bigger and faster general-purpose chips, they are designing efficient ways to combine chips in a fabric. The idea is to scale out instead of up, since Moore's law is coming to an end and transistors are becoming cheaper-but-not-smaller.

Heterogeneous System Architecture is about allowing different, specialized chips to talk to each other, while Freedom Fabric is a technology which allows a lot of those chips to talk to each other. Intel is still in the '90s mindset of shrinking transistors fast enough that everybody else's chips are obsolete by the time they hit shelves -- which is why it is hitting a brick wall as transistor size reduction rapidly comes to an end. AMD, on the other hand, is no longer in the 'chip in a box' business; AMD is in the Computer Fabric™ business. Their architecture, roadmap and business model no longer revolve around increasing the performance of individual general-purpose chips, but instead around Scalable Silicon™: combining different types of specialized chips to increase silicon efficiency and to lower TCO for their customers.

AMD is deeply invested in their new vision, both financially and strategically. They acquired SeaMicro for $334m, and I estimate they have spent at least $6bn in the last 7 years researching and marketing HSA. To get a sense of where they are focusing their efforts -- of the 39 universities AMD has research partnerships with, by my count 90% are HSA-related.

And for good reason. Today, as more computing is offloaded to hyperscale data centers and general-purpose architecture is reaching its limits, it is finally becoming economical for large customers to identify specialized architectures -- rather than general-purpose architectures -- which will speed up different business needs while lowering TCO. When a large purchaser like Facebook (NASDAQ:FB) wants silicon with specialized function, AMD will simply integrate those extra components into their fabric and ship as many yards as Facebook wants.

AMD's first fabric products are hitting the market in 2014. The fabric consists of two parts: the compute nodes, and the thread connecting them together. The thread comes from their purchase of SeaMicro and is called Freedom Fabric, a technology for packing compute nodes into a dense, power-efficient package.

Previously this was difficult because the technology for threading nodes together was either expensive, or proprietary, or slow, or all three. Freedom Fabric is fast, cheap and open. Freedom Fabric is so fast and so cheap that you can thread nodes together and have performance scale almost linearly with the number of nodes -- the holy grail of scaling.
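The near-linear scaling claim can be made concrete with a toy model. This is my own illustration -- the overhead numbers are assumptions, not SeaMicro figures -- of how interconnect cost decides whether adding nodes keeps paying off:

```python
# Toy model (illustrative assumptions, not SeaMicro specifications):
# aggregate throughput when every node pays a small communication
# "tax" for each peer it must coordinate with over the fabric.

def scale_out(nodes, per_node=1.0, fabric_tax=0.001):
    """Effective cluster throughput under a simple contention model."""
    efficiency = max(0.0, 1.0 - fabric_tax * (nodes - 1))
    return nodes * per_node * efficiency

# A cheap, fast fabric (~0.1% tax per peer) keeps 64 nodes near-linear,
# while an expensive interconnect (~2% tax per peer) squanders them.
cheap = scale_out(64, fabric_tax=0.001)   # close to a perfect 64x
pricey = scale_out(64, fabric_tax=0.02)   # scaling has collapsed
```

Under these assumed numbers, the cheap fabric delivers roughly 60x out of a theoretical 64x, which is the "almost linear" behavior described above; the expensive one delivers effectively nothing.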

This new fabric needs compute nodes to fill in the gaps. Technically any architecture can drop in for the compute nodes, but the geekiest is a new architecture developed by AMD called an APU, or 'Accelerated Processing Unit'. The APU is a general-purpose processing unit (aka CPU) fused with a Vector Arithmetic Unit (aka GPU).
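To make the CPU/GPU split concrete, here is a minimal sketch in pure Python (my illustration, not AMD code) of the same computation written scalar-style, the way a CPU core steps through it, and vector-style, the way a GPU conceptually applies one operation across all lanes at once:

```python
# SAXPY (a*x + y), a classic vector-arithmetic kernel.
# Pure-Python illustration; a real GPU runs the vector form
# across thousands of hardware lanes in parallel.

def saxpy_scalar(a, x, y):
    # CPU-style: one element at a time, in sequence.
    out = []
    for i in range(len(x)):
        out.append(a * x[i] + y[i])
    return out

def saxpy_vector(a, x, y):
    # GPU-style: conceptually a single operation over whole vectors.
    return [a * xi + yi for xi, yi in zip(x, y)]
```

Both produce identical results; the point of the APU is that the vector form runs on the fused GPU lanes without a trip across a slow external bus.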

Wait a minute. Vector Arithmetic is like, all that physics and science stuff. Blegh!

That's right! Vector Arithmetic, or 'Linear Algebra', is the language of physics, and indeed of all of science. And that is why AMD's geeks have finally, truly gone wild. See, since Vector Arithmetic is the language that scientists speak, it is also the language for speech recognition, face recognition, ad-serving, data mining, virtual reality, autonomous robots, self driving cars -- and everything else that makes up the interface between computers and humans! Vector Arithmetic IS the interface.

AMD cut their teeth making games look realistic, which is why the newest animations coming out for the PlayStation 4 and Xbox One look so real it is stunning. And now, AMD is going for the whole pie, not just gaming.

Today, APUs are just powering the Xbox One's touch-free UI and virtual reality headsets. Tomorrow, AMD's APUs will be powering Google (NASDAQ:GOOG), Facebook and Amazon (NASDAQ:AMZN), not to mention other enterprises like Walmart (NYSE:WMT) and Chevron (NYSE:CVX). I'm not making a logical leap here -- take a practical area like machine learning. Machine learning implementations already exist for GPUs, and they are anywhere from one to two orders of magnitude more performant than their CPU brethren, as seen in the following graph (lower is better).


AMD's new APU is expected to deliver twice the performance at nearly a third the price of an equivalent Intel 4770K + Nvidia GT 630 pairing, so on the graph above it would fare even better. What would this mean for Google, whose success depends on their machine learning algorithms? Well, according to the internet, the top 5 most popular machine learning algorithms are: K-Means, Singular Value Decomposition, Naive Bayes Classifier, Support Vector Machine and Decision Trees.

GPU implementations already exist for four of these (which means the time to implement them for the APU will be negligible): K-Means, Singular Value Decomposition, Support Vector Machine, and Decision Trees. The last one, the Naive Bayes Classifier, has portions which are vectorizable, so the APU will still allow the computation to be accelerated. In fact, recent research (e.g. on Decision Trees) shows that a mix of CPU and GPU computation is generally faster than using either architecture exclusively -- perfect for AMD's mixed architecture!
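To see why these algorithms port so readily, consider K-Means: its dominant cost, the assignment step, is nothing but vector arithmetic -- squared distances between points and centroids. A minimal pure-Python sketch of that step (illustrative only; real GPU ports batch this across all points in parallel):

```python
# The K-Means assignment step: assign each point to its nearest
# centroid by squared Euclidean distance. Every operation here is
# elementwise vector arithmetic, which is exactly what a GPU (or
# the GPU half of an APU) accelerates.

def assign_clusters(points, centroids):
    """Return, for each point, the index of its nearest centroid."""
    labels = []
    for p in points:
        dists = [sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                 for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels
```

The update step (averaging the points in each cluster) is equally vectorizable, which is why the whole algorithm maps so cleanly onto vector hardware.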

In other words, by adopting AMD's architecture, Google will cut costs in a heartbeat.

The IT industry is roughly $2,000bn and growing at a pace of 4% a year. It is clearly not a dying industry.
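For a sense of scale, compounding the article's own figures -- a $2,000bn base growing at 4% a year -- is simple arithmetic:

```python
# Compound-growth projection using the figures quoted above
# ($2,000bn base, 4% annual growth); purely illustrative.

def project_market(market_bn=2000.0, growth=0.04, years=10):
    """Project market size under constant compound growth."""
    return market_bn * (1.0 + growth) ** years

decade = project_market()  # roughly $2,960bn after ten years
```

Even at a "mere" 4%, the pie grows by nearly half over a decade -- hardly a shrinking market.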


And for AMD, Kaveri is just the lowest hanging fruit, an HSA proof-of-concept, so to speak. As Moore's law crashes and the semiconductor industry rapidly reassembles itself, Rory Read, Andrew Feldman and the rest of the folks at AMD are just champing at the bit to ride this new trend, already with a foothold in many of these high-growth areas (cf. the PlayStation 4 selling 1 million units in 24 hours)!

So why does AMD's valuation reflect a dying company in a dying industry, rather than that of a high growth company well positioned within a high growth industry?

My answer is that AMD has simply been misapprehended by The Street. To me, AMD's vision is spot on, and they are just the right size to execute on it: not too big, not too small! I am long AMD.

Source: AMD: Geeks Gone Wild