Today, an interesting article titled "Chips Replace Boxes In The Data Center" by Dana Blankenhorn popped up. The author is quite bearish on Intel's (INTC) server prospects and argues that in today's modern server environment, the "box" doesn't define a data center server, but in fact, the chip does. Further, Mr. Blankenhorn argues the following in favor of ARM (ARMH) against Intel:
For data center customers, it's much more convenient to turn ARMH software into custom hardware, on your schedule, than be stuck with whatever capabilities a corresponding INTC chip may have.
Unfortunately, this is not quite the reality of the situation. I believe that it's time to go back to basics and understand exactly what ARM is and what ARM isn't. Further, it is key to understand why, in the server space, Intel is not at such a perilous disadvantage as the author of the piece - and likely many others - believe.
What Is ARM?
This is the million-dollar question. What is ARM? It's not magic. It's not the fairy-unicorn goddess of performance per watt. It is what is called an instruction set architecture, commonly abbreviated as ISA. What this means is that a group of computer architects sits down and figures out what "commands" a particular class of CPUs will understand, how those CPUs will interact with memory, what data types they can operate on, and so forth. It's the native tongue of a computer processor.
Now, ARM also provides what are called micro-architectures. While an instruction set architecture defines what the processor understands, the micro-architecture defines how the machine itself operates. There can be many different micro-architectures that all implement the same instruction set architecture. ARM provides reference micro-architectures to its licensees, but many choose to design their own micro-architectures that implement the ARM instruction set.
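To make the ISA/micro-architecture distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration - the three-command "instruction set" and both "machines" are toys, not real ARM or x86 designs. The point is that two machines with completely different internals can speak the same instruction set, which is exactly the relationship between the ARM ISA and the many micro-architectures (reference or custom) that implement it.

```python
# A toy "instruction set": the commands every conforming machine must understand.
# (Purely illustrative -- not real ARM or x86 instructions.)
PROGRAM = [
    ("MOV", "r0", 5),      # r0 = 5
    ("MOV", "r1", 7),      # r1 = 7
    ("ADD", "r0", "r1"),   # r0 = r0 + r1
]

def simple_machine(program):
    """One 'micro-architecture': a straightforward one-at-a-time interpreter."""
    regs = {}
    for op, dst, src in program:
        val = regs[src] if isinstance(src, str) else src
        if op == "MOV":
            regs[dst] = val
        elif op == "ADD":
            regs[dst] += val
    return regs

def folding_machine(program):
    """A different 'micro-architecture': it pre-loads all constant MOVs in one
    pass, then executes the rest. Different internals, identical results."""
    # First pass: fold every MOV of a constant directly into the register file.
    regs = {dst: src for op, dst, src in program
            if op == "MOV" and not isinstance(src, str)}
    # Second pass: execute the remaining instructions.
    for op, dst, src in program:
        if op == "MOV" and isinstance(src, str):
            regs[dst] = regs[src]
        elif op == "ADD":
            regs[dst] += regs[src] if isinstance(src, str) else src
    return regs

# Both machines "speak" the same ISA, so software written against that ISA
# runs unchanged on either one -- and produces the same answer.
print(simple_machine(PROGRAM))   # {'r0': 12, 'r1': 7}
print(folding_machine(PROGRAM))  # {'r0': 12, 'r1': 7}
```

The software (the program) only cares about the instruction set; how cleverly or efficiently each machine executes it is where the real performance and power differences come from - which is the subject of the next section.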
ARM: No Inherent Performance/Watt Advantage
Now, the next thing that seems to be really hyping up Wall Street is the notion of ARM in servers. So, here's the scoop. It seems that for a number of workloads, it is not necessary to have particularly strong CPU cores, but it is important to have a lot of them at the same time. Using many "weaker" cores rather than fewer stronger ones may have some power efficiency benefits for these workloads, hence the market opportunity is created.
Fundamentally, the ARM instruction set has no material advantages on a performance-per-watt level. Historically, however, ARM and its licensees have developed micro-architectures that have been targeted towards lower power consumption. This isn't a "limitation" of x86; it's a set of design choices made for a particular processor. As Intel showed with its latest Atom (the Z2460), x86 can be very competitive with the very best that ARM and its licensees can put out. In fact, according to AnandTech in its latest review of the iPhone 5 (in which Apple (AAPL) debuted its own custom ARM micro-architecture):
At least based on this data, it looks like Intel is the closest to offering a real competitor to Apple's own platform from a power efficiency standpoint.
So, when Intel designs a low-power-oriented micro-architecture on x86, it's competitive (to put it kindly) with the ARM-based processors. There's no "magic" to the ARM instruction set that makes it more power efficient - it's all about the actual CPU micro-architecture and the transistor technology.
The Dirty Little Secret About The ARM-y
The real reason that many semiconductor designers, including Nvidia (NVDA), Qualcomm (QCOM), Applied Micro (AMCC), and Calxeda are so gung-ho about ARM has very little to do with the inherent technical features of the ARM instruction set. These companies will go out of their way to say how "outdated" x86 is, and how they're starting with a "clean slate" by using ARM, and so on, but that's not at all the real reason for the bullishness.
The truth is, Intel won't let just anybody have an x86 license, and it certainly won't grant access to its hoard of patents and technology to just anybody (only AMD (AMD) gets that right). Nvidia tried to force its way into a license via lawsuits, but failed. However, Nvidia and others got a break with ARM.
See, the great thing about the ARM instruction set is that it has been around for a long time. This has given rise to the software support (compilers, operating systems, profilers, etc.) that is needed for an instruction set to be successful. With the smartphone revolution, and as transistors shrank significantly, ARM processors became "fast enough" to be used in real computing devices, so mainstream platforms such as Google's (GOOG) Android and Apple's iOS were building enormous ARM-compatible software bases. This was finally a viable competitive challenge to Intel and its x86 dominance in the client space.
However, the server space is quite different. The software here is designed for x86-64, and much of it is meticulous, hand-optimized code that has been maintained and updated over many years. All the neat tricks to squeeze every last ounce of juice from these Intel and AMD processors are there in the code. So switching from x86 to ARM on an instruction set level would be extremely painful, especially for mission-critical data centers.
Now, if there were a real reason to do the switch, then it could be viable. The problem is, there isn't. On the most recent conference call, when addressing the threat of ARM competitors in the micro-server space, CEO Paul Otellini was very blunt:
We've got our second generation of the Atom micro-server chips out now; the first one is on 32 nanometers. Now we're sampling the 22-nanometer one, and what we've decided is that we are just going to push Atom as hard as possible in this space, and have it be a better offering for our customers than having to switch all their software and worry about all the reliability features.
Now, to further add salt to the wound, there have been some very convenient "leaks" of the details of Intel's next-generation "Avoton," the 22nm Atom system-on-chip. These will feature between two and eight cores and consume only 5W to 20W depending on the particular model. It is likely that these products will be very good on a performance/watt basis, as the current 32nm Atom - based on a five-year-old micro-architecture - is still very competitive with the latest ARM designs. If this is the case, there will not be a compelling enough reason for the data centers to ditch the current code base for brand new, unproven platforms from much smaller players.
Don't buy the hype until you understand the facts. Intel is not competing with ARM. It is competing with the server system-on-chip designers such as Calxeda, Applied Micro, Nvidia, and Marvell (MRVL). While the ARM-in-servers push could be successful in the case of great innovation from these players coupled with a fumbling from Intel, the ARM guys have a big mountain to hike. Intel has the resources, process technology, and experience to corner any space of the server market that it wants.
The "Nehalem-EX" and "Westmere-EX" chips made significant inroads in the "big iron" segment of the server market traditionally dominated by IBM's (IBM) POWER and Oracle's (ORCL) SPARC. The "Nehalem-EP," "Westmere-EP," and now "Sandy Bridge-EP" chips dominate the 2-way and 4-way server and workstation segments. Is it really wise to doubt that Intel will make significant inroads in the micro-server space, especially as it pushes aggressively to develop low-power designs for the smartphone and tablet spaces?