Apple Dumping Intel In The MacBook? Don't Believe This Bogus Rumor

Apple Considering A Switch To Custom-ARM Based Cores?

So the latest juicy rumor from Bloomberg is that Apple (NASDAQ:AAPL), developer of the popular iMac and MacBook Pro lines of computers, is considering a switch from Intel-based microprocessors to its own home-grown ARM (NASDAQ:ARMH) compatible processors. Of course, as Wall Street loves bubbles and is absolutely infatuated with ARM Holdings, Intel gapped down intra-day on the release of this story, while ARM Holdings shot to a brand-new 52-week high (trading at 66x trailing earnings and 17.23x sales, reminiscent of the dot-com bubble).

Now that we're all on the same page, it's time to dissect this rumor and show that, at least in the near and medium term (and probably long term), it is likely to be bogus.

ARM And Intel: The Dirty Secret Nobody Wants You To Know

It seems that the general investing public believes that because Apple was able to design the ARM-based "Swift" core for its A6 system-on-chip, the next logical step would, of course, be to design a super-high-performance, ultra-high-efficiency ARM-compatible chip. Right?

Well, not so fast. It's important to understand that there is a world of difference between designing a high performance microprocessor and designing a strictly low power, low performance one. Intel has years of experience inventing power efficient techniques for high end microprocessors, and needs only to bring them into lower thermal envelopes (as it is doing with Haswell, which has been demonstrated burning only 8W). ARM - and its licensees - need to devise fundamentally new techniques to achieve high performance while keeping power in check.

If Apple were to design its own ARM-based processor for the Macs, it would need to offer a tangible performance and performance/watt advantage over Intel's parts in order to justify moving to a brand new software ecosystem (with recompiles, translators, emulators, and so on - significantly hindering the user experience). This is not likely, as Intel has many years and many billions of dollars of R&D invested in figuring out how to do efficient, high performance processing.

For example, Intel showed its latest "Haswell" processors working in an 8W thermal envelope (that's measured, not theoretical) at the latest IDF. Note that this is a full, high-performance processor that includes very high performance cores and strong integrated graphics. So, what's the dirty little secret? ARM's latest Cortex A15 - which is tangibly slower than the very lowest-end 1.3GHz, soon-to-be-two-generations-old Intel "Sandy Bridge" based Celeron 867 - consumes 8.32W on average in the latest Samsung "Chromebook." The tests were run on the Chromebook with the display turned off, so display power does not skew the results.

Stop and think about this for a second. ARM's flagship, mobile-focused core that is supposed to take the server and notebook spaces by storm, consumes as much power as Intel's upcoming flagship Ultrabook-focused "Core" part.

The Intel advantage here comes from a combination of years of experience designing high performance, power efficient micro-architectures and a significant process technology lead, both of which reflect Intel's heavy R&D spending over the years (spending for which Wall Street gives Intel so much flak).

Apple: Lots Of Cash, But Shrewd With It

Now, the thing to note is that Apple isn't ARM. It actually makes billions of dollars per quarter and can afford to up its R&D spending should it choose to try to be competitive in the high performance CPU space. However, this would be a substantial commitment, and the return on investment would need to justify the heavy R&D that high performance CPU architectures demand now that the "low hanging fruit" has all but been picked.

After several generations of trying, Apple may actually be able to produce something comparable to Intel's contemporary chips on the micro-architecture side, but it will cost a lot of money. The next hurdle, of course, is fabs.

Fabs, Fabs, Fabulous Fabs!

Intel's dark horse in the CPU race is its fabrication technology. In terms of high performance processes (the kind that MacBook-level chips are built on, not low power), Intel has a substantial lead. While other foundries are still struggling to get 32nm and 28nm chips out the door, Intel has ramped the majority of its production to its new 22nm tri-gate process.

Intel will have a true 14nm process available in 2014 for both its low power SoC and high performance processors, while competitors such as Global Foundries (which still hasn't managed to ship any viable 28nm parts) are claiming that they will have a "14nm-like" process available in 2014 that puts 14nm FinFETs on top of what is essentially a 20nm process. Do not believe this for a second, though - the move to FinFETs is extremely difficult, and if Global Foundries cannot get a decent 28nm planar process shipping by the end of 2012, then its 14nm FinFET aspirations are purely a pipe dream.

Taiwan Semiconductor (NYSE:TSM), the industry's major legitimate foundry, has been shipping 28nm since late 2011/early 2012 and has a much more sensible and rational roadmap (it has made a mint for its shareholders under the excellent leadership of CEO Morris Chang).

At the most recent earnings call, investors received more clarity on when to expect 16nm (TSMC's equivalent to 14nm - these are just naming differences for the same thing).

So "risk production" for 16nm FinFET happens in November 2013, and "mass production" starts about a year later - so late 2014 or early 2015, assuming no major delays (a fairly major assumption). However, Taiwan Semi is pulling the same trick at 16nm that Global Foundries is at 14nm: marrying the new transistor structures to what is essentially a 20nm back-end process. This is a time-to-market move.

Intel, however, will be able to bring a "true" 14nm process to market in mass shipments in the middle of 2014 with its "Broadwell" processor, likely followed by its "Airmont" Atom chip at the end of 2014.

Apple's Tough Choice, Then

So, if Apple wants to move to a custom-designed ARM core for its iMac, MacBook Pro, and Mac Pro lines, then it will need to develop or acquire the chops to outgun Intel in the highest performance segment of the processor world. Unfortunately, no other company currently offers chips at the high end that come anywhere close to Intel's in terms of performance and performance/watt, so it would be hard to simply "buy" into that industry.

And even assuming that, after several generations, Apple is able to develop a core competitive with Intel's latest-and-greatest, it still needs a reliable foundry partner on the cutting edge of process technology (after all, the better the process, the more aggressive the design teams can be). It is unlikely that any of the major foundries will catch up to Intel on the technology side, but the more troubling question is whether a partner could reliably supply high volumes. TSMC can barely supply enough 28nm chips to its existing customers, so it is tough to imagine TSMC kicking them all out for Apple (which has a nasty habit of squeezing its suppliers' margins).

Finally - and this is the bit that people really seem to forget - there's the little issue of software compatibility. Everything on MacOS is compiled and optimized to run on the x86 instruction set. Moving to an ARM-based chip would mean recompiling everything. This could be done, but it would be a pain to make sure everything is squared away.

History: Why Apple Moved To x86

The main reason Apple moved to x86 was that the PowerPC chips at the time simply couldn't compete. Period. Intel's chips were faster and more power efficient, and backed by a company whose job was solely to design CPUs (further driving home that "RISC vs. CISC" means nothing - PowerPC was a pure "RISC" machine). The roadmap for the PowerPC products was simply uninspiring, and Intel's lead in both micro-architecture and process technology was clear.

There is no such compulsion to move away from Intel here. In notebook and desktop chip spaces, Intel is far and away the leader in power efficient CPU performance, and in graphics it is improving dramatically each generation (likely at the request of Apple).

Now, the rumor piece noted that Apple wants tighter integration between its iOS and MacOS products, since an iOS app won't run on an x86 Mac and vice versa. The nice thing, though, is that iOS apps are developed exclusively on x86 Macs. Here's why that is such a big deal:

When iOS app developers need to test their code, they run the app on a "simulated" iPhone/iPad on their Macs. But this simulator is NOT an emulator - it actually compiles the code down to x86 and runs it natively on the machine! This means that porting iOS apps to x86 would be extremely easy, should Apple move iOS to x86.

The real question is whether Apple will move iOS to x86! Now, Apple does have an in-house semiconductor design team, and it seems unlikely that Apple would want to fire all of those folks just to buy mobile chips from Intel. So here's what's really likely to happen:

Windows RT and Windows 8 from Microsoft (NASDAQ:MSFT) are fundamentally incompatible - one runs the ARM instruction set and the other Intel's x86. However, the way to make apps that run seamlessly on both is to have non-performance-critical programs (i.e., most iOS-style apps) written for a runtime (Windows RT = Windows Run Time) that executes independently of the processor's instruction set, while performance-sensitive code (such as movie editing software) stays written for MacOS in a native language (as it is now).
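The runtime-language idea can be sketched in miniature: the app logic lives in portable bytecode, and only a small interpreter has to be recompiled for each instruction set. The opcodes and program below are hypothetical, purely for illustration - they are not Apple's or Microsoft's actual runtime.

```python
# A minimal sketch of an ISA-independent runtime: app logic is expressed as
# portable bytecode for a tiny stack machine, and only the interpreter below
# would need to be compiled natively for x86 or ARM. The instruction set
# here is invented for illustration.

def run(program):
    """Execute a list of (opcode, argument) pairs on a stack machine.

    The bytecode itself never changes, whatever architecture the
    interpreter happens to be running on.
    """
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# Compute (2 + 3) * 4 - the same bytecode would run unmodified on any host.
bytecode = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
            ("PUSH", 4), ("MUL", None)]
print(run(bytecode))  # → 20
```

This is exactly the trade-off the paragraph above describes: everyday apps pay a modest interpretation (or just-in-time compilation) cost in exchange for running on any chip, while performance-critical software stays natively compiled.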

The rumor that both iOS and MacOS need to run on ARM-compatible chips in order to unify the ecosystem is yet another example of FUD - don't buy it.

Conclusion - Good Luck, Apple

If Apple decides to try to become even more vertically integrated by designing its own high performance ARM microprocessor for the Mac products, then it will need a lot of R&D expenditure and luck. It is a major business risk to migrate away from x86 in an environment that is so dependent on legacy applications, and an even further risk to try to go head-to-head with the world's largest and most competent chipmaker on its own turf.

Apple has a lot of money that it could use to make this work, but it's unclear as to whether this would be the best use of shareholder cash. It would still need to pay ARM its license fee for the instruction set, and the fab would still need to get its cut (and given that none of the fabs actually need Apple's business, it is unlikely that they will give Apple any special deals). It is unclear how much of a gross margin savings Apple would get from such a move.

As for bringing iOS and MacOS together, there is an easier way to unify the ecosystems without switching instruction sets entirely and forcing recompiles of all existing applications. And there's no guarantee that an Apple-designed high performance CPU architecture would be good enough - or profitable enough - to justify the ever-increasing R&D costs of staying on the cutting edge of CPU design.

In any case, always be skeptical of these "rumors" from "unconfirmed" and "anonymous" sources - anybody can make up anything they want, and it is unwise to trade on fiction.

Disclosure: I am long INTC. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
