D. Warren A. East - Chief Executive Officer, Director and Member of Disclosure Committee
Simon Segars - Executive Vice President, General Manager of Physical IP Division, General Manager of Processor Division and Executive Director
Tim Score - Chief Financial Officer, Director, Member of Compliance Committee, Member of Disclosure Committee and Member of Risk Review Committee
Francois Meunier - Morgan Stanley, Research Division
Sandeep Deshpande - JP Morgan Chase & Co, Research Division
Gunnar Plagge - Citigroup Inc, Research Division
Ambrish Srivastava - BMO Capital Markets U.S.
Nick Hyslop - RBC Capital Markets, LLC, Research Division
Gareth Jenkins - UBS Investment Bank, Research Division
Julian Robert James Yates - Investec Securities (UK), Research Division
Brett Simpson - Arete Research Services LLP
Lee J. Simpson - Jefferies & Company, Inc., Research Division
ARM Holdings, plc (ARMH) 2012 Analyst Day May 23, 2012 5:00 AM ET
D. Warren A. East
Thank you very much, everybody, for coming to our Analyst Day this year. I'm literally going to stand here for not very long and explain the context of the day this morning and what you're going to hear. You're normally used to hearing from me and from Tim and Ian, so you're going to hear less from us this morning and more from the ARM team.
We're going to kick off with Simon, who's going to be highlighting the importance and key benefits of the ARM business model, because our business model is a key differentiator for us, and that's what enables the ARM world to deliver innovation. The business model itself also delivers serious economic benefits to our partners and to their customers. He will then also look at the next 5 years or so of market evolution, some of the opportunities there for ARM and the partnership, and how we're going to bring real product to market to realize that.
Now the business model is crucial. But also the technology that underpins it is crucial. And Tom is going to talk about how ARM is so good at low power and power efficiency in particular. By that time, we'll all need a short break. And so after the break, we're going to look at a couple of areas that are contributing to growth over the next several years for ARM and that are of strategic importance for us. One, from an ARM product perspective, Pete is going to talk about our graphics offering and how we're developing the potential for leadership in graphics. And then you're going to hear from Ian, who will be talking about server design, with data centers as an entry point, and the whole significant opportunity that is both presented to and enabled by the ARM architecture being used in servers.
And then before we close, we'll have a Q&A session -- or actually, before we get to the Q&A session, Tim is going to summarize the growth opportunity and what that means through a financial lens, talking about opportunities for revenue, margin and earnings growth over the coming years.
So with that, I will hand over to Simon to kick off.
Simon Segars
Thanks, Warren, and good morning, everyone. Hope you're doing well. So as you saw from the agenda, we're going to talk about some of our technology. Tom is going to talk about our CPU roadmap, what we're doing about low power. Pete is going to talk about our graphics roadmap and what we're doing there. And then Ian is going to talk about how some of this technology comes together in a product. So before we get into all of that, though, what I'm going to do this morning is talk about ARM's approach to business, ARM's approach to working with our customers and how we go about developing some of those products.
At the heart of all of that is partnership. Partnership has been the way that we have grown this company over the last 21 years and remains our view on the right way of doing things going forward. So before we go look at the future, let's just go back a bit. ARM is 21 years old. I was fortunate enough to join the company just after we got started. And so I've witnessed firsthand the evolution of the company and the business during those 21 years. And it's been a phenomenal evolution over those 2 decades. Interestingly, we take mobile phones for granted today. And at ARM, we spend a lot of time thinking about mobile phones and where they're going, and you'll see that as a trend this morning.
But when I joined the company, actually nobody else in the company had a mobile phone. Apart from our CEO, who had this big, clunky analog thing, nobody else had a mobile phone, which is quite strange when you think about today, where there are enough mobile phones in the world for kind of everybody on the planet to have a couple. Now at that time, you could buy a mobile phone, the original Motorola phone shown there was about $4,000 to buy new at the time. So that was kind of outside the price range of most people. And fortunately over the last 21 years, these things have evolved enormously into the devices that we carry around and take for granted today.
Now computers have developed a lot as well. And these 2 things have come together: what is now barely recognizable as a mobile phone is really a mobile computer. And today, of course, we have a phenomenal amount of compute power in our pockets. And there's, I'm guessing, a huge amount of that in the room right now. So functionality has gone up, and through economies of scale, through the way process technology has evolved and through the way that ARM's business model has evolved, we've been able to deliver all of that massive increase in functionality at a much lower cost. And that's been one of the key things that's helped the industry as a whole evolve and technology evolve to what we know today.
Now the industry has evolved a lot in the time frame of ARM. And if we go back to the '70s before ARM was around and certainly before I was at work, what you saw was fully integrated companies, companies that did absolutely everything themselves. They did design, they did marketing of their products, they sold direct to consumers and they manufactured everything. They were completely vertically integrated companies. Now the problem with that was if you wanted to design anything, you had to do everything. And that's a very costly way of approaching the world.
So over that period since the '70s to today, what we've seen is the industry disaggregate. So first of all, if we look at semiconductors, you have the era of the ASIC model, where there were companies who specialized in getting your chip built, in doing the manufacturing, then handing you back a finished working device. That went into systems integrating companies. I used to work for a telecoms company that operated in this way. We did ASICs with LSI and they gave us chips, we built telephone exchanges.
But that model continued to disaggregate as well. And what we saw after that was the advent of EDA. ASIC companies couldn't afford to do all of the software development for tool design themselves; there just wasn't the economy of scale there. You needed specialized EDA companies who could amortize the cost across the entire industry. Same with IP. When ARM came along, if you wanted a microprocessor, you had to build your own. There was no other choice. So it limited the number of people who could build CPU-based chips. With the advent of ARM, we were able to design CPUs, design other IP and amortize the cost across an entire industry, which made it very cost-effective for lots of people to do design. And what that has led to is an industry where companies can specialize, companies can achieve economies of scale and deliver a much more efficient solution from an industry cost perspective.
Now that has, in turn, enabled a very sophisticated design to be done by lots and lots of different people. We have seen design costs going up along the way, for sure, and I'll come back to that. But we're seeing the number of transistors that you can put into a device go up enormously and the sophistication of the device go up enormously because of the way that this specialization has occurred across the industry.
Now the potential downside of this disaggregation is that if everybody concentrates only on what they do alone, then the whole thing actually becomes a bit inefficient. If I only worry about CPU design and I leave you to worry about putting your chip together, then it may be that I make the wrong decisions based on the next problem that you have to solve. So smart people have recognized this. And personally, I like to think of this not as a stack of people operating a supply chain, but more as a circle of companies, where the smart ones have worked out that by communicating a lot with each other, they can reduce and remove the potential inefficiencies between these slices of the disaggregated chain.
Let me give a couple of good examples. As I said, IP has been an approach to design that's been really transformational. It's allowed people to take very complex building blocks and build really sophisticated chips without having to do everything themselves. But at the end of the day, you've got to build that. The way you build it is to use EDA tools, and you need a process at the end of it so that you can manufacture the transistors. By IP companies and EDA companies and foundries working together, we can look at some of those issues upfront, so that a designer can take all of that knowledge, safe in the knowledge that when he comes to bring it all together, it's actually going to work. So through 3-way collaborations like that, we've been able to reduce some of these inefficiencies. The key to it is an approach to business that is built around partnership and openness. And that is what ARM has been doing and is what we intend to continue to do as we go forward.
So at the heart of our business model is helping the efficiency of the industry. What we do is design IP that we license to people who build chips. Those chips, in turn, get sold to OEM companies who build their own products. And every time they sell one of those, we extract a small royalty from it. Our remit is to create those designs and work very closely with our customers, work very closely across the supply chain, to understand the needs of today, the needs of tomorrow and the needs of next week, to make sure that our technology is going to be well suited for the future needs of those end products.
As I said a moment ago, in 1990, if you wanted to design a CPU-based chip, your only choice was to do everything yourself. You had to design the processor, all the software tools. If you wanted an operating system, you had to write it. And you could only amortize those costs over your own end products. Now if you have a lot of them, that's great, you can afford to do that, but very few people can. And so the IP model of providing processors, in which ARM has been enormously successful, has proven to be a very effective way of enabling lots of people to do chip designs without having to reinvent the wheel every time.
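The amortization argument here can be made concrete with a rough back-of-the-envelope sketch. All figures below (design cost, license fee, royalty rate, volumes) are hypothetical numbers chosen purely to show the shape of the economics, not ARM's actual pricing:

```python
# Hypothetical comparison: doing everything yourself versus licensing IP.
# Every number here is an illustrative assumption, not an actual ARM figure.

def cost_per_chip_inhouse(design_cost, volume):
    """Full in-house design cost amortized over one company's own shipments."""
    return design_cost / volume

def cost_per_chip_licensed(license_fee, royalty_per_chip, volume):
    """Upfront license fee amortized over shipments, plus a per-unit royalty."""
    return license_fee / volume + royalty_per_chip

# Assumed: designing a CPU, tools and OS in-house costs ~$100M,
# amortized over 5 million of your own chips.
inhouse = cost_per_chip_inhouse(design_cost=100e6, volume=5e6)

# Assumed: a $1M license fee plus a $0.10 per-chip royalty,
# over the same 5 million chips.
licensed = cost_per_chip_licensed(license_fee=1e6, royalty_per_chip=0.10, volume=5e6)

print(f"in-house: ${inhouse:.2f} per chip")
print(f"licensed: ${licensed:.2f} per chip")
```

The point of the sketch is structural rather than the specific numbers: the IP vendor spreads its one-time design cost across every licensee's volume, so the per-chip burden for any single chip designer collapses, which is the industry-wide efficiency being described.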
Now we are not alone in that. We have a very active partnership of over 900 companies. We call it the Connected Community. And part of what we do is take the profits that this business generates and invest them back into our R&D roadmap. You'll see some of that this morning. But we also invest in developing this community of companies who are going to help with the usage of ARM technology by anybody who wants to use it. Whether you're writing code for an ARM processor, trying to get your chip taped out or tested or packaged, or trying to run an operating system or applications on top, through this network of companies that we actively develop and engage with, there is somebody who can help you. And that's a really, really important thing. We work hard to ensure that no matter what you're doing around ARM, if you've got a problem, there is someone who can help you solve that problem, and do it in a way that is economically viable for them as well as for you as well as for ARM. That community is a great thing, the partnership is a great thing. But it only works if everybody can make a profit from it. We recognize that and work hard to ensure that the community around ARM is as vibrant as possible.
So this model, the way the industry has evolved, the way ARM has evolved, has helped create lots and lots of different products. And if we look at how some of these have evolved over the years, going back to 1990, as I said, there were big old analog phones which didn't have an apps processor inside them. Nobody really thought about apps. The Internet didn't really exist back then, or certainly the Web didn't. And so making phone calls was just about all you could do with them. I actually have one of those phones in my office. And it is so big, the instruction manual is actually printed on the inside of the battery. If you're wondering what to do, you take it off, remind yourself, put it back together and off you go.
Now in the first decade of ARM's existence, what we saw was phones get a lot smarter. ARM processors came into these devices, and they moved from being machines that you could make a phone call on to machines on which you could start to organize your life. And we saw the first smartphones, really feature phones with ARM processors in them, with enough processing power to run other applications to help you with your calendar and your contacts, et cetera. In the same time period, what we saw with desktop PCs was them go from being big, gray and ugly to being big, gray and ugly, pretty much doing the same thing they ever did, running the same software that they ever did, enabling you to do what you did before, without really any significant reduction in the cost of the raw materials. The CPU still cost you a couple of hundred dollars.
Now in the last 10 years, we've seen smartphones get even smarter, the number of processors integrated into those smartphones go up, and the amount of technology integrated into 1 single chip go up and up, with more connectivity and more functionality, such as graphics, that you'll hear about later on, all integrated into 1 chip, where through economies of scale, through the manufacturing industry that exists around the [indiscernible] community, you can get that chip for as low as $15. And that is not a lot of money to spend for a hell of a lot of transistors.
On the CPU side -- I'm sorry, on the desktop PC, on the PC side, things have changed a bit. Laptops are much more common now than they were 10 years ago. Still, though, you do pretty much the same things with pretty much the same old software. And whilst some ARM technology has got in there, in the disk drive and in some of the connectivity, the main CPU is still going to cost you $100, still very expensive compared to all the functionality you can get on the left-hand side of this slide for $15.
So that doesn't look particularly satisfying. Going forward, though, I think the next 5 years or so is going to be where it gets really, really interesting. We're going to see, on the left-hand side there, more and more functionality, high levels of base connectivity, high levels of graphical and video interface. And where we have a new opportunity is on the right-hand side, where ARM technology is now sufficiently powerful in terms of delivered performance that we can start seeing it in more conventional clamshell form factors and other form factor devices. With very low power consumption and the high integration of functionality into these chips that can sell for $20, this opens up, I think, a new realm of devices, new opportunities for new form factors: very low cost, no fans, very thin, very lightweight. And I think we're going to see a broadening evolution of the type of devices available, all based on ARM technology. So I think the next 5 years is going to be really interesting, as what you do with the device that you see on the right-hand side there can suddenly change, and how you interact with the Web and with services at large can change. So this is going to be really interesting.
As we look at how we've got ourselves to here, the question we then need to ask ourselves is: Are we organized the right way for success going forwards? Before I go into that, though, I'm going to look at some of the technologies that we see within ARM as key drivers of the next 5 years or so. So the key technology areas that we look at are, obviously, mobile computing. As I said, we spend a lot of time in ARM thinking about phones and tablets and thin form factor devices. But also servers. If you had a chance outside before this event started, we have a Calxeda box, which is really interesting. Connectivity, how all these devices are going to talk to each other. And the Internet of things, which may seem a bit less glamorous but personally, I think, is a really exciting technology area.
So let's look at some of these in a bit more detail. Now, mobile computing. You may think it's just more of the same, but I actually think that the way in which mobile computing is being driven has changed. It's gone from "what technology can I put into this device" to "what do users actually do with these devices." And so the usage model is driving the specification and the requirements of mobile computers much more than it ever did. And devices are changing, as I've been saying, enormously, from machines you make a phone call on to machines that you operate your life with: you can interact with the Web, you can interact with data services, some of which we're only just talking about now and which have only just started, and organize your work life and also your social life, your home life.
Increasingly, a lot of the way you're using this device is how you interact with others, how you interact in a social way, which doesn't necessarily mean goofing off at work, surfing Facebook. It means having the right data, right interactions with the right group of people based on the context that you're currently in. So it may be an event like today, blogging about what's going on at this event or the conference. It may be sharing photos of your son's broken arm, which happened to me last night, with the rest of your family whilst you're on the road. So using the device to interact with both work, with your personal life in a more connected, social way is really driving the evolution of these devices.
The technology, though, still provides some underlying limiting factors. But that usage model is driving some of the key things about data connectivity rates, about screen sizes and about security, which is another really important area that we see, because as more and more of your personal data goes onto this machine, you're going to really, really care about how secure it is. So there's still lots of things that get in the way. But what's clear is that we're going to need a range of different solutions to address future needs. Now I'm sure all of you in this room, given what you do for a living, are pretty well aware of how mobile phone volumes have grown over time and have forecasts for what they're likely to do going forwards.
Here, we have some data from Gartner, showing over 1.2 billion mobile devices out in 2014. And this is smartphones at all levels of smartness, because we are seeing a kind of tiering of levels of smartness. In the West, it's very easy to think about the very high-end phones, the so-called superphones with big screens and [indiscernible] graphics and all the rest of it. But there is a huge growing market for low-cost devices in emerging economies and a big business opportunity around that. 2009 was kind of an interesting takeoff point for smartphones. And I think a couple of things came along at the same time. We were able to deliver a processor that was powerful enough to enable different applications to be downloaded onto a phone.
And what really changed at that point was that not all the software the machine ran was on it when it left the factory, as is the case with a feature phone or a basic phone. With a smartphone, you can download applications. And with the combination of a high-performance processor, a touch screen and an open operating system, we've been able to put a platform in the hands of thousands of developers who can then take all that underlying hardware and innovate around it. We aren't constrained by the person that designed it in the first place and the narrowness or restrictiveness of his imagination. We've been able to give this to thousands of people who can now think about it in different ways. And people regularly find ways of using the underlying hardware that none of us ever thought of when we were designing the ARM processor, and nobody at the device manufacturer thought of either. And that's what makes it really interesting and means that there is an insatiable need right now for more and more compute performance in these devices, more CPU performance, more GPU performance, used in very, very creative ways.
So we see needs for more and more compute power going forwards, and you'll hear about some of our roadmap activities later on this morning. And we're going to continue driving that forwards for many years to come. What's key, though, is that with all of these different devices, you need a range of different solutions. You're going to see single-core phones in low-cost markets. You're going to see quad-core phones in high-performance markets. And whilst it's easy to think about just the top end and just the apps processor, there's actually a lot of other technology within your phone. Now I don't recommend you do this, but if you took your phone apart, what you would see is many different silicon devices in there. You'll see the big apps processor, and that's the kind of big, sexy [ph] thing that gets a lot of airtime; it's manufactured on the latest, greatest process technology. But you'll see a lot of other devices in there as well, interfacing with the outside world. Now whilst we like to think about all things digital, the outside world is annoyingly analog, and interfacing with it is actually quite difficult.
So what that requires is a range of different technologies, different process technologies. And it isn't all about 22 nanometers and funky transistors; some of those older technologies are going to stick around for a long time: 0.18, 0.13 micron, higher voltage domains, analog devices to allow you to interface with the outside world. The power management, the touch screen, require analog interfaces. So those technologies are very important. We have to keep driving the cost out of them, but you'll see them around in this device for a long time to come. And as we try and continually shrink the form factor, what's going to be required to put all of these technologies together is more creative ways of packaging devices, of stacking die together into what's called a 3D IC, where the connectivity between the different die runs straight through the silicon. Lots of technical horrors associated with that, that we have to solve over time. But these are different technical challenges from just making the CPU clock faster. So there are lots of different dimensions on which we are evolving the technology for mobile devices.
I forgot this slide. So on top of all of those devices is the software that runs on them. And obviously we have been working with Microsoft on their Windows on ARM initiative. This carries on from a long engagement that ARM and Microsoft have had. People have been regularly [ph] working on Windows devices for a long time; Windows Mobile has been running on ARM for many, many years now. And we're expecting to see that deploy into these very high-end mobile devices, again in a range of different form factors. Now we think that's going to deliver some really interesting technologies. Clamshells like the ones we were showing on one of the earlier slides, different tablets, different form factors running Windows are going to offer some new opportunities for people making ARM-based devices. And we think this is a really interesting time. Now that's going to roll out obviously over the next little while. We're working to get ready for that. We're working with our partners to get ready for that. And we're very excited about the prospects of the kind of devices that we're going to see.
Now you don't need to sell many of those devices before somewhere in the world, somebody needs a new server. And later on, Ian is going to be talking about the work we are doing around servers. Now all of these new devices, as they grow, as they consume vast amounts of data, are going to lead to an order-of-magnitude increase in the amount of data flowing around the Internet. And that does require lots of servers. That's a great business opportunity in its own right, but servers are annoyingly power-hungry. And unless you want something the size of the Three Gorges Dam power station in your back garden, then we need to do something about that. And we are doing something about that in ARM, with the ARM partnership and with the ARM community around all of that.
Servers are a big drain on the world's power consumption. And so there is an opportunity to take all of the goodness around the ARM model, very low-power processors, very efficient manufacturing and design, to build SoCs for servers. Instead of the conventional multipurpose chips that fill servers, what you'll find is that if you know what you're doing, it's always more power-efficient to build some dedicated hardware to solve the problem. Servers have themselves evolved from being general-purpose things that calculate the weather in Tokyo next week to farms of machines which are all running the same application over and over and over again. When you've got a context like that, an SoC may well be the best approach for achieving very low power. So that is something that's very interesting to us, and it's interesting to our partners. And I think over the next few years, you're going to see different approaches to building servers, which are fundamentally going to deliver more performance in a very, very power-efficient way. Now that needs an ecosystem around it. And again, I don't want to steal Ian's thunder, so I'll let him talk about that.
At the other end of the computing spectrum is the Internet of things. Now what the Internet of things has in common with servers and cell phones and everything else is that it's about getting the right amount of compute power in a very low-power implementation, and then deploying it in massive quantities. The Internet of things is about combining sensors with processors in very power-efficient networks to gather data from everywhere in the environment: from machines talking to each other, embedded in buildings so that you can control the lighting and the heating, embedded in the roads so you can work out where there's a traffic accident, generating vast amounts of data. Now data in itself is not particularly interesting. What is interesting is the information that you can glean from that data. So the data needs passing through a server so that we as humans can take action based on it.
And some good examples about how this may help us on our day-to-day lives. Yesterday driving around London, it turns out there's a garden party at Buckingham Palace. Our cab driver knew that, so we took a different route. If I had been driving around London and I didn't know that, I'd have gotten stuck in a traffic jam. Now were the Internet of things deployed and traffic information was being recorded live, maybe the satellite navigation in my car could get reprogrammed on the fly to divert me around it. And I can tell you, a lot of people did not know what was going on yesterday, and there was a lot of traffic congestion that this could have helped with.
There are other things; there's an example out there with the lighting application. If your dishwasher tells your washing machine, "Don't run the spin cycle right now because I'm about to run," then you'll get a smaller power spike into your house. That helps with energy deployment, and that helps with your electricity bill. So there are lots and lots of ways in which intelligence, embedded into everything around the planet, can help make the world a more efficient place. And we think there's a big opportunity for ARM technology in there.
Connecting all of that together is mobile infrastructure. We're seeing the amount of data flowing around going up enormously. And as a result of that, the architecture of mobile infrastructure is changing from being very centralized to being more decentralized. We've seen the growth of micro servers -- sorry, of microcells, femto [ph] cells, deployed closer to the edge to gather the data, get it into the Internet and give the answer back to whatever mobile device you happen to be carrying around with you. So there's a change in architecture here, and again a big opportunity for more machines and more deployment of ARM technology.
So as technologists, we look at this and go, fantastic: big, hairy problems here, intellectually interesting problems to go and solve from a technology point of view. From the business point of view, what's clear is that the demand for semiconductors is not going down anytime soon, and all of those devices have in common that they need high performance, low power and low cost, and that is what the ARM ecosystem is really good at delivering. Now we've used this partnership model to get from where we started to here, and we strongly believe -- I firmly believe that the partnership model is the way that we're going to address some of these key technology and business challenges going forward. It's all about collaborating, collaborating in an open way.
At the heart of what we do is design technology and deliver it to our customers who build chips. The cost of building a chip is pretty scary these days. If you wanted to build yourself an advanced fab at 20, 22 nanometers, something like that, the bill for the equipment alone is going to cost you about $6 billion. That's on top of the billion or so you will have spent, maybe $2 billion, on the process R&D. So it's a very expensive proposition. The cost of all of that has to get amortized across the chips that are built on it. And that means that the cost of the silicon, unless you get the scaling out of it, goes up. Fortunately, that's not something that's happened so far, and we're going to have to keep working away to make sure it doesn't happen in the future.
Utilizing all the transistors that you put down leads to increased design costs: verifying that when you put 1 billion transistors down on a chip, they're all connected up in the right way is a nontrivial problem. So the cost of design and the cost of verification go up. The cost of manufacturing has gone up too: the mask set that you need to run through the fab is very expensive, maybe a couple of million dollars for each design, so you'd better not get that wrong. So people spend a long time on verification, and the cost of design goes up. And then when you look at the software that's going to run, that is also getting very, very complex and hence very expensive to develop. So all of these costs are pretty scary when you look at them. And again, it comes back to this: the only way to approach it is to take a modular approach, to integrate highly optimized and verified building blocks that you can put together to leave yourself with a tractable problem. You just cannot afford to do everything from scratch yourself. You have to leverage work that is done around the industry to have any hope of building a device of the kind of complexity that we and our partners actually put together.
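To get a feel for the scale of these numbers, here is a rough amortization sketch. The ~$6 billion fab and ~$2 million mask-set figures are the ones quoted above; the wafer throughput, depreciation period, die yield and production volumes are hypothetical assumptions chosen only for illustration:

```python
# Rough amortization sketch for advanced-node manufacturing costs.
# The $6B fab-equipment and ~$2M mask-set figures come from the talk;
# everything else (throughput, depreciation, yield, volumes) is assumed.

fab_equipment_cost = 6e9        # quoted in the talk
mask_set_cost = 2e6             # per design, quoted in the talk

wafers_per_year = 600_000       # assumed throughput for a large fab
depreciation_years = 5          # assumed straight-line depreciation
good_die_per_wafer = 500        # assumed die size and yield

# Equipment cost carried by each wafer, then by each good die.
cost_per_wafer = fab_equipment_cost / (wafers_per_year * depreciation_years)
cost_per_die = cost_per_wafer / good_die_per_wafer
print(f"equipment burden: ${cost_per_wafer:,.0f} per wafer, "
      f"${cost_per_die:.2f} per die")

# Mask-set cost amortized over a design's total production run.
for run in (100_000, 10_000_000):
    print(f"mask burden at {run:,} units: ${mask_set_cost / run:.2f} per chip")
```

The sensitivity to volume is the point: under these assumed numbers, at 100,000 units the mask set alone adds $20 a chip, while at 10 million units it nearly vanishes. That is why low-volume designs cannot carry full-custom costs, and why amortizing shared, pre-verified building blocks across the whole industry matters.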
So to do that requires collaboration between the key partners in the ecosystem: between the designers actually building chips, the people manufacturing them and everything in between, between the IP companies, the EDA companies, the software designers. It's really important that we work together around open standards, around APIs that everybody can get access to, to ensure there's a high degree of reuse from one design to another and a high degree of knowledge sharing, to make this a problem that you've actually got time to solve. So I think the industry's only going to survive against this increased complexity and increased cost if we operate in that way. So openness and collaboration are absolutely the key to our success and to this industry's success going forwards.
When we look at the kind of chips that we anticipate our customers will build to solve a lot of these problems going forward, we're seeing an increased number of multicore designs -- gigahertz microprocessors integrated into a chip with only 1 or 2 specialized processors, things like graphics and video -- running at very low power and very low cost. That $10 to $20 price range is really important to enable the wide range of end devices that we're getting used to living with every day. So for our part, it's important that we build those building blocks to prevent wheel reinvention every time in our customer base, and that those building blocks work really efficiently together, so that jointly we can unleash the creativity of thousands of designers around the world. If we go back to the integrated model, where only a few people do each step, that is not going to lead to a vast proliferation in the types of devices that we use. It's not going to lead to an awful lot of choice for consumers at the end of the day. And we think that would be a bad thing, which is what we're trying to prevent.
When we look at advanced processes, again, there are some pretty hairy issues that you have to deal with. Annoyingly, physics does get in the way. As you're trying to build transistors as small as 20 or 14 nanometers, you've only got a few atoms of silicon to play with, and they tend to behave a bit erratically, so you have to work out how to deal with them. Manufacturing is very hard. There's been lots written about everybody having issues with manufacturing right now. That is not surprising; it is very difficult. It always has been hard, but it's really hard right now. Fortunately, the industry has a fantastic track record of solving these problems, and whatever issues exist today will get solved over time -- of that I am pretty sure. So as we look at how ARM designs get deployed on these advanced processes, and as I was saying earlier, it's no good if we just say, "Well, I'll design my CPU; implementing it is your problem. Off you go, have a nice day." We've long recognized that we have to worry a lot about that. We have to worry about how the CPU and the GPU connect together through our system components and ensure that we can deliver that in a way that can get implemented on these advanced processes.
When we look at the 20, 14 nanometer space and at all of the technology that people want to put on chip, one of the other challenges we have is that the power scaling we've enjoyed for the last 30-odd years has kind of run out of steam. So we're having to spend more time on design to really get the power out of these very, very complicated SoCs. So we work in anticipation of these problems. We've engaged very early with the foundries on advanced process nodes. On 20 nanometer, we've been running silicon for a number of years now to look at the challenges that come from advanced structures and to preempt the challenges that our customers are going to face when they come to do it in a real product. In this way, you can solve a lot of those problems upfront and de-risk and shorten time-to-market for our customers when it comes time to take, say, Cortex-A15 or our future 64-bit cores and put them on one of these advanced processes. We've been doing that with 20 nanometer for a number of years, and work on 14 nanometer is underway. And we believe we're in a strong position to provide a complete solution to the problem that we are solving. We aren't trying to solve every problem on the planet; that's what our customers do. They're going to take our building blocks, integrate them, implement them in a very power-efficient, cost-effective way and put their own creativity around the outside of that. So we are gearing ourselves up to continue to be ready for when our customers move to these advanced process nodes.
So around all of this, as I said earlier, is the ecosystem. What ARM is doing is very interesting, but it is nothing without the work that we engage in across the entire industry. Our Connected Community, with a membership of over 900 companies, is there to make sure that no matter what you're doing, help is on hand. Whether you are building a chip, working with the foundry, getting to tapeout; whether you're designing software tools or designing software that's going to run on the chip; whether you're utilizing the operating system or running applications, that Connected Community is helping you solve that problem. Now we view that as a really important thing. Nobody can do everything. We've never thought we could do everything, and that's certainly not going to change over time. Partnership is the way in which we're going to address the next round of technology challenges, and do it in a business-efficient way so that everybody here can make money along the way.
So in summary, this disaggregated model, we believe, is the way forward for us. Reaggregation, I don't think, is going to help solve problems going forwards, and I don't think it's going to lead to a high degree of diversity in the end products that consumers get access to as they walk down the high street. The way we've been working thus far has been a very successful way of structurally lowering the cost of doing design. And when you can lower the cost, the volumes go up, more and more people can do it and better designs are delivered at the end of the day. We think that's been an important thing for the last 21 years, and I think it's going to be an important thing for the next 21 years as well. And we've seen a vast array of products built around ARM, all of which leverage the key attributes of our products: low power, low cost and the ecosystem. Whether you're building a chip for a dishwasher, a smartphone, a DVD player or an operating system in cars, all of that is helping you do it in a very cost-effective way. And we believe that this is the way we're going to address the next round of challenges.
So with that, I want to thank you. Thank you for coming today. Thank you for your attention. And I'm going to hand over to Tom, who's going to talk about some of our roadmap for how we're delivering the technology in this next generation of devices. Thank you.
Thank you, Simon, and good morning, everyone. So as Simon said, my name is Tom Cronk. I'm the Deputy General Manager of the Processor Division. As such, I'm responsible for the development of our processor technology and the development of that business in general. So not surprisingly, I'm going to talk about processors.
So Simon has talked about how we develop processors in a very symbiotic ecosystem, how we work together closely with people and how effective that ecosystem is at both rapidly developing and, perhaps more importantly, rapidly deploying great technology. He also showed, at a very high level, how processors sit in modern-day Systems-on-Chip, with more and more of the functionality of the device embodied in the processors.
I'm actually going to speak, as he said, about another attribute of ARM and of our processors, which is energy-efficient design. And ecosystems aside, low-power processor design and energy-efficient design have always been and will remain a prerequisite for the success of these systems. If you can't do that, you can't build successful systems. I'm going to use an analogy of DNA. We all know that DNA contains our genes. Our genes are contained in every single cell that we have, and our genes completely define the way we grow and the way we function. And in that sense, it really is a good analogy, because I'm going to put forward the proposition that ARM has a low-power gene, an energy-efficient gene, and I hope to show you how that manifests itself in the products that we build day-to-day and the difference that it makes to the partnership and to the end products that are produced.
So why do I say we have a low-power, energy-efficient gene? Let's go back to the beginning. As Simon said, the company was formed 21 years ago. We had 12 engineers in a barn. And this had a twofold effect. First of all, the end market that the guys were designing the first processor cores for was PDAs -- personal digital assistants, which were a new concept in those days. Those were battery-powered, and therefore, by definition, the processor had to be designed to work in an energy-constrained environment. But the second effect was quite significant as well. It's more of an evolutionary effect, or a pragmatic effect. 12 guys operating in a world which even then was a field full of giants: they basically had to produce a core that was small just to be able to do it at all, and they were also constrained as a startup with budgets and things like that, so they had to find a different way of doing it. They had to find a different way of doing it, and that led them to RISC. And really those 2 effects, part by design and part by necessity, gave birth to the low-power gene.
Now, as Simon said, as phones became feature phones, became smartphones, became back to PDAs and functionality got richer and richer, the low-power gene still prevails. And I'll explain how. The list at the bottom is really the hierarchical set of steps to producing an end product from the ARM architecture. It starts at the top with the specification. Last year -- we'll talk a bit more about this later -- we designed the next generation of the ARM architecture, a specification we called the version 8 specification of the architecture, which happens to be a 64-bit implementation and is quite literally the top-level [indiscernible] specification. It describes the instructions that are included in the architecture -- the instructions that the machines execute. It describes the way they interact, the way they interact with software and the way you program the machine. And every single instruction that gets put into this architecture goes in with the conscious and subconscious thought: what's the impact on power, does this save power, will this make efficient systems, will it make it easy to build efficient systems? So that goes in right at the beginning, at the top level.
Beyond that, we then get to what we call the microarchitecture. These are the products that you know -- the Cortex-A8, the Cortex-A9. That's a representation, a design based on the architecture. And again, at that level, the gene prevails. What's the impact of putting a register [ph] there, what's the impact of putting the register [ph] here, is this driving toward a low-power end system, and on and on. The next part is worrying about integrating that design into a system: how is the processor talking to memory, did it really need to move that piece of memory? Because if it did, that's going to cost power. It's that kind of tradeoff all the way through: low power, low power, low power. Implementation is how you actually take that design and produce it in silicon -- transistor structures, libraries, memory structures. In all of these things, the low-power gene prevails. And then we come full circle: if you've done all of that the whole way through, then the guys who really bring these systems to life, the software developers, have the absolute best possible chance of producing low-power, energy-efficient systems.
So why does it matter? Why does this energy-efficiency gene really matter? Of course, in the early days with the phones, as Simon said, it was simply about talk time -- or on time, in the case of the PDA. But as time has moved on, it's been about increasingly high levels of functionality, doing more for less, because battery technology has not really evolved at the same rate as -- in fact, it hasn't evolved at the same rate as semiconductor technology. So all designs are still very energy-constrained. Even designs that are plugged into the mains are energy-constrained. Because when you design a low-power system, it means not having to put vents in the system, not having to put fans in the system, reducing the carbon footprint -- or, more simply, it's just about building cool products.
The TV outside is a fantastic example. The Samsung TV, if you get a chance to see it later, is a beautiful thing. It's a 46-inch LED TV. No frame. It just looks great, but it's better than that. It has gesture control, so you can move your hands around to navigate through the menus. You can even talk to it. And all of that is really only possible because there's very low-power design in the system. You'll see the thing is only 1.5 inches thick. Where are the electronics? They're hidden; they don't have to be cooled. It's all made possible because of low-power design.
Another example, on the Internet of Things, and one I find quite amusing as an engineer, is one of our microcontrollers being used in construction. You take a microcontroller, you take a little battery, you take an RF transmitter and you package it all up into a little package the size of a pea. You then take a shovel and you throw these into cement when it's being mixed. They detect the temperature of where they are, and they form a mesh network. And when you're building big structures like dams, apparently it's super critical to the strength of these things that subsequent layers of cement all get added at the right points. And through this mesh network of temperature sensors, you know exactly when the right point to add the next layer is. So all of these amazing ideas are spawning out of the Internet of Things.
So let's look quickly at a couple of the products. At the high end today we have the Cortex-A15, which really defines what we call low-power computing. The A15 was publicly announced by us back in the September 2011 time frame. And it delivers a really significant uplift in peak performance. If I use the Cortex-A9 as a reference point -- actually, there are 2 Cortex-A9s in the LED TV I mentioned just before -- a Cortex-A9 system has really defined the best user experience today on tablets. The best tablets out there today are Cortex-A9-based. It's a superb Web browsing experience. A15 will give a significant uplift again on that performance point with tablets. And in fact, A15 will deliver a surplus of computing relative to the bandwidth that you can actually get through the device's connectivity today. So you're going to see new experiences and new apps coming on the back of that.
At the other end, the always-on, ultra-low-power mobile category, we also recently announced the Cortex-A7. The Cortex-A7 is 1/5 of the size and delivers 5 times the energy efficiency of the processors that are in most of the phones today. So a massive, massive step forward. Now, you can either use that to maintain the performance point and realize the cost savings -- so you can imagine entry-level phones tomorrow based on this processor delivering high-end phone performance at entry-level price points -- or you can use it in another combination, which I'll show you in a moment, where you get the best of both massive battery life and peak performance. The other story this slide tells is in the pictures on the right. The pictures on the right show 2 generations of process technology over 3 to 4 years. And if you were just to rely on Moore's Law, you would expect the area of the device to shrink to 25% -- halving with each generation. Actually, through smart design in the processor cores, we're achieving better than that with A7. With A7, we're down to 1/10 of the size you were at 2 generations ago in process technology.
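The scaling arithmetic here can be restated in a few lines. Assuming area roughly halves per process generation, two generations alone give a shrink to 25%; the A7 claim of 1/10 then implies an extra factor from design. This is just a sketch of the figures quoted above:

```python
# Expected area shrink from process scaling alone:
# each generation roughly halves the area, so 2 generations -> 1/4.
generations = 2
scaling_only = 0.5 ** generations       # 0.25, i.e. shrink to 25%

# The talk's claim for Cortex-A7: 1/10 of the area of 2 generations back,
# so design improvements account for the gap beyond process scaling.
a7_relative_area = 1 / 10
design_gain = scaling_only / a7_relative_area  # extra factor from smart design

print(f"process scaling alone: {scaling_only:.0%} of original area")
print(f"A7 claim: {a7_relative_area:.0%} -> extra {design_gain:.1f}x from design")
```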
So another good example of the low-power gene, the energy-efficiency gene, coming to the fore was another announcement that we made last year: a concept we call big.LITTLE. big.LITTLE allows you to mix different sizes of processors. The Cortex-A15 and the Cortex-A7 deliver fundamentally different performance points and operate at different efficiency levels. From an architectural perspective, though, they are identical. So physically they're quite different, but architecturally they're identical. And this means that software can run on either of these cores without having to be aware of the difference between them. You're probably also familiar with lots of end-equipment manufacturers today selling products on "it's got 2 cores" or "it's got 4 cores" -- this concept of end peak [ph]. Today, those cores are all the same: you'll have 2 Cortex-A9s, or you might have 2 or 4 Cortex-A8s, whatever. This concept of big.LITTLE allows you to mix. So I can have 2 Cortex-A7s and 2 Cortex-A15s, for example. And you might wonder why. The reason is it allows you to really stretch the performance envelope. I can have very high peak performance delivered by the A15, and yet I can have very, very long battery life sustained on background tasks by having the software run on the A7. And the software will migrate completely transparently between the 2 types of core depending on the need of that particular task. So clearly, if the Cortex-A7 has enough performance, it makes sense for the software to sit there, because you get this energy-efficiency benefit.
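As a rough illustration of the migration policy described here -- and emphatically not ARM's actual big.LITTLE switching software -- a toy scheduler might look like the sketch below. The capacity numbers and task list are invented for the example.

```python
# Toy sketch of a big.LITTLE-style migration policy: run a task on the
# little core whenever it has enough performance, and migrate to the big
# core only when demand exceeds what the little core can sustain.

LITTLE_CAPACITY = 0.3   # illustrative: little core's share of big-core peak
BIG_CAPACITY = 1.0      # big core, normalized to 1.0

def pick_core(task_demand):
    """Choose a core for a task whose performance demand is expressed
    as a fraction of big-core peak (0.0 to 1.0)."""
    if task_demand <= LITTLE_CAPACITY:
        return "little"   # background work, OS, light apps: save energy
    return "big"          # bursty, demanding work: pay for peak performance

# Illustrative workload mix (task names and demands are made up):
workload = {"background sync": 0.05, "UI scrolling": 0.2, "web page load": 0.8}
for task, demand in workload.items():
    print(f"{task}: run on {pick_core(demand)} core")
```

The design point the sketch captures is that because the two cores are architecturally identical, the policy decision is purely about demand; the software itself never needs to know which core it landed on.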
And the chart shows how each of these cores saves energy relative to a dual-core A9 implementation. The blue shows you the A15 with some very demanding, high-performance tasks, and the green shows you the A7 with some lower-performance tasks. But it's interesting -- and I think this is a good example -- that the low performance point isn't actually that low. The A7 is more than capable of running the platform OS and a game like Angry Birds, for example, without you even needing to bring in an A15. So another great illustration of smart, energy-efficient design.
The next 2 slides I will just touch on briefly. The processor is part of the problem -- a big part of the problem -- but you mustn't forget the rest of the system. So as I mentioned at the beginning, we think about the way the processor sits in the system, how it physically connects to the system, how it talks to other processors. I gave you an illustration of that just now with big.LITTLE. And we think about energy efficiency in all of these things. Consequently, within the Processor Division, we also produce what we call system IP products -- memory controllers, interconnect technology -- which we produce and sell alongside the processors, because we need to develop them to keep evolving the processors in the right direction.
We also produce graphics processors, the same ethos prevails, and Pete will talk about those in a moment.
Back to the ecosystem message. Of course, it isn't just about ARM in this context. And actually, going back to the DNA analogy, the best of breed in the end comes from mixing the best DNA. And that's a feature of the symbiotic ecosystem. So the processors are important, but in the end, everything matters: the implementation, the EDA, the operating systems, the software. It all needs to work together in a symbiotic way.
One slide on the future. In March this year, we launched a product we call the Cortex-M0+. This is the latest of our microcontroller offerings -- the type of processor you would find in the cement example I used. This processor is about 40% more energy-efficient than its predecessor; that's an ARM-to-ARM comparison. And it delivers a greater than 2x -- in fact, probably 2x to 6x across the spectrum -- energy-efficiency gain over any other competitor in the field. And as I said before, it's really fueling people's imagination in terms of applications.
Another great example -- I mean, you can just let your imagination go with it -- another one I like is the umbrella, which I call the take-me umbrella. You leave this umbrella by your door. The umbrella is connected to the Internet, and it has a proximity sensor. You go out the door. It knows it's going to rain. So it calls to you, "Take me, take me." And you can just go on and on and on with the things that you might want to invent.
Moving on to ARMv8. I mentioned that we released this architecture publicly last year, and Ian, I think, is going to talk in more detail about ARM in a high-end context. This will drive us at a rapid pace into new market segments. We announced the spec last year, and to date we have 4 architecture licensees -- that's spec-level licenses -- and 4 processor implementation licensees. So 4 architecture and 4 implementation licensees already on this. And we'll talk more about our products -- Cortex-A-whatever we end up calling it -- later this year.
So a quick recap. ARM was formed 21 years ago around energy-efficient, low-power processor design -- part by design, i.e., the market, and part through necessity: a small team with limited resources had to think small, had to design efficient processors. That was then. The need in the market for low-power, efficient design still prevails. In fact, it's an absolute necessity if you want to make successful end products.
I really like the DNA analogy, as you probably picked up. And I think big.LITTLE is just another fantastic example of how that low-power gene permeates through everything we do.
And yet another important attribute is that once you have that basic capability, you can achieve massive scaling. And I think we're in a unique position with the ARM architecture in being able to deliver these tiny microcontrollers right the way through to server-type machines, all based on fundamentally the same instruction set, the same architecture. And that, right on time, concludes my talk.
So logistics. I think we now have a break. So it's 20 minutes.
So if you'd come back at 20 past, I would much appreciate it. Thank you.
So welcome back. So my name is Pete Hutton. I'm the General Manager of our Media Processing Division. And what I'm going to do in the next 20 minutes is take you through how we've laid the foundations for graphics leadership over the last 6 years. So 6 years ago, we acquired a small company in Norway. Since then, we have built on that. I'm going to take you through what we've done.
Yes, there we go. So firstly, why is graphics important? So it's very important to consumers in terms of interactions with devices. You've seen some of the devices outside. Clearly, you have your tablets and your smartphones. Graphics is increasingly one of the things that drives consumer purchase of those devices, so it's very important to our end customers.
Graphics is a priority for anything with a screen. So smartphones, tablets, digital televisions, set-top box, personal navigation devices and the future of washing machines, printers. Anything with a screen is a target for us. And by 2016, there'll be 4 billion Internet-connected screens, all of which are a potential target for our graphics. So it's a very compelling market for consumers, it's a very compelling market for customers and clearly, it's a very compelling market for ARM. Four billion devices is a fairly attractive market to go after.
So this slide is probably the most important in the entire deck -- in my part of the deck, guys, okay? Sorry, sorry.
No, it's actually the most -- anyway, anyway. So I think we've been a bit reticent about saying where we are in terms of graphics. So this slide does it. It's fairly busy, but let me walk you through it.
So we are the most widely licensed graphics processor available. We have 60 active licenses. Now, what I mean by an active license is a license that we think will generate units and will generate royalties. We have more licenses than this, but my IR colleagues regularly prune them if they think they're dead. So we have 60 active licenses, and you can see that 46 of those have been signed since 2009. If you're familiar with the ARM business model, you know that it takes a long time to get from license to royalties. So the vast majority of the licenses that we have right now are not yet in products. They are not yet generating units.
The wins we have are across very high-volume markets, and they're spread out. We're not concentrated on one single market. We have smartphones, we have digital TVs, we have mobile computers. So it's a very widely spread, very stable business.
Last quarter, we talked to you and said we were #1 in digital TVs. We're still #1 in digital TVs; you can see some examples outside. But we can also confirm that this year, Mali will be #1 in graphics in Android tablets. Now I should clarify this: this is not just the Android tablets that you will see reported in the West. This includes all China gray-market tablets.
So I have an example in my hand. It's not a phone; it's my colleague's [indiscernible]. It's actually an Android tablet -- a tablet designed in China, manufactured in China, largely sold in China. You won't normally see it. Well, actually, this one you will see now: it's available on Amazon, and it costs GBP 58. So it's a very large market in China -- I think there are about 30 million units in the China gray market. And we're #1 in that space.
We're not yet #1 in Android smartphones. This year we'll get to about 20%. Right now, I think we're at about 15%, but by the end of the year, we'll probably be at 20%. That'll put us second or third in the market, behind, in this case, proprietary GPUs. And for me, one of my major opportunities is actually those proprietary GPUs.
So the great thing about all these licenses is you can see that they start to build the volume. In 2010, we had about 3.5 million units shipping -- yes, not that impressive. In 2011, we had 12 partners shipping, and they shipped 48 million units. So very high growth. This year, we'll get to about 25 partners shipping, and we'll be into triple figures -- over 100 million units.
So one of the things I said is we're leaders in DTV. And why are we leaders in DTV? Largely because a lot of the main OEMs have chosen our technology. So Samsung and LG designed their own chips in-house, put it into their digital televisions. A lot of the Chinese OEMs are also designing around our solutions.
We have some very significant silicon partners in this space. We have MediaTek, MStar, ST and Amlogic building silicon which is going into entry-level, mid-level and high-end digital TVs and also set-top boxes. And the reason we're very successful in this market echoes one of the points that Tom talked about: we have the performance density leadership in graphics. That means we have the best performance per square millimeter, the best performance per dollar and the best performance per watt of any graphics solution out there. So that's why we lead in digital TVs.
The other reason we lead is because on the graphics side, a lot of the success and a lot of the product is actually software. So you have to have very mature software, you have to have very good integration support around the software and we do. We regularly get feedback from our customers that our software is very high quality, and our support is excellent. I'm particularly proud of the support that we do not just for our silicon partners, but for our end OEMs and their customers.
I'll show you later that we have a next-generation product coming out this year, which will address larger DTV screen resolutions -- the 4K by 2K resolution. That's coming out later this year.
In terms of mobile, I'd say we have a roadmap to leadership. We're not there yet. We are #1 -- or we will be #1 this year -- in Android tablets, which is great, but we do need to build momentum in smartphones. Now, we do have some very nice flagship wins. The Samsung GALAXY S III that was launched earlier this month is a fantastic example. It's well ahead in graphics performance of any other smartphone in the market. There are already 9 million pre-orders, including one of mine. So go buy it now; that would be excellent. So that's a nice flagship product. We also have quite a few semiconductor partners in this space: again MediaTek, again MStar, and Spreadtrum, particularly in China, doing very nice low-cost, high-performance smartphone chips based around our technology.
And we think, to echo one of Simon's points, we have the right technology for the right market. In all of these spaces -- in tablets, in superphones, in entry-level and mid-level smartphones -- we have a CPU and GPU combination which fits well. One of the nice things about our technology, as with the CPU technology, is that it's scalable. You can take the same basic graphics core -- they scale from 1 to 4 cores, for example, or in our latest generation from 1 to 8 cores -- and you can have a single-core instantiation. It runs the same software, it gives you all the same features, it gives you all the same OS support. Or you can scale all the way up to 8 cores and have the ultimate in performance.
So one of the changes we have made in our graphics roadmap: really, we've identified over the last year that one size doesn't fit all. In the graphics space, the demands are bifurcating. It's very similar in concept to the way the processors went. On the CPU side, they actually trifurcated -- thank God that didn't happen to us -- so you now get the application, the real-time and the microcontroller CPUs. Exactly the same thing has happened on the graphics side. So we now have 2 completely separate roadmaps, which are addressing 2 completely separate areas. And just as you wouldn't take an A-profile CPU and shove it into an M-profile slot, you can't do the same with the GPUs. The first of the 2 roadmaps is really just graphics -- fairly easy to understand, pure graphics. And these customers just want the ultimate in performance for the smallest cost and the smallest power. They don't want fantastic OS coverage, and they don't want all the latest and greatest GPU Compute complexity.
The other roadmap is graphics and GPU Compute -- I'll talk about that in a minute -- and that's where you're using the GPU effectively as a processor, a parallel processor. But those are fairly complex, and they do take up a little more area and a little more power. So the market has actually bifurcated, and we now have 2 completely separate roadmaps.
So as I said, the easiest one to talk about is graphics. This is where you're using the GPU as a graphics-processing unit: you're running games on it, you're doing user interfaces. We have the best performance density in the market here, and that's really led by our Mali-200 and Mali-400 ranges. The example I showed you earlier is a single-core Mali-400-based device, but Mali-400 scales up to 4 cores. Later this year, as I said, we'll be launching Tyr. Tyr is aimed at high-resolution DTV displays and more complex smartphones. And there are products on that roadmap beyond Tyr. I'm not going to talk about them today, but we have a continuation of the roadmap beyond the product we're releasing this year.
On the graphics and GPU Compute side, we have spent the last 3 years developing a completely new, ground-up architecture. What GPU Compute does is blend the parallelism and multi-threading capability of GPUs with the control capabilities of CPUs. So we've actually taken engineers from our CPU side, the Processor Division, put them into our graphics division and come up with, effectively, a blended architecture. I think it's very difficult to do this if you don't actually have both capabilities in-house. And it has taken us a long time. It was planned to take 3 years, it did take us 3 years, and it's one of the largest investments we've ever made as a company.
What this now does is it enables completely new use cases to be run on the GPU. So the kind of things you can do on this are image recognition and gesture recognition. People are using this for new and innovative use cases. Again, these products are out there. We released them at the end of last year. We have had silicon in-house for a long time, and we have optimized all the software on that. So we've optimized all the drivers, all the APIs and all the operating systems. And we will see the Mali-T604 and T658 shipping in consumer products in the second half of this year. So I'm very excited about that.
We actually have a second-generation set of products coming out later on this year. And then we have Skrymir, which is our third-generation product, so that will release next year. So we're already on to the second generation of this; third generation is next year.
So as I said, the kind of things you use GPU Compute for are new use cases, new ways to interact with the devices. People are also using the fact that you have a complex GPU onboard the chip to reduce cost in systems. So you can reduce the bill of materials by taking DSPs off the system and putting that kind of complexity on the GPU. And you can also use it for lowering power. Basically, GPU Compute is a wide parallel processing machine, so you can put the algorithms on that, run them across the GPU, take the voltage down and reduce the power consumption. So people are using it for that as well.
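The power argument here can be sketched with a little first-order arithmetic. This is an illustration only, not ARM data: dynamic power scales roughly with capacitance times voltage squared times frequency, so spreading the same work across several parallel lanes at a lower clock permits a lower supply voltage, and the total power drops. The voltage and frequency figures below are assumed round numbers.

```python
# Illustrative sketch (assumed numbers, not ARM data): why running an
# algorithm wide and slow at lower voltage reduces power.
# Dynamic power scales roughly with C * V^2 * f.

def dynamic_power(v, f, c=1.0):
    """Relative dynamic power: P ~ C * V^2 * f (normalised units)."""
    return c * v * v * f

# One fast core: 1.0 V at normalised frequency 1.0.
serial = dynamic_power(v=1.0, f=1.0)

# Same throughput from 4 parallel lanes, each at a quarter of the
# frequency, which permits a lower supply voltage (0.7 V assumed here).
parallel = 4 * dynamic_power(v=0.7, f=0.25)

print(round(serial, 3))    # 1.0
print(round(parallel, 3))  # 0.49 -> roughly half the power for the same work
```

Under these assumed numbers the parallel configuration does the same work in about half the power, which is the effect being described.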
Now as Tom said, we have the low-power gene. We also have it in my division. We have moved people from processor division across. So I'm sure this is an analogy with genes and perhaps -- anyway, anyway.
So you would expect our GPUs to be industry leading in balancing performance and power. That's great. I would take that as read. But what we've also done, because as ARM we're supplying the CPU, the interconnect and the memory controller as well as the GPU, is optimize the system power. So there's a number of techniques built into the GPUs which significantly optimize the system power. A lot of this is about accessing external memory, which not only costs an awful lot of performance but also costs an awful lot of power. So we've done an awful lot within the GPU to optimize system power, working very closely with all the ARM components.
We've also started to work on what we're calling Graphics POPs. So if you're familiar with our physical IP side, you know we have processor optimization packs, which are really aimed at getting the best performance out of processors. On the graphics side, we don't need the performance; what we are looking for is really area and power optimization. So it's optimizing for a different point. We're bringing those out fairly soon. And clearly, we also have feeling [ph] with Inside ARM [ph]. So our system division have tools which allow developers to see how their applications are running on real silicon, look at the power being consumed by the CPU, by the GPU and in the system, and then optimize around that. So it's not just the GPU itself, it's the wider system, which actually is where a lot of the available power reductions are.
So ecosystem. We spend a lot of time on ecosystem. I've got 30 engineers dedicated just to ecosystem support. So that's working with gaming developers, user interface developers, apps developers, giving them free tools, giving them content, making sure that their applications and software are optimized on our graphics.
So it's a big investment for us. And clearly, you could -- you could have 300 engineers, you could have 3,000 engineers. It's like painting the Forth Road Bridge, but we think we have the balance just about right.
Where we do have large numbers of engineers, or around about 250 engineers, is on the software side. So we have a very big investment in terms of operating systems support, in terms of APIs and in terms of enabling external partners.
So you can see some of the main operating systems up there. You can see Android, Windows, Pure Linux, Chrome. We have others, Pisan [ph], Nucleus. Too many to fit on the chart itself. But we have teams of engineers who are working on all of those areas and who are porting those operating systems to customer silicon and our own test silicon.
And then on the API side, you can see there's a myriad of APIs you have to support. So DirectX -- you're supporting Microsoft Windows. GPU Compute I talked about. It's a great thing. It enables fairly experienced programmers -- they don't have to be black belts -- to program the GPUs. But there are a whole host of languages which we then have to support. So there's OpenCL, there's Renderscript, there's DirectCompute. There are so many standards. It's really good.
So we have a lot of engineers focused on the operating systems side, a lot of engineers focused on the API side and then further out into the community. And those engineers will also work directly with our silicon partners and their end customers. So we will send people out directly to our partners' end customers to optimize the software on their platforms or to fix issues they find -- even, in some cases, if they're not our issues.
So I think we've done a pretty good job in laying the foundations for graphics leadership. We have very strong foundations in place, which we're going to build on. As I said, any product with a screen represents an opportunity for us. We are already in leadership positions in terms of DTV and Android devices. We'll ship about 100 million units this year. Smartphones, we continue to focus on. We'll be about 20% of Android smartphones this year.
We think we have the right technology for the right markets. We're getting that confirmation from our end customers. We have the best graphics performance density on the market, and we have completely uncompromised support of GPU Compute. So one of the things we've done on the GPU Compute side is made sure that it supports all the features that customers want, which has been fairly complex to get through, but we've managed to do it. And I'm particularly proud that the next-generation technology ships in some exciting products at the end of this year. So thank you.
Good morning. My name is Ian Ferguson. For the last 4 years, I've been running the server initiative for ARM. I've been maniacally focused on it for the last 2 years. Simon mentioned earlier that there was a demonstration of the Calxeda technology next door. Public demonstrations of ARM technology in the server space are starting to happen, and we felt this was an appropriate milestone to start sharing more of our vision and our strategies around what we're doing in this space with you.
Simon mentioned earlier, but I really want to emphasize it, that I think for me, the key takeaway that I want to share with you is that like the mobile phone market previously, what we see happening in the server market is the emergence of highly integrated System-on-Chip devices that are going to be very optimized for a specific set of server applications. And I'm going to talk to you for a few more minutes about why we see that happening, where we see that happening and when we see that happening.
Warren said earlier that the data center area, which is really the focus of my presentation, is really the entry point for ARM technology into the server space. I personally believe, based on discussions with the ecosystem and end customers that I spend all my days talking to, that actually the opportunity for ARM is significantly broader than that, especially as we get to 64-bit technology that Tom talked about earlier. So I will touch on some other areas later on in the presentation.
Okay. So on one of Simon's foils [ph] earlier, he talked about cloud computing, and he talked about really what we see there is companies where information technology, or IT, is the business. Okay? So what I mean by that is that the server infrastructure itself is the profit and loss generator for that business. And there's a few examples up there, some of which you might have heard of: Facebook, Google, and Tencent, Baidu, Alibaba -- companies in China that are delivering similar social media technologies out there. These companies, because that server is the profit and loss generator in the business, are very motivated to look at new technologies that will help them make more money, okay? So why is that important for us? Well, the way they've set up their businesses is to be very agile in looking at, and adept at evaluating, new technology. What I mean by that is they write their software in high-level languages.
I had a question in the break about legacy code, for example, compared to incumbent architectures. In the data center area, there's very little of that. High-level code, as Simon said -- people are specifically looking at 1 or 2 workloads. It's not a server where you have to run 50 different things. And so you write high-level code, whether it's Java, C++, whatever. So very portable. And the other thing about these guys is that the software either resides with them -- Google have their own specific libraries -- or they use open source. So if they see a TCO benefit to migrate to a new technology, how they get there is inside their own control. They're not waiting for a database from a third-party software company to go and port it. It's under their own control.
I've put up a picture there of Facebook's data center in North Carolina. There's a parking lot -- sorry, I've been in the U.S. for 13 years -- a car park in the lower corner there. And you can see some small vehicles there. And you know how big vehicles are in the U.S. So that's going to be a really big building, right? And as we go forward, what we see and what our customers are seeing is these things are energy constrained. As Simon said, as Tom said, we've spent 21 years understanding energy-constrained systems. Okay, this isn't battery based, but when you have 10 megawatts and that is what your business is run around, it is energy constrained, and it forces you to think of different ways of how you're going to solve that problem. You can't just look at a pure performance factor. Okay?
So a couple of examples about what Facebook has been doing. They build these buildings. They've got some in Lapland, they've got one in Oregon. As I said, this one's in North Carolina. The way they cool these systems is radically different from how these things have been done in the past. They're also looking at the structure of these boxes and saying, "Do I need everything on this board?" So the old traditional servers have these what they call vanity cases, the plastics that go around them, which actually block airflow. So Facebook has said, "We don't want that stuff. It's just an unnecessary cost, an unnecessary recycling thing when we roll it out." So they're looking at how they drive down costs, they're looking at how you can replace these systems more easily in the field when they fail, they're looking at how you cable these things in a cheaper way. And they're driving a standard called the Open Compute Project. It's for hardware that Facebook will use, but they're also broadening it out. And again, looking at driving server hardware to expand it, to drive volumes, to drive down costs, to Simon's point earlier. Some of our own partners -- Applied Micro, as was announced a few weeks ago, is getting involved in that. It's a processor-agnostic standard. We'll see where it goes. But again, an opportunity where people are looking at doing things differently because they're in an energy-constrained system.
So let's look a little bit more at some of these workloads. I'll walk through this fairly complicated chart on the right-hand side. This came from HP when they did their announcement with Calxeda back in November last year, a project they called Project Moonshot. And what they have basically done is some analysis on different types of workloads, comparing how those workloads run on incumbent server architectures as compared to running on microservers. And they look at those workloads on 3 metrics: cost, power and space. Okay? So the way you read this is, if it's to the left of the line, it's a win for the incumbent architectures. So for example, if you look at the compute-intensive area, if you have an application that needs a lot of performance -- Simon was mentioning earlier the weather forecasting over Tokyo, that's a very mathematically intensive device -- or platform, excuse me. Now you could build that out of microservers, but you're probably going to need lots of microservers down there. So what you see there is that the cost of that is going to be something where it's advantageous to use your traditional way of building servers.
It's a space where 32-bit is an entry point, depending on where you go. Some people will need 64-bit. And in this space it's largely about addressing. This isn't, again, a place where you need massive compute for these types of tasks. But some people have written applications where they need a lot of memory space.
Okay. So let's just talk a little bit about some of these light scale-out applications. I view them as a mixing of compute, networking and storage. And really, the magic is how you find the right balance between those 3. If you're using cheap spinning-disk media, like Facebook, you don't need a massive -- put another way, you need a CPU that utilizes the hard disk to a high level of efficiency. You don't need something that has way more horsepower than it can actually use to read the data off the disk, okay?
So how do you balance those 3 things inside the system? Compute can be pure CPUs, it can be GPU compute from Pete's division there, as MPD starts to drive that technology into some broader areas beyond mobile, or it can be hardware accelerators. People will look at certain algorithms and can accelerate those more efficiently in hardware than doing it in software. Okay?
One of the interesting areas that I have seen recently is around this whole Hadoop term. I'm not sure how many of you know about Hadoop, but it's basically a search algorithm. It originally came out -- the mass-produced technology came out of Yahoo! And yes, it's used for search queries in your own Google search or Bing or whatever; the underpinnings of how it goes and works out what you need use those algorithms. But it's also used by financial organizations as they start to do queries on databases -- maybe not the transaction itself when you buy or sell stock, but as you build up data over days, months, years: big data. The analysts are searching for patterns, right? They're looking for "Am I going to buy this stock based on this?" So that type of off-line data analytics is, we feel, one of the main places where the initial ARM deployments will occur.
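For those unfamiliar with the Hadoop model being described, its core is the MapReduce pattern: map each record to key/value pairs, shuffle them by key, then reduce each group. Here is a toy sketch in plain Python -- no Hadoop involved, and all names and the sample data are purely illustrative:

```python
# A toy sketch of the MapReduce pattern that underpins Hadoop: map each
# record to key/value pairs, group (shuffle) by key, reduce each group.
# Pure Python; function names and sample data are illustrative only.
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) for every word in every record.
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # Group all emitted values under their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each group's values into a single result per key.
    return {key: sum(values) for key, values in groups.items()}

logs = ["buy arm", "sell intc", "buy arm", "buy amd"]
counts = reduce_phase(shuffle(map_phase(logs)))
print(counts["buy"])  # 3
print(counts["arm"])  # 2
```

In a real Hadoop cluster the map and reduce phases run in parallel across many machines over much larger data, which is what makes the pattern-searching analytics described above practical.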
One of the interesting trends in the financial area is this desire towards real-time Hadoop. So people can't wait 20 minutes for a thing or wait for a batch job overnight. And this is an area where hardware accelerators or potentially v8 hardware can actually go in because it will provide more performance and reduce the latency to getting those queries back.
The other area of differentiation is around the rest of the SoC. As Simon said, we feel very passionate about what we do, but we have to enable our partners to innovate and differentiate. Some people are using the cost advantages of high-volume tablet chips and driving that into this sort of server space. Other people are integrating far more server-specific functionality down onto the device to provide more robustness, reliability and a more optimized solution. And again, you can see the Calxeda technology next door.
So a broad set of applications, different performance points, different I/O. And what that leads to is a diversity of devices. So I'm just showing 3 here, from Calxeda, TI and Applied Micro. To the far right, you have a device that Applied Micro has started to talk about, which is 64-bit, up to 32 processors on an SoC, 3 gigahertz per processor, very high end. 10-gigabit Ethernet integrated, SATA integrated -- a lot of integration. At the other end, you have the device that you see next door. And in 5 watts, you have a quad-core, 32-bit processor, very modest. So it's a completely different performance, power and integration point. And over time, I could have added Ma Bell to this. As Tom talked about, we have architectural licensees on v8. We have partners looking at our Apollo cores and Atlas cores, which you'll hear more about later this year -- our first 64-bit cores -- looking at getting into this space with ARM-based technology.
James McNiven showed these last year, for those of you who were here last year, and talked about how we go about building software ecosystems -- ecosystems in general -- and how it takes a long time to build these ecosystems. It starts and then evolves over many years.
If we go forward to today, I think we've made some pretty good progress. There are now all of the critical software pieces you need in place to be able to ship a 32-bit server platform. And we will see server shipments this year, albeit in limited quantity, but boxes will start to ship this year for the sorts of applications I described earlier: web serving, memcaching, video content delivery networks, Hadoop types of areas.
So things like a server-grade Linux technology, things like a good performance-optimized Java -- why is that important? If you remember, I was saying that these data center guys write their software in high-level languages so it's more portable. Hadoop has a very large set of Java components in it. So it's important for us to get a performance-optimized Java compiler there.
Calxeda -- the box is next door. They've been running their website for the last few weeks on that technology. You can see the demo next door. They also showed a number of applications at the developer conference a few weeks ago. They were showing a set of software called OpenStack, which -- if you think back to what I said about the Open Compute Project -- is the software stack that Facebook is looking to drive as an open set of technologies. It's going to drive critical mass around these data center solutions. And the specific applications were something called WordPress and Node.js. I'm happy to have discussions about what they are, but it's less important what they are. Really, what was important was that they just worked out of the box. They took the technology from open source, put it on an OpenStack box and it just ran on ARM. So again, in this space, relatively little is tied to legacy incumbent code.
Now as we go forward, as I mentioned earlier, 64-bit is a place where we actually see the opportunity for ARM broadening. Our focus remains on the data center area as the beachhead. But as we get to 64-bit technology, it allows us to go and address other markets that are concerned with energy efficiency or space constraints. So again, energy-constrained systems starting in data centers, but also areas of high-performance computing and some areas of enterprise. If you think about New York, where it's less about power constraint -- it's a little bit about power, but it's mainly about space. People want a private cloud and don't necessarily want to put all of their information out into public clouds like Amazon, but you've only got a small space in your New York Stock Exchange area or in Japan. How do you go and cram more density around that particular problem? We see opportunities there.
Emerging markets. Simon talked earlier, I think, about how we see smartphones being adopted not just in the U.S. but very broadly out into emerging markets. How does that technology get out into those places? How do you serve that technology at the other end of the wire in places where there's limited power or very unreliable power? And again, it's really cost constrained. So we see some massive opportunities there where people can rethink servers based on solving these problems.
So we're starting the software ecosystem now. We've been working with Applied Micro, who's one of the pioneers in this space -- so there's one of our silicon partners. This is one of their FPGA boards down in the lower corner there. So this is a board that's software compatible with what they have coming down the road in terms of real silicon chips, but it allows us to get that stuff out to software partners and get that software ready for when they have devices and, to be frank, when others of our v8 partners have their devices, too. So we intersect their chips coming to market with software, and we move forward from there.
Just to make sure I go ahead [indiscernible].
So really, to sum up, we see servers increasingly becoming regarded as an energy-constrained problem. It's starting in the data center. I think ARM's applicability in the server market over time will be tied to the fact that more markets will start to view their server challenges as energy constrained. Why is that important? That's the fundamental thing where people then have to rethink how they build their server technology, okay? They have to start thinking about what they put in the hardware, they have to think about how they go and integrate the pieces. The more you put down on the device, the fewer off-chip accesses there are, and you're saving power. So really, this will be the rise of highly integrated SoCs for server markets. And like I said earlier, we have a strong track record of playing in energy-constrained systems.
We're going to put a little bit of guidelines onto here, and I think Tim will talk to you a little bit more about the financial modeling to help you with your different things here.
The market size -- there's a typo here. In terms of deployed servers out there, there are about 50 million servers in the marketplace today. Okay? Less than 10% of that is currently in the data center area. That is the area experiencing the most explosive growth. We expect that to be somewhere in the 20% to 25% range of the overall server market by the year 2015.
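To put rough unit numbers on the figures quoted -- about 50 million deployed servers, under 10% of them in data centers today, rising to 20% to 25% by 2015 -- here is a simple back-of-envelope calculation. It assumes, for simplicity, that the total installed base stays around 50 million, which the speaker does not state:

```python
# Back-of-envelope sizing from the figures quoted in the talk.
# Simplifying assumption: total installed base stays ~50M servers.
total_servers = 50_000_000

dc_today_max = total_servers * 0.10   # "less than 10%" -> under 5M units
dc_2015_low = total_servers * 0.20    # 20% share -> 10M units
dc_2015_high = total_servers * 0.25   # 25% share -> 12.5M units

print(int(dc_today_max))   # 5000000 (upper bound today)
print(int(dc_2015_low))    # 10000000
print(int(dc_2015_high))   # 12500000
```

So the data center segment being described would at least double in unit terms by 2015 under these assumptions.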
32-bit ARM-based servers are going to ship this year. We've set the expectation that it's going to take time, right? We're at the early stages. Hardware will go out there. We will find some beachheads. But it's really going to be several years from now before you see meaningful shipments.
What we're lining up to do is see the first 64-bit servers shipping in the 2014 time frame. And as I said, we're building a software ecosystem right now to intersect those platforms that are going to be coming out in that time frame.
In terms of ASP, a theme that you've probably heard through all of these presentations, I think, is one size does not fit all. We see some people looking to use tablet technology, and those are going to be devices that are relatively cheap. We're seeing other people -- if you look at that Applied Micro device, very high sets of functionality in terms of cores, in terms of I/O. You're going to see a range of ASP points, but we're expecting a range of $50 to $200. And if you need some more granularity on that, work with myself or Ian Thornton.
With that, I'll hand over to Tim. Thank you very much.
It's still morning; by the time I'm finishing, it'll be almost afternoon, so I'm on the cusp. And we should be about 15, 20 minutes away from Q&A.
Long-term growth opportunity. I mean, pretty much everyone in this room still lives in a world of fluctuating macroeconomic news. Every day, we're all trying to make sense of it, and the world tends to look a different place every morning we wake up. We also operate in -- and you guys invest in and analyze -- an industry that's characterized by 24-hour news flow. And again, that can lead to wild extrapolations. So I think it's very good for us, once a year, away from the glare of the quarterly or half-year or full-year results, to be able to focus on ARM, our business model, what that business model drives in terms of industry economics, our low-power technology, our progress in graphics and some of our longer-term product opportunities. And what I'm going to spend a few minutes doing is try and draw that together as we think about the financial modeling of ARM out into the future. And I probably haven't done this in this forum since 2007, when I talked about some of the longer-term shapes of the ARM business model. So really, that's what we're going to do.
And in a bit more detail: just to remind ourselves of what we've been building for the last 20-odd years in terms of installed license base and how today's royalties relate to that installed license base. We're going to look at the acceleration, if you like, of the pace at which that installed base has been built in recent quarters, in the recent couple of years, and what that might mean for future royalty and for future share gains for ARM.
Most of you who look at our segmentation slides will be aware that we currently have about a 30% share of the total embedded processing market, and maybe we can think today about how that might develop over the next few years. And as we increase that share, which is a unit share, what is happening to ARM's royalty percentage per chip? Not average royalty rates across the blended space. That's interesting, but it's just a mathematical aspect. What really matters is how much value is ARM bringing to each of the chips where we're designed in.
And then, drawing that all together, what does it mean for the P&L? What does it mean for license revenue growth, royalty growth? I'll talk a little bit about our cost base so we can understand how the operating margin develops and the earnings develop. And I wouldn't normally need to go into detail about one particular line on the P&L, but with the tax changes that are going on in the U.K. jurisdiction, I think it's important to understand, as we look out at our modeling 5 or 10 years, what actually happens to the tax rate because that's changing quite significantly over the next year or 2.
So as I say, here's a reminder of where we are today. We've signed 870 licenses with just over 300 companies. Most of those companies are going to end up paying us royalties in due course; about half of them do today. You will see there in the box that we've signed 320 of those 870 in the last 3.25 years, and they're not yet really moving the dial in terms of royalty. In 2011, 99.5% of the royalties that we reported were generated from licenses signed before 2009. So there's a lot of pent-up royalty already, if you like, with the building blocks put in place.
Interestingly, of those 320 licenses signed in the last 3.25 years, 80% of them are Cortex and Mali. And also, of those 320, 25% of them are either Cortex-A or Mali and, therefore, typically characterized by a higher percentage royalty per chip than we have traditionally been used to with our math of sort of 1% plus. We're now moving, as we discussed in recent quarterly presentations, beyond that, and we'll look at that. And what this license base has driven so far -- in the last 10 years at least -- is a royalty CAGR of 24%, obviously well ahead of the industry rate. And we'll look at that in a bit more detail as well.
Again, followers of ARM who track quarter-to-quarter and try and develop a view about how this investment proposition unfolds will, I think, have become used to this slide, which is our attempt to share with you which licenses and, therefore, which designs into how many semiconductor companies are important for us to increase our market share. And what that chart is basically showing you: those round blobs represent semiconductor companies, and we believe that we need to turn all of those blobs blue to have an 80%-plus share in each of our target markets. And you can see from there, as you would expect, in something like smartphone application processors, it's mainly a blue picture. Some of them are a combination of blue and green. Green is companies that are shipping some ARM-based chips, yellow is companies that have announced designs on ARM but may not yet be shipping, and the reds are the things we need to go after to get them ARM shaped. And on the right there, the 2011 share of shipments. That is the by-segment analysis of the 30% that I referred to earlier.
Now just a little bit of color work here for those who are bored with reading words and looking at numbers. If you take away the segments and just look at the blobs, you've got 108 major designs there that we need to turn, ultimately, blue -- from different shades of red towards blue. And you can see that almost half of them are pretty much there. And again, putting that in an easy-to-understand grid, that's kind of how it looks at the moment. And our goal is to get that nice blue shade moving from left to right. And we believe, looking at and talking to our customers and our customers' customers, talking to our in-house teams and our commercial folk [ph], that in 2016, that grid looks something like that: 49 blues have turned into 62, and there are only 6 reds hanging out there that we need to go and turn. Now clearly, this is a bit of crystal ball work and it's directional. But this is actually based on our best estimate of how the market is thinking about deploying ARM technology over this period.
Licensing is obviously a precursor to share gain. It's actually a little bit better than that because, as most of you will know, many of our licenses are perpetual licenses, which means that you take a license to an ARM design and you can design it into your chips for as long as you are prepared to pay royalties. Therefore, for ARM to be generating a new royalty opportunity, you don't necessarily need to see a new license; you just need to see semiconductor companies getting more leverage by deploying existing licenses more. But you will tend to see more licensing as well.
Now for 2016, we've put in there a market share of 40% to 50%. I mean, it sounds quite hand wavy, but if you look at 2011, I said 30%; 5 years before, it was 17%. It's been growing at, on average, around about 3% per annum -- a little bit quicker actually in the last year or so. And I think with the very high-volume opportunity in microcontrollers, there is a reasonable argument to say that our increase in penetration is going to grow at a rate that is higher than we have seen historically on a volume basis. But at, let's say, 3% to 4% per annum over the next 5 years, we can see our 30% going into that 40% to 50% range -- and that is, of course, an increasing share of markets which in themselves are growing and will continue to grow in this period. And actually, if you look out 5 years beyond that, which I think you need to do when thinking about ARM, we see a world where those markets actually continue to grow. In some of the areas where we have a very, very high penetration to date, given the competitive environment, et cetera, we expect the market share to be flatter. But basically, across most of those end markets, we see ARM continuing to grow looking way out.
And I didn't want you to leave here with the impression that once everything was turned blue or green, it was game over. Because actually, the world is a very dynamic place, and you can see some of the items on the right there -- some of them touched on this morning, others not in much detail. But the things that we currently include in the segment chart that we show you, that is not the endgame by any means. And every time Ian and Jonathan update that, new things get included, not because of their own personal decision, but because there are new products out there that are capable of deploying our sort of technology. So I think there's going to be much more opportunity when we get out to 2016.
As we increase this unit market share, what is happening to the value per chip? Fundamentally, this is an outsourcing business. We are substituting fixed cost in our customers' business, largely in the form of engineering headcount, with variable cost in the form of licensing and royalty. And basically, the more sophisticated the processor is, the more work would need to be done by our customers to do it themselves, and therefore the more value we are saving them by providing an outsourced option. In that environment, our customers understand that it is more cost for us and more value to them, and so they will pay more royalties. And this is why that graph is the shape it is. As we've moved into a world of complex processors, A-class processors, and into a world of multiple processors like the big.LITTLE concept that Simon talked through, these are attracting higher royalty rates. So 5 years ago, sitting here, we'd be thinking 1% plus for ARM. Now we're thinking, for the general-purpose processor itself, 2% plus or minus. We're talking about a world of graphics where, as Peter has explained the trajectory we're on, that typically brings another 1% on top. We talked about the attach rate of physical IP and optimization packages; that brings a further royalty again. And the next-generation v8 will, for the same value reasons, look to continue the trajectory. So as we look further out, you are seeing a world of ARM getting much more value per chip.
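The royalty stacking Tim walks through can be sketched numerically. The 2% and 1% rates are the approximate figures from the talk; the 0.5% physical-IP uplift and the $20 chip price are made-up placeholders, not disclosed terms:

```python
# Illustrative sketch of the royalty stacking described above: ~2% for the
# general-purpose processor, ~1% more for graphics, plus a further uplift
# for physical IP / optimization packages (0.5% here is a placeholder).
def royalty_per_chip(chip_asp: float,
                     cpu_rate: float = 0.02,
                     gpu_rate: float = 0.01,
                     phys_ip_rate: float = 0.005) -> float:
    """Total ARM royalty on one chip, as a fraction of its selling price."""
    return chip_asp * (cpu_rate + gpu_rate + phys_ip_rate)

# On a hypothetical $20 mid-range applications processor:
print(round(royalty_per_chip(20.0), 2))  # → 0.7
```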
Before I look at the overall picture, just a reminder on what is actually going on with the tax environment in the U.K. We're forecasting about a 25% normalized effective tax rate in 2012. The U.K. corporation tax rate, as I'm sure most of you know, is already being legislated down. It dropped in April 2012, it's going down again in April 2013, and it's going down again in April 2014, to 22%. More importantly for ARM, the Patent Box tax regime is being introduced from April 2013. Fundamentally, what that means is that qualifying profits arising from patents will be taxed at a 10% rate. Now clearly, there's a lot of devil in the detail about what specifically constitutes a relevant patent and a qualifying profit. But suffice to say, this legislation is aimed at companies like ARM, which the U.K. government obviously wants to encourage to invest in this country rather than in all of the other places around the world that ARM could invest in, given the talent pools around the world.
It comes in in April 2013 and is being implemented on a transitional basis. So of the total benefit that a company like ARM will get in the end, 60% happens in year 1 and the remaining 40% comes through at 10% per annum over the next 4 years. The little graph on the bottom is a not very sophisticated way of showing that the tax rate is going down without putting specific year-on-year numbers on it, because we all know there are lots more things that go into a tax rate on an annual basis than just this one. But if I were to put one more level of science on that graph, I would probably have the rate going down a bit more in 2013 and '14, for the reasons I just gave about the 60%, and then flattening thereafter. But essentially, as I said on the Q1 earnings call, in 5 years' time, ARM's tax rate should be, based on what we know today, sub-20%.
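The transitional schedule Tim describes, 60% of the benefit in year 1 and a further 10% of the total in each of the next 4 years, can be sketched as:

```python
# Sketch of the Patent Box phase-in as described: 60% of the full benefit
# is available in year 1, then 10% more of the total each year until 100%.
def patent_box_phase_in(full_benefit: float) -> list:
    """Benefit available in each of the first 5 years of the regime."""
    cumulative_fractions = [0.6, 0.7, 0.8, 0.9, 1.0]
    return [full_benefit * f for f in cumulative_fractions]

print(patent_box_phase_in(1.0))  # → [0.6, 0.7, 0.8, 0.9, 1.0]
```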
So, the long-term growth opportunity. In 2007, I stood up here and painted a picture of license revenue growth in the mid- to high-single digits. And in fact, in the 5 years prior to the downturn, the CAGR of license revenue was 9%. I also painted a picture of royalties growing at broadly 2x the rate of licensing and typically well ahead of the industry, so mid-teens or a little bit more. What's actually happened since then is that licensing, looking back over 8 years, has grown at a 14% compound growth rate. Obviously, quite a lot of that pickup has happened in the last 2.25 years, partly due to the bounce back out of the downturn, but partly because of the increasing utilization of ARM technology across these broadening end markets we've been talking about. So licensing has been growing faster than that earlier guidance. And on royalty, the industry over that period has been growing at about 7% per annum; we've grown at 22%, so at the top end of our 10% to 15% premium range.
From an operating margin standpoint, over that period we've gone from 20% to 45%. Leading up to the downturn, ARM had a margin in the very early 30s for 3 or 4 years. We've now seen a significant step up, partly, of course, because of the higher-than-trend license growth we've seen in recent periods, but also because of this royalty growth. And that has all driven an earnings CAGR over the last 8 years of 27%. So where do we go over the next 8 years? We see a world, and we've been consistently guiding this way, where license revenue growth is still in that sort of 5% to 10% range on top of this higher base that's been built rapidly in the last 2.25 years. When we started growing licensing at 30% to 40%, the main question I was getting was: is this sustainable? Is it a one-off? Is it pent-up demand out of the downturn? Is it everything coming together in a glorious fashion, and is it actually going to ease down? The answer is no. We expect trend license revenue growth on top of the base that we have built.
We also expect to see our royalty revenue grow in a similar way to the way it has in the past, 10% to 15% higher than industry growth. You can see that in the last 8 years, it's been at the 15% end of that range. So that's what we see looking out longer term. From a margin standpoint, we obviously have this conversation a lot with investors about where this is going. In 2007, I stood up here, when our margin was 31%, 32%, and said that this business is capable of sustainably supporting margins of 40% and above. And of course, everyone said, when? And I said, in the medium term. And it turned out I was right, because in 2010 we went through 40%, and maybe we would've gone through a little bit quicker if it hadn't been for the downturn, I don't know. 2011, 45%. And of course, everyone is now asking what it is going to be in 5 years' time. I think one thing we can be fairly sure of is that it goes up and to the right. Whether it's 50%, 55% or 60% will depend partly, of course, on the rate of penetration of these markets that we've been outlining, but also on what our investment opportunities are in the out years to develop more technology to generate more licensing and more royalty. We're not managing this business for the highest possible margin in the short term; we're managing this business to be able to grow our overall profits and cash flow optimally, and that may well mean the more likely shape is a 50%-plus margin rather than a 60% margin. We'll have to see. But there is nothing that we can see in terms of the trajectory of the cost base and the need to invest that is different from the model that we've painted before.
R&D costs are going to go up in absolute terms. You've seen us recruiting at about 10% per annum in headcount over the last 2 and a bit years. We're going to continue to invest as necessary to access the opportunity, but that is all within the overall shape. You're also going to see periods at ARM where we are quite flat on headcount. For 3 years, 2007, '08 and '09, our headcount was flat. We had a big year of investment in 2006. We've had investment in 2010, 2011 and the start of 2012. So you're going to see it move around a little bit between periods, but generally speaking, there's no change to the relationship between our cost base and this revenue projection. That means that our margin continues to grow over time.
And from an earnings standpoint, that revenue growth, that margin enhancement, and of course the boost that we're going to be getting from the tax rate, I think, sets a good foundation for our earnings going forward to be consistent with what we've achieved in the previous 8 years. So that's kind of how I would update what I said in 2007.
So in summary, before we move to Q&A. The reach of ARM technology is broadening rapidly. We know that. It's been driving licensing in recent quarters. The guys have outlined some of the markets that are underpinning that. The installed base has been growing steadily for a long time, and that growth has accelerated in recent years. We're now reporting license revenue at 2x the level that we did 2 years ago. The order backlog, which is a contractual order backlog, not a discretionary drawdown item, is more than 2x what it was 2 years ago. So that underpins continuation of that trend. And that is driving long-term royalty opportunity. We're increasing the value because we're bringing more to the party. The government is being helpful. And all of those things are driving a very promising outlook for our earnings growth over the next few years. Thank you.
D. Warren A. East
Thanks, Tim. So we are going to do some Q&A in a moment. This is Slide 69 in the pack. Obviously, 69 slides is quite a lot to remember. So if you just had to take away a few slides from the pack, then we'd say these are the 4 key messages to take away.
The first is the importance of ARM's partnership business model. That's the key differentiator and that's what delivers the innovation and the economic benefit. And we have to partner with a huge range of different companies right from manufacturing technology to high-level operating systems.
The next thing to take away is that the growth of this business comes from, amongst other things, engaging with new partners, new customers and new markets. And there's a huge range there, whether it's changing the way people mix their concrete, and we have smart, intelligent concrete, right through to changing the way people design their servers going forward. Whichever end of that spectrum it is, we're bringing ARM's low-power DNA to the party. And that delivers greater benefit to our partners and their customers. The example on the slide there is the big.LITTLE slide, an example of system design, but we could've also chosen other slides from the pack where we're bringing our low-power DNA. Let's start over in the front left corner there; we've got some microphones going around and we'll try and keep up the pace.
[indiscernible] If I go back to Simon's presentation, Slide #6, where you described the multi-decade disaggregation of the semiconductor industry, obviously a very long-term trend. But if you look at the 2 big successful handset OEMs currently, Apple and Samsung, there's an element of actually going back to vertical integration, both with application processors and, in Samsung's case, of course, also memory. And I'm told they're even interested in baseband. How should we look at those 2 features in the context of the long-term trend?
Well, I think to every rule there are exceptions, of course. But I think the thing to look at is that, to get the most out of this, those who are going to lead are going to understand that complete supply chain from top to bottom. And I think that's the really important thing. It isn't just a case of pulling a supply chain together and shipping out product. It's about understanding how the software works and how the transistors are made, and how to get the most out of that all the way along. So sure, there are going to be some companies that do integrate. Some of those people do their own manufacturing, others don't. But the key for me, I think, is understanding the whole supply chain top to bottom, whether you do it all yourself or not. I don't think you have to do it all yourself to get the cost benefits of this disaggregation. But I think to get the best solution, you have to understand every step along the way.
Francois Meunier - Morgan Stanley, Research Division
So Francois from Morgan Stanley. The first question [ph] is about production capacity. Will the ARM partners have enough production capacity to produce chips based on ARM at 28-nanometer and below, or at some point will they have to beg Intel to get their chips at a much higher price? So that's the first question. The second question is about Windows 8. Yes, we are there again. Very simple question: does it work? Does it work well? And what's the incentive for OEMs to use ARM over Intel for Windows 8 tablets in particular? Is it just lower power consumption? Is it because they would generate profits where with Intel they won't generate any? So if you could elaborate on those 2 questions on Windows 8.
D. Warren A. East
Let me talk about Windows 8 while Simon's coming up with an answer on capacity. So Windows 8 -- this is important to say -- this is a Microsoft product, and Microsoft is controlling the launch of this product. From what we've seen of the product, and we have played with the product, of course it works. It works very well. It's a nice operating system, and I'm sure that they'll find many customers who want to use it. In terms of advantages of ARM versus those incumbent designs that already run the Microsoft operating systems, then yes, low power is absolutely an advantage in these thin form factor products, like, for instance, tablets and clamshells. Keeping the electronics small is very important, and we cited the example of the very thin televisions out there as well, another example where keeping the electronics small is important, and low power is the key to that. But I think we also talked about the business model delivering economic benefit. And we believe that the innovative yet cost-competitive supply environment that comes from the ARM business model will certainly be advantageous for people who are building Windows-based products. And most people at the moment, of course, don't have the benefit of that supply environment. So it's going to be a new thing for those manufacturers, and that's certainly a benefit that they'll be getting from it. Your other question was about production capacity. If you like, I can comment on...
Yes, I think [indiscernible]. I believe our partners are going to get capacity at 28 nanometers and beyond. I mean, whatever capacity exists today is going to get sold, and there's a lot of capacity being put in place around the world by many different companies to fulfill demand for the future.
Sandeep Deshpande - JP Morgan Chase & Co, Research Division
Sandeep Deshpande, JPMorgan Cazenove. First, a question to Pete on graphics. You've talked about these 2 different streams of graphics processors that you are developing. Given that you've had a key market share in the TV graphics market, would you say that having these 2 different streams is going to help you gain share in the handset smartphone market? And then a follow-on to that: how should we be modeling the royalty rate in graphics? Is it modeled in a different way from the microprocessor? And given that those products will have ARM processors as well, will there be discounts on the graphics processor when an ARM-based processor, or multiple ARMs, are on that same chip?
Okay, let me try and answer both of those. So in terms of smartphones and gaining share, you asked about the 2 different programs we have. Yes, you can see smartphones are segmenting: there are super phones, there are mid-level phones, there are entry-level smartphones. There are smartphones at all points, and having the wide spread of products enables us to target all of those markets. You wouldn't take one of our top-end GPU compute cores and put it into an entry-level smartphone; you're not going to get the benefits out of that and you're going to get some overhead. So yes, it'll help us gain share in smartphones. In terms of how you model it, it's just the same as the processor. It's exactly the same financial arrangement. It's exactly the same kind of licensing arrangement. And as Tim said, you get royalties for the processor and you get royalties for the graphics.
Sandeep Deshpande - JP Morgan Chase & Co, Research Division
Yes, to refine that: on the processor, as we understand it, subsequent processors get discounts, so would the graphics be classified as a subsequent processor on the chip?
D. Warren A. East
Short answer, no. It would not be. It's an incremental royalty, as presented on the slide in Tim's section, I think Slide 65 or 64 in the pack. And we're also sticking to the principle, and we're starting to illustrate it in Slide 64 in the pack as well, that more functionality in the graphics processor means more value added by the graphics processor. And if there's more value added, we're replacing a higher cost that would otherwise have to be borne by an internal design or an alternative external supplier, and that means an opportunity for increases in the incremental royalty, as well as the base incremental royalty, from the graphics processor. Let's have the next one. We need to keep cracking through these questions. There's quite a lot towards the back of the room.
Gunnar Plagge - Citigroup Inc, Research Division
Gunnar Plagge from Citi. Could you talk a little bit more about your structural power advantages, in particular this idea of combining different cores? To what extent are there barriers to entry, and do you see any adoption of this in your competitive landscape? And secondly, from the moment you've developed a processor architecture to putting it into silicon -- and I think you talked about working with foundries at 20-nanometer and 14-nanometer -- you're developing a lot of know-how, and I was wondering whether you're monetizing this at the moment mainly through, sometimes, hard macros, mainly through these performance optimization packages. Do you really have the feeling that you efficiently monetize this know-how, or are there better ways and new business ideas?
D. Warren A. East
Tom, do you want to do the first one and Simon and I could crack at the second one?
So the big.LITTLE concept and are there any -- I think the question was really are there any technological barriers. The answer is honestly no. That's one of the really smart things about this, because today's phones already embrace 2 things. They already embrace MP cores, multiple processors. And they already have the intelligence to do this thing called dynamic voltage and frequency scaling, so if the software load goes down, the system backs off. That basic control capability is all you need to enable big.LITTLE, so with the MP capability, the enablers are already there. And the software, over and above that, is completely unaware of the switching that's going on. So there are no barriers. To the question of adoption, I think to date 11 partners have licensed both the A7 and the A15, and product is in the pipe and coming through. We have silicon in-house now, so it's just a question of a few months and things will start to emerge, I think. Simon, are you going to...?
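The control idea Tom describes, reusing the same load signal that drives DVFS to pick between clusters, can be sketched as a toy policy. The threshold and function name below are illustrative placeholders, not ARM's actual switching mechanism:

```python
# Toy model of the big.LITTLE control idea: the load signal that drives
# dynamic voltage and frequency scaling can also select a cluster.
# The 0.6 threshold is an arbitrary illustrative value.
def select_cluster(cpu_load: float, threshold: float = 0.6) -> str:
    """Route work to the LITTLE (power-efficient) cluster at low load
    and the big (high-performance) cluster at high load."""
    return "big" if cpu_load > threshold else "LITTLE"

print(select_cluster(0.2))  # → LITTLE
print(select_cluster(0.9))  # → big
```

The point of the sketch is that no new hardware intelligence is needed: a system that already throttles frequency on low load has the signal required to make this decision, and the application software never sees the switch.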
Yes, on the monetization of physical IP, I think there are 2 things to look at. The process optimization pack really does help a lot of our customers get to market sooner, so that pulls through a royalty stream that we get on our processors earlier. And if we can help more of our customers deliver higher performance and lower power, and have more competitive products, that is just generally good for ARM's economics. In terms of the actual pricing of the process optimization pack, we've had these products in the market a little while now, and it feels right. We've got the license fee level and the way the royalty works about right to ensure that we get good uptake and continue a good relationship with the foundries. So I'm kind of happy with how that is working.
Didier [ph] from Merrill Lynch. A couple of questions. The first question is related to the above-90% market share by 2016 in smartphone apps processors. What I'm wondering is what makes you comfortable putting this target, or even this prediction, out there, given the notoriously short cycle for handsets and obviously the entry of Intel into this marketplace. The second question is related to the licensing CAGR in the high-single digits. What are the reasons behind that number? And can you talk about whether the potential for licensing with the analog and sensor companies is encapsulated in this number?
D. Warren A. East
Tim, will you do the second one? Yes, I'll do the first one then. The 90% share that we talk about in 2016 -- well, who knows exactly what that share is going to be. Today, we know that it is, to all intents and purposes, 100%. And we all understand that Intel has introduced some much more power-efficient chips, and we know these chips are getting design-ins in some smartphones, and we believe that they will continue to get design-ins in some smartphones. We've played with the Xolo phone; it's a perfectly adequate phone. It does the job. However, Intel chips are still a couple of generations behind ARM chips at the moment. As and when they get more competitive with ARM chips, they will become one of the roughly 20 suppliers who supply chips into the phone space at the moment. And on a purely arithmetic basis, they will get some share. But by 2016, we don't expect it to be any bigger than arithmetic suggests, which is in the order of 5% to 10%, and hence our greater-than-90% number. Sorry, Tim?
On licensing, the way we do it is we look at our existing licensees. We look at what we think those existing licensees will want from our technology. So if you're an existing licensee operating in one vertical, you will be coming back to ARM every 2 or 3 years to upgrade that technology, and that drives additional licensing. If you're an ARM licensee that has increasingly taken, and will probably increasingly take, ARM into other end markets, which is actually most of our semiconductor licensees, you will be coming back for different flavors of ARM technology over time, and we model that out, too. And of course, to your last point, we're trying to identify, at least in terms of scale, new companies that are likely to be able to take advantage of ARM technology as these new markets open up. And when we model that out, as we did in the 2007 time frame, we envisage over time a period of sort of high-single-digit growth. Now in reality, you are going to get periods, which may be related somewhat to our product cycle or our engineering delivery or even the macro environment, where it's going to deviate from that. Sometimes it's going to be much stronger than you've seen recently. Other times, like in 2009, it's going to be negative. But when we look through it, it's a pretty detailed exercise, which we obviously haven't walked through in granularity here, but that's kind of how we do it. So it is actually based on a fair degree of science and understanding of how the semiconductor industry will want to deploy our technology.
Ambrish Srivastava - BMO Capital Markets U.S.
Ambrish from BMO. Two very quick ones, one for you, Pete, for clarification. Does the roadmap on GPU compute point to you going after the discrete graphics business?
No. No, I don't think we're -- well, we do not intend to go after the [indiscernible].
Ambrish Srivastava - BMO Capital Markets U.S.
And then one for you, Ian. Again, a blocking-and-tackling one on the server roadmap: is virtualization -- and I believe VMware is a partner -- is the virtualization support there, or what's the timing for that?
Yes, good question. So from the A15 onwards, we have hardware virtualization support. So the cores Tom talked about, as we move forward into v8, will have hardware hooks. We're initially seeing in that data center area open virtualization, things like Xen and KVM. But certainly, we have a partnership with VMware today, and we'd look to extend that, hopefully, into the server space.
Ambrish Srivastava - BMO Capital Markets U.S.
So when KVM comes out with their silicon, it'll have virtualization?
[indiscernible] with virtualization.
Ambrish Srivastava - BMO Capital Markets U.S.
Okay. And I'll go with the trend of asking the hard one of Simon. There's the transistor cost [ph] on a nice little linear curve that Intel showed, and obviously, you guys are not standing still and you have architectural innovation. So what's the right way to think about it in terms of a framework, whether it's at the same power times performance? Just help us out there. So the first question is, do you subscribe to the view that the foundry camp is going to have a difficult time -- and you yourself said that it is getting harder as you go down the nodes. And if your answer to that is no, then what is going on with the architectural innovations that kind of helps to offset that advantage -- that so-called advantage that Intel has?
So I think the roadmap for process technology at the foundries is pretty clear. They're all going to introduce FinFETs at some point; they've all come out and said that. The question is about when. The question is about how a particular process technology is suited to the type of product that you are going to wrap around it. In building an SoC today, people would typically mix up transistors, with a lot of analog integrated onto a digital chip, and doing analog with FinFETs has some new challenges. How you get all the IP in place to support an SoC industry is going to take a bit of time, and then there's the question of getting the mixture of performance and power consumption to the right point as well. So yes, FinFETs are going to come along from the foundry players, and it's about putting together the complete package of technologies that you need to enable SoCs to be built using those technologies. So you can see it's going to happen. There are clearly new sets of manufacturing challenges associated with that. But the foundries have been looking at this problem for a long time. I mean, Intel did not invent FinFETs; they came out of the University of California at Berkeley a long time ago. The guy that pioneered the research on that used to be TSMC's CTO. So it's not as though this technology has come out of Intel and everyone's been scrambling to catch up. It's been around quite a long time now, and lots of work has been done across the industry on how you tame FinFETs and how you make them usable for the right set of products.
Nick Hyslop - RBC Capital Markets, LLC, Research Division
Nick Hyslop from RBC. You've shown a chart a couple of times where you segment the smartphone market into super phone, mass market and entry-level. Could you just give us a bit more of an explanation of what you mean by those, and in particular, the amount of ARM content that you're expecting in each of those 3 segments, either by chips or, preferably, by value? I know it's not an exact science, but it's interesting to hear what you think.
D. Warren A. East
Yes, I think -- I mean, this is something which is going to change as smartphones evolve. A little while ago, we had feature phones and we had smartphones, and then we talked about really high-end smartphones and simpler, more basic smartphones. And now we've sort of got to 3 levels. So I think what we mean by the entry-level smartphone is, in a few years' time, that's going to have things like the A7 processor in it that Tom talked about in his section of the presentation. It's going to be targeted at a low price. This is enabling the next billion people to connect to the Internet with an entry-level smartphone. And the super phone is going to have GPU compute in it, it's going to have big.LITTLE. And they'll probably be quite sophisticated big.LITTLE implementations, maybe 2 big cores and 4 little cores, those sorts of things showing up; they may even have multiple graphics engines as well. These are going to be the highest-priced smartphones, bought in the developed world by the people that have to have the latest and greatest. And these chips will be interchangeable with the chips used in tablets and computers; they will be effectively the same chip. And then you've got something in between, which is for the slightly more cost-conscious but still quite sophisticated user, and that's what we're defining as our middle range. So from a financial point of view, the entry-level phone is going to have a modem and an applications processor in it, and the applications processor is going to be a low-end apps processor in the $10 to $15 range. At the high end, you've got an expensive applications processor and lots of other chips around it; you're probably getting towards a multiple of 3x to 4x the first one. And the middle one is where most of the volume will end up being, and that's your $20 chip.
Nick Hyslop - RBC Capital Markets, LLC, Research Division
And you equated that to your $0.20 to $0.65 range across those?
D. Warren A. East
This is what I was quoted saying on Monday -- Tuesday, yes.
Nick Hyslop - RBC Capital Markets, LLC, Research Division
Great. I have one more question. You mentioned 2 things on the graphics processor: you talked about close integration with the CPU, and you talked about low power as being Mali's key characteristics. Where you've been successful and won against your competition, what would you say it was that made them select Mali?
[indiscernible] because we have the best power, the best performance at that particular power. Other customers have chosen us because, again, the roadmaps are different, right? Customers on the GPU compute side have chosen us because we've made no compromises at all. You can have full-profile GPU compute running, and you get support on every single OS. So it depends on the customer. For some, as I say, it's low power; for some, it's functionality combined with low power.
Gareth Jenkins - UBS Investment Bank, Research Division
Gareth Jenkins, UBS. A question for Simon on the manufacturing side. In your discussions with foundries, how confident are you that they can get back onto the shrink roadmap once again? Or do you think there's a risk, going forward at the lower geometries, that the gap between the foundries and Intel could extend a bit further, and maybe a potential risk there for market share? And the second one for Pete on the graphics side. Pete, you spoke about a focus on proprietary graphics; maybe you can give us some timeline or update us on that. Also, if you could just speak briefly on your DirectX support?
Okay. So I think in terms of the shrink roadmap, from 28 to 20 to 14, I think that is going to happen. I don't think there's any notion that that roadmap has stopped at all. There are some challenges, facing everybody in exactly the same way, around the equipment that enables a shrink to the new node. Everybody is adopting double patterning for their 20-nanometer processes because EUV hasn't come to industrial-scale production, still isn't there, and looks like it's still a way off. That's going to impact everybody; whether you're a foundry or an IDM, it's going to impact everyone. As I said a moment ago, the foundries and Intel have different business models. The foundries are supporting hundreds of different customers; they're running many, many different processes simultaneously in their fabs. They're able to do that in a very high-yield way, and they're able to offer a very cost-effective, high-performance solution. And so there are a whole lot of trade-offs to support that business, and that has underpinned the SoC industry for the last 20-odd years. And I don't see any of them stopping. Nobody, as far as I can see, is going to throw in the towel and say, "You know what, it's all too hard. I give up." That is just not happening. If anything, I see the focus on R&D going up, not back.
Gareth Jenkins - UBS Investment Bank, Research Division
[indiscernible] so we had one on proprietary GPUs and one on DirectX.
I think, as I've said, one of our main opportunities is proprietary GPUs, and I think what we're seeing on the GPU side is very similar to what happened on the CPU side. Simon said there were a lot of proprietary CPUs over time. Gradually, people realized that, on the buy-versus-make decision, it was just better to buy. We're hoping we're going to see the same on the proprietary GPU side. In terms of DirectX support, we have full DirectX support. We have support for DX9, we have support for DX11.1. The hardware is available, the software is up and running, and we're ready for OEM launch.
D. Warren A. East
So we're moving to the back there. Somebody hand the microphone over there.
Gareth Jenkins - UBS Investment Bank, Research Division
Yes, I've got 2, if I could. I just wanted to ask on the cost side -- you've provided us some very good revenue signals. Given the proliferation into a variety of end markets, and given that the PC OEMs aren't used to maybe the same business model as they're now discovering, what do you feel on the cost front going forward: do you feel you have to grow costs to slightly more than half your revenue growth, or will those costs actually be borne by others in the industry? And then secondly, I just wondered if you could talk about graphics. You mentioned graphics POPs. You've been very clear in terms of the benefits of POPs historically, and I just wondered whether you could maybe give us some of the benefits that you see on power consumption, et cetera, on the graphics POPs specifically. And then finally, just on Intel pricing. Pricing versus Intel -- you showed the interesting slide earlier about $20 versus $100 of CPU. Can you tell us how you see that trending over the next 5, 6 years?
D. Warren A. East
Tim can do the first one, then we'll do the POPs and the unit cost question, and I'll do the pricing one.
Tim Score
I don't know the costs, Gareth. I mean, it's kind of what I said when I was standing there, which is I don't see any fundamental change in the shape of our R&D trajectory, which is why I think the operating leverage still comes through. But I did make the point that we will obviously be investing to optimize our opportunity, and therefore, if we see the need to invest in certain things that have appropriate returns -- this is why I said maybe in 5 years' time we're more valuable at 50% margin than we are at 60%. But we don't see anything -- our model, the way we go to market, the way we operate with our customers in the ecosystem, it doesn't fundamentally change. I mean, clearly we need to grow, we need to grow our infrastructure within the business, we need to grow our commercial feet on the street. But this is what's been happening and this is what we take into account in the guidance. So I don't see that the model changes shape fundamentally.
Okay, and on the graphics POPs, really good question. So one of the questions earlier was whether we're going to go into discrete graphics -- well, no. The thing that's driving the increased complexity in the roadmap is the new performance requirements out there, and the size of the GPUs is increasing constantly. So what we've done with the graphics POPs, the aim is to take about 10% to 15% of area and power out of the GPUs, and at the size of these GPUs, that's worth anywhere from $0.50 to $1 per chip.
D. Warren A. East
Okay. And you had a question about the pricing chart that we showed there. I think it's important to stress that ARM does not sell chips; our semiconductor partners sell chips. And the price at which they sell them is up to them, not up to us. What ARM provides, though, and what we showed on Simon's slide, is a business model which delivers a whole ecosystem that encourages innovation and generates a competitive supply environment. And when we look back at what that's done to the handset market, it has enabled a huge increase in functionality and still produces chips that get sold in the $15 to $20 range. As we look forward, we can see much greater functionality going into smartphones, and we expect that same business model to deliver the competitive pricing environment, so people will still be able to buy smartphone chips for around $20. And when that smartphone chip is capable of doing everything that it takes to drive a PC, then it's very much up to the supply and the market dynamics between the people who are buying the chips and the people who are selling them to say: well, if I can buy a smartphone chip that does everything I need for a smartphone chip price, then that's what I'm going to pay for the smartphone chip that I'm going to stick into a mobile computer. We'll have to see what actually plays out, but that's what the business model delivers. We're moving towards the back of the room now, and we have about 5 minutes to go. Okay?
Julian Robert James Yates - Investec Securities (UK), Research Division
Julian Yates from Investec. Just a quick one, Tim, on the royalty outlook you put up. I'm quite interested in why you didn't remove the lower end of that 10% to 15% range. In terms of outgrowing the industry -- just looking at the licenses that you've signed in the previous few years, with those royalties obviously yet to come through in a material way, plus market share gains -- wouldn't it be normal to assume that maybe you're looking more towards the higher end of that range rather than the lower end, with those dynamics?
Tim Score
Well, as you know, I try and position myself as a cautious sort of guy. And you saw from the last period that the outturn was actually at the very top end of the range. But when you look in a crystal ball so far out, I think if you get caught up in the excitement of these sorts of occasions, it can take you into areas where you create hostages to fortune, so as long-term guidance, I think that's right. But I did say that if you look at the rate at which we are gaining market share, or are likely to gain market share in those markets, it could well be at the upper end of what's been normal. But obviously, some of that comes from the very large microcontroller opportunity, which comes at a lower royalty per chip.
D. Warren A. East
We have time for 2 or maybe 3. Let's see how quickly we can go.
Brett Simpson - Arete Research Services LLP
Brett Simpson at Arete. I had a question, really, on the Cortex-A unit shipments. On a quarterly basis, at least, they seem to be getting dominated by 3 players -- principally Samsung, Qualcomm and Apple -- and it's happening in a way we haven't seen before in the industry, and it doesn't look like many other chipmakers have much prospect of making money in this environment, given how much share those 3 are taking right now. So I had a couple of questions on that. How does ARM see this trend building? Should we be expecting a sort of shakeout in wireless semiconductors, and if so, how would that impact your licensing business? And then the second question: are these 3 players that are doing so well right now paying the same royalties to ARM that the rest of the market would be paying?
D. Warren A. East
So I'll start off on that one. What people pay in royalties to ARM -- we talk about bands, and you talked about a very small number of semiconductor companies there, so we're not going to get engaged in a discussion about how much individual companies pay. But everyone pays within the bands that we talk about when we discuss our royalty rates publicly, and that means for the Cortex-A processors, they pay more than they pay for processors below Cortex-A. And typically, high-end Cortex-A processors now are commanding royalty rates getting towards 2%. In terms of whether we see multiple players able to compete in that space -- yes, we do. In fact, I presented a slide yesterday when we were talking about the mobile ecosystem, picking out Cortex-A partners there, and the reason that I talk about roughly 20 players supplying into the space of mobile phones and mobile computers is that when we look at the people who have actually licensed Cortex-A products and who are either shipping or whom we know have serious plans to ship -- otherwise they're committing commercial suicide, because they've invested quite a lot of money in their development programs -- then the number is about 20 players. And so I'm sorry, we don't quite buy the theory that just because Apple and Samsung have a very high share of the smartphone market, that translates into what happens with people who are shipping Cortex-A products, because Cortex-A goes into a whole load of other products besides the high-end smartphones that come from Apple and Samsung. So, sorry to shatter that delusion.
Brett Simpson - Arete Research Services LLP
Maybe just a quick follow-up for Simon. In your presentation, you talked about this world of design that's changing, where at 20-nanometer there's a lot more verification and a lot more software going into these types of chips. Are there new opportunities for ARM to build fresh revenue streams on IP? Do you see any opportunities to get into the software business, or adjacent areas where you can leverage that ecosystem you talk about, perhaps in verification or in software going forward?
Simon Segars
I mean, potentially, yes. We've looked at embedded software a few times over the history of ARM. But at the same time, we like the fact that we have a broad ecosystem of partners that we're working with to do some of those things that we are not expert in. As I said, we can't do everything. We do some things really well, and there are some areas where we can leverage our historic strength and our business model and repeat them over again. There are some things that we choose not to do, or choose not even to try and go into, because there are already a bunch of other players in our ecosystem doing them quite well. So we're going to look carefully at the opportunities around this. As you say, verification is a big problem. Our historic approach has been to work with the EDA companies, who are really good at solving that sort of thing, and enable that for the ARM ecosystem. And I think, for now, that's the approach that we would take there.
D. Warren A. East
Very last question.
Lee J. Simpson - Jefferies & Company, Inc., Research Division
It's Lee Simpson from Jefferies. Maybe a couple of quick ones for Pete, if I could. Pete, I wonder if you could maybe give us some comps for Skrymir versus the next-generation architecture coming out from a rival, who's talking about a baseline of 100 gigaflops over the course of the next 12 months. Maybe alongside that as well, we're hearing increasingly about HSA, with AMD making a big noise about that back in February. I wonder what that does for, or how that sits alongside, your philosophy of optimizing next to the CPU and looking to make some play there on the software side, too?
Okay, I'll be very quick. I'm not going to give any details on Skrymir just now; we'll be doing a launch on that specific product later on. So it's a nice blob on the roadmap, and that's all we're going to talk about just now -- but you would expect it to have a lot more performance; that's what's driving us there. On HSA, we like the ideas that AMD and the HSA consortium are coming up with. We have talked to them quite extensively, and it's very interesting and completely in line with our ideas as well.
D. Warren A. East
Okay. With that, I'm afraid it's 1:00 and we have to terminate the session. So thank you all very much for coming along, and we hope you enjoyed it.