Altera Corp. - Shareholder/Analyst Call


Altera Corporation (NASDAQ:ALTR)

November 07, 2011 2:00 pm ET


Scott Wylie - Vice President of Investor Relations

Danny Biran - Senior Vice President of Marketing

John P. Daane - Chairman, Chief Executive Officer and President

Ronald J. Pasek - Chief Financial Officer, Principal Accounting Officer and Senior Vice President of Finance


Nathan Johnsen - Pacific Crest Securities, Inc., Research Division

Unknown Analyst -

Scott Wylie

Welcome. We're glad to have you here this afternoon. I'm Scott Wylie from Altera, and I'll start in this afternoon with a few opening remarks and then turn the agenda over to some of my Altera colleagues. But first of all, let me introduce my Altera colleagues who are in the room. Cliff Tong, who you may have met on the way in and is standing in the back, is my associate in the Investor Relations group. And then if you look along the wall to your left: John Daane, our CEO; Danny Biran, our VP of Strategic Marketing; and Ron Pasek, our CFO, in the back of the room.

Let me start, don't worry, I'm not going to read it all to you, this presentation contains forward-looking statements, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking information generally refers to any information relevant to a future time period. Investors are cautioned that actual results may differ materially from these forward-looking statements and that these statements must be considered in conjunction with the cautionary warnings that appear in our SEC filings available from the company, without charge.

Further, this discussion and presentation may repeat elements of prior guidance, and in so doing the company is neither reaffirming nor modifying that prior guidance.

A couple of other organizing thoughts: we will, as has been previously announced, issue our mid-quarter update in early December, about a month from now. We will not be talking about the fourth quarter today, but mark your calendars; as per usual, we will be out with that update in the early part of December.

With regard to the handouts that you have, the handouts are actually in 2 portions. You've got the leading portion, the trailing portion will be available to you as you leave the room this afternoon. And just look for Cliff or me if you've got any questions in terms of where the handouts may be.

Let's see. If you take a look at what it is we intend to walk through, which is very early on in your slide presentation, here is an overview of the agenda for this afternoon. John will open and will talk about the company's current business position and performance. Danny will follow with a deeper dive in terms of technology and our markets. Ron will then enter with a perspective from the financial side of the business, and he will also offer next year's spending guidance during his presentation. And then John will come back and wrap up at the end.

If you have any other needs during the session, just look for Cliff or me; we can help you. At the tail end of our prepared remarks, we will start a question-and-answer session with you all. If you would hold your questions until then, it will be much appreciated. Cliff and I also will be walking around with live mics, which we'll try and get to you, so that the Internet audience can listen as well.

And at the conclusion of this part of our afternoon, we will be in the room right next door to host an informal reception, where we will be delighted to engage in any further conversation that you're interested in with us at that time.

So once again, welcome. Glad to have you here, and John?

John P. Daane

Thank you, Scott. Good afternoon, everyone. Let me move the mic down a little bit. Over the last decade, if you take a look at Altera, we've certainly significantly improved our competitiveness to become relevant in the FPGA industry and to preserve and grow our position. During this period of time, we've also enhanced our business model and significantly increased our overall operating margin. Over the last 3 years, as a beneficiary of the tipping point that we've discussed in the past, we've significantly outgrown not only the semiconductor industry but also a lot of the technologies that we compete with, ASICs and ASSPs. We've outgrown our customers, as we've become a larger proportion of their bill of materials, as well as the PLD industry itself. So pretty good financial results over the last few years. I'm going to spend some time going into some detail around these to discuss how this past affects our future going forward.

A lot of awards came from various industry associations, magazines and newspapers for these results. I would say the one we're probably most proud of is Forbes recently listing us as one of the top 100 innovators. They had a group of professors, including Clayton Christensen from Harvard, who looked at corporations that had innovative cultures resulting in superior products and business models, and they recognized Altera as one of only 2 semiconductor companies on the list, along with QUALCOMM. So we're very proud of, I think, what we've done over the last decade, again leading to what we think we can do over the next decade.

Just on the business model again, I think if you look at us from a long-term guidance perspective over the last 8 or 9 years, we started with an expectation of gross margins at about 62%. We've now taken that up to 67%. There are a number of innovations that we have in terms of the way that we use process technology, the way that we do yield enhancements. But certainly the tailored architecture product approach that we use has a big impact here.

We've remixed our spending from an OpEx perspective to move more into R&D, to lower SG&A and to slightly lower our overall spend. By moving more into R&D, we think, expect and have seen better growth trends, and we would expect that to continue. And then overall, with the increasing gross margins combined with lower OpEx, obviously, a much higher overall operating margin expectation for the corporation.

In terms of growth by market, there's a lot of detail on this foil, and I'm not going to spend time on all of this. And I'm just going to concentrate on the bottom. But we left you the material, so you could go back and refer to this.

First of all, in the first column, what I'd point to is the end markets. And the markets that we are in are mainly the infrastructure-based markets. So what we've done is we have separated out PCs, handsets and games and included all other end-systems business. And over the last 3 years, including this year, according to Gartner Dataquest, these end equipment markets are growing at about a 1.4% compound annual rate.

The next 3 columns show the total semiconductor industry, the ASSP industry and the ASIC industry. And what you'll notice is that ASSPs and ASICs are actually under-growing the semiconductor industry in these categories. And in particular, the ASIC industry has actually grown at a lower rate than the end equipment industry, and that's because ultimately ASICs are getting replaced by other technology, predominantly programmable logic.

If you look at the second column from the right-hand side, you'll see that programmable logic is growing almost 11% during this period of time. And in the far right, Altera is growing 15%. So we've grown at about 10x the rate of our end customers, certainly a significantly faster rate than the end industry itself, ASICs or ASSPs, and really we're the growth driver behind the PLD growth rate over the last 3 years, including this year.

In terms of products, the reason I wanted to come back to the gross margin increase is that a lot of it has to do with our overall product strategy. If you go back into the 1990s, this was mainly a prototype, low-volume industry. So what we would do as companies is build full-featured products, that is, a product that would have everything that any customer could desire, because customers would pay a lot of money for the devices.

What we recognized in the late '90s is that there was a volume component; there were some industries that would like to take FPGAs into some sort of volume application. But at the high end, we just weren't providing the price points necessary for them to do that. So what both companies within the programmable logic industry did is we took our high-end architecture and re-branded it, same die, same package, called it a different name and sold it at a lower price.

And we achieved success. So you can see by 2002 almost 20% of the revenues of the PLD industry were based in the sort of lower cost products. But because they had the cost basis of our high-end products, they also became a margin drag for the industry. And you see that overall, the margin for PLDs during the period of 2002 was 60%.

What Altera did in 2002 is we decided that there was a volume component. There were a number of industries that we wanted to participate in that would rather choose FPGAs over ASICs and other product lines, but we needed not only to have the right price points, we needed to have the right costs. Having the right costs required that we have different products with different features. So we created a high-end part, again, for prototyping and high-end customers. We also developed the industry's first true low-cost part on separate silicon, with fewer features and ultimately a much reduced cost basis.

We've now evolved that into 3 separate products for different industries, with different price points, different costs and different features. By the way, I should mention that during this period of time, our competition followed suit within one generation, having a separate high end and low cost.

The important thing here is it's allowed our industry, first of all, to significantly grow from a revenue perspective because we're able to penetrate a lot of new applications and new markets with this strategy.

But also because we had tailored products with the right price points and cost points, we were able to significantly expand our gross margin as an industry from 60% to 67% over 8 years. So the strategy of having a tailored product lineup certainly resulted in very strong revenue growth and very strong profitability growth for the industry. And this is something that we're continuing to evolve going forward.

Now we have this advantage in 28-nanometer. It's interesting, our competition is actually going the reverse direction by having one product. We think that by continuing the tailored approach, we have a better cost structure to preserve the overall operating model that we have been able to evolve to. We also think this allows us to continue to address the varied markets that do need different products and different features in order to be successful.

With that, we have 3 products in 28-nanometer. Stratix is our high-end device, developed for applications in telecom and military. Arria is our mid-range family, typically used in wireless and broadcast. And then Cyclone, our lower-end, lower-cost family, is used for industrial and automotive.

What we've done here is used different process technologies, different architectures, different memory blocks and different transceivers for each family to optimize the set for those specific end markets, to provide us the best return and provide the end customer with really the right product lineup that they're looking for.

This is something similar to what we've been doing for many generations, and we'll continue to do going forward. We've also introduced SoC products for both the Arria family and the Cyclone family. Those are dual-core ARM embedded microprocessor products, and Danny will talk a little bit more about them in just a moment for some of these end markets. So 5 specific, tailored 28-nanometer families, very successful in the industry. Obviously, with the market share that we've had in 40-nanometer, we're really the incumbent from a design perspective and an engagement perspective with the customer base. We were able to ship $2 million of revenue, leading the industry in 28-nanometer.

Now a lot is made about market share in our particular industry, so we wanted to spend a second, and just talk about it because it is fairly complex. And it is very different from what many people are used to in a consumer world. We sell into infrastructure equipment. It may take a customer 4 or 5 years from the time that they engage with us to do their design, debug their system and software, start ramping and selling a box into their end customer base.

And because we're selling to military, telecom, industrial applications, that equipment lives longer than 10 years. So it's a very, very slow ramp in terms of a process node, and then it's a very, very long tail.

What we've done here is we've separated the business out into 3 specific areas. The red line represents 150-nanometer and older technology; these are products introduced in the year 2000 and before. The dark blue line is the products introduced from 2002 through 2008, 130- through 65-nanometer. And then the bottom line, in the lighter blue, is 40-, 45-nanometer and 28-nanometer together.

What you see on the pie charts on the right-hand side is our relative market share for those nodes versus the competition. Our main competition being Xilinx in this case.

And what the line does is track what percentage of the overall business those particular nodes make up. And you'll see that even though the red line represents products introduced in the year 2000 and before, by 2006 it was still half of the industry's revenue. So our products live for a very, very long period of time.

Now, many discussed last year that Altera gained a significant amount of market share because of 40-nanometer. In reality, 40-nanometer was not a driver at all for last year; it's a driver in the future. What happened last year is simply that these old products, for which we had only about 20% market share, are slowly decreasing. And the dark blue line, where we have about 46% market share, has been increasing.

And so with that, we've been gaining market share roughly every year for the last 9 years. Now the reason I say 40-nanometer really didn't have an effect last year is that 40-nanometer was under 10% of the industry's overall revenues, so it wasn't the driving factor. Rather, Altera has significantly higher share in the darker blue line and much lower share in the red. As those 2 trends have played out, we've simply gained more market share every year.

Now going forward, 40-nanometer certainly has a role because it is in its ramp. Typically, years 3, 4 and 5 after a node's introduction are where you see the revenue ramp for those products, and you're seeing that in 40-nanometer right now. That's expected to be close to 20% of the industry's revenues. And this is a node for which we have about 2/3 of the overall revenue flow.

So we would expect, as the old products in the red line continue to decrease, and now that the dark blue line has peaked and the light blue line is continuing to increase, and given, by the way, that we've still yet to hit our peak market share in the dark blue, that we will simply gain market share for years to come.

28-nanometer products introduced now will start to ramp really in a few years from now. If 28-nanometer follows the pattern that we've seen really over many generations, there's a lot of similarity between them, you'll see 28-nanometer start to get to about 10% of the industry's revenues in 2013. And start to really ramp and be considerable, from a revenue perspective, 2014, 2015.

And again, not only because we were the incumbents with strong market share in 40-nanometer, but also because we have the tailored product lineup, we expect overall that we're going to continue to enjoy a similar market share of about 60%-plus in 28-nanometer as we have in 45-nanometer, and continue our market share gains for many years. And at this point, again, just looking at the data, it's inevitable that we'll be #1 in this industry.

Put a different way graphically: if you go back, we had gained market share quite significantly during the late '90s, and then mis-executed with software and, to be honest, some hardware issues, and that resulted in market share loss. We then came in with 130-nanometer and have increased our market share from a design win perspective, and you've seen 9 straight years of considerable market share gain. And again, with the position that we have in 40 and 28, we expect this trend to continue for many years.

Now that's market share between Xilinx and us. Ultimately we don't grow as an industry just swapping revenue back and forth. The main thing that we're competing with is ASICs and ASSPs as corporations. Again if you go back to the data, obviously for the last several years significantly outperforming.

What we have here is our total opportunity pipe going from 2008 in 65-nanometer to '09, '10 where you really got into 40-nanometer to now in '11 where you're looking at 28-nanometer.

And what you've seen is our opportunity pipe, in other words, designs that we're seeing. Design value has significantly expanded during this period of time, which is exactly what we would expect: as our products become higher density and more fully featured, as ASICs become too expensive for people to implement, and as ASSP companies retreat and look for volume-oriented markets because of the lack of return on investment, ultimately we should open up more market. And we've seen this trend happening.

So we're very excited about the fact that each generation of technology has opened more market. The design wins, of course, track that with each generation of process technology. We're also booking more designs, higher value designs, again, which we expect to fuel our growth in the future.

Now the tipping point we've talked about for a long period of time, and this is the fact that ASICs are really becoming too expensive for people to implement. Many ASIC companies try to preserve lower nonrecurring engineering charges by using older generations of technology. As we move forward very aggressively with newer technologies, we've been able to make our components much lower cost. Ultimately, therefore, being able to compete and displace a larger proportion of the overall industry. This trend is still underway and will be underway for a long period of time, as Danny will talk about just from a -- how much ASIC business is out there.

But we're adding a new trend to this that we think is going to continue to help us grow at a very strong clip going forward. And that trend is the idea of not only replacing ASICs but integrating ASIC and ASSP functionality, along with microprocessors, with our FPGAs to really create a new class of products under what we call silicon convergence.

And to talk about that, I'll introduce Danny Biran.

Danny Biran

Silicon convergence complements the tipping point to allow us to compete better with ASICs and ASSPs. And just as a reminder, we don't need to replace every ASIC and every ASSP in order to grow much faster than we are growing now. If you look at these numbers, every percent of ASIC and ASSP that we are taking leads to a 10% growth in the PLD industry. So there's tremendous leverage here, and this is really what we are focusing on.
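The leverage arithmetic here can be sketched in a few lines. The market sizes below are hypothetical round numbers chosen only to reproduce the roughly 10-to-1 ratio the claim implies, not figures from the call:

```python
# Illustrative leverage arithmetic (hypothetical market sizes, not
# figures from the call): if the combined ASIC/ASSP market is about
# ten times the PLD market, each 1% captured from ASIC/ASSP adds
# about 10% to PLD industry revenue.
def pld_growth_from_share_shift(asic_assp_market, pld_market, share_taken):
    """Return PLD revenue growth (fraction) from capturing
    `share_taken` (fraction) of the ASIC/ASSP market."""
    return (asic_assp_market * share_taken) / pld_market

growth = pld_growth_from_share_shift(
    asic_assp_market=45.0,  # $B, assumed for illustration
    pld_market=4.5,         # $B, assumed for illustration
    share_taken=0.01,       # capture 1% of ASIC/ASSP
)
print(f"{growth:.0%}")  # prints "10%"
```

The point of the sketch is simply that the denominator is small relative to the pool being displaced, which is why modest share shifts translate into outsized PLD growth.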

Historically, there were very clear definitions of different semiconductor products, with very clear boundaries between them. And the definitions applied to things like the implementation, the business model and the customer usage of the product. So if we look at what we have here on the left, microprocessors and DSP processors, they were addressing a wide range of applications.

The same product is sold to many different customers, and the way customers use microprocessors and DSP processors is they buy the devices and they program the software using languages like C. And in the case of DSP, sometimes they are doing it using modems.

On the other side of the spectrum, we have ASICs and ASSPs that are addressing a very narrow range of applications, and that's why they are called application-specific.

In the case of an ASSP, the same device will be sold to a number of customers in that market segment. In the case of an ASIC, it will be sold to a specific customer, just one customer. Historically they had no programmability or very limited programmability, and the way they started to address it was by incorporating some DSP cores or microprocessor cores that started to provide some programmability.

FPGAs were unique, or are unique, because they are the only product that also provides hardware programmability. And because of that, customers can buy the product, and obviously, we sell the same FPGA to a wide range of segments, to a lot of customers. The way they use the products is they buy them and customize the hardware, and they customize the hardware using RTL, whether it's Verilog or VHDL.

Each one of these product categories has some advantages and disadvantages. Obviously, nothing is perfect. And here there's an interesting foil that Intel has created, and you can see here one type of trade-off. You see flexibility on the horizontal axis against efficiency, as measured in performance per unit of power, or operations per watt. And as you can see in what Intel is showing here, on the left-hand side, the general-purpose processors are obviously very flexible. You can develop any software to do anything. But even Intel acknowledges it's very inefficient. The microprocessors either give you high performance at a very high power consumption, or low performance if you want to lower the power consumption.

And on the other side of the spectrum, we have the dedicated hardware or hardware accelerator. And there, you give up any flexibility, but you get excellent efficiency because now you have hardware that's dedicated to the function that you want to perform.

In real-life applications though, you can't just have one thing or the other. In most cases, you need to have more than one thing. And so what you start to see now is more and more products that start to combine products from different worlds, and you see that the boundaries are starting to blur.

We have here a product that Intel announced at the beginning of this year, and it combines an Intel Atom processor and an Altera FPGA in the same package.

Now what is it? Is it a microprocessor? Is it an FPGA? Intel obviously has the ability to take that device and program the FPGA part of it to perform a certain function. And by doing this, you take this device and turn it into an ASSP.

The other example that we have here on the right side, this is a product, an FPGA product that Altera is selling today very, very successfully to the surveillance camera market. And what we do there, we sell the FPGA with soft IP that has all the functionality, imaging functionality and DSP functionality that you need for that application.

As far as what we are selling, it's an FPGA. What the customer is getting is an ASSP. For the right customer and the right opportunity, it's very easy for us to create something that's custom for that customer, and now you have it as an ASIC.

And another example that's not from the FPGA world, at the bottom: this is a wireless infrastructure product from Texas Instruments. And you see there a block diagram with DSP processors and ASSP blocks. Again, the idea here is that boundaries are starting to blur because applications need more than just one thing.

What is really happening is we start to see the thing that we call silicon convergence. The ASIC and ASSP guys, because of the tipping point issue and because they have to find a way to get a better return on their investment, are trying to move toward programmability. And at the same time, the processor guys, because they have to get better efficiency, because most applications need more performance but can't tolerate higher power consumption, are trying to move toward efficiency.

And so what you end up with now is one product that has the best of both worlds. And you can see that we are now talking about silicon convergence to create a mixed system fabric. It's a single device that has programmable fabrics so you can customize the hardware. You have DSP processors and microprocessors, so you can also develop in software. And you have application specific IP, so you get the efficiency that you need for the applications that you are targeting. And obviously, this is something that caters very well both for the hardware engineers and for the software engineers.

And we anticipated this in the FPGA industry. If you look at the progression of FPGAs over time: in the '90s, we had logic and some memory. As we went into the 2000s, we added blocks like transceivers and PLLs that are implemented as ASIC blocks; we started to add these kinds of hard capabilities. And what we have now with 28-nanometer is really something that combines everything. We have programmable logic. We now have the ability to integrate hard processor cores. We have DSP capabilities, enhanced to include floating-point DSP. We have analog components like PLLs and transceivers. And you now have this single chip that can really do a lot of things, but because it has the programmable fabric, you can use it in a large number of applications.

And if you look specifically at what we did between 40-nanometer, which is very successful for us, and 28-nanometer: we increased the amount of logic, because we do that from one generation to the next every time. But more than that, we significantly increased the amount of other resources. If you look at what we did with memory, we doubled the amount of memory. We tripled the amount of DSP resources. And it's not just that we added resources; we can now support floating point, which is a lot more complex.

We are now hardening processor cores. And if we just look at hard IP, or application-specific IP, that is obviously application-specific, but in some cases we have increased the amount of IP that we provide with our FPGAs by more than 10x. Now that leads to something that's very powerful, but we also need to think about the usability of that device.

And to address that usability, because you want to take something that is powerful but also allow customers to use it very effectively, we aren't stopping at investing in the devices themselves; we have to provide things around the devices so customers can really take advantage of the power of that converged silicon. We introduced earlier this year a very unique tool called Qsys. What Qsys allows customers to do is take different blocks of IP, whether they get them from Altera, develop them themselves or get them from third parties, and very easily combine all those blocks to create a system on a chip.

And that is something that contributes significantly to the productivity of hardware engineers. But as we create this converged silicon, we also need to think about the software engineers. For this, as we introduced the floating-point capabilities in the devices, we also introduced, in our Quartus II design software, ways to take advantage of those floating-point capabilities. We have a tool called DSP Builder, which allows DSP engineers to develop algorithms using the tools and methods they use all the time, and automatically convert them to an FPGA implementation.

And last but not least, an effort that we started earlier this year that already shows very promising results: we have a tool for OpenCL. OpenCL is really an emerging industry standard that allows customers to write code in C, regardless of whether at the end it's going to run on a graphics processor, a CPU or an FPGA. And we provide the tools underneath this API, or this language, that take this description and convert it to a very efficient FPGA implementation.
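As a concrete sketch of what that programming model looks like, here is a generic vector-add kernel in OpenCL C style. This is not Altera's tooling; the `__kernel`/`__global` stand-in macros and the sequential `run_vec_add` host loop are illustrative stand-ins so the snippet runs as plain C, whereas a real OpenCL toolchain supplies those qualifiers and schedules the work-items itself:

```c
#include <stddef.h>

/* Stand-in definitions so this OpenCL-C-style kernel compiles and runs
   as plain C for illustration. A real OpenCL compiler provides these. */
#define __kernel
#define __global
static size_t current_work_item;
static size_t get_global_id(int dim) { (void)dim; return current_work_item; }

/* A vector-add kernel written in OpenCL C style. The point of the
   standard is that this same kernel source can target a GPU, a CPU,
   or (with a vendor toolchain) an FPGA implementation. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0); /* which work-item is this? */
    out[i] = a[i] + b[i];
}

/* Host-side stand-in: a tiny CPU "runtime" that invokes the kernel
   once per work-item, sequentially. */
void run_vec_add(const float *a, const float *b, float *out, size_t n)
{
    for (current_work_item = 0; current_work_item < n; ++current_work_item)
        vec_add(a, b, out);
}
```

The design point the speaker is making is visible here: nothing in `vec_add` names a target device, so the same description can be lowered to very different hardware, including a pipelined FPGA datapath.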

And you add to this our embedded initiative where we allow customers to use our FPGAs with microprocessor cores from ARM or MIPS or Intel in the case that I showed earlier, and IP that we provide for things like memory controllers, video and image processing. Sometimes it's application-specific like in the case of OTN. And you see that what we now provide customers with is a very, very unique type of device, but we also surround it with tools and flows that allow them to really take advantage of it.

And so what we end up with now is something that has the best of both worlds. It has efficiency when you need efficiency. It has programmability where you need programmability, whether it's hardware or software programmability. The thing that's important to emphasize though, is that the companies that are best positioned to create this converged silicon are the FPGA companies. And we are best positioned to take advantage of it for both technical reasons and business reasons.

There are a lot of technical reasons; I'll just mention 2 of them. One of them is the metallization. Because FPGAs are field-programmable, our metal structure is very complex. It is very easy for us to take this complex metal structure and embed other functions on the FPGA, whether it's ASIC IP or microprocessor cores, because their structure is a lot simpler than ours. It would be much harder for any other implementation to take the metal structure of an FPGA and embed it on their device.

Another example would be production testing. When we test the FPGA in production, we don't know what functionality will be implemented in the field. So we need to have very unique ways to test the device against all combinations that it may implement at the end. So our production test scheme is very unique. It's very easy for us to take ASIC blocks whose functionality is well defined and add it to our production testing. It is very difficult for anybody else to embed our testing into their production test flow.

So these are some of the very different technical reasons. There's also an important business reason why we are best positioned for this. If you need a microprocessor core, you go out and you license the microprocessor core. You can license it from ARM, you can license it from MIPS, you can license it from a number of other companies. You can license DSP cores. You can definitely license logic libraries and system-level IP.

The one technology that you can't license is FPGA technology. So if you look at this converged silicon, this mixed system fabric, and you know that part of it has to be hardware-programmable for many, many applications, the only companies that can really implement it are the FPGA companies. So the trend is happening, and we are best positioned to take advantage of it.

I'll show you some examples of how this manifests itself in a number of markets. But first, just a reminder of -- these are the markets the way we categorize them, and without going into too many details what are some of the growth opportunities that we see?

Telecom and wireless continues to benefit from the migration from 2G to 3G to LTE. As we said before, with every generation there's more FPGA content.

The other thing that's happening now that's very positive for us: because there's increasing demand for bandwidth all the time, there's also a lot of investment going into the backhaul of the wireless system, and there's a lot of FPGA usage in backhaul.

And now, as carriers start to look for new types of architecture to improve spectrum utilization even further, we start to see new architectures that resemble what's happening in the cloud computing space, and that creates some new opportunities for us as well.

In the computer and storage area, we see opportunities primarily in 2 areas that I will go into in more detail: acceleration of algorithms on the server side, and storage as it moves to solid state; you see a lot of FPGA usage there. We also see opportunities in military, and there we are benefiting from the fact that focus and budgets are now moving from the battlefield to infrastructure, things like cybersecurity and intelligence; there's a lot of FPGA usage there.

Industrial, I'll show you an example later, but it's really benefiting from the automation of factories. And we continue to see growth, and we already see it now, in things like driver assistance, in-vehicle entertainment systems and electric vehicles as well.

And in the things we bucket in the other category, we see some opportunities in broadcast as it moves to high definition. In the other areas, growth is more moderate.

There's some more details here. I won't go through this, you have it in your outline. But I would start to show you now some examples of specific segments and specific applications and what happens there with the FPGAs.

OTN, or optical transport network, is a great example of an application where over time, there's more and more requirements as those systems move from 10 gigabit per second to 40 gig to 100 gig, a lot more complexity.

And if you look at what we can do now with our FPGAs there, it started with us just being on the interface side, taking advantage of our transceivers. Because of the capabilities we now have in DSP and a higher-performance fabric, we can also take care of things like the framing and the mapping. And with the floating-point DSP capabilities that we introduced for the first time at 28-nanometer, we can even do things like forward error correction very effectively. This is an area where you always need FPGAs, because standards keep evolving and different customers want to customize the way they do the forward error correction. And if you can provide this converged silicon, with not just the programmability of the hardware but also the software, the memory and the DSP, you can really capture, and we are capturing, a bigger and bigger part of the bill of materials.
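
Production OTN gear uses far stronger codes than this (Reed-Solomon and LDPC variants, often customized per vendor, which is exactly why the FEC block stays programmable), but a toy Hamming(7,4) sketch in Python illustrates what forward error correction does: add parity bits on the transmit side so a corrupted bit can be located and flipped on the receive side. This is purely illustrative, not any vendor's implementation.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (positions 1..7).
    Parity bits sit at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = position of the bad bit
    if pos:
        c[pos - 1] ^= 1              # flip the corrupted bit
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[5] ^= 1                          # simulate a transmission error
assert hamming74_decode(sent) == word # receiver recovers the data
```

The appeal of doing this in FPGA fabric rather than an ASIC is that when the standard or a customer's preferred code changes, only the bitstream changes.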

And here is one specific example of a customer that recently demonstrated a 100-gig OTN system to one of the major carriers in the U.S. It's Infinera, and you can see the quotes here. They selected Stratix V because of the technology leadership and because of the transceiver capabilities. And with our execution, remember, they demonstrated this one quarter after we started to ship the device. This is why, as John showed earlier, we could generate more than $2 million of 28-nanometer revenue in the first quarter of shipment of the product. It's these types of products that Stratix V is going into, and it's doing it very successfully.

I'll come back to the converged silicon and give some more examples there. But it's important to emphasize that we just continue to see more and more new opportunities because of the good old value proposition of FPGAs.

And what I have here is part of an article in The New York Times where IBM is talking about their future server strategy. We highlighted a number of things here on the foil.

So you can see IBM identifies the need to have specialized accelerators, and we see it in a number of different industries, like the financial industry, for example. They are seriously looking at using FPGAs for those types of acceleration because FPGAs are faster than CPUs. But the thing they really need to focus on, and we need to focus on, is making FPGA programming like any other programming. For FPGAs to succeed here, we have to allow customers to program their software without worrying about the complexity of the hardware.

And this is a place where the OpenCL tool that I talked about earlier really creates new opportunities for us. This explains why we need to invest in tools like OpenCL. But when we do that, the combination of those tools and the converged silicon that we are creating at 28 is what opens up these types of opportunities.

Another emerging application is storage. And it's important to emphasize that computer and storage are still small markets for FPGAs today; we are just starting to see the growth potential for the future. What happens in storage is this phenomenon of big data. Databases that were very structured for many, many years are not structured anymore.

And one of the things that leads to is that random access to storage or to memory is becoming very, very important. That's why you see moves from disks to solid-state drives, and specifically to flash memory, because flash memory is the best technology if you need random I/O access to memory and you care about performance, latency and power consumption.

It also so happens that flash is great for FPGAs, because there are different standards for flash and they keep evolving. Because of that, you can't effectively implement a flash interface in an ASIC, and we see more and more adoption of FPGAs in those types of systems.

And this is one specific example of a customer, Violin Memory, and you can see their quotes here. They looked at doing it with software running on a microprocessor, and they saw that hardware, that is FPGAs, provides far better performance, and that's how they implement their system.

Back to the converged silicon and why it's so important for us. Another application that benefits from it is military radar. If you look at the progression of what we have done in military radar, at the beginning FPGAs were used mainly for some small connectivity on the side.

As we started to add some DSP capabilities and higher-speed transceivers, we could start to implement some of the digital receiver functionality. And at 28-nanometer, where we have floating-point DSP so we can do all the signal processing, and where we start to add general-purpose processing capabilities, we can really capture a very, very significant part of the bill of materials. Military radar is one of the applications where we see a lot of traction for Stratix V, and this is the primary reason behind it.

The last example I want to show is from the industrial space, and I wanted to give this as the last example for a couple of reasons. First of all, pure industrial factory automation is not a big market for FPGAs today, but we think there's a great growth opportunity there, and I'll show you in a minute why. The other reason I wanted to end with this example is that it really highlights the significance of this phenomenon of silicon convergence.

So if you look at what happens in these applications, the factory floor is full of motors. What not everybody knows is that the motors are the primary power consumers in the factory. You can see some of the numbers here: about 60% of factory power consumption is associated with the motors. The way the system works, you have those motors, you have feedback from a device called an encoder, and it goes into a drive. The drive looks at this feedback and sends commands back to the motors.

And the faster you can do it, and the more precisely you can control the motors, the better your power consumption will be. So there's a very direct correlation here. Higher performance leads to better precision, leads to less power consumption and obviously higher profits for the factories. With this in mind, let's see what happens to FPGAs in this application.
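
That feedback loop can be sketched in a few lines of Python (hypothetical gain and units, purely to show the mechanism): the drive repeatedly compares the encoder reading against the target speed and issues a correction, and the faster it iterates, the less error it lets accumulate between corrections.

```python
def drive_step(target_rpm, encoder_rpm, kp=0.1):
    # The drive compares the encoder feedback against the target speed
    # and sends a proportional correction back to the motor.
    return kp * (target_rpm - encoder_rpm)

rpm = 0.0
for _ in range(200):                  # one iteration per control cycle
    rpm += drive_step(1500.0, rpm)

assert abs(rpm - 1500.0) < 1e-3      # speed settles on the target
```

A real drive runs far more elaborate control (current, speed and position loops at tens or hundreds of kilohertz); the point is only that a tighter, faster loop tracks the target more precisely, which is exactly what higher performance buys in this application.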

So initially, there was very minimal usage of FPGAs in these types of applications. All the processing was done by DSP processors or by microcontrollers. The connectivity to the network was typically done by an ASIC, because in those days you needed to support just one specific protocol; in this example it's PROFIBUS.

The next thing that happened, customers wanted to move to industrial Ethernet. So it's not just PROFIBUS, which you can do with an ASIC; now you want to support various flavors of industrial Ethernet. That's why we started to see more usage of FPGAs, taking advantage of the hardware programmability to support a variety of protocols. But the heavy-duty processing was still done by DSP processors and microcontrollers.

What we start to see now with a new generation of products is that we have DSP performance that's significantly better than that of the DSP processor. And that is because of the parallelism of the FPGA. So our DSP performance in these applications can be orders of magnitude higher than that of a DSP processor.
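
A back-of-the-envelope way to see where that parallelism comes from (a toy cycle model, not vendor benchmarks): one FIR filter output is N multiply-accumulates, which a single sequential MAC unit needs N cycles to produce, while an FPGA can instantiate all N multipliers side by side and produce one output per cycle once its pipeline fills.

```python
def fir_output(samples, coeffs):
    # One FIR filter output = sum of N multiply-accumulate operations.
    return sum(s * c for s, c in zip(samples, coeffs))

N = 64                               # filter taps (illustrative)
dsp_cycles = N                       # one MAC unit, one MAC per cycle
fpga_cycles = 1                      # N hard multipliers working in parallel
speedup = dsp_cycles / fpga_cycles   # 64x in this idealized model
```

Real speedups depend on clock rates, pipelining and memory bandwidth, but the structural advantage is this one: the FPGA spends silicon instead of cycles.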

We obviously now integrate the processor core as well, so we can do all the control functionality. And what is nice about it from the customers' perspective is that not only do they get higher performance, but because they get higher performance, they no longer need one drive to control one motor. You see in this example, there's one drive that controls 3 motors.

So the customer gets a significant reduction in the bill-of-materials cost of the system. The end product is much better. And see what happens to the FPGA content: it goes up by 50% to 100%.

It's this thing about our ability to provide on a single device, the processor cores, the DSP capabilities, the hardware programmability and the software programmability that allows us to take advantage of the silicon convergence.

So to summarize, silicon convergence is happening. It's happening because different applications need both efficiency and programmability at the hardware level and the software level. It complements the tipping point, allowing us to go much more aggressively after ASICs and ASSPs. And the FPGA companies are best positioned to take advantage of this. With that, I'll hand it over to Ron.

Ronald J. Pasek

Thanks, Danny. Handouts for this next section, you can get at the end as we leave. So in addition to talking about some metrics and guidance for next year, I'm also going to talk about our capital structure and how we're thinking about our cost structure as well.

So a little bit on financial benchmarks. We took a cross-section of companies in high tech. Some are direct competitors, some are competitors in the semiconductor space and some are just household names in the high-tech industry. And what we looked at is how we compare on various forms of ROI.

So -- and by the way, we update this chart on a quarterly basis. This is trailing 12-month revenue, but we do it every quarter, and the results have been very similar over the last couple of years. We probably showed this chart 2 years ago at the conference. You can see that in 2 of the categories, return on invested capital and return on assets less cash and goodwill, we are actually the highest performing on an ROI basis, and no worse than #5 in any category, so quite good.

Okay. Moving on to guidance for next year. I just want to point out there were a couple of questions on our last earnings call about next year. And at the time we didn't have an approved plan by the board, and we do now, so what I'm showing you is the approved plan by the board.

For gross margin, we're seeing 70%, plus or minus 1%. I might remind you, incidentally, that this is the exact same guidance we gave last year for 2011, so we're sticking with that for 2012 as well.

For R&D, we see spending or investment of roughly $378 million. It's a little front-end loaded. There's some mask and wafers at the beginning of the year, a little bit of hiring towards the beginning of the year as well. And I'll talk about that in a minute.

For SG&A, a small increase to $293 million. It's going to be fairly consistent through the year, no bubbles or large blips.

In other income, we see a slight increase, very slight. This is due to the fact that our credit facility comes due in August, and we anticipate refinancing it sometime in the first half of next year. I'll talk about that in a little more detail.

Tax rate, we're assuming 10% to 11%. And again, right now there is no R&D tax credit for 2012, so we're assuming we won't have one.

And diluted share count was about 328 million.

Okay. So a quick note on this: this is 2011 OpEx, not the ramp for 2012. What I want to point out here is how we hired, mainly in R&D, last year. The important point is that, as we went through the year, we told you we had our hiring staggered, which we did. It was a 25% increase in R&D from 2010 to 2011.

And you can see where we ended the year; at the midpoint of guidance, that's $92 million of R&D OpEx for Q4. That includes a little bit of masks and wafers, but that's the peak of hiring. And you can see SG&A was roughly flat.

So the significance of this is that when you annualize that going-out Q4 run rate for R&D, you get $368 million. The guidance I just gave you for next year is $378 million. So you can see there's a little bit of additional investment next year, but it's really not a lot more than the going-out rate for 2011.
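
The arithmetic behind that, using the numbers from the slide:

```python
q4_2011_rd = 92                      # Q4 2011 R&D OpEx, $M, midpoint of guidance
going_out_rate = q4_2011_rd * 4      # annualized Q4 run rate: $368M
guidance_2012 = 378                  # full-year 2012 R&D guidance, $M
incremental = guidance_2012 - going_out_rate   # only $10M above the run rate
```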

SG&A is literally the same thing: annualize the going-out rate, you get a number; add $9 million and you get the full-year number. So there's very little hiring in SG&A next year.

So what that looks like from a historical standpoint is the following. You can see last year we had committed to a 25% increase in R&D; I think it works out to be 24%. And I think the original plan for SG&A was about 5% or 6%; we, in fact, had a couple of things we had to add to that, so it was about 10%. Next year we're increasing R&D 15% in total. So it's a slower ramp in the rate, and actually a slower ramp in the dollars as well: it goes from adding about $60 million in R&D from 2010 to '11 to adding $50 million, and again most of that is in the going-out rate for Q4. SG&A is at 5%.

So this is consistent with what John showed you on our long-term business model. In order to keep R&D at that 18%, we have to keep investing. In order to get SG&A down to 11%, we have to grow it at a slower rate certainly than R&D.

Okay. Just bridging you from 2011 to 2012. So take the midpoint of both SG&A and R&D guidance for Q4, you get $607 million.

Focus on the green negative $15 million for a minute. I'm going to talk about this some more, too. So we're continually doing cost savings. These are continuation of things we've always done. We're looking for areas to be more efficient. We're taking a different approach to things around process, around efficiency, et cetera.

And in a lot of cases we're just investing and reinvesting. So some of that $42 million you see on headcount benefits, bonus and SBC is the reinvestment of some of that disinvestment or savings.

So again, that's a small amount of headcount. Some increase in stock-based comp, as you know, the stock price went up, so our charge for stock-based comp is going to go up. That's actually part of the bridge.

The other significant part of the bridge is the $22 million for prototype masks, wafers and what is essentially EDA tools. This is really just to support the increase in the R&D investment.

And then on the far right, you see an increase of $15 million for infrastructure and other. We set up a technology center in Austin this last year. We expanded Toronto -- we're expanding Toronto. And we acquired a company up in Newfoundland called -- well anyway, there's a bunch of increase in infrastructure, mostly supporting R&D. And so we get to $671 million next year.

Okay. Moving on to capital structure, just a couple of reminders here. We did increase the dividend 33% to $0.32 per share. That's a total of $90 million we'll distribute in dividends this year. And as I've said continuously, we're going to increase the dividend over time. We did buy back some shares in Q3 and are committed to buying back shares opportunistically over time as well.

And again, reminding you what I said earlier, we are planning to refinance the bank note we have; that will be done probably in the first half of next year. We just got public credit ratings from both Moody's and S&P: AA by Moody's, and A- by S&P.

Okay. So how do we think about cost? John showed a long-term cost model for SG&A that's considerably below where we are now, down to 11%. And as I said, we've continually done the things that were fairly straightforward and fairly easy in taking out cost. We're still doing those things, but we're also focusing more and more on business process.

So focus on the middle of the chart there, on the 4 yellow rectangles. These are very similar to what you would see for any company that's about to do a new ERP system: you have to go through and understand all of your business processes. That's what we're doing now. We've done an ERP system; we're not about to do another one. But we're trying to do a clean scrub of how we create demand, how we fulfill demand, how we support our customers and how we report our results.

The reason this is important is that we're really focusing on architecting cost out, not just spending less but really taking cost out of the structure of the business processes we run. So we're looking at and evaluating other world-class business processes. The benefit, ultimately, is that it allows us to scale better, moving from an under-$2 billion company to a greater-than-$2 billion company over time.

And again, it prepares our organization for the next ERP transition, which inevitably will happen. But you really can't do anything with these processes at this level, so I'm going to click down into prospect-to-order, and these are the details of the sub-processes under it.

And in this case, this is where you actually pick a couple of these where you think you're inefficient, which is exactly what we're doing, and we're working on them. So we've done a couple of BPIs on inventory, on pricing, on forecasting.

And we're currently doing several, but I'm going to talk to you specifically about one, which I'm calling alternative logistics.

So some of you may be familiar with this. When we ship to the distributor, we ship essentially at list price, for a couple of different reasons. Number one, we don't recognize revenue until they sell through. Unless we know who the end customer is when we ship to them, we have to recognize the revenue when they ship out and we know the net price. So that yields an initial shipment at list price, and over time, as they ship out, they tell us who the customer is, we have an established net price, and we give them a credit back.

So there's a lot of activity going on, charging them list and essentially crediting them back on a net basis. It's very, very inefficient.

We don't really like it. Candidly, the distributors don't like it either, because they have a lot of working capital tied up in their inventory, and their primary metric is return on working capital. So we looked at this one and worked with all our major distributors, and we actually created a different process where we essentially get rid of one of the steps.

And essentially, now we ship at list price only the things where they don't know who the end customer is, essentially the things they are happy to hold in inventory. Any other shipments, either going to a customer or to what's called a cross-dock, we ship at a net price. So we think over time we can reduce the amount of working capital, both what we have tied up and what they have tied up in this DPA process, by roughly half. And that's how we're going to eventually get to the SG&A metrics we need to hit that long-term business model.
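
A simplified sketch of why shipping at net frees working capital; all prices, volumes and the known-customer share below are hypothetical, and the real distributor price-adjustment mechanics are more involved:

```python
list_price, net_price = 100.0, 60.0   # hypothetical per-unit prices
units = 1000
known_customer_share = 0.5            # fraction shipped to a known end customer

# Old flow: everything ships at list; the distributor carries the full
# list value in inventory until resale is reported and a credit comes back.
old_working_capital = units * list_price

# New flow: only unknown-end-customer stock ships at list; shipments to a
# known customer or a cross-dock go out directly at the net price.
new_working_capital = (units * (1 - known_customer_share) * list_price
                       + units * known_customer_share * net_price)

freed = old_working_capital - new_working_capital   # capital released
```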

With that, I will hand it back to John for a wrap-up.

John P. Daane

Thank you, Ron. So today we talked about kind of a new concept, something we've actually been working on internally for a number of years: the convergence of silicon. And I think probably some of you would say, "Why is that a new idea?" You've heard about SoCs for a long period of time, and that's actually the convergence of a lot of different technology onto a chip.

What we wanted to do is come up with an analogy, so we came up with the smartphone. Probably none of you remember, but the first smartphone was actually introduced by IBM in 1992. It had a series of applications, or technology it implemented, that was not very interesting to the average consumer; it sold for $895, truly not a consumer price, and it was a commercial flop.

But their idea was that you could take a phone and combine it with other applications into a single device. Go forward to 2005 and you get the smartphone back again. Ultimately, I think, to a large degree, what you saw was the integration of functions that are actually much more interesting to the average consumer, and certainly introduced at a price point at which it was viable for consumers to buy.

So if you go through the success, and I'm sorry I'm having difficulty advancing these, why was this successful? Well, first of all, it integrated lots of functionality that was important for the end system to be usable, and it did so at a price that was necessary for the application to work. I'll talk about how this applies to what we're doing in a moment.

It also provided a truly flexible vehicle that could be customized for different end applications. So you could take what is one handset and through software make it look like it had different functions or different features to meet different price points or different markets in terms of the functionality required.

And then, of course, because it's also software-programmable, it has the ability to be refreshed at a very fast pace by lots of developers in the world. And I think there's a strong correlation to this to now what we are offering with the silicon convergence in the industry.

As Danny talked about, a difficulty with ASICs is that rising costs in the industry have made it very difficult for anyone to afford a development, added to the fact that the development cycle is very long. The difficulty with ASSPs in the infrastructure space, and I'm sure you've studied many of these companies' business models, is that their R&D model is broken: they invest a lot in R&D and don't get the payback for it because the volume isn't there.

It's easy for us to take the functionality from those technologies that needs to be embedded and put it within our chips. And as Danny pointed out, we've been doing that now for a few generations of technology, where increasingly we're embedding more hard-core technology into our chips to lower the power consumption and the cost, but also to sweep in some of the functionality that was perhaps integrated in ASICs or ASSPs in the past.

We've also been doing FPGA technology for a long time, that's obvious, but also DSP technology for many applications, because we have more multiply-accumulate structures combined with lots of granular RAM blocks. We can do DSP functionality at a much faster rate than DSPs, and that's wonderful in a lot of the applications we're in: not only industrial, but also military, medical and wireless, as examples.

And then, of course, microprocessors are very easy to license and include in the technology. As Danny pointed out, the only piece that is difficult is the FPGA technology itself, and it's not only difficult to do the FPGA fabric; remember, the FPGA vendors develop their own EDA tools.

And if you're going to try to implement the technology, you not only need that core fabric, you're also going to have to invent your own EDA environment. As a company, we have about 40% of our resources simply working on our EDA tool environment. There's nothing you can buy from a third party that works with FPGAs.

So why the correlation? Well, first of all, we have the ability with the latest process generations to now integrate this functionality together at very good cost points. So you really can do the single chip technology. The advantage of adding the FPGA technology to this is we can make one device still flexible enough to address lots of customers and lots of markets to get the payback for our R&D investment where the ASSP companies can't. We also make this product flexible in an environment where customers can add their own differentiation either through programming the microprocessor, programming the DSP blocks or programming the FPGA.

And with the advancements that we're making in tools moving from HDL design into allowing parallel C-based design with accelerated blocks in our FPGAs, we're now automating that capability so that people can do FPGA-based design, systems design in a much shorter period of time to allow the rapid innovation, which is very important in a number of markets.

And in particular, if you go back again and look at communications as an example, you'll see a lot of communications vendors are deploying FPGAs with the idea that they'll upgrade the functionality through shipping new software features through that base station so that they don't have to do a hardware forklift upgrade. They just can replace the software, add the new features or content and be able to manage the product over its lifecycle.

So I think it's probably a powerful concept, something that we're stepping toward with each generation and something that clearly we can afford: the strong growth from a revenue perspective allows us to spend more on R&D, and the efficient model of developing platforms that apply to more than one market and more than one customer allows us to generate the return on that R&D investment and allows the flywheel to continue.

So as I invite my presenters back up to answer questions from the audience, I'd probably remind you of 2 things. We think we have the ability to continue to grow at a strong rate, not only because of the tipping point -- and by the way, as Danny showed on the foils, the serviceable ASIC industry is still over 2x larger than the PLD industry. So we can replace ASICs for many, many years and continue to grow at a solid rate just on that.

We're adding the silicon convergence, and this is the point at which we can really enter the embedded space and the ASSP space and continue that strong growth with the goal as we have in the last few years of continuing to outgrow the industry by at least 2x.

The other thing I would tell you is that, financially, we've got a very strong business model, and we think that will continue. It's hard to do what we do, and it's hard to get into our business: it takes not only the tools and technology but also the support network we have in place. This business model allows us to continue to invest and to seize on new opportunities and new markets, something we think we can not only preserve but potentially enhance in the future.

So with that, we'll open it up to questions, and I'll let Scott and Cliff choose the person. Give them a microphone and one of us will answer.

Question-and-Answer Session

Unknown Analyst -

You mentioned -- your profitability metrics have been impressive, and the benchmarks speak for themselves. And you mentioned Clay Christensen recognizing you for innovation, but part of his thesis is that companies with big, juicy margins like yours are actually at risk of being disrupted. I wonder if you could share with us your view on the extent to which there's a risk for you, where it is, where you could get disrupted and what you're most worried about?

John P. Daane

Well, I think, as any company, you obviously have to be concerned about competitors. For many years, we've actually competed not just with the other programmable logic vendors. We've seen Texas Instruments and Freescale for DSPs. We now see everybody for microprocessor-related products. We've seen the ASIC vendors for many years. We've seen the communications ASSP vendors. So there's a lot of competition that we have. What we've seen, though, is that our technology rides Moore's Law, and therefore we can take our products and technology forward, realize the benefits of lower power, higher levels of integration and lower cost, which allows us then to go in and basically encroach on more of their marketplace. Could there be other technologies that develop and compete with us? Sure. We're always out looking. We haven't seen anything today that we view as a particular threat, either generically across the board or in any one particular market that we have. You are right that if you have high margins and a high growth rate, you'll probably see lots of venture capitalists throw a lot of money at your space to develop alternatives. But what you are seeing within semiconductors is that most venture capitalists have moved on. They view the investment cost as far too high; you simply don't get the return on investment anymore. So we're not seeing a lot of silicon startups in our space, or in semiconductors in general. I don't know if -- Danny, you want to add anything?

Danny Biran

Yes, maybe just one thing I will add: we definitely see the emergence of multi-core or many-core implementations. And the thought there is that FPGAs definitely provide superiority on the hardware side, because while they are talking about 64 cores or 100 cores, we have hundreds of thousands of those cores. But the downside of the FPGA was that until recently you really had to program the hardware and, therefore, had to understand the hardware complexity. And that's why something like OpenCL, which I mentioned, is so important to us, because now we can really marry the ease of programming of a software engineer with the hardware capabilities of the FPGA. That is exactly one of the things we are doing to address that threat, because we do see graphics processors and multi-processor products trying to come after our market, and that's how we deal with it.

John P. Daane

I'd say, in general, the advantage of the FPGA is that it's still going to have lower power and higher performance than a multi-processor would, because by nature they have a very generic architecture; ours is a little bit more dedicated. Multi-cores are not easy to design with. In fact, we talked about moving to OpenCL to be able to program them. In reality, for multi-cores there is no generic compiler that allows you to take advantage of any algorithm in a multi-core environment; otherwise, I think you would have seen Intel take advantage of that for their CPUs. So while you hear a lot about them, they certainly haven't solved some of the technical and technological gaps they have. And I would say, at the end of the day, there's nothing that prevents us from implementing a combination FPGA with multi-processor cores that would solve some of these things as well. Again, we've got the FPGA technology solved; that's the easy part for us and the hard part for everybody else. We can go get the other technology pretty fast.

Ronald J. Pasek

Can I just add one thing to that? So, Mark, to your point, some of those same companies that have fat margins have a fat cost structure as well, which we do not. We have a very lean cost structure.

Unknown Analyst -

On the gross margin side, the long-term model is still at 67%. Can you walk us through the thinking or the math behind why you expect it to come down? Is it a cost-per-unit thing driving it down over time? Is it pricing? Is it the mix of your business? And then my second question: when you showed that cumulative growth of, I think, 15% for your company and 11% for PLDs over the last several years, what's the difference between unit growth and pricing? The reason I'm asking is that I'm curious whether, as you displace more ASICs and other alternative devices, you are seeing higher-ASP products going into the system. Are you seeing more units per box? Can you just differentiate what was unit growth versus pricing?

Ronald J. Pasek

So let me take the first part of that. So, yes, I think over the long term, what we've said in the past is we're trying to balance growth and profitability, which is why we're giving you a 67% long-term margin. But as you saw, right now for next year, I'm saying this -- I'm giving you the same guidance for gross margin I gave last year. So in the short term, I don't see any margin erosion. Over the long-term, could it happen? Maybe. We're taking this year by year. Again, when we're setting a long-term model, we're saying, "Okay, if we want to still grow 2x semi, what's a reasonable thing to expect on the margin?" And that's why we're giving you the 67%.

John P. Daane

Again, there's no specific math that goes into this. It's kind of a long-term model. We're saying we've got lots of markets that we participate in, and we sort of look at those. We've got a model of what marketing and sales provide us, and this, we believe, gives us the best balance between growth and profitability. Then what we do is, every year, we challenge our operations group to see if they can manage the cost structure so that we can be above that. And what you've seen for a number of years is that we've been able to stay above it. 67% continues to be our long-term model. We have no idea if and when we'll hit it. Mix, certainly, would have a play there, but it could be that we operate above that for a long period of time. We don't know. As to the question of ASPs and whether things are really shifting: if you look over the last 15 years, ASPs for the corporation have grown only very, very slightly. Really, our revenue growth has come from shipping more units into more markets, and the profitability has come from the tailored products as well as some of the techniques that we use for yield enhancement above and beyond what the competition does.

Unknown Analyst -

I guess, first question, looking at growth by market segment: wireless and telecom far surpassed semiconductor growth in terms of PLD growth. So I'm curious, looking forward, as you look at the silicon intensity of PLDs for that particular segment, do you think we are still going to see that kind of growth, led by backhaul and cloud? Or is the transition from 2G to 3G to 4G slowing down in terms of content, such that we won't see that sort of 3x growth for PLDs?

Danny Biran

So, so far, it hasn't slowed. And again, the thing that we are now starting to experience is the migration from 3G to LTE. LTE is just starting. The backhaul we definitely expect to continue to see grow. Everywhere you read that more traffic is going to be video, which needs more bandwidth. That all leads to continued investment in backhaul, and backhaul continues to grow, so we definitely see opportunities there. Then there's the question of what happens with new types of architectures in order for the carriers to get even more out of the spectrum. There, there are a number of different directions, and since things are really just beginning, it's hard to predict. The move to a cloud-based architecture, which some carriers are looking at, definitely creates more opportunities for us. I showed earlier what happens in the data center: things like storage and memory access, things like acceleration create new opportunities for FPGAs. There are thoughts about smaller base stations perhaps complementing the big base stations. They will never replace them; they may coexist with them. And then you get into questions of what is a micro base station, what's a pico base station. Everything is still up for discussion, and it's more difficult for us to predict what happens there. But clearly, some of the trends we continue to see the way we saw them for a while, and some things we believe will create even more opportunities. And this question of pico versus micro versus macro, we are looking at. We definitely don't see any carrier who suggests that the small base stations will replace the big ones. They may coexist with them, and we just need to see what the architecture looks like.

John P. Daane

A few comments, if I could. The dynamic that we see, in terms of our dollar content growing with each generation, continues; none of that has changed. Going from 2G to 3G, our dollar content doubles or triples. And then for some of the newer systems -- CMCC is aggressively trying to push the idea of LTE Advanced -- it's about 3.5x to 4x off of 2G. Nothing really disrupts that dynamic that we see in the marketplace, so as newer technology gets deployed, we should continue to see strong growth. As Danny alluded to, we're also in direct conversation or contact with the major carriers -- we've talked to AT&T, NTT, Vodafone, Verizon, as examples -- about the idea of macro versus micro versus pico, to understand what they're doing. Many of them are just thinking about it. Most have realized that the existing discussion around pico doesn't make any sense, because deploying more base stations does not necessarily get you better or more efficient use of spectrum. You're going to have to move to a more advanced system, and what's been discussed, for instance, is a heterogeneous network of base stations that communicate. All of that requires a level of complexity which is far beyond what you even see in macros. And so that idea, I think, will create even more potential for FPGA content moving forward. So there's nothing that we see really taking away from us. Again, the dynamics that benefit FPGAs are still playing out very well in communications.

Unknown Analyst -

Question for Ron. Ron, your tax rate has steadily dropped over the last few years. Does this make your overseas cash begin to pile up? And at some point, do you bite the bullet and just pay tax and bring it back?

Ronald J. Pasek

So yes, we do have the majority of our cash outside the U.S. That is absolutely correct. But we have a considerable amount of cash here in the U.S. as well. And no, we have no plans to repatriate any cash in the near future. We have participated in a number of efforts to reform the U.S. corporate tax code to move to more of a territorial system, which would help cash cross borders more easily. I don't see that changing anytime soon, but no plans to repatriate.

Unknown Analyst -

So can I ask you 2 questions? First, on silicon convergence. Obviously, you talked about how you are one of the best positioned in terms of taking advantage of this trend. But looking at it from the other side, how do you think a company like Intel or the microprocessor companies will react and try to play this? I mean, one scenario would be they buy a smaller FPGA company. What would be the limitations of such an approach, and what makes you feel comfortable that you'll still be dominant there if they were to react in such a manner?

Danny Biran

So yes, they can always buy a smaller FPGA company. They will still have to deal with the technical difficulties that we talked about earlier: they can take the FPGA and put the processors on it, or they can take the FPGA and put it on the processor, and they'll have to deal with those technical challenges. Can a large company like Intel do it over time? Yes, but I think that would only confirm our theory that the world is going toward silicon convergence, and that you've got to have FPGA technology in order to be able to participate there. But again, I can't comment on what Intel may or may not do. I will remind you that the foil I showed that talks about the trade-offs between flexibility and efficiency is an Intel foil. So Intel publicly recognizes the limitations of processor cores and recognizes the need to have what I call reconfigurable logic, which is what we call FPGA fabric.

John P. Daane

Yes, and another way to approach this is we know this is real and is a direction, because basically every major semiconductor company has, within the last couple of years, contacted us with an interest in licensing our technology. It's not a business model we choose to pursue. I'd remind everybody that there are really only 3 companies today that have viable software, which, again, I view as the critical component. And of the 3, one really is only of a lower-end variety, for lower complexity, which would work for some applications but clearly not all. So there's not a lot of opportunity to go get the technology, because there just aren't a lot of companies left in the industry that have it.

Unknown Analyst -

That's helpful. Just one quick follow-up question on the same topic. In terms of CPU acceleration, you talked about FPGA and CPU acceleration. I think it was a year or 2 ago, this was also a topic that GPGPU companies were addressing. Can you just remind me what advantages FPGA acceleration brings versus GPGPU computing in the same environment?

Danny Biran

So again, an FPGA is a lot more parallel than any other approach that starts with a core and replicates a lot of them. The challenge that FPGAs traditionally had was that it was difficult for software engineers to program them. And that's why we are so keen on this OpenCL development. I can tell you that we started the OpenCL development earlier this year; we have some preliminary results, and what we see now clearly shows that the performance you get for given applications on our FPGAs is significantly higher than what you would get on any other computing hardware architecture, whether it's processors or graphics processors. And now you can do it with the ease of just writing code in OpenCL. So this is really the key behind OpenCL: you get all the hardware benefits of the FPGA while programming it as if it were another platform like a processor or graphics processor.

John P. Daane

And just to follow up on that, I think banking is a great example of this. Banks use a lot of servers to run a lot of proprietary trading algorithms. If a GPU is faster, or an FPGA is faster, than the microprocessor, there's an advantage. The difficulty with the FPGA is you have to program it through RTL, and banks aren't going to hire lots of FPGA designers to sit next to the exchange to be able to update the algorithms. What NVIDIA did is they developed CUDA, which was a C-like language with which you could program their GPUs. And so they naturally had an advantage, because they solved the access problem, and the GPU, being basically a multiply-accumulate structure, is much faster at math algorithms than a CPU is. So they were used. What are the 2 downsides? Compared to the FPGA, for a GPU, power consumption is out of this world. If anybody's ever opened a gaming system that your kids might buy, you'll see that the cooling system for the GPU is quite exotic, and far more exotic than the CPU's. And that's a problem, because power is expensive. The other downside, of course, is whether the GPU vendors will support the product over its lifetime, because usually GPU lifecycles are about 2 years. FPGAs are much faster running the mathematics algorithms than the GPU, at a fraction of the power. Our problem was we didn't have the C-like programming capability. As Danny says, now we have OpenCL, which, by the way, is a standard, founded by, I think, IBM and Apple, of which we're now a voting member. It's a technology which is available and really allows you to abstract to a level where software designers can program the FPGA and get the benefits: it's lower power, it's faster, and we support these things for a long period of time. That has essentially opened up a new market for us.
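To make the programming model the speakers describe more concrete, here is a minimal Python sketch of the OpenCL-style execution model: a kernel is written once and run in parallel over an index space, whatever the hardware underneath (CPU, GPU, or FPGA accelerator). This is an illustrative emulation only, not Altera's actual tool flow; the names `vector_add_kernel` and `enqueue_nd_range` are hypothetical stand-ins for an OpenCL kernel and the host-side enqueue call.

```python
# Sketch of OpenCL's data-parallel model. A real OpenCL host compiles
# C-like kernel source for a device and enqueues it over an N-dimensional
# range; each work-item sees its own global index via get_global_id().
# Here we emulate that with a plain Python loop.

def vector_add_kernel(gid, a, b, out):
    """The body one work-item runs; gid plays the role of get_global_id(0)."""
    out[gid] = a[gid] + b[gid]

def enqueue_nd_range(kernel, global_size, *args):
    """Emulate the host enqueueing the kernel over the whole index space."""
    for gid in range(global_size):
        kernel(gid, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
enqueue_nd_range(vector_add_kernel, 4, a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

The point of the model is that the kernel author never writes RTL or manages parallel threads explicitly; the toolchain decides how the index space maps onto hardware, which is what makes the same source portable across CPUs, GPUs, and FPGA fabrics.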

Unknown Analyst -

Could you talk a little bit about, as you move to the integrated ARM FPGA, your cost per gate relative to a standard SoC? And then also, could you fill in a little bit on programming: you're not going to be able to program the FPGA without HDL, so where does that break off? Do you just do the hardware accelerator and then ship it to the customer and he can reprogram? Can you talk a little bit about the development costs as well?

Danny Biran

So in terms of cost per gate, I'm not sure what you're comparing against, but the ARM subsystem itself is an example of something that we embed as hard IP. So the implementation is as effective as any other ASIC implementation; it is not implemented using the FPGA fabric itself. Obviously, the other part of the device is the FPGA fabric. So if you compare it to a microcontroller that has the same core but doesn't have the programmability, our device will be bigger, but that's like any other FPGA example. In terms of the development flow, it's exactly as you said. You would program the ARM subsystem using any of the available ARM tools out there; that's one of the nice things about choosing ARM, there's a very developed ecosystem. The FPGA fabric will be programmed the way it's programmed today. So the idea isn't that software engineers will all of a sudden start to program the programmable fabric, unless they use something like OpenCL. But now you have everything integrated on one device, with all the advantages of single-chip integration in terms of cost and power and form factor and all these things.

John P. Daane

A couple of things go into that. One is, as Danny talked about, the Qsys tool that allows you to take the ARM microprocessor and combine it with a lot of IP that we generically have available, either Altera IP or third parties', to create subsystems without having to design the RTL. It automatically creates the RTL and interconnects these blocks for you, so it takes the level of abstraction up. OpenCL is a parallel programming language. What we supply is the synthesis tools, which, in the software world, you might want to think of as a compiler; it takes advantage of IP blocks or accelerators that we provide. So the customer will simply compile down, using either an external CPU as the host or the ARM subsystem that's on the FPGA as the host, and then basically use the accelerators to accelerate portions of the OpenCL code that they write. So it is a way that you can, for the most part, write code in C without having to do much or any HDL design in some applications, and get much higher performance, much lower power. So what you're seeing from Altera, and I think probably in some cases from some of our competitors, is more investment in software tools to automate -- to be able to access a larger audience that perhaps designs differently than hardware engineers do with HDL.

Nathan Johnsen - Pacific Crest Securities, Inc., Research Division

Nathan Johnsen from Pacific Crest. Just turning back to silicon convergence, you talked a lot about how the FPGA manufacturers are uniquely positioned to address that. I was wondering if you could compare and contrast your strategy to address that market versus your largest competitor's?

Danny Biran

There's a lot that's common. The one key difference I will point out is we also have the HardCopy ASIC line, so we've done ASICs before. And one of the capabilities we have, for example, on some of the Stratix families is what we call the embedded HardCopy blocks. That is a way that allows us to add some of this application-specific IP a lot more cost-effectively than if we had to do it differently, and it also allows us to create different derivatives of the silicon at a much lower cost. Other than that, the differences that John highlighted in our basic approach to FPGAs apply to any FPGA, including those that implement silicon convergence, which is that we strongly believe in the benefits of the tailored architecture. And therefore, at 28, we have very distinct families for low cost, mid-range and high end. Our competitor chose a different approach, which they call the unified architecture, which basically means that the families are all the same. And we absolutely believe, and we've seen the results already, that the tailored approach is the correct one.

Unknown Analyst -

So my first question is about OpEx. So quite a hefty growth in OpEx in 2011, I think about 24%, and you're guiding to another 15% growth. What is your revenue visibility for 2012? So at what point do you say that this level of OpEx growth is justified? How much of this is just structural OpEx growth? So in case revenues don't materialize, how much of this is like fixed cost versus variable cost? Any color will be helpful. And I have a follow-up.

Ronald J. Pasek

Yes, the challenge is, and we've talked about this before, that we have this bifurcated P&L, right, because the amount of money we invest this year in R&D pays off in 3 to 4 years. You're asking a relatively short-term question, but we have to think about R&D over a much longer period. If you look out that far -- and most of the investment next year is still around 28-nanometer and the SoC products -- those things pay off handsomely down the road. It would be very short-sighted to say we're not going to invest in those, because we'd just be foregoing a revenue opportunity. I'm not going to answer the top line; it's kind of irrelevant from an R&D standpoint. They're very different looks at the P&L.

John P. Daane

I think if you kind of think of what we're doing now versus what we were doing a couple of generations ago: we're doing 5 product families in 28-nanometer. We have the low end, mid-range and high end, 3 different product families with different architectures, different process technologies, different transceivers, different memory implementations, and not a tremendous amount of reuse between them, because in order to make a tailored product, they have to be different. Then we have 2 different SoC products on top of these, which are the ARM-based products. So generally, in the past, we might have done 2, maybe 3 products in a process technology node. We now have 5. We've added the microprocessor designers to implement the hard cores that we talked about. We've added more of the ASSP functionality; for instance, Ron talked about the acquisition of Avalon late last year for OTN IP, and Danny went through an example of what that gives us to replace what would have been a lot of ASSP technology in the past. We've talked about OpenCL as a new programming language that we're working with to create synthesis tools, or compilers. So we've really expanded our base. But the reason we've done that is we see this very large opportunity in front of us. I mean, we can still compete against ASICs; we would naturally do that anyway, because ultimately, with every new generation of process technology, ASICs just become that much more difficult to do. But now we have the ability to really go after embedded and to go after some of the ASSP functionality, particularly in these lower-volume industrial, medical, military, telecom and wireless applications where most semiconductor companies have moved on because they can't get the return on investment. So we think these investments make a lot of sense. They are evolutionary investments based off of our FPGA heritage and the FPGA advantages that we bring to markets by adding other IP in.
We can sweep in other functions to increase our average selling price, to increase the amount of material that we have within the box, and to grow at a multiple of our customers, as we have for many, many years. So there's a lot more R&D investment simply because there are a lot more products than what we've been doing in the past.

Unknown Analyst -

Just one quick follow-up. So at 28-nanometer, Altera versus your main competitor: I understand that they may not have the lowest cost structure, right? So let's assume that they will have a gross margin disadvantage. But does their having a unified architecture give them some time-to-market advantage? Does it give them any advantage? Where I'm leading with that is that, in the past, they had incumbency, but then you were able to disrupt them. So what gives you the confidence that they won't be able to come and disrupt you at 28-nanometer?

John P. Daane

Well, we'll go through this, but I think probably all of you know this; it's pretty intuitive. If you're doing one thing versus, in our case, 3 things, the one thing theoretically is faster to do and, from an R&D perspective, lower cost to do. We're trying to do 3 things, so obviously our cost structure should be higher from a spend perspective. The benefit you get later on is a tailored product, which should allow you to address more market opportunities. Because remember, we embed things like the 28-gig transceiver, which military and telecom love, but which the automotive industry couldn't care less about. And you don't want to carry that functionality over, because they're not going to pay for it; it would become a very, very large cost disadvantage. But in reality, we can afford the R&D spend to do 3 things very well. I think we're a very efficient corporation, so having to retreat to do one thing doesn't make a lot of sense to us, and we're not forced to do it from a cost perspective. We have seen the return from doing the tailored architecture, both in terms of revenue growth and profitability. There's a lot of history in that, so we definitely want to continue that and drive it forward. And from a product release perspective, it hasn't hurt us at all. What we've seen in the past is that first to market is not necessarily what wins; it's the best product. If you look back at most of the product lines over the last 10 years, we were never first to market, and yet we've gained market share with every generation of product that we've introduced. We gained market share because we had a better product. We spent a lot more time designing in what the markets wanted or customers needed, and that's, again, what we're doing with the products that we have. I think if you go through it, there are some definite areas where our competition is ahead in 28-nanometer.
There are definitely some cases where we're ahead in 28-nanometer. If you go through it: if you look at, for instance, the SoC product, which is the embedded ARM product, I think they probably have about a 6-month lead in terms of silicon, and software is about even between the 2 companies. If you look at the low end, it looks like we're about the same. If you look at the mid-range, clearly, our competition was a few months ahead of us, but not more than that. And you can see this in when the companies actually shipped silicon and when they shipped for revenue. Then if you look at the high end, we were about a year ahead in terms of software and about 9 months for silicon. So overall, a few months one way or the other does not make a product. What makes the product, what makes the revenue, is 2 things in our industry, which was a tough lesson for me coming into this. The first lesson was incumbency. It's huge. It was actually a term coined by our competitor. It's hard to overcome. Customers want to work with who they're engaged with. Part of the reason for that is they tend to reuse some of the functionality from the last design, and because our architectures are different, it's hard to port it over; you lose some time-to-market advantage. And there are other reasons: they're wed to the software, they like your support, all sorts of things come up. And then the other thing is just, if you've got the better features, I think that's what drives the design wins. Ultimately, at a high level, we just went through this data a couple of weeks ago: we've seen no shift in design wins or losses, opportunities, commits from the last few months, the last year. Our momentum in 40-nanometer is continuing in 28, so we haven't seen anything that has changed in the market away from us and toward a competitor. Things can be different, but right now, there's just nothing that's shifted. Danny, if you have any other thoughts.

Danny Biran

Well, the only thing maybe I will emphasize, to what John said: you alluded back to the 40-nanometer disruption. It did not happen because we were out there first. It happened because we realized that the world would be all about transceivers, and we had the transceivers. Customers who had never worked with us before had to, because they needed the transceivers. It was the product features, and not the fact that we were a few months ahead.

Unknown Analyst -

John, historically, the PLDs have benefited from being able to leverage on the manufacturing side, kind of very few architectures through a fab and you had that cost advantage. Can you talk a little bit about silicon convergence and whether or not that's slightly changing that dynamic to a more customized solution at the manufacturing level so that we have to start worrying about manufacturing a little bit more going forward? Or is this still mostly customization at the software level? And granted you've talked about 28-nanometer yields being good, but it's -- I think you only generated about $2 million in revenue in the last quarter, how confident can we be going forward as you expand the product family that yields aren't going to be an issue?

John P. Daane

So I'll start backwards. First of all, yields in 28-nanometer are slightly better than they were in 40. And remember, back in 40, we had no issues with yields at that particular period of time. As a corporation, whenever we've generated new products, you've never seen our gross margins come down. So yields in 28 are actually doing quite well. We don't expect any impact from that, or from capacity in 28-nanometer from our major foundry; TSMC is always taking care of us. So you never know about the future, but so far, so good in 28-nanometer. We're actually consuming a lot of silicon right now because we've got a number of products that are out and shipping to customers. That's number one. Number two, the strategy is still to try to maintain generic silicon. So we're embedding functionality that is commonly used across a number of markets, so that we can have as generic a vehicle as possible. Anything that gets really specific, what we like to do is reserve that for the FPGA fabric. And by doing that, we're still making a fairly generic vehicle. That's why, in the 28-nanometer product lineup I described, you see Stratix used not only for prototyping but also for telecom and military; the mid-range for wireless and broadcast, as an example; and the low end for automotive, consumer and industrial. Just because the features are different, we try to bucket them into 3 families. We're not going to do any more than 3. We think that still gives us a great return on R&D, as it has -- we've been doing this for a couple of generations now. And again, we embed in those only what is commonly used across those markets and then leave whatever is generic to be handled by the CPU, the DSP blocks that we have in the chip, or the FPGA fabric. So we can continue to get that return on investment that we've seen in the past. That's why we're not changing our business model.

Unknown Analyst -

Yes, you went very quickly over the -- you mentioned that there'd be a 50% reduction in the prepayment of -- the difference between list and actual sale price to distributors.

Ronald J. Pasek

Yes, this has nothing to do with the net we get. It's just the way we get there.

Unknown Analyst -

But you said it'd be a 50% improvement in working capital. Is that...

Ronald J. Pasek

For our distributors, yes.

Unknown Analyst -

For your distributors but not for Altera?

Ronald J. Pasek

Not for Altera directly, although we have -- remember, when we were sending DPAs back and forth, there was a lot of money being transacted between distributors and ourselves.

Unknown Analyst -

So it would mean they're carrying the inventory at a lower value?

Ronald J. Pasek

Well, we always carry the inventory cost, so -- but there's not a transit time. The snapshot of our AR at any given quarter, there's an amount of that AR that's actually uplifted for this list price amount. And in the next quarter, it gets credited back to the distributors.

Unknown Analyst -

So what benefit is that to Altera then?

Ronald J. Pasek

Well, ultimately, we can -- we increase their working capital -- I'm sorry, decrease their working capital, increase their return on working capital, and then...

Unknown Analyst -

And we wouldn't necessarily hold any more inventory?

Ronald J. Pasek

So hold on. So then, we can actually go back to them and ask for an increased level of investment for the resources they put supporting Altera products, which we've talked to them about.

John P. Daane

So the net benefit for all of us is they improve their working capital model, and they're willing to deploy more direct resources because they can invest that saving somewhere else. It helps improve their business model tremendously. And it's just another way to continuously improve. You can't stay static in any industry, right? You've always got to look at ways to do things better and not get trapped by your past. I think we've been pretty good over a series of years at doing that. You never want to stop, because every 5 years, what you did back then just may not make any sense for the industry anymore. So it's the continuous improvement idea, and I think in this particular case it's quite brilliant and will save a lot of money. Is it new? No, there are other companies that are running this today. It's just us getting there and improving our efficiency and improving our partners' efficiency.

Unknown Analyst -

I have a question. During the recent quarter, you booked a lot of business with the Chinese company, Huawei. Can you comment which level of technology they purchased and why such a cost conscious customer would've selected your technology over a lower cost technology?

John P. Daane

Well, I think, first of all, what you'll see in any communications equipment is a lot of programmable logic. Why? Because ultimately, for the volumes, it's the right technology to deploy today. So where you may have seen ASICs or ASSPs used in prior generations, you'll see PLDs increasingly used in telecom equipment. The top-tier manufacturers tend to do a lot of their own development to differentiate from their competitors, and obviously PLDs are a great way to do that, because they allow you to program in features or functionality that's different from your competition. We talked about the idea that in some telecom systems, they'll do software upgrades and take advantage of the fact that the FPGA can be changed in the field in terms of form, fit or functionality. And the communications industry, while large, is slowly concentrating around a few players. So it shouldn't be a surprise, long term, if you were to see a couple of customers above 10% of our revenue within communications, simply because of the size of that business for us, but also because there's a lot of concentration around a few vendors, which happen to be using a lot of programmable logic content. I don't know if you want to -- no? Okay.

Unknown Analyst -

A question for Danny. As you think about convergence and the SoCs, IP seems like it's a pretty important function. Where is the IP ecosystem? And given that Altera has done things like purchase Avalon, is this something that you have to drive to really get the converged market to start to ramp?

Danny Biran

So it depends. When we look at IP, we look at different types of IP. First of all, there's a lot of IP that is fairly horizontal in nature. In one of the foils, I showed memory controllers; that's an example of something that you use across a wide variety of applications. There's also the video and image processing library. That's a library we developed at Altera that is extremely popular in markets like broadcast, some parts of consumer, even medical imaging, a lot of those places. Sometimes we don't have the system expertise, and so we would partner with somebody, and there are a number of good IP partners that we are working with. In the case of Avalon, we decided it was strategically important for us to own the OTN IP, and the way we decided to do it was to acquire the company; now it's obviously part of what we develop. So it depends on the type of IP, the market, and whether we think it's something that we need to own strategically. A lot of considerations go into this.

John P. Daane

I think what you'll see is a combination of both. Some IP we'll develop, some IP we'll license. Some IP, as we just referenced, the other company will sell, and they'll license it directly to our customer. And it will be a combination of those that, as we have for many years, we'll continue to do going forward. Scott?

Scott Wylie

Listen, we have time for one more, so I'm just going to walk back in the audience here and hand over the mic one last time.

Unknown Analyst -

You've done a very good job monetizing your R&D investment over time on 40- and 28-nanometer. As we look forward to the 18- and 20-nanometer node, how should we think about the step up for that? Maybe give us some broad parameters, that step up versus what it was for 28.

John P. Daane

Difficult to do today, and I'll have to say, we cannot outline that for you. It depends on how many families we do, when we do those families, and what else we're investing in. And part of the difficulty here, and I apologize for that, is we have not preannounced products. We've kind of given you an understanding of what we're doing from a next-year perspective on R&D. We announce products as we have software availability. We really haven't talked about 20 yet. We haven't talked about what we're doing or how we're doing it. And it really is too early for us to do that. So we can't give you a multiyear model from an expense standpoint. I apologize about it, but what I don't want to do is get into pre-announcing our products. We've sort of been pretty regular. I think it works really well with the customers, and that's what we will stay focused on, so again, I apologize.

Scott Wylie

So at this point, let me encourage you to join us next door. It's out the door, just turn to your left. All of us will be there and more than happy to engage in discussion. A reminder, the hard copies are available as you head out the door. And once again, we are delighted to have you here, and your support of us and your interest in Altera is much appreciated. Thank you for coming.
