Mellanox Technologies' Management Presents at Credit Suisse Annual Technology Conference (Transcript)

Dec. 5, 2013 | About: Mellanox Technologies (MLNX)

Mellanox Technologies, Ltd. (NASDAQ:MLNX)

Credit Suisse Annual Technology Conference

December 5, 2013 11:30 AM ET


Jacob Shulman - CFO


John Pitzer - Credit Suisse

John Pitzer - Credit Suisse

Good morning. Why don’t we go ahead and get started? It’s my pleasure this morning to introduce the CFO of Mellanox Technologies, Jacob Shulman. He is going to walk through a presentation, probably 15, maybe 20 minutes in length. We will have some time for Q&A afterwards. With that, I’ll turn things over to Jacob.

Jacob Shulman

Thank you very much, John. Thank you very much for having us at this conference. Before we start, I’d like to review our Safe Harbor statement. It’s pretty straightforward; I'm sure you're all familiar with it. So, what is Mellanox? What do we do? Mellanox is a fabless semiconductor company. We design, manufacture, and sell high performance interconnect products that facilitate efficient data transmission between servers, storage and embedded systems.

We are dually headquartered: our R&D operations are in Israel, our business headquarters in Sunnyvale, California. We had approximately 1,400 employees at the end of Q3. 2012 was a unique year for us. Our revenue hit the $500 million benchmark, up 93% year-over-year from 2011. Our Q3 ’13 revenues were $104.1 million. We guided Q4 revenue from $105 million to $110 million, and we exited the September quarter with $306.4 million in cash and investments.

So each compute and storage system consists of three major blocks: CPU, memory and interconnect, the pipe that moves the data between CPU and memory. CPUs are becoming more and more powerful, memory is becoming faster, and that creates a bottleneck on the interconnect. That’s why a high performance interconnect is required. When we first came to the market, we offered 10 gigabit per second products; then the market progressed to 20 and 40, and today we offer 56 gigabit per second InfiniBand and Ethernet products, but we offer 10 and 40 gig Ethernet as well.

The data is growing exponentially. Every day new applications come out that create data and consume data. That’s why people are trying to build very efficient data centers with high performance interconnect, because we help to facilitate data transmission between CPU and memory.

While the competition is moving from 1 gig to 10 gig, we’re the only guys today that offer 40 gig end-to-end and 56 gig end-to-end solutions. We first started with the HPC market, but what we noticed is that the same architecture that is used by the HPC guys is also implemented in the Web 2.0 and cloud markets. That’s why in 2011 we made a decision to penetrate those markets.

So Web 2.0 guys for us are the guys who build supercomputers, one supercomputer to run one application, and cloud as we define it is guys building one supercomputer to run multiple applications. All of them use the same infrastructure as HPC, and today we’re very uniquely positioned to capture those markets as well. HPC is still the major market for us, although in 2013 revenue derived from HPC is below 50% of total revenues. And today we also sell products in multiple markets: Web 2.0, database, cloud, financial services and storage. And there are various examples of the value our products provide in those markets.

So for example in HPC, today InfiniBand is the most used interconnect in the HPC market. Every six months a list of the TOP500 computers is published, which basically shows what systems are used in the TOP500, and 207 of those systems, based on the November list, are connected with InfiniBand.

The latest list was published in November ’13, and the number of InfiniBand-connected systems went up from 205 in June to 207 in November. InfiniBand today is the most used interconnect in HPC and the only Petascale-proven interconnect. So if people want to build Petascale systems, they must use InfiniBand.

The penetration of our latest generation FDR products increased 3.3x from a year ago, from 21 to 67 systems within the TOP500 list. If you look at the chart at the top right corner, all of the blue dots are the efficiency of systems connected with InfiniBand. So you see that those systems are efficient at a rate of 80% to 90%. The yellow dots are systems connected with 10 gigabit Ethernet, and the efficiency of those systems is around 50% to 60%. So basically, if people connect their HPC systems with InfiniBand, they get 50% better efficiency from the system.

Purdue University is a great example of our products used in HPC. They were able to build a 187 teraflops machine with only 648 servers. That’s half the server count and double the performance of their previous system.

In the Web 2.0 market, we work with five major customers, which we call whales, and we work with a number of smaller customers. Typically those guys try to be very secretive about what they’re doing, and there are only a few public examples of our success in that market. One of the examples is Bing Maps. When Bing Maps went to build a system, they had a choice between our 40 gigabit InfiniBand products and a competitive solution of 10 gigabit Ethernet. The decision was to go with 40 gigabit InfiniBand. Bing Maps was able to get 10x the performance of their prior system, and they were able to achieve that at half the cost compared to a 10 gigabit Ethernet solution.

The next market for us is the cloud, and this is a list of public examples of InfiniBand used in the cloud. Our penetration in the cloud is very small right now, lower than in Web 2.0, because the cloud creates some additional challenges for cloud vendors, such as security, provisioning, etcetera. And InfiniBand today helps cloud providers to build the most efficient systems and get a much better ROI on their investments in the system.

In the database market, Oracle has put us on the map. Our InfiniBand technology today connects all of the Exa family products manufactured by Oracle. So any Exadata, Exalogic or Exalytics machine has our InfiniBand inside. By selling the Exa family boxes to its customers, Oracle is able to improve performance by a factor of 10x, and the customers reduce their hardware cost by a factor of 5x.

The next market is storage. Storage is really a greenfield for us. We believe that our InfiniBand and Ethernet have a very good opportunity there, because we believe that Fibre Channel will go away. If you look at the trends today in HPC, Web 2.0 and cloud, there is zero Fibre Channel in new installations. People still buy Fibre Channel to support existing systems, but the trend today is moving away from Fibre Channel, and we believe that InfiniBand and Ethernet will take its place.

There are various examples of our success in the storage business. EMC presented at our Analyst Day a year ago and said there are half a dozen programs we have together with them. Some of them are public, like Isilon; some of them went to production; some of them are expected to go into production in the 2014 timeframe. Teradata presented at our Analyst Day last October, just a month and a half ago, and said that they moved all of their systems, all of their boxes, to InfiniBand. There are additional examples with IBM XIV and Oracle. We also work with Xyratex, NetApp and some other major vendors in the storage business. Today we’re working with all Tier-1 storage OEMs.

We play in a big market. We estimate the size of the market by the number of end points sold in that market. We could connect all of those end points with either our InfiniBand or our Ethernet products. We estimate that on an annual basis approximately 14.6 million end points are sold: of them, 10.6 million end points in servers, 3.4 million end points in storage and about 600k in embedded. And if we apply our estimated ASP per end point of $400, we arrive at a $5.8 billion market [inaudible]. And this market is growing.

We expect that in 2015 approximately 16.8 million end points will be sold, and even if we assume an ASP reduction of 10%, from $400 to $360, we’re still talking about a $6 billion market. And for us, a $400 ASP today is a very conservative estimate. In 2012, we shipped approximately 1.1 million devices with $500 million in revenues, so the ASP per end point is in the high 400s.
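[Editor's note: the market-sizing arithmetic above can be checked with a quick sketch. All inputs are figures from the talk; only the arithmetic is added here.]

```python
# Back-of-the-envelope check of the total addressable market figures cited above.
endpoints_2013 = {"servers": 10.6e6, "storage": 3.4e6, "embedded": 0.6e6}
asp_2013 = 400  # estimated ASP per end point, in dollars

total_endpoints = sum(endpoints_2013.values())  # ~14.6 million end points
tam_2013 = total_endpoints * asp_2013           # ~$5.8 billion

endpoints_2015 = 16.8e6
asp_2015 = asp_2013 * 0.9                       # assumed 10% ASP erosion -> $360
tam_2015 = endpoints_2015 * asp_2015            # ~$6.0 billion

print(f"2013: {total_endpoints / 1e6:.1f}M end points -> ${tam_2013 / 1e9:.2f}B")
print(f"2015: {endpoints_2015 / 1e6:.1f}M end points -> ${tam_2015 / 1e9:.2f}B")
```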

Many times people ask who our customers are and how much we sell to certain end-users. We do not sell directly to end-users; we work with hardware and storage OEMs. IBM and HP historically have been our major customers, more than 10% customers. We disclose our revenues from Oracle as well, because Oracle owns approximately 9% of the company; revenues from Oracle are around mid single digits. Approximately 10% to 15% of the revenue comes from distribution. And we work with multiple software partners to ensure compatibility with various operating systems.

We provide full end-to-end solutions: from PCI Express on the server, through the switch, to the storage, all of this connectivity is provided by Mellanox. So we sell silicon standalone that could go on motherboards; we sell adapter cards, so that silicon could go on an adapter card. Switch silicon could be sold either standalone or as switch systems. We connect all of those with our cables and with our standalone software that provides a performance boost for the system.

And our gross margins really depend on the mix of the products. We get higher gross margins on sales of silicon, followed by boards, systems and then cables. So the more silicon we sell, the higher the gross margins; the more cables we sell, the lower the gross margins.

In Q3 we announced and closed the acquisitions of IPtronics and Kotura. IPtronics is a Danish company; Kotura is a company in Southern California. Both of these acquisitions are part of our plan to get to the next generation, the 100 gigabit generation, and offer those products in the 2014-2015 timeframe. As we move to higher speeds, the importance of having an end-to-end solution is very high; even the physics works differently. We believe that copper will have limits on how long a run you can use in the datacenter at 100 gig. We believe that copper will be good for connections within the rack; however, outside the rack people will have to introduce fiber optic solutions, and both Kotura and IPtronics kind of [inaudible] future offering in fiber optics.

Today we have VCSEL-based fiber optic cables, and IPtronics is one of the components on those cables. However, we believe that as we move to next generations beyond 100 gig, silicon photonics is a better technology than VCSEL, and Kotura will provide those components in silicon photonics. Also, these acquisitions will allow us to significantly improve our margins on the cable products, because we will prevent margin stacking on those products.

In terms of the roadmap, again, today we’re the only guys in town with 56 gigabit per second InfiniBand. We’re the only guys in town with 40 gigabit Ethernet end-to-end. Some people have 40 gigabit Ethernet switches; no one has 40 gigabit Ethernet NIC cards. Our main competition is Intel. Today we compete with Intel on the InfiniBand side at 40 gigabits per second. Intel is kind of a generation behind us, and they have this 40 gigabit per second InfiniBand product from their acquisition of QLogic’s InfiniBand assets.

A few words about financial performance. Assuming the midpoint of our Q4 guidance, we expect our revenue in 2013 to be around $393 million, down from $500 million in 2012. There are a few words I would like to say to explain what happened. When we moved into 2012, we expected our revenue to be $360 million. But then, in 2012, we experienced very significant pent-up demand from the Romley refresh cycle. If you recall, Romley, an Intel CPU platform, was initially supposed to be launched in Q3 of 2011; it was finally launched in March of 2012. So people delayed their purchases, especially in the HPC market, from 2011 to 2012, and that’s what created roughly $65 million of pent-up demand. We believe that approximately $50 million of that number really belongs to 2011.

What also happened in 2012 is that one of our largest customers, one of the OEMs we work with, continued purchasing on the assumption that its customers would keep buying toward certain customer programs that did not go through, and that’s what created a build-up of inventory on the OEM side, which we quantified at around $30 million. Also, there was a large Web 2.0 installation, around $25 million, in 2012 that was dedicated to a commercial application that did not work well for the Web 2.0 guy, and that’s why that guy overbuilt by around $25 million.

So, if we move $50 million of the pent-up demand back to 2011, and around $50 million from the inventory build-up and the overbuild by the Web 2.0 customer forward to 2013, that’s what our revenue would have been: instead of $260 million in 2011, our revenue would have been $300 million to $310 million; instead of $500 million in 2012, our revenue would have been around $400 million; and instead of $393 million in 2013, our revenue would have been $430 million to $440 million. That’s kind of a more natural spread of the revenues over the last three years.
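[Editor's note: the smoothing the CFO walks through can be sketched as two shifts, one pulling the pent-up Romley demand back into 2011 and one pushing the 2012 overbuild forward into 2013. Reported figures are from the talk; the 2013 figure assumes the midpoint of Q4 guidance.]

```python
# Reported revenue, in $ millions, per the presentation.
reported = {2011: 260, 2012: 500, 2013: 393}

pent_up = 50    # 2011 demand that landed in 2012 (Romley delay)
overbuild = 50  # 2012 OEM inventory build plus Web 2.0 overbuild, displacing 2013 demand

adjusted = {
    2011: reported[2011] + pent_up,              # ~$310M
    2012: reported[2012] - pent_up - overbuild,  # ~$400M
    2013: reported[2013] + overbuild,            # ~$443M, i.e. the $430M-$440M range cited
}
print(adjusted)  # {2011: 310, 2012: 400, 2013: 443}
```

Note that the shifts only move revenue between years, so the three-year total ($1,153M) is unchanged, which is the point of calling it a smoothing rather than a restatement.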

On a quarterly basis, we have been growing throughout 2013. Obviously there is a big hump in 2012 because of the Romley refresh and those inventory build-outs, but we were able to grow quarter-over-quarter throughout 2013, and we guided our Q4 revenue between $105 million and $110 million.

In terms of the product mix, we’ve seen the product mix more or less stabilize. Back in the 2008-2009 timeframe, we were mostly a component vendor, with revenues from silicon and boards accounting for 90% of revenues. Today silicon represents mid-teens of the revenues, boards around 30% of the revenues, switches around the mid 30s, and the rest is cables, software, services and other.

In terms of the data rate, this chart represents the distribution of our revenue by data rate. The orange area is the FDR revenue; today FDR represents around 50% of the revenue. Light green is Ethernet; Ethernet today is around 15% of the revenue. Around 25% comes from QDR, our previous generation 40 gig InfiniBand. And I would like to note one trend that we have experienced here. The yellow area is our DDR 20 gigabit InfiniBand products; those products were launched in 2005, and in 2010, five years after the launch, they represented around a high single digit percent of revenue.

Our QDR generation, 40 gigabit InfiniBand, was launched in 2008, and in 2013, five years after the launch, it still represents high 20s, around 3x where DDR was. The reason being that we are designed into systems with major OEMs, and that creates very sticky design wins. Guys like Oracle or [ph] Luxor Data are all using QDR, and that creates a very high barrier for competition to enter. So when we have competition from Intel at 100 gigabit per second, major OEMs will still have to port their software onto Intel’s products, which creates a very high barrier. Just to give you a data point, Oracle wrote about 30 million lines of code to our products. So when the competition comes, a significant portion of that will have to be replaced, and all of it will have to be re-qualified. Those are very sticky design wins that will help us in the competition.

In terms of cash flow, obviously 2012 was a unique year for us; we generated $182 million in cash. We are cash positive in 2013. We started the year with a not very effective balance sheet structure: as a result of the Q4 guidance miss, we had very high inventory. To remind you, we build inventory to the forecast, not to the orders, and that’s how we ended up with relatively high inventory levels. We worked through that inventory year-to-date, and at the end of Q3 we were back to our normal balance sheet structure. We expect to continue to generate cash flow going forward. We exited the third quarter with $306.4 million in cash and investments.

In terms of the long-term model: as I said, 2012 was a unique year and we overachieved; in 2013, due to the reduction in revenue and the largely fixed structure of the costs, we underperformed compared to the model. In terms of the model, we’re targeting mid-to-high 60s on the gross margins, and today we are at the highest end of that range.

In terms of operating expenses, we target mid 40s; today we’re not there yet, we are at the mid 50s. The strategy for us to get back to profitability is to grow our operating expenses at a rate lower than the revenue. We believe that there are a lot of market opportunities for us; we want to continue to invest in the business, but we also want to get back to profitability. Therefore we’ll grow our operating expenses at a rate lower than revenue increases. This will allow us to get back to mid 20s in operating income and low 20s in net income.

So just to summarize: interconnect is very important today to any efficient datacenter. As data grows exponentially, people try to build datacenters that can move very large amounts of data, and that creates the need for high performance interconnect. We’re very well positioned in the markets we sell into: HPC, Web 2.0, cloud and database markets. So far we have demonstrated solid revenue growth, 30% year-over-year, while playing in a large market of around $6 billion. We’re building a large company. We have strong partnerships and channel relationships, and we are very well positioned to grow in 2014.

With that, I'll turn to questions.

Question-and-Answer Session

John Pitzer - Credit Suisse

A question or two, maybe I can kick things off. You did a very nice job explaining some of the timing issues that caused the revenue patterns in ’11, ’12 and ’13. I’m kind of curious, given those issues, as we think about growth in 2014: should we be measuring you off of the $430 million to $440 million type number that you put up when you smooth out that path, and apply whatever we think the CAGR is in your market? Or are there other timing issues that you’re still working through as you think about 2014 growth?

Jacob Shulman

So in terms of 2014 growth, there are several trends that will drive it. First of all, HPC was significantly down for us in 2013; we believe that HPC will grow in 2014. Also, today we’re shipping hundreds of thousands of units into the Web 2.0 market. Growth in that market will come with additional penetration, and as you start seeing 40 gigabit Ethernet deploying there, the ASP per end point, even from the market share we have already captured, will grow and will help us to grow the revenues. And storage is another market that’s growing very nicely, and we expect some additional programs to go into production in 2014. So we expect to grow in 2014.

John Pitzer - Credit Suisse

Any questions from the audience?

Unidentified Analyst


Jacob Shulman

So in terms of the market, if you look at the 10.6 million server end points and break them down into the different segments: in HPC, 1.1 million end points are sold, and that market is growing at an 8% CAGR; 2.8 million end points are sold in Web 2.0 and cloud, and that market is growing at around an 11% CAGR; and the rest is enterprise datacenter. Today our penetration in the HPC market is around 45% to 50%, so growth in that market will come from the growth of the market itself and also from the new generation of products that will be introduced in the 2014-2015 timeframe. In Web 2.0 we’re just at the beginning of the penetration, so growth in that market will come from additional penetration, because we’re at maybe 10% of the market today, and that market is also growing much faster. So the growth should come from additional penetration in the number of units sold, an increase in ASP per end unit and the growth of the market itself.

John Pitzer - Credit Suisse

Okay, with that I think we’ve run out of time in this session. But I want to thank Jacob and everyone for attending this morning. Thank you, Jacob.

Jacob Shulman

Thank you very much.
