Hello. Good afternoon. My name is Jean Hu, Chief Financial Officer of QLogic. Welcome to QLogic’s Analyst Day. We appreciate you joining us today. So here is our agenda today. We’ll start the program with Simon reviewing our strategy and vision. Then you will hear from the Founder of Taneja Group, Arun Taneja, who will talk about the dynamics in the data center. Roger Klein from our Host Solutions group and Craig Alesso from our Network Solutions group will talk about their businesses and opportunities.
After the break, Shishir Shah will discuss in great detail the exciting technology announcement we made this morning. You will also hear from our customers. I’ll come back to review the financials, and Simon will provide a closing summary. At the end, we’ll have a Q&A session to answer all your questions.
So, before we start, just a quick reminder: our comments and the presentation today will be subject to our Safe Harbor statement, which you can read on the screen.
And, with that, please join me in welcoming our CEO, Simon Biddiscombe.
Thank you. We’ll turn my microphone on. Can you hear me?
You can hear clearly.
Put lines off.
You turned his microphone on, not mine.
You got mine on.
Mine is on. Is this turned on?
That was working.
Got to be some joke about not being able to get Facebook stock trading and not being able to get my microphone working. But I don’t have time for that. So, Jean gave you the Safe Harbor.
Thank you for joining us this afternoon. It’s been two years since QLogic last had an Analyst Day, and two years ago we gave you an overview of how we saw the evolution of the data center associated with the explosion of data that was ongoing at that point in time, and that we certainly believed was going to be a continuing challenge for those who manage and architect data centers moving forward.
The critical point we tried to make two years ago was: look, we don’t see the world differently than anybody else. We see the world in substantially the same way in terms of the technologies that are going to be relevant and the deployment models that are going to be relevant.
And I think a lot of that has played out over the course of the two years, albeit not at quite the pace that we had expected for certain technologies, and albeit in a different macroeconomic environment than we had expected.
With all that said, today the challenges associated with implementing solutions to process, move and store data, so servers, networks and storage, are incredibly different than they were two years ago. And the explosion of data itself continues to be a bigger challenge than any of us ever anticipated would be the case.
So those trends have continued to give rise to new technologies over the course of the last couple of years. We’ve seen incremental use cases associated with solid state technologies, be they PCIe based or be they arrays, and we’ve seen incremental technology development with storage and with networks: flat networks with much higher bandwidth and lower latency than we’d expected to see by this point in time.
QLogic stands to be a beneficiary of every one of the trends that we are going to talk about today. And QLogic stands to be a significant beneficiary of the explosion of data, which continues to be the challenge that our solutions enable customers to manage within the data center.
And today, with the announcement of our latest products that simplify the deployment of server-side cache and server-side SSD capabilities, we believe that we are once again demonstrating thought leadership and bringing highly innovative solutions to market.
And if you remember, I told you two years ago the vision was to be the market leader in high performance data center connectivity; that’s completely unchanged. I still believe we are the market leader in high performance data center connectivity, and I expect us to remain the market leader in high performance data center connectivity moving forward.
So let’s start with data; let’s talk about what’s going on at this point in time. More data than most data center architects can even come close to being able to manage, let alone start to interpret or apply intelligence to.
We live in the big data age. That’s changed dramatically over the course of the last couple of years, and we’re dealing with data sets that are so large and complex that it’s extraordinarily difficult to bring the capabilities of our traditional data management toolset to bear on big data.
So there are difficulties associated with capturing the information, storing it, searching, sharing and visualizing it, whether that information comes from web logs or sensor networks or RFID or social networks, and so on. The challenges associated with data are just growing incrementally more complex by the day.
Just a couple of anecdotal points. Number one, Wal-Mart now handles more than a million customer transactions every hour, and Wal-Mart’s databases are 167 times the size of all the books in the Library of Congress. That’s today; at some point tomorrow it’s going to be radically different.
Every day, humans create 2.5 quintillion bytes of data; a quintillion is a 1 followed by 18 zeros. And since our last Analyst Day, since we last stood here in New York almost two years ago to the day, 90% of the world’s digital data has been created.
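As a rough sanity check of these magnitudes, here is a quick calculation using the 2.5-quintillion figure quoted above (decimal units assumed):

```python
# Rough sanity check of the data-scale figure quoted above (decimal units).
QUINTILLION = 10 ** 18              # a 1 followed by 18 zeros

bytes_per_day = 2.5 * QUINTILLION   # 2.5 quintillion bytes created per day
exabyte = 10 ** 18                  # 1 exabyte, decimal
print(bytes_per_day / exabyte)      # 2.5 -- i.e. 2.5 exabytes per day
```

In other words, the quoted figure amounts to roughly 2.5 exabytes of new data every day.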
So we are living in a world with enormous opportunity for those who can bring differentiated solutions to market, and those differentiated solutions can be hardware centric or software centric.
So all the information that’s generated has to be processed, moved and stored. And it’s not just processed once; it’s processed on multiple occasions: on our iPhones and our iPads, on PCs, on servers, on adapters, and on storage arrays.
So any individual byte today is processed on multiple occasions and moved across multiple networks: wireless networks, mobile cellular networks, enterprise storage networks, enterprise Ethernet networks, your home networks.
And then ultimately it’s stored, and it’s not stored once -- it’s stored on many, many occasions, whether on personal devices or stored and backed up in the cloud, and so on and so on.
So more data, driving more infrastructure across a multitude of protocols, across servers, networks and storage. And everything we do in our business lives and in our personal lives continues to drive that explosion of data as we move forward. Structured data, and unstructured data as well.
This is my favorite example actually, so don’t pay too much attention to what it says on the screen. We sourced this from Teradata. What it says is that 1 terabyte of data is generated by commercial flights. It’s kind of interesting that that’s actually just the commercial data: reservation systems, the processing of FAA information, and so on.
Every one of those engines has the ability to generate 10 terabytes of data through its sensors every 30 minutes of flight time. So imagine the challenges for a Boeing or an Airbus or an airline data center architect or manager as he tries to capture, process and store the amount of information that’s being generated in that kind of environment.
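A back-of-the-envelope calculation makes the point; the per-engine figure is the one quoted above, while the engine count and flight length are illustrative assumptions:

```python
# Sensor data generated by a single flight, using the 10 TB per
# engine per 30 minutes figure quoted above.
tb_per_engine_per_half_hour = 10
engines = 2          # assumption: a twin-engine aircraft
flight_hours = 6     # assumption: a six-hour flight

half_hours = flight_hours * 2
total_tb = tb_per_engine_per_half_hour * engines * half_hours
print(total_tb)      # 240 -- roughly 240 TB from one flight
```

Even under these modest assumptions, a single flight produces hundreds of terabytes of raw sensor data.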
The other one, one of my favorite examples recently, was BMW. BMW’s data center managers went from 64,000 unique users to 10 million unique users when every vehicle gained the ability to send BMW its sensor information. Imagine, as a data center manager or architect, thinking through the challenges associated with capturing, processing and storing that amount of data.
And the answer is that from servers to networks to storage, there have been a whole multitude of critical technology advancements over the course of recent years, and QLogic has been an important part of how many of those advances have occurred. I’ll show you how we think about that as I work through the remainder of my presentation, and each of the presenters you hear from later today will also give you a view on how we think we contribute to the evolution of technology in this regard and, importantly, how we think about it moving forward.
But let’s take a quick look at what we’re seeing across each of these technologies, and we’ll start with servers. Long gone are the days when single socket, single core CPUs were the servers of choice across much of the enterprise market and across environments where data management is critically more complex than it has been in the past.
Today’s physical servers have multiple sockets, six and eight processors, each of those processors capable of having 10 cores, or whatever the latest number is at this point in time based on Romley.
And that will continue to grow as we move forward. On top of those cores are running VMs, those VMs are running unique operating systems, those operating systems are all running unique applications, and every one of those applications is part of what’s driving the digital economy and the digital lives that we all live on a day-to-day basis.
But those servers are hungry, very hungry actually. Long gone are the days when you could assume you would feed that processing capability with a 1-gig Ethernet wire or a couple of 1-gig adapters, or with 2 or 4-gig Fibre Channel capabilities. The applications demand data, they demand it quickly; those applications demand data essentially in real time.
Within the context of the enterprise and the data center, time is money, and one of the critical value propositions associated with our technology announcement a little earlier today, our Mt. Rainier technology, is the beauty of eliminating time to data: bringing data closer to the processor and allowing applications to access data far faster than they have at any point in the past.
We think this is one of the key elements of what has made QLogic a successful company, and we think the continued innovation that we will put forth is a key element as well, one that will continue to drive our success across the markets that we serve.
By the way, behind every one of those tablets and iPhones and whatever other products we are holding today, there is an enormous demand for servers: every 600 smartphones or 122 tablets require a new server to be installed.
Servers that are behind the scenes processing emails, web searches, Facebook updates, Instagram. Instagram I still don’t understand; two years ago it didn’t even exist, and today my kids think Instagram is the coolest thing on earth. When I look at the actual amount of processing and storage associated with it, it’s extraordinarily high.
So high performance servers and lower end servers are all driving demand for incremental I/O, and hence more high performance connectivity, which is what QLogic brings to the market.
It’s a different story with network technologies. Networks clearly have to be scalable and they clearly have to continue to be agile. We’ve really seen three distinct technology trends over the course of the last couple of years.
First is clearly speed, speed being the combination of bandwidth and latency. On bandwidth, we are seeing networks on the Ethernet side go from 1 to 10 to 40 to 100-gig, and now for the first time we’re talking about 1 terabit Ethernet.
And on the Fibre Channel side we’ve gone from 8 to 16 to 32; whether or not there will be 64, I think only time will tell. We didn’t think there would be 4-gig Fibre Channel, but the protocol clearly has a longevity that’s way beyond the expectations that people had.
Fibre Channel continues to be rock solid. It continues to be the storage protocol of choice within the enterprise, and you’ll hear that as we move through the presentations this afternoon as well.
Both Ethernet and Fibre Channel networks are being optimized for latency, and we believe that continues to offer a significant opportunity for our company as we think about differentiated solutions that leverage capabilities that exist within QLogic.
That’s the first trend, speed. The second trend within networks is clearly convergence, and convergence clearly isn’t what people expected it to be, but we all still believe in converged networks. We still believe in a ubiquitous Ethernet world, and the value of FCoE in particular, in our case, is something that will continue to gain traction as we move forward.
We’ve seen that most strongly in anecdotal information: in talking to our major OEMs, we know there has been something of an uptick in FCoE post the Romley launch. Part of how we always characterized our expectations for that market was that you need 10-gig, and with Romley came 10-gig, and with 10-gig comes the ability to move FCoE more effectively than ever before. So we continue to believe in convergence in networks.
And then, finally, we continue to believe in the flattening of networks. When we talk about flattening, we’re talking about taking out hops; we’re talking about the removal of individual physical switches that add depth to the networks.
And we believe in that for two reasons. Number one, latency: the more hops there are, the more latency you introduce. And number two, management: the more data you’re trying to move from switch to switch to switch before you finally get to a storage device or a server, the more complex the management of that data becomes.
So you’ll hear from Tom Joyce, who is the VP of Marketing for HP’s Storage business, on how we have enabled a much flatter approach to storage networks, in this case for HP in their Flat SAN implementation.
So those are the three trends that have continued to be prevalent within networking: number one, speed; number two, convergence; and number three, the flattening of networks. And we continue to believe that every one of those can be a positive driver for QLogic.
There’s an explosion in the number of users and devices, and every one of those users and devices ultimately requires network connectivity: more ports than ever before. Part of the belief system around Fibre Channel that people still haven’t got to grips with, despite everything that has gone on in technology evolution over the course of the 15 years Fibre Channel has existed, is this.
As an industry, we shipped more Fibre Channel last year than ever before, and in 2010 we shipped more Fibre Channel than ever before that. I’m not sure it will be the same this year; the macro impact is certainly dampening the demand environment. But Fibre Channel hasn’t gone away, and we shipped more last year than we’ve ever shipped before.
So you’ve got the explosion in the number of users, all requiring access to networks, and ultimately that access is about mobile networks, enterprise networks, carrier wireline networks and so on, and we continue to expect that there will be an explosion in the number of ports.
And then finally, there’s clearly going to be an explosion in the size of the pipes: fatter pipes. The only way you are going to be able to deal with the expectations associated with the amount of rich media that people expect to drive over all the networks is with fatter pipes.
So, 1 million minutes of video crossing the network every second in 2015; I’m not sure, but that’s what Cisco expects to see in 2015. The network requirements associated with that are absolutely enormous.
I’ll come back to what a zettabyte is; 2010 was the first year a zettabyte of data was actually generated, and in a couple of slides we’ll have a look ahead and try to figure out what a zettabyte really is.
Suffice to say PowerPoint doesn’t actually know how to spell zettabyte, which caused us a tremendous amount of consternation. So, 4.8 zettabytes by the end of 2015. More data, driving the need for more I/O, which is very clearly what QLogic is all about.
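To put those zettabyte figures in perspective, here is a rough scale comparison (decimal units assumed):

```python
# How many terabytes fit in a zettabyte (decimal units).
terabyte = 10 ** 12
zettabyte = 10 ** 21
print(zettabyte // terabyte)   # 1000000000 -- a billion terabytes
```

A zettabyte is a billion terabytes, which is why a terabyte is invisible on a chart drawn at zettabyte scale.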
And then finally, within the context of what’s going on in the storage environment: everything is retained. Even if you think you deleted it, the chances are it’s being retained somewhere, maybe for compliance, maybe just for future access.
And so we are seeing significant technology advances on the storage side: things like PCIe-based SSDs, flash-based arrays, thin provisioning, and so on.
So in the storage market we continue to see technology evolution in ways that hadn’t previously been anticipated. QLogic is a critical part of storage I/O, regardless of which protocol it is: Fibre Channel, iSCSI, FCoE, Ethernet.
QLogic is a critical part of storage I/O, and you are going to hear more from Roger on the target market specifically, and on the opportunity that exists for us in the target market, as we work our way through the presentations.
This is a zettabyte. You can’t even see a terabyte at this scale, but that’s a zettabyte of data, all demanding high performance storage I/O, which is what QLogic is about. So whether it’s servers and the evolution of servers, networks and the evolution of networks, or storage and the evolution of storage, there have clearly been critical technology advances over the course of the last couple of years since we last talked.
And QLogic clearly stands to be a beneficiary of each of those based on the solutions that it has in the market today and that it will continue to bring to market moving forward. But unfortunately, that doesn’t solve the problems associated with the explosion of data.
Network traffic is expected to grow by 32% and storage capacity by 50% between now and 2015. But we are going to have to handle that against the backdrop of an increase in spending of somewhere around 7%, and I will grant you that it may be a little bit more or less of that which goes to network and storage technologies.
But there is a huge disconnect between the rate of growth of traffic and the rate of growth of spend, and that’s given rise to different deployment models. It’s given rise to virtualized data centers. It’s given rise to cloud. It’s given rise to the converged enterprise.
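To quantify that disconnect with the figures just quoted:

```python
# The quoted growth between now and 2015: traffic +32%, storage
# capacity +50%, spending +7%.  The ratios show how much more
# traffic and capacity each spending dollar must cover by 2015.
traffic, storage, spend = 1.32, 1.50, 1.07
print(round(traffic / spend, 2))   # ~1.23x more traffic per dollar
print(round(storage / spend, 2))   # ~1.4x more capacity per dollar
```

Every dollar of spend has to stretch across 23% more traffic and roughly 40% more storage capacity, which is the pressure driving the new deployment models.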
And with our announcement today, it gives rise to what we characterize as information access optimization. It’s our new way of thinking about a new trend in the deployment of technologies within the data center. Three of these will be extraordinarily familiar to you. The virtualized data center I don’t need to explain in any detail; I know you understand what a virtualized data center is.
I also recognize that you understand what cloud is, and I believe you understand how QLogic participates in each of those trends. I think you know that in virtualized data centers, networked storage is a critical capability when it comes to moving VMs across servers and allowing access to the storage associated with those VMs.
I know that you know the cloud, and it depends on how you characterize it: private, public or hybrid. That means different things to all of us, but it’s more clear than it was two years ago, that’s for sure. Much of this is really a 1-gig world, but it’s increasingly transitioning to become a 10-gig world.
There is also a Web 2.0 part of how we think about this market; there is also a Taiwanese ODM part; and there is also our traditional OEM route to market. And there’s a lot of 1-gig deployed in this part of the world that’s increasingly transitioning to 10-gig.
I was joking with Arun before I started: we see different opportunities out of Taiwan on multiple occasions per week, solutions that are optimized for individual customers, solutions that are optimized for specific applications. We don’t see that level of activity with HP or IBM or Dell; the activity we see with them tends to be very much married to a processor update.
So if you are in, you are in for a couple of years, for one processor generation, and if you are out, you are out for one processor generation. The Taiwanese guys in the cloud and Web 2.0 markets offer a different set of opportunities: you could be out today and back in tomorrow. They are all trying to solve different problems, and price is a critical part of how you think about solving those problems.
And today we announced information access optimization. It’s going to solve the I/O performance gap, and you are going to hear more about the I/O performance gap as we work through the course of the afternoon.
We are taking advantage of the advancements in CPU performance, and of the increased demands from applications for real-time access to data, to bring new and highly innovative technologies to market that will help solve the critical I/O bottleneck that exists in the data center.
I continue to believe that we are uniquely positioned within the context of information access optimization, based on 15 years’ worth of investment in networking technologies and the last three years’ worth of investment in the specific technologies that we announced today.
And then finally, the converged enterprise: we are bringing together multi-protocol hardware capabilities with personalities that allow you to move essentially any protocol across any piece of hardware.
So we see the world in substantially the same way as anybody else who would stand up and talk about virtualized data centers, cloud and the converged enterprise. We see it differently within the context of information access optimization, because we believe that we can bring a truly differentiated solution today.
So from servers to networks to storage, applications are demanding real-time access to critical information, and that access demands infrastructure that’s agile, flexible, scalable and on demand.
So why do we play? How do we play? Where do we play? We play across the entire server market with our adapter-based technologies: Fibre Channel adapters, Ethernet and iSCSI adapters, converged network adapters, and then, at the bottom, mezzanine form factors of the same technologies for blade environments.
And we play across the network path with converged switches, blade modules, and stackable and modular switches, Fibre Channel and FCoE, and you will hear more from Craig today on precisely what we are doing in this regard.
We are a little less visible in this part of the world; it’s typically got an HP badge on it or an IBM badge on it, so we’re a little less known here than we are in the more dominant Fibre Channel business that we serve.
We are dominant in the Host business that we serve, is what I should have said. Our presence and our ability to win in this market continue to expand on a day-to-day basis. And I’m actually more confident that we’ve got the right goals and strategies in place for the switch business today than I have been at any point since we started investing in 2001. It’s very clear to me what we are going to achieve and how we are going to achieve it.
And then on the storage side, a market that opened up to us when the incumbent walked away, and you will hear more from Roger on the target set: our router products, the part of the business that Shishir runs, which gave birth to data migration and then to the Mt. Rainier technology announcement that you heard about today.
We are uniquely positioned with each of our OEM customers, in that when I engage with HP or IBM or Dell or the Taiwanese guys, I’m engaging with the server groups, the networking groups and the storage groups. Not all of my competitors can say that; typically they are engaged at one point or another.
Part of what we have to continue to do as a company is take advantage of the fact that we are engaged with every one of the groups within the hardware businesses of our principal OEM partners, and drive continued differentiation based on end-to-end capabilities. We are adding value to every part of the data center.
Let’s jump up a level, just to confuse you a little. You are going to see this throughout the course of the afternoon in the other presentations. Roger will talk about his piece, Craig will talk about his piece, and Shishir will talk about his piece within the context of the products that you see at the top and, more importantly, the products that you see at the bottom, which is today’s announcement associated with information access optimization, the SSD-optimized SAN, our Mt. Rainier technology portfolio.
We still have a belief system around a long-term formula for success. Let me tell you what success is, by the way, so there is no debate: success is about shareholder value expansion. But I think we’ve got to do two things to be able to achieve that. Number one, deliver on a solid business model. I’ll tell you, I think the days of 30% operating margin are probably behind us at this point in time.
The target model that we provided to you, which Jean will talk through in more detail, says we are at 20, and that had better be the bottom. We will continue to see the business recover towards a 25-point operating margin, which is kind of the target, if you will, as time goes by. The way it reads in your package, it says target 20 to 25. I think the bottom of that range is where we are today, at 20, and we will continue to see that margin expand as we move forward.
We are going to invest in R&D. We have to invest in R&D because otherwise we won’t be able to deliver on the other element of what we believe will drive shareholder value, which is growth.
Coming to market with these new solutions that allow us to participate in the expansion markets requires investment. So we will continue to invest in R&D against the backdrop of a squeeze on sales and marketing and G&A expenses.
But against that, we continue to believe that the core business is solid. We continue to believe that the Fibre Channel market is going through something of a macro malaise at this point in time.
But, as I said, last year we shipped more Fibre Channel than ever before, and moving forward we expect that the Fibre Channel market will continue to be relatively stable for an extended period of time.
And the expansion opportunities that you will hear the team talk through this afternoon are really where the significant growth opportunity comes from. And each one of our expansion opportunities is tied back to the transformational trends that are going on in the data center at this point in time.
So my view of the world is unchanged. Two years ago, I stood here and said we are the market leader in high performance data center connectivity, and I expect us to stay the market leader in high performance data center connectivity. That’s unchanged: I expect QLogic to continue to be, and to continue to drive being, the market leader in high performance data center connectivity. So that’s my view of the world.
The challenges associated with data are absolutely enormous. The explosion of data is driving an incremental set of opportunities for QLogic associated with high performance network connectivity.
And it’s our job to make sure that we execute to those opportunities to deliver growth and to deliver a compelling business model that will ultimately drive long-term expansion in shareholder value.
So, with that, it’s my pleasure to introduce to you -- where is he? -- it’s my pleasure to introduce to you Arun Taneja, and I did say his name correctly because I practiced it a bunch of times, who is the Founder of the Taneja Group and whom we at QLogic have known for an extended period of time. He will offer you his perspectives, based on the wealth of experience he has across the storage world, on current dynamics in the data center.
So, with that, Arun. Thank you.
Thank you, Simon. Thank you. So, first of all, thank you very much to QLogic. Is this on? Is this coming through back there? Okay. So thank you very much for inviting me to speak. I know I’ve met some of you, and we’ve probably read each other’s materials. But some of you may not know who Taneja Group is, so for them: it’s a boutique analyst/consulting organization.
This is the 10th year that we are operating as Taneja Group, and at the center of our work is storage: everything to do with storage from one end to the other, whether software or hardware, the whole thing.
Surrounding storage, we also do server virtualization, because I recognized back in 2008 that server virtualization was going to impact storage more dramatically than anything else that had come before, and I didn’t realize how true that was going to turn out to be.
So we do cover that, and in addition we do WAN optimization, because that has many storage repercussions, and I have expertise on the team for eDiscovery because, once again, it has a very strong connection to archiving and governance and so on and so forth.
So today my points are really very simple. At the macro level, the data center is going through a massive deconstruction. I haven’t seen this level of deconstruction in my entire career, and I have been in this industry for more than three decades.
I saw the minicomputer revolution happen; obviously that was a paradigm shift that impacted the data center and created what you might consider departmental data centers at the time, for those of you who were in the industry then.
PCs came in, client-server came in, fundamentally changing how computing was done and how data centers were deconstructed. Then technical workstations came out from Sun Microsystems and others, and that had a huge impact. And then web-based applications and everything that happened in the mid-90s with the dot-coms had a major impact.
I think we are going through a similar major transformation in the data center today. I mean, look at all of the layers, right? What’s happening at compute? That damn thing keeps going at Moore’s law, even breaking Moore’s law, and now you have more cores than most designers know what to do with. That’s causing hyper-convergence, which we won’t talk about today.
That’s something we wrote about just last week, with Nutanix and SimpliVity and a bunch of other companies that have come into that space, but that’s more of an SMB play, total hyper-convergence.
The net of it is that there is so much compute capability available that you just don’t know what to do with it. People want to try to do things beyond compute, and clearly storage is coming into play, but that’s another topic for another time.
So, a big impact at the compute level. Look at what’s happening at the server virtualization level: fundamental changes because of VMware. VMware was stalled, essentially. They won’t admit to it, but we saw it in the number of workloads that came under the purview of VMware: we saw that number go up like this, and then, about two years ago, it was hovering around 30% to 35%, and we were calling it a VM stall.
So why was that VM stall occurring? I/O has been the biggest culprit, right? I mean, for three decades or more we’ve been trying to solve that I/O problem, and we haven’t been able to solve it. This is the first time ever that I can say we are on the verge of, or at least in the early stages of, solving the I/O problem.
So here is compute going up like that, and you saw Simon’s curve at the cost level; you can also draw that same curve at the HDD or storage level, where this gap continues to widen as networks keep getting faster and faster.
The basic mismatch, I/O has been say last frontier so to speak and obviously, QLogic is breaking some new barriers on that front today with Mt. Rainer. But so any ways, so that, if I/O does not happened at the stage, I think we would have with this, this deconstruction would come to a screeching hall in spite of VMware and in spite of some of these other things because what has taken VMware, last week at VMWorld, they said that they believe 50% -- 55% to 60% of the workloads are now under the purview of VMware.
Okay, I think that's a little bit high. We are estimating somewhere in the 45% to 50% range. But the change that has taken it from 35% to whatever it is right now, whether it's 45%, 50% or 55%, happened because we broke the barriers on the I/O front. So that's the amazing news of today with Mt. Rainier, and obviously the whole industry has been creating a whole bunch of products: all-flash arrays, PCIe cards that act as cache where you buy software from somebody else, and so on and so forth.
So there's a lot of activity in that space, and it's very important for the deconstruction and reconstruction of the data center. That's what's happening at the fundamental level for data storage: just break it apart and redo it. We talked about storage. As for software-defined networking, I'm not a networking expert, so I'm not going to go there, but that's another element that's going to hit over the next three years, with tons of activity in the market.
So, having said that, let me turn to where QLogic fits into all of this. At least from my perspective, and you've certainly got Simon's perspective on this, there are really three things I want to say about QLogic. First of all, every time I listen to the vendor community, Fibre Channel has supposedly been about to go away.
If you remember, when we were going from 4-gig to 8-gig, it was, hey, this is the perfect time: 10-gig Ethernet was going to take over and Fibre Channel was going to be history.
It's not happening, guys. I'm telling you, every time I talk to the IT guys who have real jobs, they have fundamental issues. They've got mission-critical applications, mission-critical applications are still not under the purview of VMware, and mission-critical applications require a whole bunch of other things. So where am I going with this? I think I just kind of lost my path there. But anyway, QLogic is I/O and I/O is flash. So the fact that we've now got flash here, I think, is going to fundamentally help take us forward in that dimension.
So Fibre Channel is not going away. The IT guys are telling me that for mission-critical applications, they are going to move from 8 gigabit to 16 gigabit as they grow their SANs. So that's one thing I just wanted to put to bed from what we see. Forget the vendor community, forget the pundits: the IT guys definitely want to move from 8-gig to 16-gig. So what QLogic has done in moving forward with 16-gig is, in my view, the right thing to do.
The second point I wanted to make about QLogic is that, like I said, I/O equals flash and I/O equals QLogic. That's what these guys have fundamentally done since the beginning of time: they know how to move data from here to there in the most efficient fashion. So in my view, the product they are announcing today, Mt. Rainier, which combines a Fibre Channel host bus adapter with flash, is the first product of its kind. Where else would you expect a product like that to come from, right?
Everybody else is doing, like I said, all-flash arrays, PCIe cards and so on and so forth. You wouldn't expect it from them; obviously, they are not HBA people. But put yourself in the IT guy's shoes. Somebody comes to you and says, look, you are buying HBAs from me already, and you will continue to buy HBAs. You were telling me Fibre Channel is most important to you for mission-critical applications. Okay.
So now, what if I were to give you storage that is that close to compute, right? It's solid-state flash, and it comes with one driver, not the three drivers I would need if I wanted to do it another way. And everything else is transparent: there are no changes to the applications, nothing required.
You just plug it in and you get full Fibre Channel HBA functionality, and you get caching and storage. Storage as real storage, rather than just cache, which means I could create a LUN out of it and make that available to a virtual machine or a set of virtual machines. That combination really hits all of the major things I just talked about, right?
Fibre Channel continues on. I/O is the major problem, and I've got to solve that I/O problem. Flash is my answer to the I/O problem. So I'll bring these things together into one unit. And as long as it doesn't make my 16-gigabit or 8-gigabit link, or whatever it is, go any slower, I don't know where else I would take care of that. And in addition, I've solved your I/O problem by giving you a decent number of gigabytes on the flash side. I think that combination is very, very strong.
And as you can imagine, there aren't more than a couple of companies, maybe three, that could potentially come up with this. Obviously, QLogic is the first one to do it. So net-net, Fibre Channel is here to stay. Net-net, Mt. Rainier does solve a genuine problem, the way I see it, on the I/O side. I/O is the biggest problem in the data center today.
So those are really the key points I wanted to make today. I will be here for the rest of the day and am certainly happy to take questions later on. But I will wrap it up for the moment, and it's my pleasure to invite Roger Klein, Senior Vice President and General Manager of the Host Solutions Group. Roger?
Thank you very much. Good afternoon. My name is Roger Klein, and I'm responsible for the Host Solutions Group. Host Solutions Group, or HSG, is one of the three business units at QLogic. I will be leading off, followed by the Network Solutions Group and then by Shishir's group, the Storage Solutions Group.
A little bit about myself before I dive into my presentation. I've been with QLogic a little over 11 years, about 11.5 years. I'm just short of 40 years in the tech industry, in a variety of capacities spanning management, server and technical roles, and I've spent the better part of the last 20 years in the I/O and storage area. So this is home for me.
I've been with the HSG business unit for most of my career at QLogic and became the general manager of this fine BU in 2006. So I'm pleased to be here today, and thank you all for your attendance.
Our business unit is a leading provider of data center connectivity and solutions. We can trace our roots back over 25 years to the initial products that began the company and began the business unit. If I take you on a quick trip through our product families: we began with parallel SCSI back in the late 80s. We moved into SAN products, Fibre Channel, in the late 90s. We followed that up with our first storage-over-Ethernet, 1-gig iSCSI investments in 2001.
We began investing in 10-gig Ethernet and convergence around 2006 and deployed products in 2008 and 2009, and that takes us to today, where we offer one of the most fully featured product families available, covering high-speed networking for both server and storage connectivity.
If I look at it from a market point of view, we've moved from SCSI connectivity into SAN connectivity, into general Ethernet, where we became a leader in the iSCSI adapter business, and then more broadly into 10-gig Ethernet and convergence. So as a business unit, we are really focused on leading server and storage OEMs and on expanding that business with new partners as time goes on.
QLogic does span the data center. This is the diagram that Simon showed, and the HSG business unit plays a key role in that coverage. On the left-hand side of the screen is the server connectivity marketplace. For us, that server connectivity marketplace means the sale of adapters and ASICs, and it's really divided into two areas.
The lower portion is rack and tower, where we sell standard form factor adapters: converged network adapters, 10-gig Ethernet adapters and Fibre Channel adapters. The top left represents the marketplace for blade servers, where we were an early innovator; we sell the same protocol adapters into that marketplace as well, in custom form factors.
I'll go a little bit deeper as we move through the presentation. So server connectivity is on the left, and on the right-hand side is our new market focus: storage connectivity, or the target marketplace. This represents an exciting new opportunity for us, and it's great leverage, because there is a lot of commonality in the firmware and the protocols we use across both the server connectivity and storage connectivity marketplaces.
And as you can see on the left-hand side and the right-hand side, we have significant customer acceptance of our products and very broad design wins.
I'll spend a few minutes talking about how QLogic differentiates itself and the significant strengths we bring to high-performance networking. Those strengths really start with multiple generations of trusted and hardened firmware and software.
This may seem like a little bit of a subtle point, but we support a diverse set of operating systems and a diverse set of hypervisors. We offer a variety of hardware and hardware-support APIs and numerous features, both standards-based and OEM-specific. All of those things are essential; that diversity is really key to addressing the needs of the data center.
But it's really what's below the surface that counts in terms of being a reliable, robust, mature and trusted vendor in the marketplace, and I would like to highlight some examples that I think exemplify that. One of the things that differentiates us is that our firmware and software are Tier 1 OEM hardened, resulting in a very mature protocol stack.
What we mean by that is that our products are almost continuously subjected to OEM qualifications. Those qualifications bring new features and new operating system versions to market, so we are constantly verifying and re-certifying our software, and every subsequent use of that software by all of our customers gets the cumulative benefit of the maturing and hardening that occur.
We also have very deep engineering expertise. From a time standpoint, it can be quantified at probably in the neighborhood of 3,000-plus man-years of development and test, a good portion of it in the area of firmware and software, and that really results in significant reliability and maturity for our customers.
From a feature standpoint, we are very rich in standard features. We also spend a lot of time on custom features, and that's actually becoming more of a trend today; there is a lot of OEM verticalization. We do a lot of custom features, and the net of all that is that we add significant value in terms of capabilities.
To close this off, and to really highlight the significant emphasis we place on quality, test and interoperability: we have in excess of 3,000 servers and in excess of 1,000 targets in use on a continuous basis for product verification, product test and product interoperability. That is a very significant emphasis.
So these roots run deep in terms of capabilities, and they provide significant customer benefits and significant leverage. They apply to all of our products, all of our protocols and all of our high-speed networking connectivity products.
I mentioned the diversity of the data center and what it takes to be successful there. You need to start with a diverse set of operating systems, highlighted here in five boxes that really distill probably hundreds of different variations and versions.
Likewise on the hypervisor side: hypervisors ship with every major operating system today, and the combination of the two brings very significant capabilities. So operating systems and hypervisors are very key. In addition, having the right hardware platform support, with the right boot support and the right infrastructure support, is really key.
So in a sense, some of this is table stakes; these are things you need to be successful. On top of that, I'd like to highlight three areas, represented by the highlighted bars, that we think represent fundamental capabilities essential to success in the marketplace. I'd like to go through each of these and point out what we think is relevant with respect to each one.
First, concurrent network protocols really drive a diverse set of capabilities and the ability to operate according to whatever the application environment and the end-user environment dictate, which we think is really key.
Second, in the area of virtualization, which Simon touched on in his presentation: we offer a set of features and capabilities that enhance and enable server virtualization, both within hypervisors and within OEM-specific applications. The list of capabilities shown here reflects standards-based virtualization, standards-plus virtualization and OEM-specific virtualization. You can really get a sense of the complexity and flexibility these bring to the market when you realize this represents only a partial list.
And lastly, the management and interface layer is a key layer, because it gives us the ability to support industry-standard APIs and upper-level management tools that really round out the picture. This diagram gives a sense of the overall complexity required to effectively address the data center.
We've done some calculations, and there are literally hundreds and hundreds of different combinations and permutations, which reflects the flexibility and adaptability we need to have as a supplier in the marketplace today. Having this portfolio of software is crucial to meet the increasing complexity of the data center, but what it really boils down to is having it all work together in harmony. That is the difference between just another supplier and a market leader, and in these markets, QLogic is the market leader.
I'd like to move on to the data center trends that Simon highlighted and give a little color on how they affect our business unit, our product development and our priorities. There are really four key trends driving our business and our products. The virtualized data center is clearly a key trend, and as we saw from the previous diagram, it is driving significant capabilities and features into our products.
Some of those features are standards-based, so that any hypervisor or operating system can take advantage of them. Some are standards-plus, which we do specifically for customers with unique needs, and some are very OEM-specific, because a lot of the OEMs today are trying to drive additional value into their solutions, and this gives them a mechanism to do that.
Then there is the cloud: public, private and hybrid. There is a pretty diverse set of dynamics affecting us here. A lot of the effects fall into our traditional marketplace; if you look at private cloud, much of what we have done historically applies to it.
Public cloud, for us, has a completely different set of dynamics, and hybrid represents the combination of the two. To meet this diverse and rapidly evolving area of technology, we have a lot of new development underway, and we have brought a lot of products to market over the last year or two. This is an area we see constantly evolving as we go forward.
The third trend is the ongoing need for faster access to information, which for this business unit means driving lower latency and higher speeds: moving from 10-gig to 40-gig and 100-gig, and from 8-gig to 16-gig to 32-gig and perhaps beyond. We will always be on that never-ending quest for higher and higher speeds. Having said that, we fully expect that 10-gig Ethernet and 16-gig Fibre Channel will have many, many years of life, and some of the higher speeds will be adopted selectively as time goes on.
Lastly, the converged enterprise. The converged enterprise really typifies the element of flexibility in today's environment. If you look at multi-protocol and the benefits that convergence can bring, it represents a significant shift in the paradigms of how things are done.
All four of these trends significantly affect our product development, our feature development, our software development and our sense of priorities. So we stay very close to them, and we will continue to drive this information back into our planning process.
I'd like to shift now to the business opportunity. I said earlier that the HSG business unit covers server connectivity and storage connectivity; we actually break that up into three distinct markets when we look at it from a market point of view.
The Fibre Channel server marketplace, the column on the left, is about a $700 million a year SAM, which we see as generally stable and flat over time. There are new generations of products in the mix that keep this a healthy market, and we foresee it having good longevity.
The bar in the center represents 10-gig Ethernet and converged. This marketplace is about a $900 million marketplace with about a 21% CAGR. We were an early investor here, and we have established a good, solid long-term position. This is a very key market, one of the two expansion areas we see for our business.
That one is in the center, and the last one, which I'll talk about, is the bar on the right: our storage connectivity marketplace. This is essentially providing ASIC connectivity to storage systems across the protocols, and it's about a $160 million market in that timeframe with about a 10% CAGR.
In the aggregate, this is about a $1.8 billion SAM. What I'll do now is go through each of these three individually, highlighting what we see as the key trends in each marketplace as well as some of the leadership activities we've undertaken. Diving a little deeper into the Fibre Channel marketplace: this is our traditional core market, represented to a large degree by standard cards in rack and tower servers and mezzanine cards for blade servers.
Today it's predominantly an 8-gigabit Fibre Channel marketplace with PCI Express Gen2, and we're now at the beginning of a transition to 16-gigabit Fibre Channel with Gen3, particularly with the Romley launch as Gen3 products begin to come out. Looking a little closer at the rack and tower marketplace with standard cards, this is a significant portion of our business. We serve a wide variety of customers with our products as well as with our own QLogic brand.
On the mezzanine card side of this marketplace, we have numerous design wins, also on PCI Express, and this too is a significant part of our business; we hold a leadership position with all of our blade server partners and customers. In fact, the blade server marketplace with mezzanine cards is a market we pioneered in 2002, and for the past four years we've averaged about 70% market share there, about a 45-point lead over our nearest competitor.
So it's been a good market for us, and it's a good example of how we innovate in a market, take the leadership role, and then reap the dividends for quite some time. As I said earlier, this is about a $700 million market and a core market for us. Most enterprise customers continue to deploy Fibre Channel to meet their needs, and we're confident we have the product and technology leadership to maintain this share and even grow it selectively in certain geographies and with certain new customers.
Looking at Fibre Channel server market trends and dynamics, let me start with stability. We've seen strong stability in this marketplace, and within it, our units are actually growing. As Simon mentioned at the beginning of the presentation, if we look at the stats for 2011, the industry shipped a record 3.4 million Fibre Channel adapter ports, and that was on top of a record year in 2010 with 3.2 million adapter ports.
So we see good growth in units in the marketplace, which contributes significantly to the stability. As for brand preference: brand preference, I think, has been the hallmark of this marketplace. We've gained steadily in share, which I'll highlight again in a slide or so, but there is also clearly a very strong supplier preference.
So if you look at the leaders in these markets, you'll see that while we've gained, share has largely been very stable, as the market has been stable. This brand preference is really important for two reasons. One, this is a very important business for us. And two, the brand preference exhibited in the Fibre Channel marketplace translates into the brand preference we're seeing in the converged and FCoE marketplace, so we think it's really key to sustain it. There is stickiness to these products.
And lastly, with respect to the trends: the enterprise-class nature of Fibre Channel, as Simon and Arun mentioned in their presentations, continues to make it the interface of choice. Due to its robustness and maturity, Fibre Channel continues to be the safe and trusted choice for data centers.
In fact, I read an article recently that suggested there is in excess of $50 billion of assets in use on Fibre Channel today. So we look at that, and we look at the feedback we're getting from our customers, and it clearly indicates significant longevity.
In closing on the Fibre Channel server trends and dynamics, I want to let you know that we've completed testing of our 16-gig Fibre Channel products and are in active qualification with a number of our OEM partners, who will GA their products at varying times. So that's a significant note.
And while we're at the beginning of the 16-gig cycle, we're also firmly committed to 32-gig, and we're getting strong customer feedback on the desirability of that next speed. Perhaps that will be a subject of our next analyst conference, but the 32-gig standards are done this year, and we're committed to it and beginning our work with customers.
I'd like to move to the leadership side. 2011 was the best year we've ever had in Fibre Channel adapter market share: we achieved almost 55% revenue share, a 15-point lead over our nearest competitor.
We actually began to gain share in 2001, when we started our march toward share leadership, which we achieved in 2004. We've held that share leadership for the past eight years; in fact, for the last six years we've gained share in absolute terms each and every year. It's a great testimonial to the products, to the engineering team and to the customers we work with.
This kind of share and these kinds of achievements wouldn't be possible without a really strong base of OEM partners. We have very close relationships with our OEMs and, just as importantly, really strong relationships with our OSV partners. We were a very early partner with VMware.
We were early partners with Microsoft, tying ourselves closely to those industry leaders, as well as with Red Hat in the Linux community, and a wide variety of partners we don't have listed here. These relationships are fundamentally important to serving the needs of the enterprise today and to having the right feature content and capabilities going forward.
Lastly, the culmination of all this is that we as a company have shipped over 12 million Fibre Channel adapter ports, with 3 million of those being mezzanine ports. So there is a tremendous incumbency in the industry today, typified, as I mentioned on the previous slide, by a strong brand preference. The result is that new entrants have had significant difficulty entering the market, and the market continues to consolidate around two primary participants.
I'd like to move now to the 10-gig Ethernet and converged server market. That marketplace is one we address with NICs, CNAs and ASICs, and like Fibre Channel, with standard cards and mezzanine cards plus the ASICs. In this marketplace, the ASICs are mostly for LOMs and so on; I'll have a couple of comments about that on another slide, because there has been a transformation in that area.
In the rack and tower and blade environments, the form factors are common. The standard adapters we use here are common with the standard adapters we use on the Fibre Channel side, and likewise for the mezzanine cards: even though the protocols may differ between a CNA mezz card and an Ethernet mezz card, the form factor is typically common across the OEMs and across a specific platform.
So for us, there is very good leverage, as well as a common interface to our customers, across these markets. There is, however, more significant diversity in this market. The Fibre Channel marketplace is pretty straightforward with respect to mezzanine cards, but here you see lots of different combinations and lots of different capabilities, and we are very well suited to produce that diversity of cards.
So it's actually a benefit for us. As I mentioned, this is about a $900 million market growing at about a 20% CAGR, and we serve all of the leading customers, as we do with Fibre Channel; you can see them across the bottom of the screen.
The trends for 10-gig Ethernet and converged are a little different from the Fibre Channel market we just talked about. This is a very fragmented market compared to Fibre Channel; depending on how you count, there are roughly seven players active in this marketplace today. Some of them are niche players, and some, if you look out over time, may not be in the future picture.
So we do foresee consolidation among the seven players, although we don't really see the number reducing to a small number anytime soon; we would expect probably four or five players after the next turn. Another dynamic is that most of the OEMs we work with, beginning with the last cycle, moved from traditional LOMs and cLOMs soldered down on the board to flexible Network Daughter Cards.
That's a pretty important trend for a couple of reasons. One, I think it reflects the strong brand preference there is for QLogic products, as well as perhaps some of the historic networking products, and it reflects the need to offer users a lot of flexibility.
So what has traditionally been a fixed marketplace is now one with pluggable cards. I think that's really good for end users, and it's also good for suppliers like QLogic, because it gives us the ability to address a wide variety of needs on a more flexible basis.
Another pretty significant difference is the need for networks in the 10-gig Ethernet and converged marketplace to be more adaptable. There are more variable workflows, changing workload requirements, multiple protocols, and just a lot more diversity reflected in this marketplace. Our products reflect these capabilities: being adaptable, being multi-protocol, being virtualized, and being able to address various areas of the marketplace.
And lastly, on next-gen speeds: I mentioned we will see the steady march toward faster and faster speeds, although, as I said earlier, we think 10-gig is probably here for a very long time and will be used in certain applications for quite a while.
In summary, on the trends side: we have tremendous technology and market leverage that comes to us from our Fibre Channel business. We made early investments in Ethernet and early product deployments that we think position us well. We have very good OEM relationships, and we've established a strong market position in both 10-gig Ethernet and the converged marketplace.
Looking at leadership activity, we exited calendar 2011 in the number two revenue share position for non-captive adapters, behind only Intel. We have good flexibility, which we have exhibited, and that has really translated into that leadership position in 10-gig Ethernet, including FCoE.
On the FCoE front, we exited 2011 in the number one spot, non-captive, with over 53% revenue share, a significant share position that really mimics what we've seen in Fibre Channel. So to the extent we see uptake of convergence and FCoE, we would expect the market share from the Fibre Channel side, that strong brand, that commonality of software, interoperability and trust, to be reflected in similar dynamics on the FCoE side.
So we have new products coming that will strengthen both of these positions, and again, this is a very dynamic and flexible area. The last area I want to focus on is the storage connectivity marketplace, the target marketplace. This is our newest focus, and for us it consists of 10-gig Ethernet, converged and Fibre Channel.
Essentially, it's an ASIC marketplace in which we sell ASICs to storage providers to provide storage connectivity, and sometimes it's I/O adapters; depending on the design of the storage system, these can be either standard cards or custom cards.
This is a pretty significant market for us because the storage market itself is growing, and as the storage market grows in response to the data explosion and big data, storage connectivity grows with it. That's good for QLogic. Our products are being selected by many of the leading storage OEMs; in fact, I'm going to go into a little more detail on that than I have for some of the previous markets.
This is about a $160 million market growing at about 10%. As Simon indicated, this market really opened up as an opportunity for us because the previous long-time incumbent exited the marketplace. It's really good leverage for us: a lot of the protocol and development work we apply in the Fibre Channel and 10-gig Ethernet converged marketplaces can be applied directly to the storage connectivity marketplace. So it has become a very strategic market for us.
Look at the market trends and dynamics for the storage connectivity market place. Basically, we’re hitching out to a very rapidly growing market place. So all of the growth today that we see needs to be stored in some place as storage, subsystem grows, storage connectivity grows and we see both significant growth in units and storage which translates the significant growth in storage connectivity.
The architectures here tend to be a little bit different than the server connectivity market. They are longer-lived architectures. Any given storage provider tends to have their own unique design and their own unique capabilities. There tends to be more custom work done, and as a supplier to that marketplace, that allows us to develop stronger, deeper and longer-term relationships with those customers, providing future customization and capabilities that deliver long-term value to that storage supplier. So we're really pleased to be showing some early success in this marketplace, and I'll show in a moment what some of that success is.
And then lastly, our multi-protocol strategy, focusing on 10 gig Ethernet, converged and Fibre Channel, lends itself very well to this marketplace. As they look towards unified storage, all the storage providers want to be able to address whatever the connectivity requirements of their marketplace are, and also to respond to increasing I/O requirements, bandwidth and the like.
I will share with you a slide of a kind we typically have not shown in the past. The storage marketplace is a relatively new marketplace for us, and candidly, none of the analysts really track this marketplace from a storage connectivity port standpoint.
So we thought one way we could share some of our early success is to highlight, generically, our design win activity. And if we look at the design win activity associated with our entry into this marketplace, we've garnered over 40 design wins amongst many of the leading storage providers in the marketplace.
Now, we are not in a position to announce specific design wins; we'll announce those design wins when they come to qualification. But the list you see represents design wins that have occurred. In some cases, some of these have already gone to revenue; some are a little bit earlier, and in many cases those will come to revenue as we complete qualification as we move out in time.
But we've really built a strong foundation of design win success, and we believe that foundation will catapult us into the leadership position in this marketplace, which will compound and provide excellent leverage as we look across the portfolio and the opportunities for the business unit.
I'd like to summarize with a little more detailed view of our business unit's long-term formula for success. We've highlighted the three marketplaces you see here. The Fibre Channel marketplace is a stable marketplace; we hold the dominant position in it and it has been a very profitable business for us. That profit is key, because we need to invest not only in that business, where we want to maintain our leadership position, but also to invest aggressively in our expansion marketplaces, shown in the two boxes on the bottom.
The two expansion marketplaces are represented by 10 gig Ethernet and convergence, a growing, rapidly evolving marketplace. We've established a good leadership position in it, and it has a lot of leverage from our existing markets. Our intent is to grow that marketplace with innovative and adaptive solutions.
And on the storage target side, the market I just covered, we want to provide leading storage connectivity so that as that market grows, we can be the beneficiary of servicing all those major customers. It too provides significant leverage from an interoperability, development, test and OEM relationship standpoint.
So we’re very confident that we have the right plan and we’re very confident that we’re on track with our execution. Thank you very much for your time.
And with that, I’d like to introduce Craig Alesso from the Network Solutions Group.
Thanks Roger. Good afternoon. Again, my name is Craig Alesso and I'm responsible for the Network Solutions Group as part of the senior management team. We'll move it up a little bit.
How’s that? A little better.
I've been with QLogic for about four years. Prior to that, I spent 13 years with Cisco, looking after a number of their edge routing platforms that used different access technologies.
Over the course of the next 30 minutes, I'd like to step you through the strategy of the Network Solutions Group to take advantage of the transition taking place in the network storage market. So again, if you take a look at the core of what we do in the Network Solutions Group, it's very simply to simplify connecting servers to network-based storage.
We develop embedded switching solutions that provide an adaptation layer, if you will, that allows our OEM customers to simplify connecting a server to any network-based storage technology. So if you take a look at the likes of HP, IBM and Huawei, they use our embedded switching solutions as part of their product offerings to give them, again, that flexibility of connecting servers to storage.
Our custom embedded products, if you will, are really built around 80% core switching technology and 20% custom features that we develop for each one of the OEMs. And whether it's IBM, HP or Huawei, they really rely on the Network Solutions Group to provide the innovation for their storage group.
So for example, if you take a look at HP Virtual Connect, HP H-series SAN switches, Huawei OceanStor SAN switches, IBM FlexFabric Expansion Module, IBM Intelligent Pass-thru modules, those are all OEM branded products but essentially they are QLogic powered.
To further illustrate that, at HP's user group meeting, Discover, in June of this year, they really focused on a new set of capabilities they call Flat SAN. Essentially, they take their c-Class Blade Server, and we've developed a switching blade that not only gives them connectivity to Ethernet and Fibre Channel but also directly connects to their new 3PAR arrays.
So this one integrated switching module, easy to manage and deploy, provides all this connectivity in a single blade, and really what that does is give them an OpEx and CapEx profile that's pretty unique in the industry and has really set them apart.
So as Simon indicated, if we're doing our jobs right within the Network Solutions Group, you really won't know that a particular switching solution is actually powered by QLogic. Again, Simon talked about the need for a broad product portfolio, and Roger covered very important pieces on both the host and target side.
Think about our group as kind of the glue in the middle, if you will. And if I simplify this diagram, there are really two things that OEMs are looking for from us. Number one is interoperability with the leading LAN and SAN vendors, meaning that we connect with any fabric.
They don't have to worry about which vendor the fabric came from; clearly we provide that. And then second is to give them a set of value-added, differentiated features that support their brand activity. By doing that, again, it's a two-pronged approach for the OEMs: being able to connect anything, and then demonstrating their value with the products we're providing.
Specifically, within the Network Solutions Group there are really three major areas that we focus on. First is the bladed environment. The Network Solutions Group has actually designed and developed four generations of Fibre Channel blade switching products for the likes of IBM and HP.
In addition to that, we've also developed our first-generation set of converged switching blades for those platforms. What that does is give our OEMs the opportunity to take that edge device and integrate it into the server environment at a lower cost point and a higher level of integration from a management standpoint, to again give them opportunities to simplify things for their customers.
The second area we focus on is a set of companion products, if you will, that can be used with rack and tower type servers, meaning they're standalone boxes, typically one rack unit, or multiple rack units if it's our larger modular switch. It's the same situation in that these products are often used in what are called top-of-rack applications, where you've got a rack of servers and a top-of-rack networking device that provides connectivity to the storage and/or LAN environments.
And going forward, we think there is a real opportunity, which you'll see here shortly, in these point-of-delivery solutions, where OEMs and cloud providers are dropping in full racks into customer environments. We believe we've got a tremendous opportunity to provide the top-of-rack box not only for our traditional OEM customers but for a number of the new cloud providers that are coming online.
And then lastly, what you see is a depiction of taking our base converged technology, both our ASIC and firmware, and integrating it very tightly into existing Ethernet Layer 2 and Layer 3 switches. This is a relatively new market for QLogic's Network Solutions Group.
An example of this is H3C, which has a top-of-rack product with a Fibre Channel over Ethernet set of capabilities. Those were actually designed and developed by QLogic to give them connectivity to the storage area network, namely via Fibre Channel over Ethernet.
Going forward, you're going to see us much more active in this space, working with Layer 2 and Layer 3 switch vendors, and again it's probably not going to be very obvious, because when we do these types of developments, they are tightly integrated.
In the case of H3C, it's actually a module that plugs into their top-of-rack device, but in many cases going forward it will actually be embedded within the sheet metal, and frankly you will never know that it's there.
So in sum total, what we're trying to do is very simply this: to allow our customers to marry up to anything that exists in a particular enterprise or cloud environment -- any host, any storage, any network fabric, be it Ethernet or Fibre Channel -- and to support any protocol to connect those elements, if you will.
So again, it's very, very important to be able to win and leverage the existing assets in those customer environments, and then, equally important, there needs to be a migration path as customers move to more of a converged environment, to support the new sets of servers or the new FCoE storage devices that are coming online.
And so as a result, one of the things I hope you take away from this session this afternoon is that our products going forward are, at their core, Fibre Channel switches with a path, via a low-cost software license, to a full set of converged networking capabilities.
What allows us to do that is innovative technology such as flex ports. We were the first company to develop and bring to market flex ports, meaning the same physical port can support 1 gigabit Ethernet, 10 gigabit Ethernet or even 16 gigabit Fibre Channel. And so we have the ability to configure those ports on the fly.
Customers don't necessarily have to know how much Ethernet or Fibre Channel they need in their edge devices. Again, with the QLogic products, they've got the ability to configure the product and to change that configuration as their needs change over time.
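As a rough illustration, the flex port idea described above -- one physical port whose personality is set in software on the fly -- might be modeled like this. All class and mode names here are hypothetical sketches, not QLogic's actual API:

```python
from enum import Enum

class PortMode(Enum):
    ETH_1G = "1GbE"
    ETH_10G = "10GbE"
    FC_16G = "16G Fibre Channel"

class FlexPort:
    """Toy model of a flex port: one physical port, reconfigurable per protocol."""
    def __init__(self, port_id: int, mode: PortMode = PortMode.ETH_10G):
        self.port_id = port_id
        self.mode = mode

    def reconfigure(self, mode: PortMode) -> None:
        # On real hardware this would reprogram the port's PHY/SerDes on the fly.
        self.mode = mode

# An edge switch ships with a guess at the Ethernet/Fibre Channel mix...
ports = [FlexPort(i) for i in range(8)]
# ...and the administrator shifts two ports to Fibre Channel as storage needs grow.
for p in ports[:2]:
    p.reconfigure(PortMode.FC_16G)

print([p.mode.value for p in ports[:3]])
```

The point of the sketch is only that the protocol mix is a late-binding configuration decision, not something fixed when the box is purchased.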
So again, if you take a look at the trends taking place in the data center and in the cloud, Simon touched on this. It's interesting to point out that there have been dramatic changes and transformations over the last 25 years. If I could just spend a minute, I'll touch on four trends and how they impact the network fabrics, if you will.
The first trend again is this whole concept of virtualization and the economics involved with the ability to virtualize any workload. What that means for the network fabric is really simply this: we need to take the complexity out of having a heterogeneous environment where many vendors' equipment is interconnected. And so again, part of what we're doing at QLogic is very simply that we're able to connect to any LAN or SAN vendor and to remove that complexity, such that the workload can be virtualized without having to worry about the underlying network.
The second trend is cloud. It's interesting when you go and talk to end customers and also our OEM customers: cloud is almost viewed as the economic equalizer. If we go to China, they are doing a tremendous amount of cloud build-out because they think this is what's going to allow them to compete more effectively moving forward.
And when you cut down into the network layer, what that nets out to is that they've got to have a very flexible infrastructure. What that means is that the network fabric, be it storage or local area network, frankly has got to be built on standards, because it has to adapt, evolve and change over time. So again, part of what we do is transparently connect to what already exists and provide the gateway to some of the new technologies that are coming to the market.
And then, in terms of information access: in this day and age, where we're doing high-speed trading or making instantaneous decisions on unstructured data, what this means for the network is that you want your data, you want it instantaneously, and you really don't care where it resides in the network.
So it's all about latency, bandwidth and, as Simon pointed out, minimizing hops. This is one of the reasons why, in addition to providing connectivity to the SAN and LAN fabrics, our edge devices also have the ability to locally attach those same storage devices. So customers have the flexibility: if they only want a single hop to get to a particular storage device, they can do that. Again, that's what this Flat SAN concept that HP came up with is all about, and we have powered that into the marketplace.
And then finally, the converged enterprise. The one mantra that I'm hearing from end customers and OEMs alike is that it's all about doing more with less. They've got to get more [oomph] and bang out of their infrastructure, and ultimately I think that's what's going to drive this whole concept of convergence.
My grey hair probably shows my age a little bit in this technology space, but take, for example, IP telephony: that took eight years to really take hold in the marketplace, and in those first years, frankly, it didn't live up to expectations. As anything moves its way to Ethernet, it frankly is going to take roughly that same amount of time. But what I can tell you with 100% assurance is that storage area networking will be done over Ethernet.
I can't tell you the exact timing, but it will in fact happen, and again it's really economics that are driving that particular need. So now, if we move to the business opportunity for QLogic, and specifically for the Network Solutions Group, it really breaks down into two fundamental areas for us. The first is the traditional Fibre Channel business, and this really has two separate components.
The first is, again, the blade switches; we've developed four generations for the likes of IBM and HP. That's roughly about a $100 million market opportunity for us in fiscal '15. And then you complement that with the box products that are used more in rack deployments; that's roughly a $250 million to $300 million opportunity for us.
For our box products, today we really have three routes to market: HP H-series, Huawei OceanStor SAN products, and a QLogic-branded product for the open channel. Going forward, there is a tremendous opportunity for us to grow share in this traditional Fibre Channel switching space.
And again, as Roger pointed out, it's a stable market, one where we've demonstrated the value of our products, and frankly we will be using it as a launching pad as we move into the right side of the market opportunity, namely the converged space. On the converged side, the blade aspect is again roughly about a $100 million business in FY '15, and we're well positioned with both the OEMs and the emerging cloud providers in this space.
And then if you take a look at the top-of-rack, or converged box, opportunity, it's a pretty significant marketplace: roughly about a $500 million marketplace in FY '15. Again, this is primarily driven by cloud deployments and this whole point-of-delivery model, where people are bringing in racks of solutions. Our products really fit well in this particular marketplace.
And again, as Roger pointed out, for the most part the Fibre Channel business is flat moving forward to FY '15, and there is significant growth, almost 50% year-over-year, in the converged space. So in sum, if you take a look at these opportunities for the Network Solutions Group, it really represents about a $1 billion market opportunity, and it's one that excites us as a team going forward.
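As a quick sanity check, the four FY '15 opportunities just walked through can be totaled up. The figures below simply restate the numbers from this presentation (the Fibre Channel box business was given as a range):

```python
# FY'15 market opportunities as stated in the presentation, in $M.
# Fibre Channel box was quoted as a $250M-$300M range; track low and high.
fibre_channel = {"blade": (100, 100), "box": (250, 300)}
converged = {"blade": (100, 100), "top_of_rack": (500, 500)}

segments = list(fibre_channel.values()) + list(converged.values())
low = sum(lo for lo, hi in segments)
high = sum(hi for lo, hi in segments)
print(f"${low}M - ${high}M")  # -> $950M - $1000M, i.e. roughly $1 billion
```

This is just arithmetic on the stated figures; it shows how the "about a $1 billion" total is reached.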
If we take a little deeper look at each aspect of the market opportunity for the Network Solutions Group, this is the Fibre Channel component. The thing I'd point out here is that there are 16 gigabit Fibre Channel products in the director space that were introduced roughly six or eight months ago.
The real demand for 16 gig is going to happen when the host products become available and the target products also become available in calendar year '13. We think we are well positioned to win new 16 gig blade opportunities as well as fixed-configuration box opportunities moving into calendar year '13.
And if you take a look at the trends and dynamics in this market space, it's clearly a stable market. Our customers with mission-critical applications are in fact using Fibre Channel as part of their base configuration. And if you take a look at fixed devices that reside at the edge of the Fibre Channel network, there were a record number of ports shipped in both calendar year '10 and calendar year '11.
I think that really speaks to the fact that this is a well-understood, well-used technology. The second market trend and dynamic really revolves around the fact that there are three well-established providers.
This space has been in play for more than 10 years. What I will tell you is that the OEM partners and cloud partners are really reevaluating who they are working with. And again, we are extremely well positioned as part of QLogic because we don't compete with our OEMs or our cloud providers, meaning that we don't sell directly to customers. As a result, that affords us some opportunities that our competitors don't have, and you couple that with the fact that we're developing innovative features for them, and it really puts us in good stead in terms of gaining larger market share moving forward.
And then lastly, if you take a look at our installed base, we've earned our stripes in terms of showing that we can bring products to market that have the stability and performance required for those mission-critical applications.
And as Roger indicated, we've announced a 16-gigabit switching product as well, which we call the universal access point. At its core, this is a 16-gigabit Fibre Channel switch; if you add a low-cost software license to it, it's also a converged product, which I'll talk about here shortly.
So, again, a single product line going forward to serve both markets, and like our counterparts in the Host Group, we will be working on a set of 32-gigabit switching products for the Fibre Channel space as well.
Expanding a little bit on our strategy going forward to gain more share in the market: again, everything revolves around building on top of the set of products currently being offered.
If you ask why customers choose QLogic products, it's very simply this: performance and the lowest latency. If you take a look at anyone that's developing or moving content, they choose QLogic, again, because of the performance.
The other big differentiator for our Fibre Channel products is that we've got a set of features that provide a very quick Fibre Channel backup capability. Whether it's performance for backup or for online transactions, that is really what separates our products in the marketplace.
And again, if you take a look at the end customers using QLogic products, it's very, very significant; it spans a number of industries in addition to content generation and distribution.
The second thing I'll point out is that going forward, you can expect that we'll be expanding the set of OEMs we are currently working with. As I said before, today we've developed products primarily for HP, IBM and Huawei.
Going forward, you will see us expanding the number of OEMs we are working with -- again, the traditional server OEMs as well as some of the new cloud providers.
And then finally, and certainly not least, our Fibre Channel products going forward will also have the ability to support converged networking. Why is that important? Because if you take a look at the way customers are evolving their environments today, it's probably not going to be pure-play, meaning just Fibre Channel or just converged networking.
There are going to be customers that have a very substantial Fibre Channel environment and want an edge device to provide connectivity from a converged server standpoint into that Fibre Channel arena. And similarly, we'll have customers that will want to be able to add FCoE target devices to their existing environment.
So going forward, we'll have a common product line, if you will, that can span Fibre Channel, converged and what we're calling hybrid fabrics, meaning customers will have both environments in place and in use. So again, we're pretty excited about the opportunity in front of us here on the Fibre Channel side.
If we switch gears for a moment and focus our attention on the converged space, a couple of things are a little bit unique about this environment. Again, you see here, in terms of the blade environment, it's roughly about $100 million in opportunity in FY '15.
One of the things you'll notice is that, in addition to developing blades for blade servers, we are also developing modules that plug into Layer 2/Layer 3 Ethernet switches, and in some cases we will be providing our innovative application-specific integrated circuits, as well as our firmware, tightly integrated into those Layer 2/Layer 3 switches.
And then we will be complementing that with an edge product, the universal access point, the UA5900, which we've announced. That particular opportunity is almost $500 million in FY '15, again driven primarily by those rack, or point-of-delivery, based solutions.
And as I indicated before, we will be working with our traditional set of OEM partners, and in addition to that, we've got a number of unannounced partnerships at this point. Again, it probably is not going to be real obvious which of our partners' products going forward are actually QLogic-powered.
If you take a look at what's taking place in terms of trends and dynamics in the converged space, it's a little different than what we see in Fibre Channel. The first is that there is really a move within the Ethernet space toward vendor-specific implementations.
If you take a look at the whole everything-over-Ethernet mantra that's taking place, what's happened is that those providers with a broad Ethernet product set are essentially developing a set of unique features on top of the standard.
Our challenge, if you will, is to be able to tap into that; we've been able to do that historically. But again, that's one of the things we've got to recognize going forward and adapt to. As part of that, we believe our role in this whole situation is to be the adaptation layer.
So there's a tremendous amount of change taking place in the server environment; we've got this Ethernet environment where there are in fact vendor-specific implementations; and we've got Fibre Channel and data center bridging Ethernet. Our job, frankly, is to make the fabric and the server environment independent of one another.
So I can upgrade my servers independent of my fabric, and this piece in the middle, if you will -- the edge switching element -- has to be able to adapt and change to what's taking place on both sides of the environment. And again, we think that's where we can play a very, very significant and unique role going forward.
And then lastly, as everything collapses onto Ethernet, there is clearly going to be a need for next-generation speeds. Simon pointed out the fact that our OEM partners are already asking us about 40 gig, and some providers are already deploying 100 gigabit Ethernet in their infrastructure.
So going forward, we're committed not only to 16 gigabit Fibre Channel but also to 40 gigabit and 100 gigabit Ethernet, as we think that's really an important aspect of making this convergence a reality.
And then, in terms of our leadership, it's all about this fabric freedom: being able to connect any host to any storage over any protocol or fabric. Fundamentally, that is what we are all about in the Network Solutions Group, and that is what gives our platform legs, if you will; it's not a single point solution, meaning it can evolve over time.
Again, like the Fibre Channel market segment, you'll see us expanding the number of OEMs we are working with, both traditional server and cloud. And in addition, you will see us working with Layer 2 and Layer 3 Ethernet switch vendors, giving them the technology to support storage area networking over the same infrastructure that today carries the local area network traffic.
And then lastly, with our core technology, our ASIC and firmware, we've developed the product such that it can fit just about any form factor. If the customer wants to integrate our integrated circuit with our firmware, we can do that.
We can also build them a small module that plugs into their Layer 2 or Layer 3 Ethernet switch. We can build a full switching blade for the server vendors, and also complement that with a full top-of-rack implementation. So, again, we've got the ability, if you will, to span the entire market opportunity, and we can fit any footprint our partners require.
So in summary, again, we're very excited about the opportunity in front of us. We've got a core Fibre Channel business where we've got a rich history of bringing products to market with our OEMs and an excellent track record of demonstrated performance and reliability.
What we are doing is building on top of that, essentially creating a common platform that can be used not only for Fibre Channel switching but, more importantly, for the evolving converged space. By doing that, we've got the ability to enable customers to set their own pace for convergence.
They don't have to buy a product that can only do Fibre Channel or only do convergence. They can buy products that will allow them to set that pace -- this whole concept of adaptive convergence, where they can change their environment over time.
And by doing that, we think we've got an opportunity to grow faster than the converged market space, and one that really reinforces our position in terms of being able to provide that adaptation layer at the edge.
And with that, I believe we're going to take a 10-minute break here, which I think puts us back at about 25 minutes to the hour. I look forward to continuing our session. Thank you.
Hello. Please take a seat. We'll start our next presentation. Okay, we'll start now. Our next presenter is Shishir Shah, General Manager of our Storage Solutions Group. He will talk about the exciting technology announcement we made this morning. Okay, Shishir.
Thank you. Can you guys hear me? Great. Good afternoon, and welcome back from the break. Hopefully, you had a chance to get some sugar from the dessert tray and are all charged up.
Just to introduce myself again, I'm Shishir Shah, responsible for the Storage Solutions Group. I've spent 28 years of my career in the storage and systems software market, and I have been with QLogic for the last 16 years, since Fibre Channel was still on the drawing board. For the first nine years, I worked on the Fibre Channel HBA, iSCSI HBA, software and software strategy for QLogic.
In 2005, I took a few people from my team -- to be exact, four people -- and formed a new business unit. The mission was to build, on QLogic's core technology, new solutions and new avenues to revenue for QLogic.
So, using QLogic's core technology foundation, how can we build new solutions and open new, adjacent markets for QLogic? That was the mission of this business unit.
And so, over the next four years, we built intelligent data mobility solutions, mainly focused on enterprise applications. Working very closely with storage OEMs, we built a platform called the Intelligent Storage Router.
As you can see, right at the top of the screen, this Intelligent Storage Router provides three different applications; over those four years we built three different applications.
The first application is multi-protocol routing. With the emergence of iSCSI, and then the emergence of FCoE, a unified storage requirement was building, and we built the multi-protocol routing solution so that our storage OEMs' products can be multi-protocol products.
Then we built a SAN-over-WAN connectivity solution to support disaster recovery -- disaster recovery solutions based on storage array software. And third, we built what's called the Data Migration Solution, which was built to simplify data migration problems within the data center itself.
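As a loose sketch of the multi-protocol routing idea -- presenting one backend LUN to hosts regardless of which SAN transport they sit on -- consider the following. The function and data names are purely illustrative assumptions, not the Intelligent Storage Router's actual interface:

```python
# Illustrative sketch of protocol routing: one backend LUN exposed over
# several SAN transports. Names are hypothetical, not a real QLogic API.
BACKEND_LUNS = {"lun0": b"...block data..."}

def handle_read(protocol: str, lun: str, offset: int, length: int) -> bytes:
    # The router's job is to normalize FC, iSCSI and FCoE requests into
    # one common block-I/O path against the backend array.
    if protocol not in ("fc", "iscsi", "fcoe"):
        raise ValueError(f"unsupported transport: {protocol}")
    data = BACKEND_LUNS[lun]
    return data[offset:offset + length]

# The same LUN answers identically regardless of the host's fabric.
print(handle_read("fc", "lun0", 3, 5))
print(handle_read("iscsi", "lun0", 3, 5))
```

The design point being illustrated is that the protocol front end is a thin adaptation layer, so an OEM array gains a new transport without changes to the backend storage.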
So we are enabling storage unification and intelligent data mobility. This is what we have done so far; these are the products the Storage Solutions Group has delivered. And I'm going to talk to you more about this technology announcement and the next product coming out of the Storage Solutions Group in a few minutes.
But based on this, let's talk about the market opportunity for what I just described. This is completely incremental market opportunity for QLogic. The first market opportunity is in data migration; the product is the Intelligent Storage Router, and data migration is an application on top of it.
It's a $145 million TAM, mainly software revenue, with a 19% CAGR. It is going to address current data center needs and also migrations to the cloud -- data center adoption of the cloud -- and I will cover that in the next slide.
The second opportunity is what Simon reported on earlier: the SSD-optimized SAN. We announced this technology this morning, code-named Mt. Rainier.
Based on Mt. Rainier technology, we will be serving a TAM of $495 million, and I didn't put any CAGR on it because we believe it's a new, groundbreaking technology. It is a very unique technology and it's going to open a new door within enterprise data centers.
In FY ‘15, we project about $640 million. One important thing I forgot to mention in here, the $495 million SAN does not include any SSD. So I want to be very clear. QLogic is not in an SSD business. We are not selling SSD. We are selling SSD. We are selling solution that basically integrates SSD seamlessly into the SAN, okay. So this innovative new solution will allow us to address $640 million of brand new market in FY ‘15.
So I'm going to quickly cover the data migration piece and then focus the majority of my talk on Mt. Rainier technology. And this is very relevant -- it is very important to understand what we've done here, because that is what will help you understand the Mt. Rainier technology as we go further.
So when you look at data migration in general, it is a very complex problem in the data center. It's been around for years and years and years; this is not a new problem that we're trying to solve.
The problem has been around for years and years, yet it still remains very challenging, and as you grow more and more data, the problem is becoming bigger and bigger and bigger.
So let me give you an example. When a storage administrator wants to migrate data -- and data migration projects are happening on a daily basis -- they have to deal with a lot of different operating system environments, like Roger talked about, all the (inaudible) and different operating systems and the complexity he talked about.
A lot of different operating systems, a lot of multi-vendor storage arrays within this environment, multi-protocol storage arrays -- you have Fibre Channel and iSCSI, and next FCoE is coming. So the migrations are happening among all of these, which creates a lot of headache in managing this heterogeneous environment.
And it creates a lot of headache because there is not one single tool that they can use to migrate the data across all of these environments. Bottom line, they wind up calling storage specialists -- data migration specialists -- and they spend a tremendous amount of money, and it costs a lot of time.
So when you look at this heterogeneous environment today, it is getting worse rather than better. HP's customers now have to move from EVA to 3PAR. Dell's customers are moving from EMC to Dell Compellent storage. IBM is trying to move their customers from their legacy DS4000 and DS5000 LSI-based storage to the new V7000 and XIV storage.
So heterogeneity is actually becoming worse, okay. NetApp and EMC are gaining more ground and market share there. And think about this: now you are also dealing with deploying private and public clouds, which are new infrastructures as well, and you're going to need to migrate the data between your data center and the cloud itself. So the problem is getting worse.
So, working very closely with our storage OEM partners -- Tier 1 storage OEMs -- we developed a single tool that addresses online, non-disruptive, heterogeneous data migrations, okay.
When I say heterogeneous, I mean it covers all these operating systems and all major vendors' storage arrays. It's completely transparent, it remains non-disruptive, and it minimizes application downtime as you migrate the data. It works within a data center and across data centers -- so local or remote migrations -- and you can migrate your block data from your data center to the cloud itself, okay.
So this is a very unique tool in the marketplace that addresses all of these segments combined. And we have really gotten underneath the problem and really simplified this particular deployment model.
So it's no longer very complex -- instead of a data migration specialist doing this work, the storage administrator can actually do this work now, okay. That is what we have done: proven ourselves and earned our stripes, with Tier 1 storage OEMs qualifying this particular product and deploying it in enterprise data centers. And this is very important to understand: nobody is going to just walk in and say, here is my product, you can now manipulate my data on the fly, without having earned those stripes.
And this product has let us earn those stripes: we can move the data around while applications are up and running, okay. That is a very significant part of it. We built on this particular technology.
So this technology was built on our core Fibre Channel technology, which we built for the first nine years and continue to build further. We built this solution over the next four years, earned more stripes in this area, and then we built a technology called Mt. Rainier.
As I said earlier, we believe it's going to open some exciting chapters in data centers. I believe it's a ground-breaking technology, as you will see. But before I tell you what it is and how it works, let me walk you through why -- why this is relevant, why this is really needed within the data center itself, okay.
So what this technology does, in general, is accelerate enterprise applications, okay. So let's start with the applications themselves. When you look at large enterprise applications, they are 7x24x365 -- they must be highly available, okay.
They must scale on demand. They must work on increasingly large data sets -- their data is growing bigger and bigger. And very importantly, they must deliver results in real time.
The performance of an application becomes a critical factor. So to support 7x24x365 applications, you have servers deployed -- large numbers of servers deployed in the data center -- which are clustered and virtualized, okay.
You cannot support 7x24x365 on a single-server application. It has to be a clustered application to be able to support that. The Aberdeen Group recently released a study -- in June 2012, if I'm not mistaken -- which basically suggests that 79% of the applications in the enterprise, okay, are either clustered or virtualized.
These applications are structured-data applications: databases, mail servers, analytics, your Web 2.0 applications, VDI, which is the new one coming up, and so forth. They are basically mission-critical business applications -- applications that you cannot take down.
So to support these applications, we have virtualized and clustered servers -- many, many of them -- deployed in the data centers. And to support such applications, you require shared storage: when two servers talk to the same data, you have to have a model where you are actually sharing the storage. So these applications demand shared storage, and they demand highly available storage.
There are $50 billion of Fibre Channel assets at work today to support such applications. Twelve to 15 years ago, when the SAN came in, it enabled these applications -- the clustered, virtualized applications -- along with the Internet explosion that happened at the same time. The SAN actually empowered these applications; that's how it happened, okay.
And what happened along with that is that you also put in your compliance and data protection policies, which we all know are extremely important for any enterprise. You have to retain your data, and you have to meet all the compliance policies that governments are coming up with -- you must be able to pull out your records at any point in time, and so forth. They are all deployed in the SAN today, with centralized policies to manage these things centrally.
So point number one that I'm trying to make here is: enterprise applications demand shared and highly available storage, okay. This data must be shared and highly available. Now, I said applications must deliver performance in real time. Why is that important? When you look at these applications, a faster application gives a competitive advantage, okay.
What I mean by a faster application is an application that's delivering more transactions per second, faster access to your e-mail, quicker results for your data analytics. That's what I mean by these applications running faster, right. These faster applications have a direct impact on the productivity of the workforce and also on the top line and bottom line of the company's growth and profitability.
A slower application means a customer is going to run away from your website, okay. If you couldn't deliver the transaction in a timely manner, he is going to walk away from your website.
Slower access to e-mail means the workforce is going to be less productive; responses are going to be slower. Slower responses from your data analytics simply mean your decision process is going to slow down, and no one understands better than you guys -- this community -- how important it is to make these decisions in real time.
So a slower application means money down the drain -- it's lost business opportunity. These are very important pieces here. Reducing an application's access time to information is critical, okay. So, first I said that enterprise applications -- which are about 79% of the applications in the enterprise -- require shared and highly available data.
The second thing I'm saying is that for these applications, performance matters, because it has a direct impact on the business. Application performance matters -- direct impact on the business.
So let's look at what is in the way -- what stands between these applications and their ability to get this performance?
And I think Arun talked about this earlier. So let's start with servers. Servers have grown more powerful. Networks have become faster: you have 10-gigabit Ethernet networks, and 16-gig Fibre Channel networks are coming into the picture.
More cores in the server itself -- 8-core, 16-core, and going up -- and that's driving higher levels of virtualization, okay. So you have more applications, more demanding applications, and now they have the ability to crunch the data fast.
You have faster servers here. You have more virtual servers, which means you can scale on demand very quickly. Therefore, these applications can crunch the data very fast, and therefore they are demanding very high-performance I/O.
And what type of I/O are they looking for? They are looking for random-access I/O. All of the applications that I mentioned earlier are not about sequential I/O -- they are not just reading a video feed. The transactions and data sets are large, and they are looking for random-access I/O. The workloads are all random workloads.
Disk drives, on the other hand, have grown fatter. They support larger capacities to meet the demand being created as data continues to grow. But they cannot get faster -- physics. 15,000 RPM drives: my god, 10 years ago you had a 15,000 RPM drive, and today you still have a 15,000 RPM drive. Why? Because you cannot rotate the platter faster than that speed.
So storage has remained slow from a performance perspective, and the I/O performance gap is growing, okay. This is what Arun was talking about when he was drawing the pictures by hand.
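To put rough numbers on that gap -- these are my own back-of-the-envelope figures, not from the presentation -- a spinning drive's random IOPS are bounded by rotational and seek latency, while even a modest SSD of that era delivered tens of thousands of random IOPS:

```python
# Back-of-the-envelope random-IOPS estimate for a 15,000 RPM drive.
# The seek time and SSD figure are illustrative assumptions, not vendor specs.
RPM = 15_000
avg_rotational_latency_s = (60 / RPM) / 2   # half a revolution on average: 2 ms
avg_seek_time_s = 0.0035                    # assumed ~3.5 ms average seek
service_time_s = avg_rotational_latency_s + avg_seek_time_s

disk_iops = 1 / service_time_s              # ~180 random IOPS per spindle
ssd_iops = 50_000                           # assumed mid-range SSD of the era

print(f"15K RPM drive: ~{disk_iops:.0f} random IOPS")
print(f"SSD:           ~{ssd_iops} random IOPS ({ssd_iops / disk_iops:.0f}x)")
```

The point of the arithmetic: the mechanical service time is fixed by physics, so the only way to close the gap is to take the rotation out of the path entirely.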
So how do you solve this I/O performance gap? In the last three to four years, as you all know, SSDs have become more affordable and more reliable, and SSD technology is now a real option to solve this particular problem.
Why? The fundamental thing about SSD technology is that it delivers very high performance on random workloads, okay. So the SSD technology is available, and it can help bridge the I/O performance gap.
So let's recap three things again. I keep repeating this because it is extremely important to understand, and it's why I said this is a ground-breaking technology that we are announcing.
First, I said enterprise applications require shared and highly available data, okay. Second, I said servers have become faster and performance-hungry, and application performance is a critical success factor for the business.
And the third thing I'm saying now is that there is a technology -- SSDs -- that can help bridge this performance gap, okay. So we are going to tie these three things together to actually realize the benefit, right.
So let's look at how this technology is being used and deployed in different solutions. How have different solutions come to market to actually solve this I/O performance gap problem?
Let's start with the very first deployments that happened: SSD acceleration in the storage array itself. Very natural -- people said, hey, I've deployed a SAN, I've used it for a long time, so very naturally I'll put a cache in the storage array itself. Life is good. No change in infrastructure, no host software being changed, very simple to implement, and completely transparent to applications. That's great -- it should work really well.
So the intent was good, and it got some I/O performance. But what people forgot was that a storage array typically has between 10 and 20 servers connected to it, okay. So the SSD performance could not be realized: with this large number of servers connected to a storage array, the storage array itself becomes the bottleneck in delivering the performance, okay. So the performance could not be realized.
And you have multiple hops. Between server and storage there are multiple hops along the way, so they are far apart, and that added a little more latency to the equation. So because of this, you could not realize the performance benefit. And the important thing was: how do you solve the performance problem, how do you close the I/O performance gap? This solution didn't quite work, okay.
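A rough way to see the two problems just described -- with purely illustrative numbers of my own, not figures from the talk -- is that the array-side cache divides its throughput across every attached server, and every I/O still pays the fabric round trip:

```python
# Illustrative numbers (assumptions, not measured figures): an array-side
# SSD cache serving many hosts splits its aggregate throughput among them.
array_cache_iops = 200_000          # assumed aggregate IOPS of the array cache
servers = 20                        # 10-20 servers per array, per the talk

per_server_iops = array_cache_iops / servers
print(f"Each server sees at most ~{per_server_iops:.0f} IOPS from the cache")

# And every I/O still crosses the SAN, so multiple hops add latency.
ssd_access_us = 100                 # assumed SSD media access time (microseconds)
fabric_hops_us = 50                 # assumed added latency from the fabric hops
effective_us = ssd_access_us + fabric_hops_us
print(f"Effective access: ~{effective_us} us vs ~{ssd_access_us} us if local")
```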
So the next thing happened. Starting last year, you saw a bunch of announcements and a bunch of products in the marketplace saying: why not deploy SSDs in the server and use them as cache, okay? So that's what happened. The critical data is close to the application -- performance problem solved.
Access times went from milliseconds to microseconds. Random workloads were great. Perfect -- except for one fine detail. Suddenly, this solution didn't share the data, okay. You had a captive cache running in the server. That means you get the benefit for one server, but if you want to run clusters, you can't get the benefit.
So, great performance -- people figured out how to get performance out of the SSDs and how to scale that particular piece by deploying SSDs -- but the problem that remains is: how do you address the shared data model that's required for these enterprise applications? As you know, SSD deployment in the enterprise today is very, very limited, okay. It hasn't caught on.
There is a huge problem, but it hasn't caught on. Why? Because of this. The first solution didn't deliver the performance and was not very relevant. The second one delivered the performance but would not share the data -- the cached data is captive in the server's SSD. So this solution still remains incomplete. A step in the right direction, but incomplete.
So now let's look at the whole picture and ask: what do you really require? What are the characteristics of a solution that will work for these enterprise applications, okay? A server-based acceleration solution must solve the I/O performance gap problem and reduce the latency.
It needs to be simple to deploy. Twenty-eight years of experience in the storage industry has taught me one thing: things get adopted very quickly in the marketplace when they are simple to work with.
Simplicity brings lower operational cost, and OS independence brings broader applicability. A single solution -- just like I said for the data migration piece -- a single solution that works across all the different operating environments is simple to deploy and work with, and removes a lot of complexity. It has to support clusters.
The solution has to support clusters; therefore, it has to be SAN-based. It has to support virtualization that goes across multiple servers. Virtualization within a single server is fine, but you must be able to scale on demand -- that means you must be able to move your virtual machine from one server to the next, and so forth, okay. That's important.
There are $50 billion of assets deployed today, and your compliance and protection policies are deployed today. You have to protect that, okay. Simon said earlier in his chart that the demand for performance and capacity is growing but the IT budget remains virtually flat. So you have to build on what was invested already, not replace it.
And whatever you come up with has to bring benefit to a broader set of applications. All the applications should be able to benefit from what you are building. Otherwise, you have a little niche solution, and the technology won't go anywhere fast.
So look at this picture here: the current server-based SSD solutions before Mt. Rainier -- the solutions that I said deliver the performance. I'm going to walk through these bubbles, and these bubbles are color-coded: green means good, as you can see for yourself; yellow means caution; and red means problem, okay.
So as I said, the server-based, host-based software solutions solved the performance problem. From a virtualization perspective, as long as your virtualization stayed within that single server, life was good.
But the whole idea of virtualization -- of virtualized resources -- is to be able to virtualize across multiple servers, so that's a problem area. Simple to deploy? I'm going to walk you through some of the complexities of these solutions in a minute.
So simplicity of deployment is a problem. They're host-based software solutions; therefore, they're dependent on the operating environment. They can't support shared resources and can't support clusters; therefore, they cannot provide benefit to the large set of applications, okay.
Mt. Rainier, on the other hand, integrates SSDs behind a QLogic SAN HBA, okay, and provides benefit across all of these -- it meets all of the challenges that have been laid out.
And I'm going to walk you through these bubbles, a few at a time, okay, to see how we accomplish this. I made a grandiose claim here, right? So now let me back up this particular claim and show you how we do it.
So first, let's look at the simplicity aspect. You have to deliver the performance, which means the SSDs have to remain within the server itself; it has to be simple to deploy; and it has to be OS-independent. That's what we believe.
So let's first look at the solutions in the market today, okay? Over the last 12 months, the solutions you've seen in the market have an HBA driver -- one of the Fibre Channel HBA drivers, or a CNA driver for that matter. Then people put their SSD in the server, so you have another driver, the SSD driver, okay.
And on top of that, people build their caching solution -- their caching driver, okay. Now, what's important is that for this solution to work, all three of these have to interoperate properly. If I change the version of one driver, that means I have to re-qualify the whole solution again, okay.
Now think about this: if you've got virtual machines running, that means you have more drivers running in the virtual machines. And if you have to cluster this thing, with the amount of software and interoperability that you're going to see, can you imagine the IT guys' headache? It is very complex. And that solution is going to work in one operating environment, and then you're going to need another solution to work in a different one.
So your Windows configuration works very nicely -- proven out, perfect. But that's not enough: you have a Linux configuration, and I need an AIX configuration, I need an HP-UX configuration. So the complexity is very high, guys. And when you have more complex solutions, your operational cost becomes very high and solutions don't get adopted, okay?
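The qualification burden of that stacked-driver model can be sketched with simple combinatorics -- the version counts below are hypothetical, just to show how the matrix explodes:

```python
# Hypothetical qualification matrix for the three-driver host stack
# (HBA driver x SSD driver x caching driver) across operating systems.
# All counts are illustrative assumptions.
operating_systems = ["Windows", "Linux", "AIX", "HP-UX"]
hba_driver_versions = 3
ssd_driver_versions = 3
cache_driver_versions = 2

stacked = (len(operating_systems) * hba_driver_versions
           * ssd_driver_versions * cache_driver_versions)
print(f"Stacked-driver model: {stacked} configurations to qualify")

# Moving the SSD and caching logic into the HBA collapses the matrix to
# one qualified driver stack per operating system.
consolidated = len(operating_systems)
print(f"Consolidated model:   {consolidated} configurations to qualify")
```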
What we did, on the other hand, is we said: let's take this cache driver and this SSD driver -- the combination of the two -- and move that into our HBA itself, okay. Before, with no SSD in the solution at all, you had a QLogic HBA driver; that's how the SAN operated.
When you deploy our solution, it continues to operate the same way. It's all the proven, hardened driver that Roger talked about -- he showed you the complexity behind it and all the qualification we've gone through -- plus an incremental change to that particular driver, a very small incremental change to support this hardware, not the caching algorithm. That's all that changed on the host, okay. All of the complexity moved into this particular card out here.
So that did two things. First, it removed the complexity from the host: the nightmare of managing all those drivers is gone. And two, the SSD cache management -- and, as I'm going to show you next, the clustering aspect that creates a shared data model, where all the SSDs are clustered together across these servers --
all of that code now runs in a very, very controlled environment. I don't have to create five different solutions for five different operating systems. I qualify the solution one time -- the ability to network and all of that, with a solid wall around it -- and my solution is almost like gold, okay.
So we made it OS-independent. Mt. Rainier simplifies the deployment of these SSDs in the server, where the SSDs bring performance to applications. Bottom line: simple to deploy equals lower TCO and broader applicability.
We believe this aspect, along with the next thing that I'm going to talk about -- these two aspects are going to increase SSD adoption in the marketplace. And as I said, we're not supplying SSDs; we are using industry-standard SSDs, okay.
Now, we've talked about the simplicity aspect. I know you may have questions, and we'll take questions later on.
Now let's talk about how it supports clusters -- how it feeds the shared data model that the other guys can't do, multi-server virtualization and so forth. So let's start with clusters themselves, okay.
In this example, we are showing four servers on the left and four SAN disks on the right-hand side, okay. For clusters to work, all four servers have to see all four disks. That's what I call shared data, okay.
So this is a typical cluster. If you look at an Oracle RAC configuration, you might have 20 nodes like this. If you look at virtualization, typical virtualization deployments are anywhere between 8 and 20 nodes. Twenty is kind of a magic number because 20 servers fit in a single rack, okay.
Look at your mail servers, which run your Microsoft clustered applications: 2- and 4-node clusters are very common in the marketplace. So the bottom line is that all four LUNs are seen by all four servers, okay. Now, you put in your direct-attached SSD with the host software, which cannot share the cache.
Now you can understand why that solution doesn't work: when you put this code in here, suddenly there is data being cached in each of these SSDs. If this server has a request for that data, it doesn't know how to go get it, right? There is no connectivity between the two, because all the data is typically kept synchronized by the storage array itself. So it basically breaks the model -- the solution doesn't work.
So what does Mt. Rainier do? Mt. Rainier creates a shared cache model. How does it do it? Your typical HBAs can't talk to each other. They can talk from server to storage and back, but they can't talk among each other -- those are typical storage protocols, okay.
Mt. Rainier, on the other hand, is a technology where the adapters can talk to each other. That technology came right from our data migration product, where for the storage array we had to pretend to be a server, and for the server we had to be a target, to be able to accomplish that particular task.
So the technology came right from there, and these Mt. Rainier HBAs can talk to each other without changing the infrastructure. You can take your data center server, upgrade it with a Mt. Rainier HBA, put SSDs behind that, and now you have a shared cache model, okay. Because they can talk to each other, they get a shared cache model.
What this also does is set you up for a distributed cache model. What do I mean by a distributed cache model? Look at all four disks out here. Each one of the disks is defined to be cached on a different server -- a different Mt. Rainier card. In this example, assume these are 400-gig SSDs sitting in the servers, right? I've just created 1.6 terabytes of unique data cache for this particular application, okay.
If I took the previous direct-attach model, that cached data would have to be duplicated among multiple servers, and you'd have to constantly communicate about what to invalidate, what data I have and what data I don't have, okay. So there's a lot of communication that has to happen for that type of solution to work, and not all 1.6 terabytes of SSD would hold unique data, okay.
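The distributed cache model just described can be sketched as follows. This is a minimal illustration of the ownership idea only -- the names, capacities, and routing logic are my assumptions, not QLogic's implementation:

```python
# Minimal sketch of the distributed cache model: each LUN is cached by
# exactly one adapter, so no cache line is duplicated and the pooled SSD
# capacity holds only unique data.
CACHE_GB_PER_SERVER = 400   # assumed 400 GB SSD behind each adapter

servers = ["server1", "server2", "server3", "server4"]
luns = ["lun_a", "lun_b", "lun_c", "lun_d"]

# Static ownership map: each LUN is assigned to one caching adapter.
owner = {lun: servers[i % len(servers)] for i, lun in enumerate(luns)}

def cache_lookup(requesting_server: str, lun: str) -> str:
    """Route a read to whichever peer owns the cache for this LUN."""
    target = owner[lun]
    if target == requesting_server:
        return f"hit local SSD on {target}"
    # Adapters talk peer-to-peer over the existing SAN fabric.
    return f"forward to {target} over the fabric"

unique_cache_gb = CACHE_GB_PER_SERVER * len(servers)
print(f"Unique pooled cache: {unique_cache_gb} GB")  # 1600 GB, no duplicates
print(cache_lookup("server1", "lun_a"))
print(cache_lookup("server1", "lun_c"))
```

The design point: because ownership is partitioned rather than replicated, there is no invalidation chatter between copies, and the full pooled capacity holds unique data.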
And as I said, this remains transparent to the operating system -- it's OS-independent. Your data protection in the SAN remains intact. This whole thing happens completely behind the OS's back, works in a very controlled environment, and it solves the problem.
So the shared cache brings scalable performance to clustered applications, okay. I started out telling you that enterprise applications require shared storage, and therefore shared caches; that applications demand performance; and that SSDs can solve that particular performance problem.
I could stop here. I have met all the objectives that I set out to meet. I have a shared data model. I can support clusters. I can support virtualization across multiple servers. It's very simple to deploy. All the applications within the data center can now benefit from this.
And in all of the description so far -- the cache model -- the SSDs remain transparent to the host, meaning that as far as the operating system is concerned, the SSDs don't even exist in that model, okay.
But this technology goes even further than this. So let's talk about how it creates a data pool -- how it creates high-performance clustered storage out of these SSDs. What we're talking about here is something you will see transition over the next two to three years. People are actually talking about it -- I just read a study yesterday where they were actually talking about building such storage.
So I'm going to change gears now and show you how we build clustered storage, okay -- high-performance clustered storage out of SSDs deployed within the servers themselves, okay. Applications like Web 2.0 applications and virtual desktops, which work on smaller data sets, can actually benefit from such a thing.
But more importantly: when VMware wants to optimize their file-caching algorithms and put data tiering in there; when Microsoft wants to enhance their NTFS file system and say, hey, we are going to create data tiering -- we know what data is hot and what is cold, we're going to put this tier-zero data inside these particular SSDs, and we're going to move the data in and out as we see it becoming hot or cold.
Or Oracle: at 5 o'clock in the morning, I know my workload is hot, these particular tables are hot, I know what to bring in, and I'll run it from there. So that's all the data tiering they'll be able to do.
So before I start talking about this clustered SSD storage, let's establish one basic, fundamental understanding. In an enterprise data center, you have thousands of servers deployed, right? But only a handful of configurations -- fixed configurations. It's not like you have a thousand servers and a thousand configurations.
A handful meaning maybe 15, maybe 20 different server configurations they've standardized on, and they say, that is what I'm going to deploy, because that is how I'm going to maintain and control my operational cost, okay. So that's a fact.
So based on that assumption, let me walk you through this example of high-performance clustered storage and what it does. In this example, I have one type of server configuration, with server 1 and server 2.
Each has a Mt. Rainier card -- I'm showing SAS SSDs in this example, but you can have PCI SSDs, whichever one you want. The standard configuration says: I'm going to deploy 400-gig SSDs in each of the servers. That's how I'm going to standardize my server.
So what Mt. Rainier does, because the cards can talk to each other, is create a single pool of shared storage -- 800 gigabytes of shared storage in this example. If I had a 4-node cluster, it would be 1.6 terabytes of shared storage. Instead of 400 gig, if I had a terabyte of MLC SSDs sitting here, I'd have a bigger pool of storage that I could create.
And in this example, what we're trying to show is virtual machines running here, each requiring a different capacity: one says give me 50 gig, another says give me 100 gig, and a mission-critical app says I need 600 gig of storage.
The physical capacity is 400 gig, so without Mt. Rainier, without clustering this up, you couldn't even do it. You would have to create another server configuration and say, hey, I have an application that requires 600 gig -- more than 400 gig of storage -- so I need to build another server configuration with a terabyte of storage.
With Mt. Rainier, the SSDs deployed in the servers don't go to waste, and they create very, very high-performance storage. This is all pure SSD storage that we build. Not only can we build a plain, simple LUN out of this allocation, it can also be mirrored across two Mt. Rainiers, so it also provides no single point of failure, okay.
So this is how we create high-performance storage. And because this is SAN storage, if I want to load-balance my servers and vMotion the app from here to there -- no problem. Completely transparent to the operating system, because the operating system thinks it is SAN storage, shared storage. Therefore, all of this is very simple.
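The pooling example above can be sketched with a toy allocator. This is purely illustrative, under assumed names -- the product's actual allocation logic isn't described in the talk -- but it shows the key property: a single LUN can be larger than any one server's physical SSD:

```python
# Toy allocator for the pooled-SSD example: two servers with 400 GB SSDs
# each are presented as one 800 GB shared pool.
POOL = {"server1": 400, "server2": 400}   # GB free behind each adapter

def allocate(lun_name: str, size_gb: int) -> dict:
    """Carve a LUN from the pool, spanning servers if necessary."""
    if size_gb > sum(POOL.values()):
        raise ValueError("pool exhausted")
    placement = {}
    remaining = size_gb
    for server, free in POOL.items():
        take = min(free, remaining)
        if take:
            placement[server] = take
            POOL[server] -= take
            remaining -= take
        if remaining == 0:
            break
    return placement

print(allocate("vm_small", 50))    # fits on one server's SSD
print(allocate("vm_medium", 100))
print(allocate("vm_large", 600))   # spans both servers' SSDs
```

Without pooling, the 600 GB request would force a new server configuration; with pooling, it simply spans the adapters, which is the point of the example in the talk.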
Transparent to the operating environment, high-performance clustered storage, okay. So I've told you basically three things: it's simple to deploy; I can create a shared cache model with Mt. Rainier; and on top of that, I can build high-performance storage out of this, okay.
Previously, SSDs would benefit only a very small set of applications. Now, with this technology, all applications deployed in the enterprise can benefit from SSD technology. And this is what will help close the I/O performance gap problem that we started out with. This is going to enable applications to run faster, okay. I cannot overstate this to you.
So let me tell you this, that this is just a summary chart which basically says everything became green. I started out telling you in blue until, what the requirements are. Here are the requirements and I cannot summarize that before. So going to the next chart, SSDs so we can take PCI SSDs, we can take SAS SSDs, industry standard what’s in green is what QLogic has provided, what’s in silver is the industry standard stuff, okay.
We are not in SSD business. This is managed like an HBA, okay. It is application transparent. It is OS independent. It is infrastructure independent. You don’t have to put in your any new networks to connect this particular piece. And it is subsystem agnostics. That means works with HP arrays, EMC arrays, NetApp arrays, IBM arrays you name it and it enrolls.
The same interoperability that Roger talked about exists with this particular product, because it’s highly leveraged. So why was QLogic able to pull this off, okay?
First, Roger described this particular picture; he had some different language on it, but the bottom line is SAN adapter leadership. We’ve done this for 15 years; we know how to connect a server to a SAN, and we know the amount of complexity we’ve worked through over time, right.
On top of this, we built data mobility solutions, data migration solutions, which means we earned our stripes: enterprises trust our ability to redirect data, because caching is all about redirecting data, where do I put this data and where do I get it back from. We earned our stripes in that area, and we earned the trust of our Tier 1 storage OEMs by delivering those solutions.
We don’t believe any HBA vendor has that capability, okay. Without this foundation, and the learning process one has to go through to get there, a pure-play adapter vendor can’t match the unique competitive advantage we bring with the Mt. Rainier technology announced today.
And this is why we believe we’re in a very unique position. It took us three years to deliver the data migration solution, and we have been able to leverage a lot of that software into this particular product. Even then, it took us about 2 to 2.5 years to develop this technology, okay.
So this is complex stuff. I’m very proud of my team that we’ve been able to pull this thing off. The technology is here.
In summary, it integrates server-side SSDs into storage while maintaining the $50 billion worth of SAN assets you have deployed, and your data protection and compliance policies. It will unleash SSD performance for a broader set of applications: enterprise applications, clustered and virtualized.
It dramatically simplifies server-side caching deployment and management, and it allows us to capitalize on the large number of Fibre Channel ports installed in servers today. The technology is not limited to Fibre Channel, because it is actually wire agnostic; we can deploy it with a CNA. As we announced, our first product will be based on Fibre Channel technology, because that’s the huge installed base, and opportunity, that we see today.
And we are taking advantage of industry-standard SSDs. We are not building an SSD device, okay. We are building a solution that integrates SSDs in the server with the SAN; that is what we’re building. We are building a solution that will enable enterprise applications to run faster; that is what we’re building.
So overall, in summary, the SSD opportunity is all expansion market. As I said, from a data migration perspective, the biggest market is our ability to participate in the upcoming cloud market, not just meeting existing data center needs, and Mt. Rainier is a greenfield opportunity. We believe we are positioned right, with the right set of attributes in the product.
We have a strong foothold in our Fibre Channel position. We will now take advantage of that and continue to drive innovation in this particular product. I believe this is the beginning of the next wave, like what happened with SAN. We believe this technology will do for enterprise applications what SAN did for them 15 years ago.
SAN enabled enterprise applications to be highly available and clustered. This technology will improve their performance and close the I/O performance gap for enterprise applications. So what happened with SAN 15 years ago, I think this technology has the potential to repeat.
Having said that, we’re introducing another member of the ecosystem: the SSD-optimized, SAN shared-cache model, Mt. Rainier. Thank you very much. And now I’m going to introduce Tom Joyce. There’s a video presentation from Tom Joyce, Vice President of HP Storage Marketing and Strategy Operations.
QLogic and HP have a very strong and deep relationship, and we’ve been working together for many years in a variety of areas. Especially over the last few years, our relationship with QLogic has become even more strategic. They touch our business in a variety of different ways: certainly in our storage business, our storage networking business, our server business, and also our own networking business.
We are finding more and more ways to work with QLogic. The reason is that the world we collectively live in has changed fundamentally, and networking technology has been a big part of that fundamental change, those tectonic shifts in the landscape that we live in.
We certainly see it in IT, but I think we all see it in our day-to-day lives. Everything we do, whether in our personal lives or our business lives, has been changed by changes in networking: IP networking, the internet, virtualization and cloud computing, the personal devices we all use. As those things have happened, it has really impacted our customers, who run IT operations and have to serve up all of those capabilities to their users and their end customers.
So, over the last few years, HP and every other vendor in the industry have had to go out, look at their capabilities, and figure out how to adapt to this new world. About two and a half years ago, HP decided we couldn’t adapt the old technologies anymore. We really had to invent entirely new things, and that’s what we’ve done.
We came up with the idea of converged infrastructure, which brings together server, storage, and networking technologies in unique ways. And in each one of those areas we invested in new technologies.
In the storage space, we actually acquired, and built with our own internal development, an entirely new product line. The centerpiece of that today is our 3PAR storage array platform. You might recall that was a major acquisition for HP a little less than two years ago, and now it’s grown to be the largest part of our product line; it’s the largest storage array platform we have, and it’s growing faster than anything else we have.
And QLogic has been a part of that advancement in our converged storage product line, helping us provide advanced networking solutions that make the product better. We’ve also worked with them over the last year on something new that we call the flat SAN.
A SAN is a storage area network. If you look historically at storage area networks, they were very complex, and this new flat SAN technology, which was jointly developed with QLogic, allows us to go from lots of parts and a tremendous amount of complexity to an environment that’s extremely simple, which enables our customers to save a lot of money and move more quickly in implementing these new technologies.
So, QLogic has become really an embedded part of our strategy. They’ve become the largest provider of Fibre Channel HBAs to our business. They’ve become one of the key providers of switching technology, migration technology and new kinds of storage area networking fabrics in the IP networking space as well as in the Fibre Channel networking space.
Now one of the key questions is: how does this converged infrastructure technology that HP and QLogic have brought to market affect our customers? What’s the end value for the customer?
It helps customers in a variety of ways. First of all, it maximizes access to data, and data, or information, is one of the key strategic assets all of our customers have and are trying to leverage more. It also speeds up application deployment, allows them to get into new businesses more quickly than their competitors, and lets them innovate more rapidly as they go forward.
It allows them to build a lot more flexibility into their businesses, so they are not stuck with legacy architectures that aren’t capable of dealing with a lot of the new requirements we have in converged computing and the cloud.
HP and QLogic have many customers together and they run the gamut from very small companies to some of the largest companies in the world. But let’s take a look at one particular example of why a business chooses to work with HP and QLogic.
Xing is an entertainment company in Japan. They are very typical of the customer that chooses to work with us: like everybody, they are looking to reduce costs, but they are also looking to keep things simple.
And we’ve been able to provide a complete solution that’s converged across HP ProLiant servers, HP storage platforms, and QLogic’s storage area networking bundle, which made it easy for Xing to get up and running very quickly, at a lower cost and a lower long-term operational cost than they had historically been able to achieve.
So, where do we go from here? What’s the vision? History is our guide. We’re expecting that the rate of change is not just going to continue, but it’s going to accelerate. And that networking technology and innovation in networking is going to be the centerpiece of our strategy going forward.
And QLogic is going to be a key strategic partner. They have built a tremendous amount of trust throughout our organization. We are working with them in our storage business and our storage networking business, as well as in our own HP networking business and especially in our server business.
They work with us really across the board, and they bring us innovative solutions that we’ve been able to leverage. As we go forward, we think we’re really still in the very beginning stages of some major shifts, towards cloud and towards storage networking at scale. And we think there are a tremendous number of opportunities for us to work with QLogic to bring out new and interesting solutions that our collective competitors really can’t duplicate.
Hi. So, for lack of a better term, I’m the guy that you actually get to ask how people are going to use this stuff. I’ll give you a little bit of an overview of our infrastructure, and we can be as interactive as you guys want, right, or I can talk forever; either way works.
So, Morgan Stanley is a fairly sizable infrastructure. Enterprise computing, for us, is what normal people call distributed systems. I run server, storage, database, and middleware, and overall the budget for me is between $500 million and $600 million. I have 75,000 servers. I have something like 120 terabytes of raw storage that grows at a CAGR of about 75% per year.
And in aggregate, I would have to actually count them, but I have on the order of 10,000 QLogic cards scattered around the infrastructure in one shape or another. So, I’ll give you my 50,000-foot view of how we see Fibre Channel versus 10-gig versus FCoE versus whatever, and you guys can ask me what you want to ask.
So basically, Fibre Channel works and we’re happy with it. That’s the short version of a long answer. I do a lot of 10-gig, and 10-gig is growing; I look at QLogic as one of the providers for that as well, but fundamentally my current relationship with QLogic happens to be in the Fibre Channel space.
What makes storage networks work, whether that happens to be FCoE or iSCSI or Fibre Channel SAN, is that they are isolated from all of the other stuff that is generally going on. So whether I build it with FCoE, or Fibre Channel, or iSCSI, it kind of looks the same.
And frankly, I don’t know how much you guys have looked at Fibre Channel versus 10-gig card and port pricing, but in many cases Fibre Channel is just cheaper. The whole logic was that if you were going to buy this and this anyway, you could combine them.
In some of the virtualization environments we’re pure 10-gig with no SAN, going to a NAS infrastructure. But for the things that are I/O intensive, server-to-array intensive, and to put it in perspective, I think I’ve got 3,200 arrays in the environment, we use SAN. We don’t use SAN because we’re unable to use Ethernet. We don’t use SAN because we like spending more money. We use SAN because it works and it’s good.
And frankly, there is a piece of this market that was driven more by the switch providers, one switch provider in particular, than by customers. The whole view was: the world is going to converge. I, as a client, never needed to converge. It wasn’t solving a problem I had. I didn’t need to get rid of Fibre Channel. I didn’t need to adopt 10-gig. I needed to do virtualization, I needed to do databases, I needed to do these other things, and I needed solutions that allowed me to do that.
Put convergence simply in economic terms: between 30% and 40% of my infrastructure is SAN connected. The logic of FCoE, from the switch provider’s point of view, was: take 100% of your infrastructure and put it on FCoE, so that the 30% that needed it would be cheaper. But the 70% that didn’t need it would be more expensive. And then we all sit here wondering why FCoE is not hugely well adopted.
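The convergence economics he describes can be put in rough arithmetic. This is a minimal sketch; the per-port prices and the server count are hypothetical illustrations, not figures from the talk.

```python
# Rough sketch of the convergence math described above: converging 100%
# of servers onto FCoE makes the ~35% that need SAN cheaper, but makes
# the ~65% that never needed it more expensive. All prices hypothetical.

def port_costs(n_servers, san_fraction, fc_port, eth_port, fcoe_port):
    """Return (split_build_cost, converged_build_cost) for the fleet."""
    san = int(n_servers * san_fraction)
    # Split build: SAN-attached servers carry FC + Ethernet; the rest Ethernet only.
    split = san * (fc_port + eth_port) + (n_servers - san) * eth_port
    # Converged build: every server carries a converged FCoE port.
    converged = n_servers * fcoe_port
    return split, converged

split, converged = port_costs(
    n_servers=1000,
    san_fraction=0.35,   # "between 30% and 40% of my infrastructure is SAN connected"
    fc_port=800, eth_port=400, fcoe_port=900)
print(split, converged)  # the converged build costs more fleet-wide
```

With these placeholder prices, the converged build is more expensive across the whole fleet even though the SAN-attached minority gets cheaper, which is the adoption problem he points to.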
So anyway, we believe in SAN, but it’s not the hugest growth opportunity for us, right. Am I buying seven times more SAN than I bought last year? No. I am buying a lot more storage, absolutely, at a 75% CAGR, and I’m attaching things to that system. The one other piece I’d add, mostly because of what QLogic announced here, is how we think of flash.
There is a view of the world, and it’s probably a valid view if I’m Apple or Facebook, that says 100% of my storage could be flash. You’re all analysts, you do this for a living; I’m not, I buy stuff. I have tried to do the math, backing into how many foundries would need to be built to handle one year of data growth, and I can’t get within an order of magnitude of flash ever getting close to replacing spinning disk.
So my goal is not to replace spinning disk with flash. My goal is: how do I get the cheapest, nastiest spinning disk available, which for the record right now happens to be the 7,200 RPM SAS drive from Seagate, and make it appear to be the fastest thing in the world.
And to be clear, I’m not a current customer, so this is easy for me to say. But how we look at this problem is: yes, most of our systems are clustered. Some of them are clustered across wide distances, some across short distances. For the systems that are I/O intensive, we want to make sure the data can be accessed when the server is down, because the short answer is servers die, right.
And what we have started to do with PCI flash cards is put them in and take the things that don’t need to be replicated off of the SAN. As an example, most databases have something called tempdb, which is kind of like working space. When they are trying to figure out the best index, or the fastest way to do something, or how to aggregate or sort, they write to that scratch space. But the scratch space is thrown away whenever the database restarts.
So we did something simple that existing technology already allowed us to do: we took tempdb off the SAN. Now, the interesting impact for us was that it took 38% of my writes off the SAN, because 38% of my writes went to scratch space that never needed to go to shared storage, never needed to be replicated, never needed to go anywhere outside the server. But that storage has to be fast; in fact, tempdb may need to be the fastest thing in the server, because it’s the working space. So we started deploying that.
The way we see it, and the data isn’t fully in yet, right, so this is what we believe and hope the data will show as it all evolves, is that caching at the server with slow spinning disk on the back end gives you an economical way to do persistence with a high-speed solution.
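The model he sketches, a server-side cache in front of cheap, slow persistent storage, is essentially a write-through cache. A minimal sketch in Python, with dictionaries standing in for the flash tier and the backing array (illustrative names, not any product's API):

```python
# Minimal write-through cache sketch: reads are served from fast local
# flash when possible; writes always land on the slow, cheap backing
# store as well, so a server crash loses only the cache, not the data.

class WriteThroughCache:
    def __init__(self, backing_store):
        self.cache = {}               # stands in for local server-side flash
        self.backing = backing_store  # stands in for cheap spinning disk
        self.hits = self.misses = 0

    def write(self, block, data):
        self.cache[block] = data      # keep a hot copy locally...
        self.backing[block] = data    # ...but always persist to the array

    def read(self, block):
        if block in self.cache:       # fast path: served from flash
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]    # slow path: go to the array
        self.cache[block] = data      # warm the cache for next time
        return data

array = {"cold": b"rarely used"}
c = WriteThroughCache(array)
c.write("hot", b"transaction row")
assert array["hot"] == b"transaction row"  # persisted despite being cached
```

Because every write lands on the backing store immediately, the persistence story doesn't depend on the flash at all, which is what makes the cheap, nasty disk behind it safe to use.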
So while Mt. Rainier, for example, probably will make me faster, the bigger win for me as the customer is not just speed. For the most part, there are things that have to go faster, but the thing that makes this interesting in a more pervasive sense is that when you sell fast, there is maybe 10% of your world that needs to be fast; there is 100% of your world that wants to be cheap.
So the thing we look at is a cache I can put in every database server, as an example, that takes the writes off my SAN, which means I need fewer ports on the switch side, fewer arrays, fewer disks to generate the IOPS, and I can switch to multi-terabyte drives instead of fast drives.
And then it’s all about making the thing behind the cache the cheapest, ugliest, nastiest thing you can buy. That’s the win. So anyway, that’s our model. What has limited us to date, frankly, is that the hardware has been there and the software hasn’t worked. You can list a number of people that sell PCI flash cards, right: there’s the obvious one, and then there are like five other people directly behind them.
The challenge for us has been the software that they’ve generally bought. Most of these companies have gone out and bought software companies rather than building the software internally, and it just doesn’t work well, right. I have a lot of engineers, and I have a decent amount of money; with all respect to QLogic, if it really came down to just engineering another driver, I would be okay with that. It doesn’t scare me to engineer another driver.
The problem is that right now the software doesn’t work. It will get better. But the model, I want to cache on the server whatever is accessed and write it through to persistent, cheap, nasty storage, is totally valid. And it’s valid in a way that I don’t think most of the industry has realized yet, right.
To put it in perspective, and to scare the crap out of a vendor for whom we are probably the second biggest customer, HP being the biggest one: why do you buy the VMX, I’m sorry, VMAX, they change the name every couple of years, why do you buy the EMC VMAX? You have a $10,000 server-based solution that goes probably 10 to 20 times faster than the best thing they’re going to give you, and you can put random Western Digital JBOD on the other side of it, because you no longer care.
And I won’t pretend; this is just what I see now, when I look at our data access patterns and how our server-to-storage division is done. There is a place for the super high-end RAM/flash array, whatever EMC renames theirs, or Violin, or Texas Memory Systems, where really the entire thing needs to be in flash.
But that’s 10% of your world. It may be the most important 10%, and I’m not saying I’m not going to buy those things too, because it is the most important 10% and the most important 10% will pay five times as much.
But the thing that actually moves the storage environment forward is never the 10%; it’s what makes the other 90% cheaper, and that’s how we see cache. If I can get a better cache at the end system, at the server, I can actually start spending less on the arrays. And that’s the motivation for me, as well as, not to put words in HP’s mouth here, the motivation for the server vendors, given what’s happened.
Virtualization has taken all the money from the server providers and given it to the storage guys. This has pissed off the server guys. They don’t have any margin anymore. The only way for them to get back what they had 5 or 10 years ago, is actually to begin selling things that take the margin away from storage. And that’s where the margin has gone. It’s gone to VMware and it’s gone to storage. Anyway, so I could ramble forever. What would you like to know?
Sure, I’ll repeat the question because a few folks couldn’t hear it. The question was: what about the software doesn’t work right now? I’ll give one example that happens to be true of the current dominant provider, i.e. Fusion-io. Fusion-io, when it first came out, had the view that we will cache everything, which is good; we’ll do a write-through cache, also good; but we will only support one-to-one, which is bad, because no database I have in the world is one-to-one.
And two, it doesn’t support, I try to avoid as many acronyms as possible but I’ll use this one, SCSI-3 persistent reservations. The thing that makes the magic of multiple systems connecting to the same array without corrupting each other work is something that has been common in the array world for five years, called SCSI-3 reservations. It says: I own this, you can’t write to it.
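The reservation semantics he quotes ("I own this, you can't write to it") can be illustrated with a toy model. This is a sketch of the behavior only, not the actual SCSI-3 command set; the class and node names are made up.

```python
# Toy model of SCSI-3 persistent-reservation behavior: the node holding
# the reservation may write; every other node is fenced off until the
# reservation is released. Illustrative only, not the real protocol.

class ReservedLUN:
    def __init__(self):
        self.owner = None
        self.blocks = {}

    def reserve(self, node):
        """Take the reservation if nobody holds it; else refuse."""
        if self.owner is None:
            self.owner = node
            return True
        return False

    def release(self, node):
        if self.owner == node:
            self.owner = None

    def write(self, node, block, data):
        # "I own this, you can't write to it."
        if self.owner is not None and self.owner != node:
            raise PermissionError(f"{node}: reservation held by {self.owner}")
        self.blocks[block] = data

lun = ReservedLUN()
assert lun.reserve("nodeA")
lun.write("nodeA", 0, b"ok")        # the owner may write
try:
    lun.write("nodeB", 0, b"bad")   # a non-owner is fenced off
except PermissionError:
    pass
```

The handoff he says the PCI flash vendors missed is exactly the release/reserve step here: without it, two nodes writing to the same blocks corrupt each other.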
And because the PCI flash guys came out of a DAS world, they are not used to anybody else getting to the data. So they didn’t start out thinking about, wait a second, when somebody else wants to get to this, how do I do that handoff?
I’m not saying they don’t realize this is now an issue, and I’m not saying they’re not working on it, but that’s where we had problems. Those are the two specific areas: node counts and the persistent reservation issue. And there is a secondary issue of what I choose to prioritize in my cache.
The odd part is the thing that makes fast good for us: everybody thinks it’s write transactions, but writes are pretty free for us, because writes go to the array cache, which is pretty much going to RAM.
So, for a given database, I want to not cache my log volume, because caching it adds no value, but I really want to cache my index volume, because that’s where most of the reads and writes that are actually gating my infrastructure are going. That’s the third piece of sophistication: if I had the first two without it, I’d still be doing this, but the third thing really makes it much more usable for us.
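The per-volume prioritization he asks for amounts to a cache admission policy keyed by LUN. A minimal sketch, with a made-up policy table; the LUN names are illustrative, not from any product.

```python
# Hypothetical per-LUN cache admission policy: don't cache the log LUN
# (sequential, never re-read), do cache the read-hot index and data LUNs.

CACHE_POLICY = {
    "log_lun":   False,  # sequential redo log: caching adds no value
    "index_lun": True,   # hot random reads: cache aggressively
    "data_lun":  True,   # worth caching, subject to capacity
}

def admit(lun, policy=CACHE_POLICY):
    """Admission decision: only blocks from opted-in LUNs enter the cache."""
    return policy.get(lun, False)  # unknown LUNs stay out by default

# Walk a toy I/O trace and count which operations are cache candidates.
trace = [("log_lun", 0), ("index_lun", 7), ("index_lun", 7), ("data_lun", 3)]
cached_ops = sum(1 for lun, _ in trace if admit(lun))
print(cached_ops)
```

The point of keying the policy by LUN rather than guessing block by block is that the administrator already knows which volumes hold logs versus indexes, so no automagical detection is needed.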
I really appreciate you making this comment, because that’s exactly what I was talking about when I said shared infrastructure, and what he describes is only a small part of the problem. He is probably running Microsoft-type clusters in his configuration, but if you run that same thing in an Oracle configuration, you are going to have a bigger problem: you can’t share the data. And that is why I said there is HBA SAN connectivity technology and storage technology, and we bring them together. Thank you for validating the business statement. And I think…
These guys occasionally, they...
I think there is another thing you mentioned that’s interesting. You said the caching policy is based on LUNs, because some LUNs he wants cached and some he doesn’t. And that’s exactly why we showed the distributed cache LUN policy, which is based on individual LUNs. So Mt. Rainier technology would support exactly what you are saying: I have a log LUN here, I don’t care about caching it.
I get enough performance out of it as is. I have this other LUN, which is my database table LUN, and there is a particular part of that LUN I really want to cache, because that’s where I have hot data. That is what it supports, so thank you.
I wasn’t paid for this, by the way. Well, they gave me a Diet Coke, which is one of the only two things I’m able to consume, but nothing else.
So, to repeat the question in case people didn’t hear, and to make sure I’ve got it right: who owns the logic of what to cache and where, do you need to optimize the application for flash, and as those applications evolve, who is advantaged, the PCI flash guys or the Fibre Channel flash HBA guys?
On the application side, the things that I most want to cache are aware enough. I would like them to be more aware, right, but the key things, for example, are separation of log and data, and the ability to think of things in different groups and different LUNs and manage them, whether it’s a traditional database like Oracle or DB2 or Sybase, or a newer kind of database like Cassandra or something else.
They have pretty much done 90% of the work. I’m not saying that if they did more, it wouldn’t get better, but the 90% means I can get most of the advantage I want without requiring application change, right. Which is back to that point: to get real adoption, I can’t assume I’m going to get a lot of change from the application guys, right. So I want to leverage that base. The second question is who is advantaged.
To be honest, it’s not clear to me. What I will say is that I will have a Fibre Channel card in the server that I want to optimize, and so the question is what else do I have: whether that’s flash on the Fibre Channel card, or flash with a different driver connected alongside the Fibre Channel card.
I’m relatively agnostic, but particularly in the database world, I will have a Fibre Channel card. I think it’s a little different in the virtualization world, which is mostly 10-gig; some people do virtualization with SAN, don’t get me wrong, but most people, ourselves included, do virtualization with 10-gig. It then goes back to the same logic: I will have a 10-gig card, so what else do I have? The path owner will always be there, and the real question is how efficient that add-on is, whether it’s in the path, or a different card, or something else.
Now, the one thing that’s interesting is that to the extent the enterprise server market has become largely blades, you generally only have one, maybe two, PCI card slots in a blade.
So you may not have room for a second card, and there is an advantage there. Now, I don’t want to overplay that advantage, not to undercut QLogic, but you no longer have to use a PCI card just to get a Fibre Channel connection; you can get a LOM from QLogic. So it’s not like blades exclude the concept of having a PCI card.
But one is a useful number for penetrating the blade market, and the blade market is relatively big in the enterprise.
I’ve got two questions. One, how do you decide what data is cached at the server, like with a super driver today; is it relevant for something like (inaudible) to be aligned, or work with (inaudible)?
And then secondly, when you decide what can be (inaudible), is that all you want, or is there any other differentiation that you can give us?
So the two questions were: how do we pick what to cache and what not to cache, and how do we pick 10-gig cards. We pick the thing that is read the most often to cache, and that’s not intuitive. You would think writes are gating; we always think that because writes are slower, writes are everything. But the short answer is that with most databases these days, writes are kind of a non-event; the database world has figured out enough about how to make them efficient. There are exceptions to this, right.
This is not an absolute rule, but more consistently, particularly as data sets get larger, it’s reads that become the gating factor. I want to read the index, particularly indexes over the data, because I may not want to read all the data, but to find the piece of data I want, it’s generally the index, right.
So if I were to pick and force a behavior, I would say index first, then data, not transaction logs, but that’s a default. The interesting thing we found, because the counter-theory to this frankly is Compellent or FAST, the automagical tiering theory, is that it doesn’t work.
I mean, we tried it. We loved it. We liked the concept. We put multiple providers in there. What basically ends up happening is it spends so much time trying to figure out what to cache that it kills itself. We actually got better performance out of Compellent and EMC and other arrays
by turning off the SSDs and the tiering and just letting it be a big dumb array than we did telling it to automagically move data. The key difference is that with one server going to an array, the array can figure you out and nicely stack everything the way it should be stacked. But I’ve got 40 to 60 servers hitting an array, and it doesn’t have enough tier 0, or whatever you call that thing, to make that model work.
So it’s thrashing; it’s killing itself guessing what it should be doing, and we actually found it faster to turn it all off. The second question, how we pick 10-gig cards, is really about who has the best baseball tickets. No, sorry: the 10-gig selection criteria have generally been based on who works best with virtualization, who has the best integration with VMware, because VMware is definitely what’s driving our 10-gig. The two drivers for us are VMware and super-low-latency trading; if we weren’t a market maker, it would just be VMware.
So VMware integration, SR-IOV, all the extra features in that realm, all that extra stuff, that’s the kind of stuff that helps us pick.
I’m actually going to cut off the questions, Steve, and ask you one last question on behalf of the room. Do you believe your views of the world are consistent with your brethren across Wall Street, or do you believe you are an outlier in some way as you look at the opportunities that exist with the technology QLogic serves to the market?
I think we have typically found ourselves a little bit ahead of most of Wall Street, but generally consistent. If I look at the macro questions: we have clustered data, whether that’s clustered across sites or not, and that is a consistent issue; we have a consistent need to drive down cost; and we are all experimenting with PCI flash-based solutions.
I haven’t seen anybody that’s been able to make it work effectively as a cache model yet. I obviously can’t speak for anybody else, and while we may be a little bit ahead, I don’t see our needs as different.
Understood. Once again Steve, thanks very much we appreciate your time.
Excellent. Thank you.
Thanks guys. Madam CFO.
Okay. So far you have heard from our team about the tremendous opportunities ahead of QLogic. You have also heard why QLogic is uniquely positioned to address those opportunities, and the team has talked about the good progress and traction we’ve made so far.
So what does that mean financially? I’ll talk about that in a minute, but the key takeaway message we want you to remember is, as Simon said earlier, that the way we believe we can deliver long-term shareholder value is to drive profitable growth. That’s the key thing: profitable growth.
So while we grow revenue on the top line, we also need to get a return on investment. Before I look at our target long-term model, I want to look back briefly, so you can see how QLogic has performed and get a perspective on how we have managed our business as a company. As most of you know, the most useful long-term value-creation measure is return on invested capital.
What we’ve done is that we compared QLogic’s return on invested capital with some leading technology companies that you all very know in the world. And last year we ranked number one. And if you calculate last three year average, again we’ve ranked number one.
So if you look back many, many years that you will see the same results. So as a company we have consistently delivered a very strong performance probably during the past decade. We’re a very disciplined operator. We meant our business to be disciplined. So, if you look into this and although really it’s a reflection of our technology and market leadership.
Our gross margin is really high. It's a reflection of the competitiveness of our products. It's also a reflection of our consistent execution track record over many, many years. And what has come out of this financial model is that during the last five years we generated about $800 million of free cash flow and returned over $1 billion of cash to shareholders through buybacks.
So we're very proud of that performance. We know that, going forward, in order to create more value we really need to continue to deliver both top-line growth and profitability.
Now, let's talk about the opportunities we have. Our team has talked about the great opportunities we have carved out for ourselves. Traditionally, as you know, we played only in the Fibre Channel market, but six years ago we started investing in converged and 10-gig Ethernet, and now we can participate in the 10-gig and converged market opportunity.
On the host side, our team talked about it; it's about a $5 billion opportunity. On the converged switch side, again, there is about a $500 million opportunity. So overall, from a company perspective, the market we can address is going to grow from $2 billion to $3.4 billion. That translates to a 19% CAGR.
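As a sanity check on the arithmetic here, a minimal sketch of the CAGR implied by a $2 billion market growing to $3.4 billion. The roughly three-year horizon is my assumption to make the quoted 19% work out; the presentation does not state the horizon explicitly.

```python
def cagr(start, end, years):
    """Compound annual growth rate for a market going from start to end over years."""
    return (end / start) ** (1.0 / years) - 1.0

# $2B addressable market growing to $3.4B over an assumed ~3-year horizon
rate = cagr(2.0, 3.4, 3)
print(f"{rate:.1%}")  # roughly 19%, consistent with the CAGR quoted above
```

A four- or five-year horizon would instead imply roughly 14% or 11%, so the stated 19% figure only reconciles with a short horizon.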
So overall, the market expansion opportunity QLogic can potentially participate in is tremendous. Now, let's look at the model. I want to briefly talk about fiscal '13; we're already halfway there. For Q1, our gross margin was 67.6%, and for Q2 we guided 67% to 68%. Our gross margin will be quite stable for the rest of the year, in the range of 67% to 68%.
On the operating expense side, as we discussed before, we have increased R&D spending to drive revenue growth, so R&D will be around 28% to 29% of revenue. On the sales and marketing and G&A side, we will continue to aggressively manage our costs.
Sales and marketing will be approximately 13% and G&A around 5% for fiscal '13. What this yields is an operating margin of approximately 20% for fiscal '13, largely because of the revenue impact of the current weak macroeconomic environment and uncertainty. Really, this is where we start. Going forward, as we grow revenue, operating margin will go up; our model is very leverageable.
So now let's look at our long-term target model. On the gross margin side, our target is between 65% and 67%. As many of you know, one of the major drivers of our gross margin is really the revenue mix.
Our model basically assumes our core market will be largely flat. The future revenue growth will come from our expansion platforms, which are the converged and 10-gig Ethernet opportunities on the host side and also on the switching side. Of course, there is also the storage target ASIC opportunity and other expansion opportunities.
On the operating expense side, the R&D target will be between 25% and 29%. This is higher than we used to be, and the reason is that, given the opportunities we have and the potential to drive top-line revenue growth, you really need innovative products. You really need to drive future innovation to deliver that top-line revenue growth.
We'll continue to manage our sales, marketing and G&A costs, increasing productivity and efficiency, keeping sales and marketing at 11% to 14% of revenue and G&A at 4% to 5%. So overall, our operating margin right now, because of the revenue weakness on the economic side, will be around 20%, but as we grow revenue going forward, our operating margin will go up.
We basically assume a tax rate of around 18%; that's why the net margin here is 17% to 21%. So that's the target model. I think one question probably on your mind is how Mt. Rainier technology revenue contributes to this model.
Frankly, in this model we made a very lean assumption about Mt. Rainier revenue, because as you know it takes time to launch a product in our market. If there is meaningful revenue faster, it's all upside for us. That's basically all I have.
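To see how the target-model percentages above stack up, here is a minimal sketch using the midpoints of the ranges quoted (the midpoint choice is my own assumption; where revenue lands within each range will move the result):

```python
# Midpoints of the long-term target model ranges quoted above (percent of revenue)
gross_margin = 66.0   # 65%-67% gross margin
rnd          = 27.0   # 25%-29% R&D
sales_mktg   = 12.5   # 11%-14% sales and marketing
gna          = 4.5    # 4%-5% G&A
tax_rate     = 0.18   # ~18% assumed tax rate

operating_margin = gross_margin - rnd - sales_mktg - gna
net_margin = operating_margin * (1.0 - tax_rate)

print(f"operating margin: {operating_margin:.1f}%")  # 22.0%
print(f"net margin: {net_margin:.1f}%")              # 18.0%, inside the 17%-21% target
```

At the favorable ends of the ranges the operating margin reaches 27%; at the unfavorable ends it falls to 17%, which is why the near-term ~20% figure sits at the low end of the target band.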
I will turn it over to Simon so he can give you a closing summary.
Thanks, Jean. So hopefully we've given you a clear perspective on why we believe we are going to be able to deliver sustainable growth, why we believe we continue to have a compelling business model, and why we believe the combination of those two will continue to deliver an expansion of shareholder value.
The innovation around 16 gig, around 10 gig going to 40 gig Ethernet, and the innovation associated with solutions such as Mt. Rainier are the ways QLogic will continue to differentiate in the markets it serves. We have an incredible foundation to work from.
I think the comments that you heard from my team, that you heard from our largest OEM customer, and the comments that you heard completely unrehearsed from a significant end user of our technologies support the thesis that we are absolutely on the right track to deliver that expansion of shareholder value.
So with that, the vision remains to be the market leader in high-performance data center connectivity. Fair warning: it's Q&A, then cocktails, so you guys aren't confused. Why don't we start, and to the extent I feel it appropriate to call on the management team to answer the questions, we'll do that as well, okay.
So, I’m not going to -- they’re here for a reason, so why don’t we start with Mr. Bachman. [Maurice] behind you.
Keith Bachman - BMO Capital Markets
Thank you for calling on me before Aaron. Just want to hear a little bit more context about the area -- we didn't hear anything about when the product would rev, competitive pricing, the OEM qual cycles anymore...
Yeah. So we can give you a little more, Keith. The very first samples of the product were delivered to a customer, the first customer to take samples, on March the 31st -- 23rd, that's right, the 23rd. The reason he knows is he had an MBO associated with it, okay.
So we originally delivered samples of the product to a Tier 1 OEM on March 23rd. We subsequently delivered samples of the product to other OEM customers as well. And we would expect that, yeah, qual cycles take a long time in this industry nowadays.
So we're not expecting any revenue until around the middle of next year at this point in time associated with Rainier. And for the next generation of solutions, the derivatives of the first Rainier, we would expect activity around those once Rainier is production released.
We will go to Amit. Go ahead.
Amit Daryanani - RBC Capital Markets
Thanks. Maybe I'll just ask a question or two on Mt. Rainier.
Shishir, up here.
Amit Daryanani - RBC Capital Markets
So my understanding is that right now it's essentially a Fibre Channel-centric product. Is there a roadmap to have a 10-gig Ethernet converged product as well?
That was actually in the slides, yeah. So the product today is a Fibre Channel product and is designed to take advantage, as you saw in the slides, of the $50 billion of Fibre Channel infrastructure and the 12 million ports of QLogic capability that have already been deployed, right. So we're going to leverage that, and the next generation of the product will clearly bring different protocols.
Let me add one more thing to it. The fundamental technology of Mt. Rainier is wire-protocol agnostic. The reason, as I've said, we picked Fibre Channel to be the first one is because of the installed base that we have access to. And as I mentioned, 10 gig and, like Simon said, the next-generation products on our roadmap will start to roll out sometime toward the end of next year.
So the fundamental technology doesn't really change, and that is where we get a lot of leverage from the multiprotocol routing solution that we did back then. When I talked about intelligent storage routers and how we leverage a lot of things, that's where it came from. All of this software is wire-protocol agnostic.
Amit Daryanani - RBC Capital Markets
Thanks a lot for that. And then I guess, Simon, given all the new products you've been talking about on the growth initiatives side and R&D ramping up, could you maybe just touch on capital allocation? Does the thought process on capital allocation going forward change at all, and would you add a dividend as well?
No. Okay, so I think we've got the R&D structure, the dollar R&D structure, right at this point in time. And as Jean said, we're aggressively clamping down on sales and marketing and G&A expenses in order to continue to fund the R&D line, okay. And I think we've got it right at this level.
And obviously with Mt. Rainier solutions already being available, products that have sampled and so on, we've put a lot of the work associated with bringing it to market behind us. I think from an R&D perspective we've got the dollars right at this point in time, okay. And where it fits within the range, the percentage of revenue that Jean provided, is really going to be dictated by the revenue number at this point in time, not by spending more on R&D, okay. So that's the first point.
The second point, on the bigger capital allocation question: there are three things we always get asked about -- M&A, buybacks and dividends, okay. So we are always looking at whether to fill gaps in the product portfolio from an M&A perspective.
We certainly don't perceive that there are any gaps in the product portfolio at this point in time that require us to enter into any M&A transactions. That doesn't mean something couldn't pop up at some point over the course of the coming weeks or months or year that we find attractive and decide is either capable of filling a gap in the current portfolio or complementary, such that it would enable us to serve expanded markets.
So we're always looking -- I think there's a very limited list of properties at this point in time that are attractive to somebody like QLogic, but we are always looking. So that's number one. Number two then becomes dividend versus buyback. We did a detailed analysis for the board back in the May timeframe, and the reason we steer clear of dividends is the offshore component of our cash.
As you will know, we've got probably 400 out of the 500, or 375 out of the $500 million, of cash parked offshore at this point in time, not easily available for dividends or buybacks. And I think the buyback continues to be the way we'll put the capital structure to work moving forward, okay. Aaron?
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Yeah. Great. I've got two questions. First, maybe on the model, can we talk a little bit about that? When I look at your TAM assumptions, with the flat assumptions you're making on the Fibre Channel piece of the business, and I look at the Ethernet and converged portions of the business -- I know we talked a little bit about this, but I'd like to understand how we think about the change in the mix of the business, particularly 10-gigabit Ethernet.
What are you assuming in that model, and how should we think about it? Maybe it's even down to the line of Fibre Channel converged versus straight-up 10 Gig E. What 10 Gig E gross margin are you assuming in the model?
So we don’t break out the gross margins as you know, okay.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
That’s all right.
I would answer it as follows, okay. Two years ago I stood here and told you that I thought the business would have roughly a 67%, 68% gross margin that year, and that it would erode by roughly one point per year as time went by and as Ethernet became a bigger contributor to the total margin pool, okay.
So two things have happened since then. Number one, Ethernet has become a bigger contributor to the total margin pool. And number two, we sold InfiniBand, okay. One was expected and one was in spite of it. And yet we still continue to offer a gross margin that sits comfortably in the 67% to 68% range, okay. So you know that we do an extraordinarily good job working the gross margin, okay, and we'll continue to do that moving forward, right, because the gross margin hasn't moved. There is an assumption associated with continued change in mix and, to your point, the growth associated with converged and 10 Gig E products, 10 Gig E in particular, within the total mix.
But we continue to believe we're going to be able to drive a 65%-plus gross margin. It will tick down a little over time because of the contribution of 10 Gig E to the total mix, but we've done a fantastic job of it in the past, and I think we're going to continue to do a fantastic job with it in the future.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Okay. And then on the second part, going back to Rainier, if I can. I'd like to just understand what kind of competitive landscape you are going up against. Is it Dell's RNA asset? Is it the LSI Nytro products? It seems to be VFCache. Just trying to understand the positioning of this and how we should think about what you are really going after.
So it's kind of hard to answer this question, and the reason I say that is: Dell's RNA, what they announced, whenever they deliver that technology, would be a similar competitive point.
The VFCache announcements -- things like the cluster service and some of the vMotion kinds of things they announced recently at VMworld -- our guess is that that solves part of the problem, not the whole problem. We believe there is a lot more work required to support vertical rack-type solutions.
And there is no real competitor to this solution today. That doesn't mean it won't happen tomorrow, because there are a lot of smart people in the world and it always happens. But today, for this technology and what it delivers: if you talk about the single-server-based segment, you have the Nytros of the world, and everybody who provides a host-based solution will be a competitive point, okay.
The moment you talk about the enterprise clustered applications, that number trickles down to nothing shipping today. And we are not shipping today either, but we will be shipping shortly.
Aaron, I think it's a great question. I would encourage you to take it offline with Shishir in terms of how he views the competitive landscape. He answered that far more softly than he usually does; usually he's far more aggressive: "I've got the best product in the world, and there is no competition to it." So take it offline and he will help you understand in more detail.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Any comments maybe on the potential three- to five-year revenue growth CAGR you can do? I mean, not to look for explicit guidance, but assuming an environment like...
So we don't do that, right. Why? Because you guys draw a straight line, and then when I'm not on that straight line, you beat the crap out of us, right. So what we do is put up our view of the markets, a view that is the sum of all the individual presentations you've heard throughout the course of the afternoon, okay. That's the market opportunity, and if you look at most of those markets over the course of the last couple of years, we've gained share.
So we gained share in the core Fibre Channel market on the host side. We've gained share in the 10 gig and converged adapter market. Within the switch business, we've done a good job gaining share in the Fibre Channel part and certainly the converged market, okay, when you think about things like HP's Virtual Connect and the whole series of activities that Craig alluded to.
We're going to do very well within the context of that market, okay. But we would never give you a growth trajectory. We'll offer you an opportunity to view the markets, and we'll tell you we're doing a good job and capturing share. We'll offer you an opportunity to have a look at a market that we've never talked about before. And we'll allow you to build a belief system around whether we've got the wherewithal to execute and actually deliver within the context of those market growth trajectories.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Simon, do you see Rainier impacting your core HBA business at all, in terms of...?
Yeah. So the question is whether it could be a little bit cannibalistic. I think there is some of that, but I don't for one minute believe that across all the applications that use Fibre Channel today there is the wherewithal for the flash-based capabilities. Do you want to add anything, Shishir?
I think it's a twofold answer. Number one is, when we compared it to the SAN market, we basically discounted it and assumed a conservative number; we assumed that we would not lose revenue there. I think Rainier is an opportunity for QLogic to gain more market share in that $700 million market, which remains flat.
So there's an opportunity for QLogic to grow Fibre Channel share, because we don't believe our competitor will be there to offer such a solution, okay. We believe it will help us win against our competition; we're at 56%, 57% market share, something like that, and we will be growing from there.
There are very different ASP assumptions and expectations for the Rainier solutions than for a traditional HBA today, okay. So what Shishir was alluding to is: if you look at that Fibre Channel market that's roughly flat at $700 million, and if we were selling Rainier-based solutions instead of standard HBA solutions into it, it's an order of magnitude higher than $700 million, okay.
So yes, there is some risk of cannibalization, but for us it's really about highly innovative technology that leverages what is clearly one of the key technology trends within the data center today, which is server-side solid state.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
The chairman seems to be happy.
Okay. As long as you’re happy we’re happy.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Simon, I think we just talked about some of the macro stuff; just some more near-term outlook. Can you give us an update on Romley? Is there still a 4Q benefit, or is the reason we're not seeing a pull just macro related? Any kind of update on that?
It's a combination of things, right. So macro is certainly part of it today, and as we said on the earnings call, if you look at some of the biggest verticals that our technology serves, they are not exactly the best place to be at this point in time. Whether it's financial services or government -- local government appears to be a little bit more mixed than financial services -- it just doesn't feel robust at this point in time from the macro perspective.
If you look at some of the server unit numbers associated with some of our principal partners, the numbers were way off on a year-over-year basis, right. We've done a lot of work to try to figure out how much of it is truly macro and how much of it could potentially be a secular decline that we need to be very cognizant of.
And we've become far more comfortable over the course of the last couple of months, really since before the earnings call, about how much of this is macro and how much is attributable to the verticals not spending money anywhere near the levels we expected them to. And we're less concerned about whether this is some kind of secular Fibre Channel issue at this point in time.
The second part of it was really Romley. And I think part of what you saw in the numbers last quarter was Romley related. I think the Romley introductions were lumpier and later than anybody had expected them to be, and that probably was activity that dried up as people waited for those platforms during the quarter, right.
But each of the OEMs had introductions at various points throughout the quarter. So if you were buying -- I'm not going to name them; I get in trouble when I start naming them -- but if you were buying server one, server two or server three, and server one became available in late March but server three didn't become available until April, chances are, if you wanted to buy server three, you weren't buying at quite the rate you'd expected to, right.
So I think last quarter was odd from a demand perspective because of the Romley launches, no doubt about it, and it's odd because of the macro at this point in time as well.
Aaron Rakers - Stifel Nicolaus & Company, Inc.
And just maybe a follow-up on that: when do you expect some of the ODMs to become larger customers?
So there are intriguing customers and intriguing opportunities, right. I think we were talking about this as we stood here earlier. The view of the world with my traditional OEM customers is: a win ships for a generation, or, where there is significant incumbency associated with the value we bring, such as in Fibre Channel, a win continues to ship for many generations because of the incumbency value of the software stack.
That's a little different in the Ethernet world, okay. And it's even more different in the Ethernet world in Taiwan. Unlike with my traditional OEMs, where I win one generation of server platforms at a time, in Taiwan we'll see requests for quotations literally three times a week from Taiwanese ODMs. And often they're bidding on the same piece of business, but they're bidding on the next opportunity at a Web 2.0 company, or they're bidding on the next opportunity with a [client], or they're bidding on the next opportunity for an OEM piece of business.
So there are far more significant numbers of opportunities associated with the Taiwanese ODM business. I mean, we'll see three a week, I think it's probably three individual opportunities a week, and we'll see opportunities from our traditional OEM customers far less frequently than that. So we are aggressively targeting that set of customers and working hard to make sure we win. Scott.
I'll finish with that before Scott. That world has just changed dramatically over the course of a very short period of time. The need to be deeply embedded and engaged with Taiwanese ODMs and Chinese OEMs, the Lenovos and the Huaweis of the world, from a go-to-market perspective has changed dramatically in a very short period of time, and I'd never underestimate their ability to succeed. Scott.
Scott Craig - Bank of America Securities-Merrill Lynch
Yeah. Thanks, Simon. Let me revisit the long-term model here for a second. If we step back a couple of years -- and I know we're probably not comparing apples to apples here -- you threw up basically the same-size market opportunities, and yet your long-term financial model has changed, in that the operating margin goes from roughly 30% to the low 20s.
So is it that you're having to spend more money to get roughly the same revenue opportunities, or do you believe you're going to get a larger share of those markets than what you expected, say, a couple of years ago?
And then as a quick follow-up on gross margin: a few years ago you thought you were going to have a lower gross margin, and now it's gone higher. Can you help me understand that as well?
Yeah. So there are lots of puts and takes, obviously, right. Compared to two years ago, the biggest change is no InfiniBand. Actually, it's two significant changes, right: no InfiniBand and the beginning of a revenue stream associated with the Rainier products, okay.
If you look at what we had undertaken, clearly we had InfiniBand revenues, we had InfiniBand gross margin and we had InfiniBand costs. And the costs being incurred in the InfiniBand business were fairly significant, to be fair. Not for one second have I looked back and second-guessed selling that business to Intel back in the February timeframe, okay.
If you look at the model now, what we're saying is that in order to continue to innovate -- in order to make sure we can bring Rainier-based types of technologies to market, and in order to make sure we can bring further optimized solutions for further applications to market -- we're going to have to continue to invest. So your observation is fair, but the substitution is essentially: take out everything we used to talk about relative to InfiniBand.
And put in everything we're talking about relative to Rainier and the other solutions being worked on under the covers that continue to bring highly innovative technologies to new markets where I expect us to have a significant leadership position, right.
Today, Rainier really has no competition, right. In InfiniBand, we had significant competition. So it's not a quantitative answer, it's a qualitative answer. But we're going to invest in R&D because we think the technologies we're going to bring to market are extraordinarily compelling.
Yeah. Jean, you answer the gross margin question. So, on the gross margin side, we work very hard to improve our gross margins. That's why, when you asked us two years ago, we guided gross margin lower, but in reality our gross margin is still close to 68%. We work very hard each year on the cost-of-sales side to really optimize and improve gross margin, and that certainly helped greatly too.
I'd add one more thing as well, Scott. Things move quickly. We talked about how quickly they're moving from a go-to-market perspective; they're also moving quickly from a technology perspective. Two years ago, we put up a number that said we expected the 10 gig market to be approximately $1 billion, okay, and today we put up substantially the same number; we expect it to be $1 billion, okay.
But it's a very different $1 billion than it was two years ago, and things have moved very quickly in that regard. Two years ago, we'd have put it up and said it's a monolithic $1 billion opportunity: this piece of it is LOM, this piece of it is all other NICs, okay, and we would have said this piece is converged as well.
If we look at how we're engaging with customers today, that monolithic $1 billion opportunity is broken down into six, seven, eight different opportunities that have different technology requirements associated with them. So you've still got the cheap, low-power 10 gig NIC as one market, okay.
Market number two: anything that has storage associated with it, maybe iSCSI, maybe FCoE. Market number three: anything with copper connectivity associated with it -- so it's not optical, it's copper, but you need a 10GBASE-T-type capability. Opportunity number four: a very small block, right. Opportunity number five: low-latency Ethernet, okay. Customers such as Steve Russell talk about specific requirements for latency-optimized solutions, okay.
So we've really gone from saying it's a $1 billion opportunity to saying it's a whole series of opportunities, all of which have a slight twist associated with them, and all of which have a slight twist on the R&D in order to optimize for that set of opportunities.
I think that's the other thing that's important to make clear. When Roger walks into my office and says, "Simon, I need more engineers," the question is why, and the answer is: because there is a piece of the 10 gig market that has a specific optimization associated with it that we believe we are competitively advantaged in, and we want to go win it. And I say, okay, understood. So that's certainly part of what we've seen as well.
All right. If there are no further questions, two things. Number one, I appreciate you all being here. We recognize that we've had you here with us for a big chunk of the day. I think what we've introduced today is extraordinarily exciting technology, and I think we're going to be able to demonstrate that expansion of shareholder value as we move forward, by delivering on a growth trajectory and by delivering on the compelling business model this company has always had. I would be remiss if I did not thank the QLogic management team.
You get to see them once every two years, and I think I've got the most talented team in the industry. So I want to thank Shishir and Jean and Roger and Craig, and thank the man who put most of the effort into today, Chris Humphrey. Chris, I appreciate your efforts; you did a fantastic job for us. And I still believe this is the best management team of any of my competitors.
So, with that, thank you. Cocktails are through the curtains at the back, and we can do some more Q&A back there. Thanks.