Seeking Alpha

Cisco Systems, Inc. (NASDAQ:CSCO)

October 20, 2011 11:30 am ET

Executives

Marilyn Mersereau - Senior Vice President of Corporate Marketing

Umesh Mahajan - Vice President and General Manager, Datacenter Switching Group

Analysts

Sam Chang - Turner Investments

Marianne Dolan - Moon Capital

Operator

Good day, ladies and gentlemen, and welcome to the Cisco Webcast Bank of America Merrill Lynch hosted conference call. My name is Shaquanna and I will be your coordinator for today. [Operator Instructions] I would now like to turn the presentation over to your host for today's call, Mr. Tal Liani of Bank of America Merrill Lynch. Please proceed, sir.

Tal Liani

Good morning, and thank you all for joining us. Today, we'll be discussing datacenter switching, a topic du jour in networking. The purpose of this call is to better understand the market opportunities, architectures and technologies as well as competitive offerings. The call will be hosted in a Q&A format. We asked Umesh Mahajan, VP and General Manager of Cisco's Datacenter Switching Group; and also Marilyn Mora, from the IR Group, to join us and help demystify this market. So before we delve into the questions, I'd like to pass it to Marilyn to enlighten us with some interesting legal statements.

Marilyn Mersereau

Thank you, Tal. We're very happy to be here today. Now I'd like to remind the audience that today's call will pertain strictly to Cisco's datacenter strategy. No new financial information regarding Cisco's overall performance is intended or implied, and this call should not be viewed as an update of the quarter. We may make forward-looking statements regarding our business, which are subject to risks and uncertainties outlined in detail in our documents filed with the SEC, specifically, the most recent filings on Forms 10-K and 10-Q. Actual results may differ from statements made today. So with that, Tal, I'll go ahead and turn it back to you.

Question-and-Answer Session

Tal Liani

Excellent. Thanks, Marilyn. So Umesh, first of all, I want to thank you for joining our call. This is a very interesting and very important topic for investors, and I want to start with a broad question. If you don't mind, could you help us understand the various switches, or groups of switches, that Cisco offers, and what the broad differences between the groups are?

Umesh Mahajan

Okay, thank you, Tal. And first of all, good morning, everyone. This is Umesh Mahajan. Let me first start out by explaining. Traditionally, we had the Catalyst portfolio, which has been very successful for Cisco both in the campus and the datacenter. About a couple of years ago, we decided the datacenter requirements are changing, and they're changing rapidly: virtualization, cloud trends, traffic patterns are changing. So we needed a new product family with which we can innovate and differentiate and meet the requirements of the datacenter properly. And hence, we came up with the Nexus portfolio. This is a very broad-based portfolio. We have the Nexus 7000, which is a high-end 15-terabit switch. It's a modular platform with Layer 2 and Layer 3 features, and this is where we do a lot more of the innovation in this family of switches. Then we have the Nexus 5000 and the Nexus 3000, which are top-of-rack switches, and the Nexus 2000, which we call a FEX, a fabric extender; I can explain later on what a FEX is. And then we have the Nexus 1000V, which is a software switch that runs on top of hypervisors. So with this broad-based family, we allow an entire architectural play. We can go from a small datacenter to a mid-sized datacenter to a very large datacenter with this product family. And, absolutely, we have a very large market share in the datacenter. We have 19,000 NX-OS customers worldwide today. We are deployed in the largest of the very large datacenters down to the small datacenters, because this wide variety in the portfolio allows you to mix and match and architecturally design the network that fits your needs going forward.

Tal Liani

Excellent. You often discuss 10-gigabit Ethernet as an adoption factor. If you don't mind, could you explain what 10-gigabit Ethernet is, how your products fit into the space, and why it is such a growth factor for your products?

Umesh Mahajan

So let's look at some of the emerging trends in the datacenter, right? Every year, Intel is introducing faster and faster multi-core CPUs, right? And with virtualization from VMware, Hyper-V from Microsoft, et cetera, people are now loading up these very fast multi-core CPU servers in the datacenter with multiple virtual machines. It used to be 4, 5, 10; now people can put 20, 30, 40 virtual machines on some of these high-end servers. These virtual machines need a lot of network connectivity, so they drive a lot of traffic between the server, the storage and out into the Internet. So the network pipes coming out of these high-end, multi-core physical servers need to be 10-gigabit Ethernet; otherwise, you just can't have so many 1-gigabit Ethernet connections coming out of the server. The applications running on these servers, on these virtual machines, are data-hungry and need fast pipes coming out of these servers. So that's the number one requirement for 10-gigabit Ethernet really taking off. The second driver is that there used to be multiple networks in the datacenter: a back-end SAN network for storage connectivity, a front-end LAN network to go toward the interconnect and connect the server, and then, sometimes, for high-performance computing, a third high-performance computing network for clustered requirements, in some cases InfiniBand. With our unified fabric approach, we have decided that you can collapse all 3 networks onto 1 common unified fabric network, built out of the Nexus portfolio. Once you unify these networks, all the traffic patterns need to travel on and share the same pipes. Hence, again, you need fatter, higher-bandwidth pipes in the datacenter. And that's basically driving a very rapid requirement for 10-gigabit Ethernet. Now the server vendors are smart enough.
They figured out that you need 10-gigabit Ethernet. So they are putting LAN on motherboard, which used to be 1-gigabit Ethernet and is rapidly transitioning to 10-gigabit Ethernet. So these servers, by definition, will have 10-gigabit Ethernet built into the server. It's for free. So you might as well use it for 10-gigabit Ethernet connectivity onto your physical network infrastructure. So those are the big, big drivers for 10-gigabit Ethernet. And then there are these other applications, search applications, big data applications, where big number crunching is happening. There, again, people want to use servers that move a lot of data, which has to crisscross across the servers. And for these people, the time to be able to do the search and the computation is a factor, a competitive advantage for them. So they don't want to save money on low-bandwidth pipes. They want the fastest pipes, the fastest servers, so that they can have a competitive advantage against their competition. That, again, is driving these higher bandwidth requirements in the datacenter. So we feel very comfortable with 10-gigabit Ethernet. We see it at about 25% in the datacenter today, and it's growing rapidly. Over the next couple of years, we feel 50% of our networking portfolio will be 10-gigabit Ethernet. And that's clearly the sweet spot for the Nexus portfolio; we designed it with 10-gigabit Ethernet in mind. And then, over time, we'll evolve to 40-gig and 100-gig. We do support 1-gigabit Ethernet, clearly, in the Nexus platform, because there's plenty of 1-gigabit still in the datacenter.
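The arithmetic behind this driver is straightforward. As a back-of-the-envelope sketch, with entirely hypothetical VM counts and per-VM bandwidth figures (these numbers are illustrative, not Cisco's), you can see why a couple of 10-gigabit links replace a thicket of 1-gigabit cables:

```python
import math

def uplinks_needed(vms_per_server, gbps_per_vm, link_gbps):
    """Smallest number of links whose combined capacity covers aggregate VM demand."""
    demand_gbps = vms_per_server * gbps_per_vm
    return math.ceil(demand_gbps / link_gbps)

# 30 VMs on a multi-core server, each averaging 0.5 Gb/s of server,
# storage and Internet traffic (illustrative numbers):
print(uplinks_needed(30, 0.5, 1))   # 15 separate 1 GbE cables
print(uplinks_needed(30, 0.5, 10))  # vs. 2 10 GbE links
```

At 20 to 40 VMs per host, the 1-gigabit option stops being cabling and starts being a second career, which is the point Umesh is making.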

Tal Liani

One follow-up on this and the portfolio question I asked you before. When you look at the server companies like HP and IBM, IBM acquired BLADE Network and HP has its own initiatives; they have switches also. And then there are independent switching companies, like Arista, that offer switching to interconnect servers. So where are the advantages of Cisco? And can you also speak about competitors' advantages and disadvantages when it comes to having an end-to-end portfolio versus buying components from a server company and components from a switching company, et cetera?

Umesh Mahajan

So if you look at it, for Cisco, it's an architectural play, because we have a big market share, 75% market share in the datacenter. So clearly, we are very widely deployed. So when we look at it, we want to make sure that we continue to provide an end-to-end architecture, because there are savings in that, right? With our Nexus end-to-end architecture on the networking side, from the Nexus 1000V all the way to the high-end Nexus 7000, we have the same NX-OS running. So the operational continuity, the manageability of the network, how you support it, how you debug it, it's all the same. So there is a level of OpEx savings, and, overall, the TCO becomes better for your network. Now you talked about HP and IBM having some elements of networking within their own switches. Traditionally, we have worked with them, and we do have offerings within the HP BLADE server and the IBM BLADE server. And why is that successful? Because Cisco customers who were deploying Cisco networking widely wanted the same network inside an HP BLADE server or an IBM BLADE server. And 2 days ago, we launched the Nexus 2000 FEX kind of switch inside an HP BLADE server, because our customers require that same flow. They don't want different kinds of networks and different kinds of manageability requirements; they want the end-to-end network to work properly in the datacenter. Now regarding Arista, I think Arista has tried to come up with a solution where they're trying to tackle certain segments of the market. And with our offerings, I feel very strongly that in the areas where we compete, we have very, very competitive offerings. Like, we made some entry into the high-frequency trading, or low-latency, market space. We have the Nexus 3000, an ultra-low-latency switch, which is doing very well.
In some other areas where they've entered, we have now announced the Nexus 7000 with our F2 line card, just 2 days ago, where we have double the bandwidth of an Arista switch and double the count of 10-gigabit Ethernet ports per switch. So we are very comfortable that our offerings can keep Arista at bay. And on top of that, if you look at it, Arista is a start-up. And as a start-up, they don't have the breadth and reach of Cisco in multiple areas. First is support: Cisco has its own worldwide tech support organization. When there are issues in the network, 24/7, we have excellent, very knowledgeable people worldwide who can immediately jump in and understand: did something get configured incorrectly? Did we have an issue, or did something else go wrong? How do we solve it? How do we get the customer up and running again? And our customers have time and again said, "Cisco is the best in this area and that's why we want to stay with Cisco." The other area is go-to-market channels, the whole sales force and the relationships we have with the customers. When I talk to a lot of my customers, they say, "Cisco, we feel comfortable with you. Solve my problems and address my requirements, and we will stay with you." So we continue to have very strong channels and very strong relationships, where, again, Arista cannot match Cisco.

Tal Liani

Going back to the technology questions. Very often, we hear about flat networks or the term fabric. Can you discuss, first of all, what was the old network architecture, and what is the new architecture? What do you offer when it comes to changes to the network? Also, how is all of this related to cloud computing?

Umesh Mahajan

Okay. So the old architecture was built on the Catalyst platform. And at one point, there were servers, where the applications run in the datacenter; storage, where the data is stored; and then Internet connectivity, right? And there was a network in between. But nothing was moving around, right? There was no virtualization happening on the servers. You ran an application on a server, and it just stayed there. At some point, you might retire the server, but it was very static. And most of the traffic profile in the datacenter was north-south. Let me explain what I mean by north-south. North-south means you have users sitting outside the datacenter, like all of us sitting somewhere in offices, accessing data or applications which are running in a datacenter. That's north-south traffic: traffic coming from users, going to the servers which are running the applications, and then going back, getting some answers back from the server. For that kind of traffic pattern, there was not too much traffic coming from the servers. It was an oversubscribed network, and it was a 3-tier architecture, because the traffic was north-south, meaning traffic coming from the users, going into the datacenter and coming right back, with the storage used at the back end. Now the requirements are changing drastically. There is still the requirement for that kind of traffic, clearly. We do need to send traffic into the datacenter and out. But the trend in certain parts of the datacenter is rapidly changing: the requirement is that applications, whether it's a search application or a clustered application or a big data compute application, span across multiple servers.
And when they span across multiple servers, now there's a lot of traffic going from server to server rather than from the user into the datacenter and out. So a lot of computation is happening. When you use Facebook or Google or Yahoo!, or go into a service provider, or even run your own applications in an enterprise, they are not running on just one server. A lot of the applications go across servers. So that's leading to a change in the design of, at least, this class of datacenters, the cloud and virtualized datacenters: there's a lot more east-west traffic in the end. Once you come to that paradigm, that's where you need to flatten the network. You don't need 3 layers in this kind of a network, because you need cross-sectional bandwidth. By that I mean you have the servers, the storage and the Internet, right? And then you need a very fast highway system in the middle: any-to-any connectivity, no congestion, and flexibility. Because once you have virtualization running on the servers, there's this notion of workload mobility also. Things can hop around, because these customers, once they have virtualization, want the ability to move these virtual machines at any time from any server to any other server and have the same level of connectivity. That requires the network in the middle to be flat and fully connected, allowing all servers to be connected to each other and all servers to be connected to all storage, because you don't know what is going to end up where. And that's what we mean by a flat network; it's a Clos network in technical terms. Basically, you have leaf switches, which are connected to the servers, and then you have spine switches, which interconnect all these leaf switches and provide you a very flat, very fast, very high-bandwidth network. And on top of that, we have intelligent services layered on top of it, the Cisco datacenter fabric.
And then we combine this with our UCS offering, which is very tightly knit with our Nexus portfolio. In fact, elements of Nexus switching are inside the UCS portfolio, and the Nexus 1000V and elements of the FEX platform are built into the UCS platform. And then we layer on our unified network services, which are our firewall security, our Wide Area Application Services and our Network Analysis Module, to give you more analytics on the network and what's going on in there.
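To make the leaf-spine idea concrete, here is a small sizing sketch with hypothetical switch port counts (the numbers are illustrative, not any particular Nexus model). Every leaf connects to every spine, so any server can reach any other in at most two hops, leaf to spine to leaf:

```python
def leaf_spine(leaves, spines, server_ports_per_leaf, uplinks_per_leaf):
    """Size a two-tier leaf-spine (Clos) fabric; all links assumed equal speed."""
    fabric_links = leaves * spines                       # full mesh between tiers
    servers = leaves * server_ports_per_leaf             # one server per edge port
    oversub = server_ports_per_leaf / uplinks_per_leaf   # leaf oversubscription ratio
    return fabric_links, servers, oversub

# Hypothetical build: 16 leaves with 48 server ports and 4 uplinks each, 4 spines.
links, servers, oversub = leaf_spine(16, 4, 48, 4)
print(links, servers, oversub)  # 64 fabric links, 768 servers, 12:1 oversubscribed
```

Growing east-west capacity means adding spines (more uplinks per leaf, lower oversubscription) rather than adding a third tier, which is what "flattening" the network buys you.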

Tal Liani

Types of customers. I remember when you first started, the concern was that UCS, or Cisco's offering, was really only good for small or medium-sized datacenters. Now that you've launched more products and expanded the portfolio, can you discuss, first of all, what types of datacenter customers you see, and how you plan to address all the various types?

Umesh Mahajan

So like I explained earlier, we have a very large customer base. When we categorize the requirements, some requirements are common across all datacenters. But then, we see 4 broad categories of datacenters where there's a level of difference in the features they need, the way they design the datacenters, or some of the requirements. Let me explain the 4 kinds of datacenter categories we see. One is general purpose. This is the traditional enterprise. This used to be the 3-tier architecture. Here, they need Layer 2 and Layer 3. They need a level of scale, but these are not the very large cloud datacenters. They need operational continuity; they still have Catalyst. As they roll out Nexus, they want to make sure some of the features are similar and some of the management tools are similar, so they can migrate in a seamless manner. And they are doing server virtualization. They have embraced server virtualization because that gives them cost savings. And these are the ones who also want to converge their networks, because they have a SAN network, they have a LAN network and they have a high-performance network, the InfiniBand kind of network. So this is the general purpose enterprise kind of datacenter. This is one big category for us, and this is an area where we are very, very successful. The second category is the cloud and service provider datacenter. Here, the scale is larger. They have a much larger Layer 2, Layer 3 fabric. They need to support multi-tenancy, because they're going to be hosting multiple customers. They need simplified management, because these kinds of customers don't have large software teams, so they rely on simplified management tools to manage the network and the entire datacenter well. And then there's security.
Since they are hosting other customers' requirements in their datacenter, it needs to be tightly secured; otherwise, nobody will outsource their stuff to them. So for them, security is very important. Again, convergence, because they want to converge the network so that they can get the savings. Now there's a third variety of datacenter, the Web 2.0 and big data datacenter. This is the likes of Amazon, Microsoft, Yahoo!, Facebook, et cetera. Here, they are moving rapidly to a Layer 3 fabric with 10,000 or more, even 25,000, servers. This is where the flat network paradigm is happening, because they want any-to-any connectivity. They want to be able to move very rapidly. They want to deploy solutions very rapidly. They want the highest bandwidth, and they want open APIs, because they want to monitor the network, the servers and the storage, so that should there be any problems, they can work around them in the application and serve their customers without any glitches, with very consistent response times for their end customers no matter what is happening underneath. So they want lots of open APIs so that they can traffic-engineer around any congestion in the network, et cetera; they want to rapidly get around that. And we are working closely with this class of customers to provide them with the right level of APIs so that they can do the monitoring, troubleshooting and analytics they need. The fourth category in the datacenter is high-frequency trading and high-performance computing. At some point, some of these folks were using InfiniBand, a completely separate kind of network. We have been reducing the latency; this is where ultra-low latency comes in. They also need multicast scale, and they have a level of east-west traffic as well. So this is kind of a separate category.
Here, the network is smaller but the requirements are different. These are the ones who really, really care about latency and multicast. This is where our offerings on the Nexus 3000 and some of our other new offerings play a critical role. So if you look, our portfolio addresses all these 4 classes of datacenters, and the physical infrastructure can be very similar. With NX-OS, you can turn the right kinds of features on and off, so that you can meet the requirements of these various categories. Let's go back to the general-purpose customer, right, the traditional enterprise. Today, they want to deploy a class of features which they are familiar with and comfortable with on the Catalyst, and we provide those in NX-OS. But they are beginning to think about private cloud, and as they go on that journey, they will not have to rip and replace the infrastructure; they can just turn on a different category of software features and do the transition. So they are very, very comfortable with our roadmaps, our architecture and our vision, because they see they can do the migration at their own pace. And they don't have to suddenly stop doing what they are doing today and completely change the requirements.

Tal Liani

Excellent. Last question on technology before we go into the market and competition. OpenFlow, we have read about it over the last few months. Can you discuss what OpenFlow is and what the risks and opportunities are for Cisco?

Umesh Mahajan

So OpenFlow, first of all, is kind of a research paradigm which came out of Stanford, so there's been some level of buzz around it. But before I go into the opportunities and disruptions of OpenFlow, let me explain a little bit what OpenFlow means. At the highest level, it means: you have hardware switches underneath, with low-level software running on the switches, and you provide an extensive API at that lowest level of software, the layer that programs the hardware and sets up the hardware engine to forward packets. And the rest of the software, which is the real brains of running the network, all the control-plane software, all the protocols that control how you talk from one switch to another switch and back and forth, how you route, how you switch, how you provide connectivity, redundancy and failover, in the OpenFlow paradigm, this is the software you remove from the switches and run on a controller, a central point for the entire network. So you take all the software which was developed over 15, 20 years, which runs on the network switches or routers, move it into one place and then run it from that point. It's a little bit easier said than done, in our view, and I'll explain a couple of reasons. Cisco has been in this area 15, 20 years. We've learned a lot. We've had the brightest of engineers, who've invented these protocols, and evolution has happened, right, whether it's Layer 2 or Layer 3. In Layer 2, we went from spanning tree to FabricPath and TRILL. And in Layer 3, there's always evolution of the protocols. So not only have we evolved these protocols, but we've had such massive deployments and literally thousands and thousands of customers. Of course, the customers have given us feedback, which we've woven into our protocol stack.
Second, sometimes we've had issues, and we learned that failures can happen in ways which nobody can imagine. So we've taken that into account and fed it back into our protocol stack. That's what I call hardening of a protocol stack, because Cisco has a big heritage here, and we have done that. So it's kind of hard to imagine that suddenly somebody can take all this knowledge overnight and provide this controller layer, which can meet all these requirements, solve all these problems and still be very robust and scalable. Having said that, there is a requirement, like I said, to provide open APIs. So Cisco is absolutely committed, especially with the likes of the Web 2.0 and big data customers or large service providers, to provide this API and provide the things they want in order to monitor, troubleshoot and diagnose, so that they have the ability to integrate some of their own applications or monitoring applications into the network and know what's going on. That way, whether it's Facebook, Yahoo! or Google, they can tune their own applications based on what's happening in the network. So we are absolutely committed to that, and we are working closely with these customers to provide them these APIs and the ability to control certain elements, or, I shouldn't say control, mostly visibility into what's happening in the network and a level of traffic engineering, so that they can do the right thing. They don't necessarily want to set up everything in the network and build the entire protocol stack and massage it to that level. So we are very comfortable with our approach. But at the same time, we are participating in the Open Networking Foundation, because the first version, OpenFlow 1.0, in our view, was rather rudimentary and too low-level. We want to make sure this whole effort evolves in the right direction.
So Cisco will participate in this area and provide some of our knowledge. We'll say these APIs make sense; these ones maybe don't make so much sense; let's abstract them to a somewhat higher level so that we can provide the correct data. But at the same time, don't make it so complicated for the end user that they never use these APIs. So that's where we are on the OpenFlow front.
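The control-plane split Umesh describes can be caricatured in a few lines. This toy sketch is entirely hypothetical, not OpenFlow's actual wire protocol, and it ignores the failure handling, scale and protocol hardening he argues are the hard part. It shows only the division of labor: switches keep a dumb match-to-action table, while a central controller computes paths and pushes flow entries down:

```python
class Switch:
    """Data plane only: a match -> action table, no routing brains."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}             # destination -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        return self.flow_table.get(dst)  # a miss would be punted to the controller

class Controller:
    """Central brain: knows the whole topology, programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_path(self, dst, hops):
        # hops: list of (switch_name, out_port) along the computed path
        for name, port in hops:
            self.switches[name].install_flow(dst, port)

sw = {n: Switch(n) for n in ("leaf1", "spine1", "leaf2")}
ctl = Controller(sw)
ctl.push_path("10.0.0.5", [("leaf1", 49), ("spine1", 2), ("leaf2", 7)])
print(sw["spine1"].forward("10.0.0.5"))  # -> 2
```

Every hardening concern in the answer above lives inside `Controller` in a real deployment: what happens when it crashes, partitions from its switches, or must reconverge after a link failure.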

Tal Liani

The next section of our discussion is about the market opportunities and competition. And what I want to ask first is a very basic question: why do datacenters need to go to these new architectures? What are the key underlying demand trends? And if IT budgets get constrained due to macro issues, do you expect continued investment in datacenters?

Umesh Mahajan

Okay. So what we see at Cisco is, first of all, why are things getting centralized into the datacenter? It's happening because of all the smart devices, the mobile devices, right? Everything needs to be online, everything needs to be available all the time no matter where we are, whether we're at work, at home, or traveling in a different country. Everything needs to be available and everything needs to be centralized. That, in our view, is massively driving the growth of datacenters, whether enterprise datacenters, service provider datacenters or Web 2.0 datacenters. All this shift is going on because everybody wants to be connected all the time, and everything needs to be available 24/7. Now, this is leading to different kinds of datacenters. But the requirement for massive amounts of compute and massive amounts of storage, there's no question about it. It is just growing exponentially, because it's not just the traditional data that's moving into the datacenter; it's also video. The younger generation is just cranking out video after video, and it has to be stored someplace. It's all moving, again, to the datacenter. Apple is offering iCloud; somebody else is offering something similar. All these elements need to be shared, and they need to be hosted someplace. So it's our belief that no matter what is happening macroeconomically, sooner or later, datacenters have to keep growing, because the requirements are such, and there's no turning around here. There's just no turning around in this area, because that's how the world wants to move and that's what's happening. Now, some of the other trends: we see clearly that all datacenters are not equal. Where people are hosting applications, or hosting other vendors, some of the requirements are different.
But these datacenters also grow very rapidly, because smaller companies will figure out that if they want to host their applications or their servers someplace, they will go into this cloud kind of environment. So we see tremendous growth in the traditional enterprise kind of datacenters, as everything moves into centralized datacenters. But we also see tremendous growth in the cloud kind of environments, both the service provider and the Web 2.0 emerging trends, where there are very, very large datacenters. So we feel very, very comfortable, and hence the datacenter is one of our 5 main priorities at Cisco, one we will continue to invest in considerably. And we have shown that with our investments in the Nexus portfolio. We will continue on with the Catalyst, but we saw this huge paradigm shift, and hence we made a considerable investment in a completely new line of switching, the Nexus line, and a completely new operating system, NX-OS, and you see that line today. There's such an immense opportunity for us, so much growth which is going to happen in this area, that Cisco absolutely needs to participate. And then on top of that, we have strong partnerships in this area, right, with EMC, VMware and NetApp, where we have joint offerings, and with other vendors, to provide the entire datacenter solution.

Tal Liani

I have 3 questions left. The first question is how large is the opportunity? And how do you think about market share and target market share, et cetera?

Umesh Mahajan

So on target market share, we already have a very high market share, 75%. We absolutely want to grow, but we have a lot of market share already. So our feeling is that since datacenters are really going to explode, even if we just maintain this market share, it's a tremendous opportunity for Cisco, because there's going to be so much growth in the datacenters. So we are very comfortable with that. I'll ask Marilyn: do you know the total size of the datacenter market we expect?

Marilyn Mersereau

Yes, it's approximately $20 billion plus.

Umesh Mahajan

Okay.

Tal Liani

So that means you expect the datacenter switching market to grow substantially from the current levels?

Marilyn Mersereau

Yes.

Tal Liani

Okay. Next question is going back to something we discussed before, which is competition. But here, I want to zoom in on 1 or 2 topics. We spoke a lot about Cisco and a lot about the smaller competitors, but can you speak about Juniper and QFabric versus Cisco? And then HP: what do they have to offer in this space?

Umesh Mahajan

Okay. So first, let me talk about Juniper. Juniper, as we know, does not have a significant market share; it's a couple of points of market share, 2 points on the switching side, right. So clearly, they have not been very successful, even though they entered this market several years ago with the EX platform. That clearly has not made much headway, especially their modular EX switching line, against either the Catalyst or the Nexus line. Recently, they released the QFabric platform, which they had talked about for a long time. Once the platform showed up, we saw that it was a proprietary architecture, where the interconnections from their top-of-rack, so-called switches, to their middle matrix, or whatever they call it, are proprietary links. One, it's a proprietary architecture, in our view, which is not a good thing; Cisco has always stood for standards-based, open interoperability in this networking space. Even when we come up with innovations, we want to standardize them, so there's a whole ecosystem and a choice for our customers, and all the elements can interoperate and work together. The second thing about QFabric is that it's one large integrated system, right? And when you have a large integrated system, there are 2 problems I see with it. One is the failure domain. Since it's a large system bound together by software in the end, and that software can fail no matter what one says, you have a very large failure domain. It can take out a significant part, if not all, of your datacenter. Do you really want that? When we talk to customers, all of our customers said they do not want these large failure domains. Second, when you have this large integrated system, there has to be a very high level of software complexity.
It kind of reminds me of the mainframe era. The mainframe used to be this large, humongous computer with very large software complexity. Once you go down that road, you don't see mainframe software releases coming out of IBM every 6 months. You see one every 4 or 5 years, because when you have such a big system, you have to really test it and make sure it stands up — if it fails, customers are just going to throw you out completely. So this large QFabric system, I think, is also a big disadvantage. In this rapidly changing cloud environment, customers are demanding: tweak this protocol, tweak this switch, give me this analytic. How can Juniper, with this very large system, roll out these innovations and rapid changes? We feel Cisco, with our flat fabric architecture — Layer 2 with FabricPath and TRILL, or Layer 3 with BGP and ISIS — deployed in the leaf-spine architecture we are pushing, can provide a very competitive, very open, standards-based architecture. And we've also looked at it, and we see that with our offering, we can scale to twice the Juniper offering. They are saying they can handle 6,000 10-gig servers. With our latest offering, we can support twice that number of 10-gig servers. So we can scale much larger, and even our price points are very, very competitive in this area. So I'm wondering, what's the big deal about this QFabric, and who is going to deploy it? Oh, I forgot, you also asked about HP. So HP is a different kind of entity. HP is not really known for their networking software strength, right, in the datacenter specifically. In the datacenter, the requirements are much more stringent, right? The datacenter has to be up 24/7. You cannot have failures, and the stuff has to be converged. It has to scale, and it has to be really, really robust, in my view. And HP, traditionally, has not been known for building large networks.
And today, datacenters are really becoming large networks that have to operate at scale and meet those 24/7 requirements. And Layer 3, which is an integral requirement of the networking stack in the datacenter — they're absolutely not known for their strength over there. So HP shows up in the datacenter: "Okay, we have servers, we have storage. Okay, Mr. Customer, we'll throw in the network, maybe for free." Look at it from that viewpoint. But typically, we do not see HP as a viable competitor in the datacenter networking space, because customers are not comfortable with their offerings, and specifically with the software.
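[Editor's note: the leaf-spine scaling claim above can be made concrete with a back-of-the-envelope calculation. The port counts below are illustrative assumptions, not Cisco or Juniper product figures; the point is simply that fabric capacity is the product of spine port count and server ports per leaf.]

```python
# Back-of-the-envelope sizing of a two-tier leaf-spine fabric.
# Port counts are illustrative assumptions, not vendor specifications.

def leaf_spine_capacity(spine_ports, leaf_uplinks, leaf_downlinks):
    """Return (max servers, oversubscription ratio) for a leaf-spine fabric.

    spine_ports:    ports per spine switch; each spine port connects one
                    leaf, so this caps the number of leaves.
    leaf_uplinks:   uplink ports per leaf (one toward each spine).
    leaf_downlinks: server-facing ports per leaf.
    """
    max_leaves = spine_ports
    servers = max_leaves * leaf_downlinks
    oversubscription = leaf_downlinks / leaf_uplinks  # assumes equal port speeds
    return servers, oversubscription

# Example: 256-port spines, leaves with 16 uplinks and 48 server ports.
servers, oversub = leaf_spine_capacity(256, 16, 48)
print(servers, oversub)
```

Under these assumed numbers the fabric tops out around 12,000 10-gig servers — the same order of magnitude as the "twice 6,000" claim on the call — at a 3:1 oversubscription ratio.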

Tal Liani

Got it. Last question: semiconductor strategy. This is a question that we find ourselves asking again and again in various spaces of networking — not only the datacenter — in various spaces of comm equipment. If you can discuss merchant silicon versus the custom ASICs that you have — you designed certain components in-house. What are the advantages, from your point of view, of having this effort in-house?

Umesh Mahajan

Having custom silicon in-house gives Cisco several advantages, right? I think the first and foremost advantage is that a lot of the innovations come out of the software teams. They are the ones who participate in the standards bodies. They are the ones who are looking at protocol features. So they come up with a lot of requirements: hey, these are some of the new innovations we need. But the software can only do the control path — the brains, the setup — while the packet flow, the packet switching, has to be handled in silicon. So when these guys come up with innovations, since our ASIC team and our software team are co-located, intermingled, sitting side by side, they go talk to the ASIC people, and we roll the innovations into the ASIC at the same time. And that's our #1 advantage. If you see, we came out with FCoE. You can't do FCoE without ASIC innovation right there and then. We came out with FabricPath and TRILL. Again, the data packets are handled within the silicon. OTV, LISP [ph] — it's innovation after innovation, which we are rolling out rapidly, which we can roll out in a very speedy manner, because we control our own destiny with the silicon. The other thing we are doing on the silicon side, versus merchant silicon, is we absolutely need to be very competitive, right? So what we've done, as the strategy changed within Cisco over the last 2, 3 years, is move to much faster silicon cycles, right? Like in the Nexus 7000, every 18 months or so, you'll see a new breed of silicon or a new kind of line card come out. Why are we doing this? Just like Intel does — because with new process nodes, you can fit more and more transistors into the same size of silicon. That technology innovation keeps happening with our silicon vendors, who build ASICs for us. So we want to make sure we can use those advances in the technology every 18 months or so.
That way, we can fit more into the same chip size, or die size, and deliver much bigger bandwidth and feature set at the same cost to the customers. So overall — and we're targeting this down the road, I'm not promising immediately — customers will get 40-gig at the price of 10-gig, because we will keep shrinking more and more integration into our silicon. And then we'll also make sure that we pay attention to latency and power consumption, because all those elements are very important in the datacenter, not just the innovation side. Innovation will be a differentiation, but we'll make sure we meet the cost requirements, the latency requirements and the power requirements, so we can shrink everything and provide the densest, coolest, greenest solution for our customers. So we feel very comfortable with our in-house silicon development. It's a big advantage for us. And the other thing is, since we are developing our silicon in-house, there's no third party to take a cut. Because if you use merchant silicon, the merchant silicon vendor has to make some profit, and then the supplier who uses that merchant silicon has to make some profit on top. Level by level, the pricing goes up. On our side, we are building our own silicon, so owning our own silicon also helps our margins.
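[Editor's note: the process-node argument above can be illustrated with the usual first-order approximation — transistor density grows roughly with the inverse square of the feature size. The node sizes are the ones mentioned on the call; the quadratic scaling model is a textbook simplification, not a Cisco figure.]

```python
# First-order approximation of transistor-density gain from a process
# shrink: density scales roughly as the inverse square of feature size.
# Node sizes below are those mentioned on the call; the model itself is
# a textbook simplification, not a vendor figure.

def density_gain(old_nm, new_nm):
    """Approximate transistor-density multiplier from a process shrink."""
    return (old_nm / new_nm) ** 2

for old, new in [(65, 45), (45, 32), (32, 22)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.1f}x density")
```

Each shrink yields roughly a 2x density gain, which is the basis for the "40-gig at the price of 10-gig" trajectory described above.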

Tal Liani

Great. I think with that, we'll open the lines for Q&A. And maybe, operator, you can help us with the process?

Operator

[Operator Instructions]

Tal Liani

I have one question I wanted to ask; I'll use the time while people are formulating their questions. In the past, one of the criticisms against Cisco switches was that there were too many versions of the operating system. How did you address that issue with the Nexus line?

Umesh Mahajan

So with the Nexus line, our objective was that there are new kinds of requirements, so we wanted to make sure that the operating system we provide for the Nexus line meets those requirements, right? First and foremost, it has to be a modular operating system, so that we can provide very high availability in our switches, right? So it is a very robust, very modular operating system. That was the basic tenet of NX-OS. On top of that, we want to provide the features which customers need in the datacenter. We don't want to take all kinds of legacy features, put them into NX-OS, and unnecessarily make it more complex than it needs to be. At Cisco, we have decided we have 3 broad operating systems: NX-OS, IOS and IOS XR. That has been centralized into one group, so there's a level of sharing across the operating systems. We can have the best elements shared across the 3 operating systems. But at the same time, there's a level of differentiation as well, because Cisco has such a wide portfolio that trying to run the exact same operating system and meet all the requirements would have slowed down our software releases on each one of these platforms. So we have a very focused effort in the datacenter with our Nexus portfolio, because we believe the Nexus portfolio can meet your entire set of datacenter needs. And it's running one operating system across the Nexus and MDS lines, and it's very rapidly meeting the requirements that our customers are coming up with.

Operator

You have a question from the line of Marianne Dolan representing Moon Capital.

Marianne Dolan - Moon Capital

Could you tell me, when Apple rolls out something like iCloud, what kind of participation Cisco would have in that type of datacenter construction?

Umesh Mahajan

So I think when customers like that roll out a datacenter, we have our offerings, right? We have the Nexus portfolio, we have our UCS line, and we have our unified network services. So these customers who are building something like iCloud will need servers, they will need network, and they will need storage. Cisco doesn't provide the storage element; we work with top partners, whether it's NetApp or EMC or XYZ, to provide a solution. So they would probably look at it as: I need to build a large datacenter and I have these needs — what is the portfolio, what is the company that can best meet our requirements? So we would work with a company like Apple and show them, "Hey, here's what we can offer and how we can meet your requirements," and then do a proof of concept with them. And if it meets their requirements, they will go ahead with the Cisco solution. And I'm very comfortable we can meet the iCloud requirements.

Marianne Dolan - Moon Capital

And who would be your closest competitor there? Because they probably have a high standard in that, right?

Umesh Mahajan

So competition can vary. On the server side, it could be anybody — HP, IBM, et cetera. Those are our 2 main competitors over there. When it comes to the networking side, I think there's a little bit less competition, because we are the absolute leader in this area, with 75% market share. And there, they would certainly be more comfortable with Cisco on the networking side because of our past relationships. And we have this architectural approach, where we also make sure, at the systems level, that our UCS line, our Nexus line, our MDS line and our partner storage all come together. And we have the solutions — like the Vblock offering with EMC and VMware, and the FlexPod offering with the Cisco UCS platform, Nexus and NetApp storage. So we have the solutions and systems, which we pretest and pre-integrate, so it makes it much easier for customers to go ahead and quickly deploy and turn on the services. So we feel pretty comfortable with our offerings over here. And the competition on the networking side, like we've discussed previously on the call, could be a Juniper, could be an HP — though HP not so much in datacenter networking, more on the server side — or somebody else. But our solutions stand out, and customers continue to deploy them.

Marianne Dolan - Moon Capital

And how about like when Google talks about sort of going to generic datacenters? I mean, I know that's mostly on the server side at the moment. But have you seen initiatives to take that kind of build-your-own mentality outside of the server portion of it?

Umesh Mahajan

I think Google is an entirely different kind of company. They continue to invest in some of their own infrastructure. On the server side, they were the first to go to these servers which are completely stripped down, and they build their own software to manage them, because they believe they can run and build those kinds of elements themselves. But when it comes to the high-end networking elements, I believe it's easier said than done, because Cisco has a multibillion-dollar R&D budget and such a long heritage of developing these elements. I don't think you can take 200, 300, 400 engineers and just build these solutions completely end-to-end on your own. So they may be able to do something at the low end. But when it comes to higher-end switching or high-end routing, I would be completely surprised if Google could roll out something of their own in that area. So they continue to deploy routers and switches which are built by the networking companies.

Operator

And your next question comes from the line of Sam Chang, representing Turner Investments.

Sam Chang - Turner Investments

So first question is any thoughts on Brocade and their VDX offering, and how that stacks up against you guys? What are the pluses and minuses in your view relative to your products?

Umesh Mahajan

So Brocade, traditionally, has been a SAN vendor. Their entire heritage has been Fibre Channel. So a couple of years ago, when Cisco announced FCoE and how we were going to integrate the LAN and SAN together, they went ahead and bought Foundry. That acquisition hasn't been really successful, as far as we can tell, because it's like taking 2 platforms and 2 operating systems and trying to somehow blend them together, and they haven't been very successful there at all. I would say it's been a failure to some extent, because if you look at the Foundry part, the Ethernet side, they've been losing market share. So they have not been successful there, and they continue to be doing okay on the Fibre Channel side. But even there, with our combination of Fibre Channel and FCoE, we are chipping away at their market share. On the Ethernet side, I think they just haven't gone anywhere, in our view. And that's why the company is up for sale. They've been trying to sell themselves off, and that doesn't lead to a lot of confidence in the customer's mind, right? Who knows what's going to happen to them? So I don't see them as a credible competitor on the Ethernet side, because they haven't been able to successfully integrate the platforms. And even when they say they support TRILL and things like that, they are doing weird kinds of protocols and features over there. So I think customers will have a hard time embracing them for bigger datacenter deployments. We feel that Cisco is very strongly positioned against Brocade, and we don't have to worry about them that much on the bigger datacenter play.

Sam Chang - Turner Investments

Okay. So do you just not see them when you get to the bake-offs? Or do you see them, and customers just dismiss them because they haven't been around that long and they don't know about the stability of the company?

Umesh Mahajan

So we don't see them in most places. And in other places, with our big market share and our strength on the Ethernet side, they haven't been able to make any dent. In fact, we keep making inroads into the accounts they previously had.

Sam Chang - Turner Investments

Yes. And then, in terms of their technology itself, have you heard positive or negative things from your customers? Or was that your own take on their technology, in terms of the different platforms and the protocols that are a little bit weird, per your remarks?

Umesh Mahajan

Yes. For example, I don't think they're offering anything revolutionary or anything new. They're just following whatever Cisco and others are doing. Even when you look at their literature and slides, and they come up with these new launches or whatever they do, it's very similar. So I don't see any leadership from them on the innovation side, or anything new coming out. And since they are not doing that, customers don't see anything new, anything big over there — so why take that route? And the weird thing I talked about is that they took a storage protocol, because that's their strength [ph], and tried to do TRILL with that. That's not a good approach. TRILL has a defined standard, and you should follow the standard properly, because otherwise you will not be able to interoperate with vendors like Cisco or other vendors who implement TRILL.

Sam Chang - Turner Investments

Yes, okay. And then a separate question, on the earlier topic — I guess it's been around for a long time — the commoditization of hardware. There was a consultant I was listening to on a call a couple of days ago, from a big 4 consulting firm, whose customers were the likes of the Googles and the Facebooks and the Amazons, and he was incredibly bearish on hardware being a commodity — on Cisco and any hardware player. And the additional point I would like to ask you about: it seemed like he was saying that the likes of Google — he wouldn't name names — were potentially interested, and I don't know how this would work, in essentially selling what they've learned about commoditizing hardware and just adding the software layer on top. Have you seen any of that, or is this just an isolated case he was talking about?

Umesh Mahajan

We haven't seen that. If Google has done something in-house, we haven't seen them offering it or selling it as a solution to other customers. We haven't seen that at all.

Sam Chang - Turner Investments

Okay, okay. And then, what's your comment on the whole commoditization of hardware? Even though you guys have more software layers on top that add value, what's your rebuttal to the claim that hardware is being commoditized and the only value you add is on the software side? How do you compete against that?

Tal Liani

And, Umesh, before you answer, just one thing. Unfortunately, I promised Marilyn we'd let you off the call shortly after the hour. So that will be the last question, and anyone who has any other questions — we'll take them off-line.

Umesh Mahajan

Okay. So I remember Tal asking this question earlier about Cisco's silicon strategy. The way we make sure we compete against hardware commoditization and merchant silicon is the strategy I outlined: we will have a very aggressive roadmap. We are not going to wait 4 or 5 years before we roll out the next-generation silicon for our switching families. In all our switches, we are going to come out with rapid-fire changes, right? Every 18 months or so, you'll see the next-gen stuff come out. Just like Intel takes advantage of how many transistors you can jam into an x86, we are going to do the same. We are going to ride the innovation wave which the silicon vendors are providing, right? Today, we're shipping 65-nanometer, if I can get technical. Later, we will start shipping 45-nanometer, and we're already working on 32-, 28- and 22-nanometer, because we have to look a couple of years out there, right? We will be rolling out competitive silicon at cost points where it can be very, very competitive, so that we can meet not only our customer requirements, but have silicon which is very, very competitive out there. And so merchant silicon is not going to be much cheaper or anything like that. We will be right there.

Tal Liani

Excellent. Umesh and Marilyn, I want to thank you very much for joining us this morning. We really appreciate you taking the time. It was very, very enlightening. Thank you all as well for joining the call, and if you have any further questions, please feel free to reach out to me. Have a great day. Thank you.

Umesh Mahajan

Thank you very much.

Operator

Thank you for your participation in today's conference. This concludes the presentation. You may now disconnect and have a great day.


Source: Cisco Systems, Inc. - Special Call