David Flynn – Chairman and CEO
Dennis Wolf – CFO
Gary Orenstein – SVP of Products
Jeff Birnbaum – CEO, 60East Technologies
Steve Kleiman – Chief Scientist, NetApp
Paul Perez – CTO, Datacenter Group, Cisco
Jeff Rothschild – VP, Infrastructure Engineering, Facebook
Nancy Fazioli – IR
Fusion-io, Inc. (FIO) Technology Update Conference August 28, 2012 10:30 AM ET
Good morning, everyone. On behalf of David, Dennis and the Fusion-io leadership team over there, I'd like to welcome you to our Technology Briefing. This is actually our second annual briefing to coincide with VMworld. We were crowded around a TV screen last year, so I'm happy to have graduated to a slightly larger screen. Thank you for joining us and thank you to those of you who are joining us on the webcast as well.
Just a quick note on our agenda. We're going to start off today with David Flynn, our Chairman and CEO, who will present, as will Gary Orenstein, SVP of Products, and Dennis Wolf, our CFO. We're very grateful to a very distinguished panel joining us today, taking time out of their busy schedules to share some insights on our technology and approach, and we'll conclude with some Q&A.
Important slide here. We will be making some forward-looking statements during today's presentation. We did file our 10-K yesterday and have our 10-Qs and 10-Ks on our website. Please review them for important information. In addition, we will be making some references to non-GAAP financial information, so please do also visit us on our website.
So with that, one last comment. We do have a gift bag for you as you leave, so don't forget it. We've also uploaded some collateral material, and all of that is available on our website, so if you miss anything today it will be easy to find. With that, welcome, and please roll the video.
Thank you for coming out this morning to visit with the Fusion-io team and our distinguished panel, and thank you to those who are joining us by webcast. It's a real pleasure to see many familiar faces and a number of faces that I don't recognize in the audience. That video is about the best job of making our differentiation nice and concise. What I would like to do is take that and talk about what it means in terms of our view of the industry.
Now recently a vendor in the legacy storage world has presented a pyramid something like this, which I think helps detail the different places where you might deploy flash, where we will see flash deployed, where we are already seeing flash deployed. At the base of the pyramid is storage arrays that have flash in them to enhance their performance. Then there's all-flash arrays. There's appliances made of servers that have flash in them and then there's the flash within a server itself. So these are -- when we talk about how big is the market for flash and where will it be deployed, this is kind of in the enterprise, these are your options.
Now, this is a good pyramid. One of the things it states is that the closer you get to the applications, the more performance you get. On that point, we totally agree, but here's where we start to diverge. If you look at the implication of the pyramid, it's really talking about market size or market opportunity for flash technology. Implied is that the market opportunity in the storage array is larger. Also implied, as the corollary, is that flash in the server is the most expensive, and it's the fact that it's more expensive which presumably makes it "a niche market."
And this is where Fusion-io does not agree, because when you put flash in the server, well, first you have to realize for a traditional storage vendor when they provide more performance, they have to charge more, otherwise it turns the world upside down if the performance costs less than from the storage system. So if you look at how vendors are pricing their server-side flash, they are actually charging considerably more than the flash that they put in the storage array.
But that is their choice to do so, because they don't want to cannibalize their own market. Because if you look at it, when you put flash in the server you don't require storage networking, you don't need a separate box to connect to, and it's not a proprietary system that carries those additional mark-ups. Look at how much a Seagate drive costs. When that Seagate drive is in the server, it's not nearly as expensive as when the same drive is in a storage array. You're going to pay two times, three times, four times as much for that same drive. So the closed system architecture makes it more expensive. This is really the structural pricing of the flash deployment options.
Now what happens when you look at that is it changes the market size picture considerably. There are 10 million servers expected to ship in 2012. There are only 100,000 storage arrays from the largest storage vendor expected to ship in 2012. So when we look at the market size, it's 10 million servers versus 100,000 storage arrays. What you realize is that now this is much more congruent. Flash in the storage array is much more expensive and will be used much more sparingly than flash in the servers, and there's a lot more opportunity for flash in the server.
What this does is turn the entire thing on its head. Now you'll notice the difference between these two. Going back, this is a storage-centric point of view, storage at the foundation. This becomes an application-centric, server-centric point of view, which Fusion-io holds as the more valid point of view, because that's why data centers exist: to run these applications that crunch on data. Storage is there to service the application, not vice versa, and flash in the server makes that possible.
So one might ask, if it's more expensive in the storage array and not as high performance, then what reason does it have to be there? Well, the answer is it's a capacity tier. It is an opportunity for disks, and disks will be in the storage arrays and be the main part of the storage arrays. But when it comes to all-flash systems, what we are going to see is flash in the server itself, where it's software defined. It no longer takes a special appliance; it can be an off-the-shelf server with flash memory devices in it that software makes act as a SAN. This is the product that we just announced, ION. The ION Data Accelerator is a software stack that goes onto an off-the-shelf server and teaches it how to act as a SAN.
Now, today it's rather Spartan on features; it doesn't have a lot of the conveniences of a big proprietary storage system. The reason is that we have chosen to accentuate performance over those additional feature sets, and those features will only be introduced once we can do so while maintaining the performance differentiation of flash. But as those new features come online, you'll be able to do more of the high-end data management services from within the flash tier, which can live inside your array of homogeneous servers, the primary mode for building out cloud architectures.
Now, the final part I want to leave you with goes back to the presentation we saw, that movie segment, and that is that by moving the primary data from a backend proprietary system into the frontend in flash in the server, it not only gives you an opportunity to build those storage management primitives at higher speeds and within a commodity infrastructure, but it allows you to introduce new interfaces, new methodologies for how applications will interact with persistent data. You're going to hear more about that today as we talk about the product line, but this goes to the notion that you saw in the (inaudible) segment that when you introduce a fundamentally new medium for storage that has powerful new ways to access it, it can and will change the design of applications.
I'd like to quote a gentleman from Amazon who spoke recently at a conference on non-volatile memories, Peter Cowan. He said non-volatile memories will change programming as we know it. And the thing is, it will change it by making it fundamentally easier to persist data very, very quickly and very reliably, something that disk drives don't do. So we're looking forward to presenting this information about how the SDK works and how it fits into the overall portfolio. But with this pyramid, you can see the two primary things: flash in a server-based appliance, and down at the server level, the Software Development Kit that lets us provide richer services to the application.
So we talk about this as the software defined storage world, which we believe has been made possible because, finally, you can miniaturize the storage system small enough to embed it in the server. Like we've talked about from day one, it's the power of a SAN in the palm of your hand, and that means you can use open server platforms for the storage. I was recently talking with Mark Leslie, and he shared a concept that I'll credit to him: when small meets large, small always wins. You see it over and over again in technology.
Now with that thought, I'm going to turn the time over to Gary Orenstein, our SVP of products. He's going to go through the technology portfolio for you.
Thanks very much, David, and thanks everyone for coming out at this early hour here in San Francisco at VMworld. Thanks to those folks who have joined us on the webcast. I wanted to take a quick walk back through history. It wasn't that long ago, in the 1990s, that life was pretty simple: to get performance for databases and applications, we simply had to put some disk drives together inside a single server. It was great, and it was relatively cost effective.
But as those applications and databases demanded more performance, the only way to get it was to add more disk drives. And not being able to fit those disk drives into a single server, we had to externalize them into an array, initially connected with direct-attached SCSI. Then, of course, once we realized that we were spending a tremendous amount of money aggregating all of these disks together for performance dedicated to a single server, we said, boy, we'd better find a way to share that across multiple servers, and storage networking was born.
But now we're at a time when things have changed significantly due to flash memory. What happens when you can get all the performance that you originally needed right back in that individual server? Put another way, what happens when we take the power of a SAN and, with the advent of flash memory and all that it can accomplish, tuck that into a one-U server? This is the kind of transformation that we are in the midst of every single day at Fusion-io with our customers.
What this also means is that customers can easily deploy flash memory across a variety of deployment models, directly in the server for the maximum amount of acceleration, caching for the maximum amount of interoperability with the existing storage systems that customers have in place and that is accomplished with our IO Turbine software that we actually talked about starting last year at VMworld.
And now with the introduction of our ION Data Accelerator software, customers can deploy flash in the server and share that as a shared resource to other servers for maximum scalability, clustered architectures and most importantly the ability to choose any open platform server.
I want to walk through each of these deployment models, the direct model, the caching model and the shared model and show you how Fusion's broad product portfolio fits into these different areas. Let's take a look at the direct deployment models first which is how Fusion-io came to market and it remains one of the most critical aspects of our business. We have a wide portfolio of IO memory products.
Our ioDrive2 which many of you are aware of is the leading PCIe-based flash product on the market today. Our original ioDrive remains a formidable competitor in the market with the enhancements that we've made to our VSL software. And we're moving into new areas like performance work stations with ioFX where we've partnered with leading software companies to optimize applications for digital content creation, so that folks can work faster, save time, save money.
Let's move into the caching area, again the benefit here is for customers to get maximum performance at the server by placing flash close to the CPU and close to the applications, but also to be able to take advantage and keep their existing storage investment going. Also sometimes to relieve the load on that existing storage investment so it will last even longer.
IO Turbine is our software offering to accelerate applications and do caching in virtual environments. It transforms ioMemory into a powerful, intelligent cache. This helps customers increase the number and type of applications that they can accelerate. It helps customers increase the virtual machine density on a physical server. It helps relieve the load on strained network storage systems and in our opinion, it gives people the power to really unleash what virtualization can accomplish.
Sometimes there are customers who deploy in a non-virtualized environment. We have a caching offering called directCache which is optimized for those deployments as well, perfect for clustered architectures with Oracle or SQL Server to seamlessly accelerate non-virtualized applications. And to make everything easy to consume, we have bundles. In this case, our ioCache bundle combines 600 gigabytes of powerful ioMemory with IO Turbine software, a very popular offering in the reseller channel that makes it easy and simple for customers to deploy caching all in one shot.
Now let's move to the third deployment model, shared storage. Just a few weeks ago we launched our ION Data Accelerator software, which we are very excited about as really a way to change the way people purchase storage. With our ION software, we're enabling customers to pick an open platform server, fill it with the appropriate amount of ioMemory for their application or use case, and transform that server into a powerful server-based flash appliance.
And we've been able to do so at performance levels that beat proprietary boxes, which raises the question of why we need those proprietary boxes going forward when we're achieving the performance we need with an off-the-shelf, industry-standard, open platform server that can share storage across the network via Fibre Channel, iSCSI or InfiniBand. So that is the complete picture of shared acceleration as well.
Now, all of these deployment methods, direct acceleration, caching acceleration and shared acceleration, benefit from a significant amount of flash optimization technology. That starts with our management software, ioSphere, so that customers can, from a single station, manage thousands of ioMemory devices across their data center in real time, doing monitoring and management but also seeing the performance that's happening and drilling down where needed to understand how flash is interacting with their applications, not just within a single server but at scale across a data center.
Deeper inside we have our Virtual Storage Layer software, which transforms ioMemory into a host-visible block IO storage interface. Most importantly, with VSL we have bypassed all of the traditional storage baggage, legacy protocols like SAS and SATA and RAID, to give us native access to this powerful new medium. And there is our Software Development Kit, which is breaking new ground in giving flash the power to showcase its full capabilities. We now have numerous partners who are programming to the APIs of the Software Development Kit, giving them native access to ioMemory, harnessing the power of what flash can do, reducing the amount of code they need to write to get their applications up and running, and increasing performance. I'll talk a little bit more about this in a few minutes.
So I wanted to -- it's hard sometimes, with the amount of activity in the market, to understand what makes Fusion-io unique. We saw a little bit of that in the opening video, and David explained it a little further, but I wanted to give you my take on the market over the last several years and what makes Fusion-io different. When the industry started deploying NAND flash, we all took the same approach of making it look like block IO. The advantage of that was that applications could use it with no change whatsoever. This was a great idea. Any application that needed performance could simply access flash as if it were accessing other storage, except now it was a heck of a lot faster.
However, behind the scenes there were some significant differences, and this is where the two roads part. Not only were people making flash look like a disk, but many folks chose to also architect it like a disk, and that was not as good an idea as presenting block IO, because architecting like a disk retains a lot of the legacy architecture, infrastructure and protocols designed for rotating media, which are simply not capable of allowing flash to harness and deliver its full potential.
Architecting like memory, however, regardless of how the flash is presented, does give that capability, and that is the path that Fusion-io chose from day one: not to architect like a disk but to architect flash memory like memory, giving us the opportunity not only to deliver block IO to applications but to deliver enhanced IO, functions that disks aren't capable of, and to go even further and showcase and deliver flash memory to applications as memory.
Digging in a little bit more to the enhanced IO, and again starting from block IO as one capability, we have a number of offerings in the Software Development Kit, such as transactional block interfaces like Atomic Writes, the ability to persist multiple blocks of data simultaneously for databases. We've introduced Atomic Writes into the MySQL community, where we see performance improvements, code reductions and a simpler overall deployment. There are also native file systems, where we're seeing performance equal to or better than the raw block device. That is not something that has happened previously.
There are also logging interfaces and key-value stores that can collapse a number of translation layers, simplifying the data path and simplifying the code. And as you'll notice, the application bar up top gets simpler, because before, application developers would have to build a lot of that lower-level infrastructure themselves in order to get the performance and reliability they needed. Now that it's delivered to them, the job of application development is simplified considerably.
And there is memory access, again delivering the true potential of what flash is capable of, both through Extended Memory, which is transparent to the application, and Auto Commit Memory, which persists memory through system reboots, a big plus for folks who are working on in-memory data architectures.
Fusion-io accelerates a wide variety of applications across the enterprise. Databases and virtualization are primary areas, but as you can see from this spectrum, there is hardly an application that Fusion-io cannot help accelerate, and we are thrilled to be working with so many customers across so many different use cases to help them improve the speed and agility of their business.
Our customer case study roster grows every single day. There are more than 35 case studies at fusion-io.com/case studies. And as you can see from the numbers at the bottom of the chart, it is routine for us to help customers achieve a 5X improvement, a 10X improvement, or, in one of my favorites, with Datalogix, a 40X improvement in the time to complete their complex data warehouse queries. Datalogix was running these complicated queries on a disk-based infrastructure, and when the queries came in, they would simply overwhelm the system. Moving that data from a conventional legacy storage array to Fusion's ioMemory gave them a 40X improvement in the response time for that application.
I like to look at deployment architectures in the data center. There's a lot of discussion about where flash memory will go and how will it be deployed. But I think it's important that we look at how data centers are being architected today compared to yesterday. Yesterday, we used to look at the architecture in a vertical layout. We had the application server supported by a database server supported by storage array. And we thought about it vertically, but there is no better place than a place like VMworld to talk about the new infrastructure of going horizontal.
And when I go out and I talk to customers today, what I hear is the interest in picking an open platform server and scaling that out horizontally, picking the hypervisor or hypervisors I want to deploy and then on top of that folding in the services I need, whether that's applications, databases or others. Now as seen here, it's very hard to fit a storage array into that model. It just doesn't fit.
But flash memory, when placed inside those tiny little servers, gives them power, more power than a conventional storage array. If I want to spin up a database that can handle thousands or tens of thousands of database transactions per second, or spin up an application that requires high-performance IO, I can do that in a heartbeat because of the flexibility of this architecture.
And that brings us back to the flexibility of the customer deployment with the ability to place Fusion's ioMemory directly in the server, customers can now deploy direct acceleration, caching acceleration or shared acceleration. And should they decide that they want to do something else, all the same infrastructure applies.
Two thoughts to conclude on before I turn it over. One is that we are now in a world of x86 servers delivering better performance than proprietary arrays in the storage market. To me this is a major turning point, where we will see further penetration and further adoption of the x86 open platform ecosystem in the storage market.
Secondly, when we talk about solutions like the ION Data Accelerator software combined with Fusion ioMemory and open platform servers, we're now in an all silicon world for storage. And we all know that once we enter that all silicon world, we have Moore's law at our back and things will just get faster and cheaper and better and more reliable for us, for our customers and for the entire industry.
With that, I want to thank you for your time and attention today and I'd like to turn it over to Dennis Wolf, our Chief Financial Officer.
Thank you, Gary, and good morning, everyone. Thanks for joining us. We're very excited to have you. I'm going to review the financials for the year and repeat the guidance we gave a couple of weeks ago for those who may not have been listening. First of all, we're really excited about the business. I like the word persistence, and I was thinking here, we've had a really good year, but it's been persistent. If you look, and I'll show you, over the course of the last four or five years, we've just continued to improve our performance.
It's come about by expanding our software defined portfolio; Gary spoke about it. In fact, we have the most robust road map that we've ever had, we're executing against that road map, and it continues to get larger. We have a diversifying customer base, with about 1,000 more customers this year than at the same time last year. And when you look at that customer profile, you can see that our core business, those customers not represented by our top strategic customers, has grown in excess of 100% this past year.
In addition, our strategic business has grown by 60%. Our strategic business now represents about 53% of our business, versus the mid-60s when we started the year. A 50/50 split, we believe, is a good thing for us to have, and we spoke about that on our call a couple of weeks ago. We continue to invest in growth. At year end we had 669 people, with roughly 300 in R&D or operations and another 300 in the sales organization. That's where our investments are, and that's where they'll continue to be.
We continue to have a lot of expansion opportunities, both in the road map and in the new software defined storage portfolio that we've introduced and embraced, as well as in the sales organization. Quite frankly, it still remains a land grab, and we're trying to expand as quickly as possible.
Our operating leverage comes in a few ways. One is the go-to-market strategy. Not only do we have all of our sales people working the street, but we also have (inaudible), alliances and partnerships that we're proud of, and that's expanding. The other way that we leverage our operations is simply improved operational performance. We've scaled fast, we're doing very well with that, and it's providing efficiencies that we're very pleased with.
And of course our product road map continues to expand. So those are the three leverage points. Finally, we have a very healthy balance sheet. We ended the year with $321 million; we were a source of cash for the year, and that gives us the strategic flexibility that we'd like to execute on.
So let's start with the financial position. We're only showing here since the IPO, but it's been an exceptional performance, going from $70 million to $107 million at year end. And within that $107 million number for this past quarter, Apple and Facebook were 53% of the revenue. Our core revenue grew 20% sequentially and, again, over 100% year-on-year.
Our gross margin was 57.6%, which is within our target margin range of 56% to 58%. We're pleased with our operating margin of 11.2%. We continued to show strong profitability, with, again, $320 million in cash and equivalents. An interesting point here is that we had $29 million in deferred revenue, up from $12 million just a year ago, as we build out that service, support and future software stream.
So you look at this, and, as I said when I started, persistent. We had a 228% CAGR. Now, I realize that when you go from 10 to 360, you're going to end up with a 2X CAGR, but we went from 10 to 360 in four years, so we're really pleased with that. Our strategic accounts are on the right here, and as you see, it's $107 million in equal proportion, about 50/50 now.
Our customer base is very diverse. We're now in about a third of the Fortune 500. This came about through both our direct efforts and the robust partnerships we continue to maintain with HP, IBM and Dell. And what we're really excited about is that we now have an emerging partnership with one of the ever-innovative companies, Cisco, and with the leader in storage, NetApp. So we're very pleased to have them, and also to have them on our panel.
Strong customer momentum: 3,500 customers now, up from roughly 2,500 customers last year. In fact, when we were doing our IPO, we had roughly 1,500 customers, so we've gone up by about 2,000 customers in the course of six quarters, pretty awesome. And when you look at the customer base, we have 40 end-user customers greater than $1 million. Eight of those came in the last quarter, and half of the 40 came in the course of the last three quarters, which speaks to the strength of the product road map.
The channel increases our touch points. We're very religious about building as many robust channel relationships as we can, whether they come from alliances or resellers or (inaudible), and as you can see here, our touch points were in excess of 300, up from about 180 in this category a year ago.
Our investments have been, and will be, in growth. As I said when I started, we're 669 people, and of course we're higher than that number now. We're going to continue to hire as aggressively as we can; we have the tiger by the tail. Roughly 300 of our employees are in engineering and operations, and within engineering about half of our engineers are software engineers. We have about 300 people in the sales organization, as you can see right here, roughly half of whom are quota-carrying; they're either AEs or SCs.
I talked about operating leverage and the three levers for it: go-to-market strategy, a vastly improved product road map, and improved manufacturing efficiencies by virtue of our scale. What you want to see, and we all know you know this, is that the gap here continues to widen.
Our ratio of revenue growth to expense growth over the past year has been between 1.2 and 1.3. We're in a land-grab situation here, and you should assume that our ratio will continue at about that number on the way to really strong performance. We have said that our long-term business model is a greater-than-20% operating margin model.
Our cash, at $321 million, again enables us to pursue strategic opportunities. Our inventory has come down over the last quarter to about $60 million, and our turns are improving; that's the manufacturing efficiency I'm talking about. Our deferred revenue has built up. And then finally, on guidance: as we noted on our call a couple of weeks ago, we grew by 82% this past year.
Our gross margin was 56% for the full year and our operating margin was 12%. What we're expecting for Q1 is a modest increase in revenue, a gross margin of 56% to 58%, which is within our target range, and an operating margin of roughly 10%. For the full year, the indication we gave was 45% to 50% growth in a very robust market, with a 56% to 58% gross margin and an operating margin of 12% or so.
So with that, I want to turn this back to David. I hope you're excited about the business; we certainly are. And David has a great panel, too, that we're sure you're waiting to hear from.
Thank you, Dennis. If I could ask our distinguished panel to come on up, have a seat, got nice high chairs right up front. I'd like to start by thanking these gentlemen for coming out this morning. They were generous enough to do that and spend their time with us and for that we are very grateful.
Now, when we announced that we were going to hold this panel, we solicited questions from many of you and had a chance to talk, and we've incorporated many of those questions into what I will be asking here. Please hold any further questions until afterward; we're going to have a reception where we and the team will be available for questions.
Let me just take a minute and introduce folks. I'm just going to do the top-level thing and let them tell a little bit about themselves. First, Paul Perez, the CTO of the UCS Group at Cisco. Steve Kleiman, the Chief Scientist of NetApp, has held that role for over a decade now and is the inventor of NFS, among other distinctions. Jeff Birnbaum is the Founder of 60East Technologies, a startup and an Andy Bechtolsheim-funded venture; formerly Jeff was a CTO at Bank of America Merrill Lynch. He quit his job there to venture out into a startup.
And last is Jeff Rothschild, who has joined us from Facebook. I believe, what's the number, was it employee six? Something like that. I like to talk about him as the adult supervision for all the young bucks there, brought in very early by Accel. Jeff is also very distinguished as the Technical Founder of Veritas, the storage software company, so he knows storage inside and out as well.
So maybe as the first kind of thing would be to ask each of these gentlemen to provide just a brief bit of a background around themselves and what their current role entails at their organizations. Would you like to start, Paul?
Sure. Good morning. As David said, I'm Paul Perez. I'm currently the CTO for Cisco Unified Computing System, as well as the Nexus and MDS switching portfolios and the virtualization portfolio. I've been at Cisco for only a few months; I went there late last year after 27 years at Hewlett-Packard working on mission-critical systems, on ProLiant and BladeSystem, and on the storage portfolio. And I've been engaged with Fusion-io on and off since three or four years ago.
Hi, I'm Steve Kleiman. I'm currently Chief Scientist at NetApp. I've been in the industry for a long time, first at Bell Labs, where I believe we worked on the first x86-based UNIX machine. I joined Sun back in '84; I think I was the fifth or sixth guy in the OS group, something like that. I did NFS, was project lead for the port of the operating system to SPARC, was a member of the SPARC architecture committee, and was project lead for multithreading in Solaris. I left to join NetApp in '96 and did their high availability and SnapMirror architectures. I was CTO there for five years and decided to step back, go to fewer meetings, and concentrate more on the future as Chief Scientist. What I have been working on for most of the past five or six years is flash, and we have several products in this area. I'm proud to say we just did this announcement with Fusion, so it's a whole portfolio.
Thanks, Steve. Jeffrey?
I'm Jeff Birnbaum. I've spent about 20 years in financial services, most recently at Bank of America. Given where technology was in terms of processing power, networking and this new memory tier, I had an idea and decided to go do a startup to rewrite the messaging business. So we'll get a chance to talk about some of the things that drove me to that, relative to Fusion-io.
I'm Jeff Rothschild, and I guess my official title is the old guy. I've been working in this space for longer than I like to admit, but I'll put some dates on it. I think my first project was actually a solid state project in 1980 at Intel; it was a refrigerator-sized unit as well. Not surprisingly, the first thing I wanted to do when starting my own company was build a small one. So I started a firm in '82 to build a very small solid state disk, and we sold a couple. I learned a great lesson then: don't do your own hardware. Other people were better at it. But the focus of my career has been in operating systems and in particular storage systems. So, Veritas Software; I've worked with a number of other firms as an advisor in the storage space, and I've been with Facebook since 2005.
So Jeff, the next question, in a very broad sense: how do you see the introduction of flash and solid state impacting your business today?
It changes everything. Quite literally, it's going to change every layer of the stack. And as you alluded to in your introduction, this is not a disk replacement. You can certainly use it effectively as a disk replacement, and people have achieved tremendous benefits in doing so. But the full potential of the device requires changes to the application stack. And interestingly, at Facebook our first deployments of flash were not replacing disk in existing applications.
The authors of some internal applications recognized the potential for flash and integrated new functionality into the application to use it as a form of intermediate storage: block addressed, but with something closer to memory-like speed, while living within the restriction of block addressability. And those were not applications that previously ran against disk. So we were probably a year into our first flash deployment before we had a single application where we simply unplugged a disk device and started using flash in its place.
Do other panel members want to talk about how they see flash impacting their businesses?
Let me just describe something – I want to put a picture on what Gary was talking about with this SDK, because for many of you it's probably a mystery. For folks like myself and the other panelists here – I've been programming since I was 12 years old – I sort of know what he's talking about.
So let me paint a picture. Part of the reason we're at a phenomenal point in time with this technology is that Fusion-io is taking it beyond the hardware and driving it into the software. Many of us who have lived in this industry for a while have run up against impediments to what we're trying to achieve.
And for the first time we have a set of capabilities that begins to unleash that. If you want to know where Fusion-io is positioned relative to competitors, I use a baseball analogy: many of the other firms are trying to get to first base, trying to produce a piece of hardware, while Fusion-io is heading into third, because of the software. The software is the differentiator, and I think this alludes to what Jeff was saying: it's going to impact every layer of the stack, and most of the stack is software. What we have today are old stacks, stacks that were based on what was possible 20 or 30 years ago.
And there's a chart that I produced over the weekend that I think illustrates this better than anything. Can we put that chart up? There are actually two, but let's go to the next one. I can paint a very interesting picture with this next chart of where things are really heading and the impact on business. Can we get that? Yes? No?
Unidentified Company Representative
Here we go, it's loading now.
This chart – the orange line – for those of you who think that orange line is good, you're wrong. The orange line is horrific. That is the current world. Now imagine what this means. This line has perfectly regular, bad periodicity: every n times you do something, it stalls badly. The height of the line is the latency of what you're doing. So imagine you're driving along a road and all of a sudden you hit a stop sign, and then you go further and hit another stop sign, as opposed to getting on the freeway where you just go. That's what the blue line is: no jitter, no disruption to making progress. And not only is there no jitter, the latency is actually better.
That blue line is part of the SDK that David and team are talking about. That is something you cannot create overnight. I think David and I first talked about DFS over two years ago. And he was telling me every week: it's six months from now, it's six months from now. Well, it's here finally. We finally tested it. And I called it the awesome chart. This is awesomeness (Inaudible). This tells me that as a developer I can go out and produce a piece of software that nobody else right now can, because this gives me a capability that we haven't had. And every other file system will produce that same orange line. I have ext4 up there; for those who don't know it, ext4 is the default file system in Linux. There are other file systems, and they all have that same exact behavior, and it's a total hindrance.
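The periodicity being described is easy to observe with a simple microbenchmark. The sketch below is a hypothetical illustration (not the benchmark behind the chart shown at the event): it times a series of synchronous writes and compares worst-case to median latency. On a file system with periodic stalls the ratio is large; a flat "blue line" keeps it near 1.

```python
import os
import tempfile
import time

def write_latencies(num_writes=256, block_size=4096):
    """Time each synchronous append; periodic stalls show up as latency spikes."""
    block = b"\0" * block_size
    latencies = []
    fd, path = tempfile.mkstemp()
    try:
        for _ in range(num_writes):
            t0 = time.perf_counter()
            os.write(fd, block)
            os.fsync(fd)  # force the write to stable storage
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return latencies

def jitter_ratio(latencies):
    """Worst-case over median latency: ~1.0 means a flat 'blue line'."""
    s = sorted(latencies)
    return s[-1] / s[len(s) // 2]
```

Plotting `write_latencies()` against operation number reproduces the shape of the chart: spikes at regular intervals on a stalling stack, a flat line on one without jitter.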
Unidentified Company Representative
So I agree with you gentlemen that flash changes everything. One of the interesting things is that when we first started speaking about flash, five or six years ago at this point, we came to a really interesting realization: first of all, disks don't go away for the foreseeable future; there are still two orders of magnitude of difference in dollars per gigabyte. What's interesting is that when you have such a vast amount of data out there, it can't all fit in flash, so you still have to have disk drives. But it changes the nature of how the data moves around the system. You can have flash at multiple levels in the system, and one of the first things we did with it was use it as a cache in our servers, and that worked fantastically well.
It made things just faster. Working with people like Fusion, we realized we can also put flash in the server, where it is likewise great as a caching mechanism. Why caching? Why is that interesting? Because especially in today's modern virtualized environments, you don't know where your application is going to be and on what server; it might be here today and over there tomorrow, and you don't know which data it's going to be using out of that vast array of data. What caching allows is for the data to move exactly where it's needed – and, especially in the LUN world, not the entire LUN, just the blocks of data that you're actually using, right at that point.
And what's interesting about this is that the marriage of this kind of technology – what Fusion has in the caching world – and what NetApp does is fantastic. Because if you think about it, the data is in the server, and when it goes out to be persisted, you have a choice: you can go to a peer, and that's fine, but the stuff that isn't used anymore still has to go back to the disk, where it's stored efficiently for the long term, and be recalled again when it's hot and needs to be used.
So it's a marriage made in heaven with NetApp in particular. One of the things our file system is good at is taking all these random blocks of data that come down from the caching layer and writing them serially on the disk, because that's what disks are good at. And having our block-by-block metadata allows us to do this on a fine-grained basis, and that's really the difference between what NetApp does with virtual storage [tiering], with Fusion as a part of it, and the other vendors in the universe.
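The mechanism described here – a cache that absorbs random writes and destages them sequentially – can be sketched in a few lines. This is a toy model for illustration only, not NetApp's or Fusion-io's actual code; the class and method names are invented:

```python
class WriteBackCache:
    """Toy model: random block writes land in a flash-like cache, then are
    destaged to the backing 'disk' in sorted (sequential) block order,
    which is the access pattern spinning disks handle best."""

    def __init__(self, backing_store):
        self.dirty = {}               # block number -> data, absorbed randomly
        self.backing = backing_store  # dict standing in for a disk

    def write(self, block_no, data):
        self.dirty[block_no] = data   # random write is absorbed here

    def destage(self):
        """Flush dirty blocks to the backing store in ascending block order."""
        flushed = []
        for block_no in sorted(self.dirty):
            self.backing[block_no] = self.dirty[block_no]
            flushed.append(block_no)
        self.dirty.clear()
        return flushed
```

Writes arriving in the order 9, 2, 5 are destaged as 2, 5, 9 – the random front-end traffic becomes a sequential back-end stream.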
Yes, if I can interject as a participant on the panel, this is one of the things I thought was so very interesting. High performance storage arrays – the arrays that were built to aggregate massive numbers of disks and have each one working independently on finding different chunks of data, the SAN world as opposed to the NAS world – it's ironic that those that do random reads and random writes don't benefit as much as those that do the sequential write pattern of a NAS system from NetApp.
There's an interesting history here: Steve was actually an advisor to the ioTurbine team and the caching product, and that's really what brokered the relationship. Because of that immense synergy – having a cache at the server that sorts out and serves the random reads, while the writes go back to the storage array and get serialized on the disk – it's really ironic that a system which writes to the disk as if it were tape, sequentially,
is now much more benefitted than a system that was presumably designed for performance and for doing random reads and writes. So it's not something easily replicated with a traditional performance storage array, to do what can be done with caching out front and a NetApp filer in the back. This brings up an interesting question, though – one that came from the audience.
Okay, so what's the interplay? I mean, we'd all agree there's the performance tier and the capacity tier; what are the implications in terms of driving demand for those? What do you expect in that area, Paul: with the advent of flash – flash at the server, shared flash, the ability to accelerate the performance tier of data – what does that mean for the capacity tier?
Well, I was thinking that the market has really moved to be a computing market. It used to be that you'd think of a server, a storage system, a switch – those are computers and were viewed as platforms – but really the platform today is the ensemble of all that: the data center, or a collection of data centers, depending on the size of your problem set. And from that perspective, the network is becoming a lot more like an inter-processor communication mechanism.
Right, it's very interesting, and I think the advent of flash has done things like generate demand for 10 gigabit down to the server. In a computing environment you are constantly seeking to achieve balance; you're removing bottlenecks and balancing the system. I think you and I have drawn similar triangle charts in the past, where you can create very interesting performance-cost arbitrage between main memory and flash. With the current limitations of flash, often expressed as multi-level memory, I think ultimately where this is going is a technology that is scalable as system memory, which I believe puts even more emphasis on, and highlights, the advances Fusion is making in its software ecosystem.
I've heard people say that flash is the new disk and disk is the new tape. I think it's really more that solid state will become the new memory, and there's still a role for external storage. Keep in mind there's an explosion in unstructured data happening – tons of data being generated every day; you have some very big numbers on your website that you quote about that – and the ability to retain, organize, protect, and persist that data for very long periods of time is a key mission for external storage, and it'll still be there.
I find that fascinating, and I agree wholeheartedly: when you remove the constraints on storing and accessing data quickly, it will create more appetite for the capacity tier. It will only accelerate the growth of data and the need for improved disk-based systems with better data management capabilities.
We've noticed that already, even before this, with flash as a caching tier inside our systems. The flash we use inside our systems is used as memory, not as disk – Flash Cache – and what we've noticed is that you can now use the big cheap disks, and you don't have to do anything with the higher performance disks that are more expensive, or what we don't show (Inaudible) [1:07:18].
You don't, that's right. The other guys did – that's why they can't benefit from having been liberated.
Right. What's interesting is you can use the cheaper dollar-per-gigabyte disks and still get performance out of them. And what's more, all the things NetApp has done to be storage efficient actually help in this situation.
It's like having 10 gigabit Ethernet when actually –
Right, and it doesn't cause a performance degradation, because you've got this caching layer in between. So it's really a marriage made in heaven. We can do what we're good at: having a performance tier, having flash where it needs to be shared widely even when servers don't have flash, but also being very storage efficient, holding all the data you need efficiently and then serving it up where it's needed.
Let me take a different angle on that, which gets to the same point we're talking about here. If you look at this opportunity of taking flash with a software layer that makes the flash subsystem look not like storage but like memory, we're really talking about the holy grail for programmers. Programmers for the last 20 to 30 years have been cheating, cheating badly. Because what they really want is instantaneous access: low latency, high throughput, persistent data. Which means that as a programmer I can do stuff, and if the system were to crash, I don't have to worry much because I can get right back to where I was.
Which is slightly different from: I write a huge number of transactions and then I need to save them for all time, and I want to push that off. So what we're talking about here is, as a programmer, I now have a huge scratchpad. I mean tremendous – we're talking terabytes of scratchpad that I can go gangbusters on. And then something in the background bleeds that off to more medium- and long-term storage.
But then also, because of the caching capabilities, it reads that back in some efficient way if I need to get back to it. These are capabilities that most programmers today haven't had and didn't think were possible. And what's even more interesting, as the technology progresses – as was alluded to with phase change memory or whatever you want to call it – the software being built today is exactly the same software you're going to need when the underlying solid state technology gets better, which is what's so impressive about what's happening.
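The "memory-like but persistent" programming model described above can be approximated today with a memory-mapped file. The sketch below is a minimal illustration of the idea; the function name is mine, and a real flash-backed scratchpad would use Fusion-io's SDK rather than a plain file:

```python
import mmap
import os

def open_scratchpad(path, size):
    """Map a file into memory so it behaves like a byte-addressable,
    persistent scratchpad: data written through the mapping survives
    reopening once it has been flushed."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)    # ensure the file backs `size` bytes
    pad = mmap.mmap(fd, size) # writes to `pad` go straight to the file
    os.close(fd)              # the mapping keeps its own reference
    return pad
```

A caller treats the returned object like a mutable byte buffer (`pad[0:5] = b"state"`), calls `pad.flush()` at a consistency point, and finds the bytes intact after a restart – the "get right back to where I was" property without an explicit save step.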
Jeff, nobody manages data on the scale that Facebook does. What do you see of that interplay between the capacity tier, the performance tier and the applications that use them?
Well, you have to look at this either as a point in time or in terms of where we think the industry is going. If we presuppose that the increased demand for flash and the application of flash today will drive toward lower prices and greater densities, then you can look toward a future where more storage is flash. But today, let's just say it's going to be a hybrid: you'll have some applications that are driven by capacity requirements, and they're likely to be largely disk-based.
You're going to have other applications that are IOPS-driven, where access performance is the constraint, in which case there's no question that flash is dramatically cheaper than any moving disk when viewed in dollars per IOPS. And of course you'll have those applications that exhibit some form of access locality – time-based or just general access locality – that can benefit from caching, and there you'll find hybrid approaches where caching is going to be very effective. So that's looking at today. If all of this drives the industry to provide continued improvements in density and dollars per given density, then I think you'll see the benefits of flash applied to a broader range of applications.
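The dollars-per-IOPS point is simple arithmetic. The numbers below are made up purely for illustration (they are not anyone's actual pricing), but they show why an access-limited workload flips the economics:

```python
def cost_per_iops(device_price_usd, random_iops):
    """Dollars paid per unit of random I/O throughput."""
    return device_price_usd / random_iops

# Hypothetical figures: a 15K RPM drive doing ~200 random IOPS
# versus a flash card doing ~100,000 random IOPS.
disk_cost = cost_per_iops(400.0, 200)          # 2.0 dollars per IOPS
flash_cost = cost_per_iops(10_000.0, 100_000)  # 0.1 dollars per IOPS
```

Even at 25x the device price, the flash card in this made-up example is 20x cheaper per IOPS, while the disk remains far cheaper per gigabyte – exactly the hybrid split outlined above.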
Maybe another way: if I view this just from application classes and look at the applications where we use flash, where we use hybrid, and where we're using just straight disk, it shouldn't surprise anyone that videos are clearly placed on disk. Photos are also on disk, but you can start to make an argument that as flash prices decrease, there could be a later application for flash in photo storage.
Databases, or anything that is measured in terabytes – it'd be silly to deploy on disk, really. Certainly the operational benefits, the power, the reliability and of course the performance all (Inaudible) in the direction of flash, and I just think this curve is going to shift over the next five to ten years to where you won't ask this question.
Yeah, I think there's a different vector that you might be missing or downplaying here, which is that what people dream of today, when they're given a canvas they can create new stuff on, is what drives the demand and changes that line of what's flash and what's disk. Because it's been seen over time that the more we can get from our computing experience, the more we want to do. We have a set of problems that we think we can solve today, and we're not thinking about the problems we think are too big. And then you say, you know what, we've now jumped up a level, let's go after those – and we'll need so much more. So we talk about a terabyte system of flash today; we could be talking about 10 or 15 terabyte flash systems tomorrow, or whatever technology is there, because we want it and we can think about solving those problems.
Yeah, and I've talked to a few customers that are looking at what they call thin layers of solid state, optimized close to applications – on the order of tens of terabytes today in the enterprise, moving to hundreds of terabytes – and then moving the majority of what's underneath to NAS, and especially clustered NAS, for balancing and performance. You always optimize for what's expensive; you minimize the penalty of what's expensive. And when something that has been expensive is all of a sudden cheap, it completely changes the product, and the people who recognize it early gain a huge advantage.
Unidentified Company Representative
So I guess to try and answer: I don't see disks going away, but I do see their role changing over time, and I think the combination of disk and flash is going to be with us for the foreseeable future. So suppose these economic differences and performance characteristics were, as you said, frozen in time right now. What more could applications benefit from even at those economics, and what would need to happen to be able to use flash? What is the…
I think we've mentioned it here already this morning: the software we're running today has evolved for a certain environment, for a certain set of constraints. Pretty much everything in enterprise or internet infrastructure data centers today grew up and ran on a set of assumptions about the speed of devices – in particular that disks were slow, memory is very fast, and you need to balance between the two.
Flash won't be fully exploited until we're running software layers and middleware built around a different set of assumptions, that understand the properties of flash. You make different tradeoffs: maybe compression is more important than access order, so you spend less work trying to serialize your writes and a little more organizing your data so that it's more highly compressible, because after all it's still an expensive resource. Those are the types of things I see evolving, and that occurs slowly; these projects take some years to really gain traction. MySQL has been evolving over 15 years and has done a tremendous job of working with [disk]. The Oracle database has been very well tuned to support the limitations of the devices it grew up with. I think all of this will take a bit of time, but we're going to see infrastructure software that is designed from day one around flash and would fail miserably if you placed it on a disk.
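One concrete version of the tradeoff mentioned above – spending effort on compressibility rather than on serializing writes – might look like the sketch below, which groups similar records together so the compressor can exploit their redundancy. It is an illustrative assumption about the design style, not code from any of the systems named:

```python
import zlib

def pack_records(records):
    """Concatenate similar records so the compressor sees their redundancy.
    On flash, random placement of the resulting blocks costs little, so the
    effort goes into compressibility rather than write ordering.
    Assumes records contain no newline bytes."""
    blob = b"\n".join(records)
    return zlib.compress(blob, 9)  # level 9: spend CPU to save capacity

def unpack_records(compressed):
    return zlib.decompress(compressed).split(b"\n")
```

A disk-era engine would instead spend that effort batching and ordering the writes; here the ordering is irrelevant and capacity is the scarce resource.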
Unidentified Company Representative
Yeah, [Denise] can you go back to that chart we had up, the one with the bad periodicity? What's amazing about that chart is that you'd say to yourself, as a software guy: hey, if that's been known for so long, why wasn't it solved? There are clever, clever people out there – why hasn't anybody solved it? I'll tell you why. Because without a Fusion ioDrive they wouldn't have seen it. You couldn't get that bottom-line latency, because the latency of a disk is so high that the periodicity never shows up. It's only when you try to go at a much faster rate, and you have the medium that allows it, that you start to see it. And this is really the thing: most of the software built over the last 20 years has been built with assumptions, as Jeff points out, that are changing – and changing radically – and a lot of that software needs to get redone, for the better.
I mean, the advances that Oracle and others are going to make over the next five to ten years are dramatic. The race in the database world and these other worlds will be which teams can migrate their code faster than the others to take advantage of this new medium, because the capabilities are unprecedented.
So what was it that originally attracted each of you to get involved with Fusion-io? What's the history there?
Unidentified Company Representative
Paul, maybe just start.
I mean other than your charming personality, or…
That's a great (Inaudible) yes, of course.
To me, it was the fact that early on, flash was being positioned as a premium technology for storage, because it was accelerating a storage environment. I have been of the belief that you should actually view flash differently: it's cheaper than DRAM – in today's incarnation maybe not as fast, with different access semantics, block versus byte addressable – but I thought it could enable some very interesting performance-cost tradeoffs in optimizing for scale. In that environment, where you could have scalable persistent memory sitting on top of a long-term repository, you could really do amazing things, especially as interconnect speeds continue to increase. That was one.
The second was the fact that rather than treating it as a component, the team here at Fusion was looking at it from the application down into the infrastructure, and their focus was on understanding workloads and tuning for workloads. I think the phrase you used in the past was reducing IT waste. That was attractive because those were exactly the types of issues our customer base was grappling with.
Thank you. Steve?
Well, actually, there are really two connections. The first, as you mentioned: I was an advisor to ioTurbine before they were actually ioTurbine. That was one connection for me personally. But the other connection is that we realized from the beginning that flash meant you could now separate IOPS from capacity, and that there was an important place for it on the host.
And when NetApp went to work with other people on an open basis to do this caching tier in the host, there were obviously a number of vendors who had flash there. But when you look at the environment, Fusion-io is really among the best of breed on a performance basis, and if you want to do it at that level of performance, you have to go with Fusion. So we were happy to do that, and it's a valued relationship.
Yes. In terms of myself, I would say competition. I came to Fusion-io because I was tasked at Bank of America with building out the next generation electronic trading facility, software and hardware. In 2008 I was looking to solve two problems: latency and throughput. They are different problems, and the holy grail would be solving both at the same time.
I got a Fusion-io card on the hunch that it could help us with some of the things we were trying to do transactionally and at high performance, ran some tests, and it did almost exactly what I wanted. I think now it does exactly what I want. But I saw the path – I saw that you could get there – and that began the discussion, actually, with David on the software side. The hardware was pretty darn good, but we needed some additional software, and that's really how I got invested in Fusion-io.
We started looking at flash with some intensity in 2007, thought through the various ways it could be deployed and the different form factors, and recognized that with our scale-out architecture, the best solution was going to be one where the flash was distributed through the servers. But why Fusion-io? It's not just the product, it's the company. When you choose a vendor to work with, you are looking at far more than who wins a performance benchmark or gives you the best numbers on some chart. It's: I've got the following set of problems – how does the vendor address those problems? Do they own them, or do they aggressively try to identify them as somebody else's problem? So when choosing a vendor, we look for those who look to own the issues and always look for ways to improve their own product, and we found we had a very good working relationship with Fusion-io. It takes some time; I think we were hammering on these things for well over a year before we put one into production. We didn't even have product at the time when we first engaged.
Yes, that's a good way to say it. So we served maybe as an extended QA lab for the first half.
Don't worry Dave I have been told all your code is out.
But the bottom line is that we look for a relationship where we can work as a partnership, with one focus, which is making a great product. It's not that we can't do that with other vendors, but no matter when you start down that road, there is a lot of work involved.
Thanks, Jeff. Let's see – let's open it up to: what do you wish Fusion-io did as a company, or had in its product portfolio? I mean, I've got some of my top partners and customers here, so when is there a better time to ask: what more do you want, what additional things can we do better?
Well, I can start. It's public information that we've announced we entered into a relationship to deliver product that we will be consuming in our UCS products. So I am very interested in looking at how we can integrate more tightly into our manageability ecosystems with service profiles. What drove us, by the way, in my current job to work with Fusion was customer demand.
We are a relatively young server business, growing pretty fast, and we've grown on the advice of our customers. I have had the fortune of having Jeff Birnbaum with some [info] in the past – sometimes he put a hand on my shoulder, sometimes he brought out the baseball bat, but he always had things to say. And similarly with our customer base: they pointed us toward Fusion. I have prior experience and I'm looking forward to it.
The other one is that I would like to see more integration with our fabric. The way we approach fabric is changing very fast, and I think there are some opportunities there. We're about to enter the low latency Ethernet space, and with low latency Ethernet – we have our own custom NICs where we show very low latencies in virtualized as well as physical environments – the combination of that with Fusion is good. And then maybe at a more basic level – and I think you are working on this, and Jeff mentioned it earlier – looking for ways to extend the logical size of the card depending on the workload and the access patterns, with compression or de-duplication, would be good.
One of the interesting things that looks so attractive about the caching paradigm is that it brings the data management into a unified framework. As we pointed out today, it's mostly based on current protocols, and what would be really interesting is, again on an open basis, to look at how we can change things to move the data around more efficiently and get the best of both worlds. We want to work with you guys and the rest of the industry to enhance those protocols to take advantage of the best the storage system can do and the best that host flash can do.
I think when you have a storage system that has its own cache and you have caching in the server, there are all kinds of important things we can do to coordinate how the data flows between those.
Things like service levels, de-duplication, snapshot coordination – there is a whole set of things coming down the pike that I think are worth doing.
So between those two we have a lot of work to do.
Yes. For me, being a New Yorker, I don't like to wait until the street is clear to cross, and because of that I'd like to take the software schedule and move it up a year. The products they are talking about that I am aware of are such game changers – in terms of the capabilities application programmers can use to build out new and interesting products – so good and so valuable, that I know there are two or three things beyond them. So I'd like not only to have the ones we are currently thinking about ready to go, but to be dreaming about the next two or three things I already have in mind that really will solve things.
Because it feels as though Fusion-io is so far ahead of everybody else – as I said, everybody else is trying to get the hardware and isn't even thinking about the software. And it's not until you actually get the hardware into the hands of hundreds of people that the right ideas start to percolate. I mean, necessity is the mother of invention, in the sense that if you don't have the problem, you may not see the solutions. That's where Fusion-io is; I just wish it was all deployed about a year sooner.
Well, the cost of flash will go down over time, so being patient, it's nice to see us advance along that dimension as well. More specifically, greater density will allow us to apply this technology to a broader range of applications, and eventually that will come. But the benefits that could be realized are pretty obvious in terms of reliability, power consumption and performance. So we are looking forward to seeing that happen over the next few years.
Yes. The interesting point there, the reason that comes up, is that you start off with this caching behavior that is so awesome at a terabyte, and then you think, if I had a 2 terabyte or a 5 terabyte cache, I could do even more. Part of the appeal of this technology is that it gives you, like I said, that scratchpad. And as soon as you get that scratchpad working, you dream about the next set of things you could do. It's like, god, if I only had a bigger scratchpad, I could do so much more.
So as a wrap-up question, something nice and broad: what's your vision for where it goes from here? What happens over the long term? Where do things land? What are your thoughts and predictions for the industry?
I think we've all touched on this theme, but we are really at the beginning of the transition of application middleware to exploit flash; just the first surface of that has been scratched. The major analytics systems today are all based around how you live within the constraints of very slow, cheap disks, generally with the assumption of SATA. They tend to access all data sequentially, and they put tremendous demands on the network infrastructure for the movement of data between nodes.
As the cost of flash becomes realistic for deployment in analytics systems, I expect we will see a whole new generation of analytic software, hopefully open source. We are very strong proponents of open source solutions. That software will be based on assumptions around flash. Clearly that's already happening in the area of structured information. There are nascent database projects which were designed from the ground up to run on flash; they fall over instantly if you try to run them against disks, because they simply weren't based around disk assumptions. And while they haven't achieved deployment at scale today, you can see where the future is going, and I think we are going to see that across the board.
And I think there are two dimensions. One is the software dimension. Jeff mentioned early in his comments the whole software stack. I think the whole software stack needs to be rethought, redone, reevaluated, and it will happen, because people can see what the benefits are. Then you have the hardware dimension. Today it's solid-state, and I imagine that will evolve over time. There is a lot of worry that flash will end at some point, but I don't think flash will end so much as it will become something else. And the good news, I think, for Fusion is that it doesn't really matter that it's flash today. If it's PCM tomorrow, Fusion is actually probably a bigger winner, because then it's all about the software, and the software works seamlessly whatever the medium is. That's a really valuable point to take away.
I think that looking ahead to perhaps the end of the decade, there is a bunch of competing technologies to replace NAND flash. What's clear is that it will eventually become part of the motherboard in every server. And what's really interesting is some newer technologies on the horizon, like spin-torque MRAM, that could replace DRAM if the costs come down enough. That means all memory is persistent, and at that point all the freedom you have for programming really comes to pass.
And like I said, I agree with everybody on the panel, but databases and (inaudible) will have to be rewritten in order to take advantage of things like that. The assumptions of the past aren't accurate for this new world. So that's going to be an interesting time. I think there will be a lot of startups in that area, and a lot of change in the industry. When it all comes down to it, it is still a data management problem, and there is still going to be a lot of data. When you can process it at this speed, that means you can generate more of it, and that's what I think is going to happen.
Yes. So in broad strokes I agree about the new breed of applications coming online. I think there will be some growing pains in terms of some of the software stacks changing. The candidate replacement technologies for NAND flash have really interesting characteristics: the cost per bit will start approximating that of disk, the durability is a lot higher, and the speeds are similar to what DRAM is today. But in addition to being on the motherboard, those technologies are all very planar, and I think potentially even tighter integration between computing cores and that type of new persistent memory, very close to the computing cores, with Fusion software, is going to be a very interesting combination.
Well, thank you all for your participation in the panel. We really appreciate it. Let's give them a round of applause. This has been extremely informative. If you guys want to, you can sit down. I will finish up with a couple of remarks and (inaudible). Thanks again.
So just to wrap up quickly, and then we will break for a reception where you can chat with the team. I am super excited about this next year, to answer Jeff's concern about getting the software done faster. We have been very successful at attracting some of the world's top operating system engineers, like the gentleman who originally wrote the I/O subsystem in Linux, and the creator of the new Linux file system Btrfs, Chris Mason, and his team. Literally a dozen plus of these guys; it's a very elite crowd.
It's exciting to be involved in something so fundamental that you attract this quality of people. We are excited about the roadmap from a product and technology perspective over this next year, and you can see it in our investments in the company. All of the money we can afford to reinvest in the business is going into one of two areas. The first is engineering, which is primarily software. However, we do maintain a huge differentiation in our memory controller technology and we will continue to invest there; we think it's only getting trickier to make the newer, higher density memory reliable and performant. The engineering side of things is increasingly going up the stack in software, providing these interfaces to this new generation of applications.
That includes the technologies around sharing data over the network, going to what Paul was talking about, to make it more efficient to get data distributed between systems, as well as caching technologies to integrate more tightly with the capacity tier from a software perspective, with partners like NetApp. So when we look at the three deployment models that Gary laid out: flash as memory in the server, with interfaces for applications to get leverage from it; flash as a cache, with interfaces to the backend storage for transparent data management; and flash shared over networks, where you get the efficiency benefits of sharing. You can see why we have, and value, the partners we do in the networking, storage and application spaces. We are very excited about that investment being made in the technology.
We are also making a very large investment in our go-to-market, in our sales team. Why? Because you have to open people's eyes to the possibility of a world that's different from the one today. It has excited me since the beginning to see people have that aha moment, when they realize that the constraints they lived in, that box they have lived in for so many years, is artificial now. There is no reason for it. We are so used to the I/O constraints of today that it's like you remove the box and people are still sitting there crouched down, wondering what's going on.
So the investment in our demand creation and in our brand really boils down to this: the ability to articulate the value proposition, the ROI, and how you benefit from this in an enterprise today. We are investing there and scaling our enterprise sales team into new geographies and new market segments like the workstation market, with a particular emphasis on cloud infrastructure providers. We are also developing our go-to-market with our channel partners.
One of the fun things about the software-defined storage notion is that VARs, value-added resellers, can integrate and customize systems for their customers. So we are able to enable the channel to have more ownership and more creativity in the solutions, and we think that's going to drive a lot of business in the channel. We are super excited about our new partnerships with NetApp and Cisco. We believe those will drive significant business and, over the longer term, significant value creation in solving some of the challenges that flash, newly introduced to computer systems, presents.
So we are happy to answer questions. Let me introduce you to the team here; we will answer questions at the reception. You met Dennis and Gary already. I would like to introduce you to Rick White first. Rick co-founded the company with me and is our Chief Marketing Officer. Jim Dawson, I think many of you have heard from him, is our Head of Sales. Rich Boberg is the GM of our caching business; he came in with the IO Turbine acquisition as the CEO there. He is also running business development, our strategic alliances with the large software vendors. As you have heard, those alliances are super important. And Neil Carson, our CTO. Let's see who I'm forgetting; oh, I can't forget Lance, standing impeccably dressed in the back there. Lance runs Operations and Engineering.
And with that, I think I have got everybody who is here; Shawn is our Chief Counsel. So please come up and chat with us, and thank you very much.