Western Digital Corp. (NYSE:WDC) Investor Day Call December 4, 2018 11:00 AM ET
Peter Andrew - Vice President, Investor Relations
Steve Milligan - Chief Executive Officer
Michael Cordano - President and Chief Operating Officer
Phil Bullinger - Senior Vice President and General Manager, Data Center Systems
Mark Grace - Senior Vice President, Devices
Jim Welsh - Senior Vice President and General Manager, Client Solutions
Dennis Brown - Senior Vice President, Worldwide Operations
Ganesh Guruswamy - Senior Vice President, Flash Product Group
Siva Sivaram - Executive Vice President, Silicon Technology and Manufacturing
Martin Fink - Chief Technology Officer and Executive Vice President
Mark Long - Chief Financial Officer and Chief Strategy Officer
Mehdi Hosseini - Susquehanna
Amit Daryanani - RBC Capital Markets
Wamsi Mohan - Bank of America/Merrill Lynch
Karl Ackerman - Cowen
Steve Fox - Cross Research
Christian Schwab - Craig-Hallum
Good morning, everyone. My name is Peter Andrew, Vice President of Investor Relations here at Western Digital. I wanted to thank everyone for joining us today here live or via the webcast.
Before I begin, I want to make sure everyone has seen our Safe Harbor statement. I will not read all of this, but the key takeaway here is that we will be making forward-looking statements that involve risks and uncertainties that could cause our actual performance to differ materially. We do not undertake any obligation to update or revise any forward-looking statement. For further information about these risk factors referenced here, please look at our Form 10-K and our Form 10-Q available on our website. In addition, we will have a GAAP to non-GAAP reconciliation at the back of the slides, which will be available on our website at the conclusion of today’s events.
So very quickly, before I turn the mic over to Steve, a couple of quick points here. First, we have the entire executive staff and many other members of the management team here in attendance. Please take advantage of this and reach out and talk to as many as you can. Secondly, in an effort to really address as many questions as possible as well as to give more exposure to the WD management team, we’re going to try something a little bit different today. We are going to have a fireside chat, as you can see, right after the break. And the questions that I’ll be asking are the questions that you and the audience gave us as you registered for this event. So we’re going to try something a little bit different here. Please give us feedback on how that goes. We will also have another Q&A to wrap up the day right before we go over to lunch.
Finally, your feedback is critical. Now for those of you here in the room, as you registered, we gave you a quick feedback form. If you can please take a few minutes at the end of the day to fill out that form and return it to the registration desk, we will give you a little token of our appreciation for taking the time to fill it out.
So with that, let me turn the podium over to Western Digital’s Chief Executive Officer, Steve Milligan.
So, thank you, Peter. I have one opening comment. How dare Urban Meyer quit on the Western Digital Investor Day. For those that don’t know me, I am a big Ohio State fan, and it’s clear that my influence at the university was not as significant as I thought. So with that, again, thank you, Peter, and good morning, everyone. It’s my pleasure to welcome you to Milpitas for our Investor Day. We are very excited to talk with you about our company and how we are executing on our long-term growth strategy. Also, toward the end of the day, we will give you a sense for what all of this means from a financial perspective. My talk today is going to focus on two principal areas: one, the dynamic role that data continues to play in all of our lives; and two, the ongoing evolution of Western Digital as we look to capitalize on those dynamics.
So let’s talk about data. In a very short period of time, data has transformed from being merely a byproduct of digital life to becoming the very engine of the global economy. We live in a world where our relationships, our jobs, our health and our safety increasingly depend on data. We all know that data continues to grow at a dramatic rate, and new technologies and products are required to extract ever-increasing value from it. Storage is absolutely fundamental to creating any value from data. At Western Digital, we possess the capabilities necessary to build the data infrastructure that will enable people to capture, preserve, access and transform all of this data. Data is no longer a static, one-sided interaction. People expect their data to thrive, adapt and learn with or without human involvement. Today, all business is digital business, and it requires the most innovative, intuitive and predictive tools to unlock the most value. Those who don’t adapt to the ever-accelerating pace of change will not only miss out, they will be left behind. More specifically, artificial intelligence, machine learning, autonomous vehicles, mobility and IoT are all examples of big data and fast data applications that are transforming industries and disrupting traditional business models.
At Western Digital, we have built the capabilities necessary to participate in a broad range of growth segments across the spectrum, from the core to the edge and the corresponding endpoints. We will talk more today about how we are strategically positioned in each of these areas. In short, we believe the future will be built by the most innovative companies providing the most advanced building blocks for the most intelligent data infrastructures. As a market leader and innovator in this space, we believe the future will be built on Western Digital.
Let’s talk about the evolution of Western Digital. I originally joined the company back in 2002, when we were merely a 3.5-inch disk drive company with approximately $2 billion in annual revenue, operating at roughly breakeven from a profitability standpoint. Over the years, through both organic and inorganic means, we have built an incredible platform that will continue to drive long-term profitable growth and value creation. Our SanDisk acquisition in 2016 enabled us to bring together a world-class team, to scale from both a size and portfolio perspective, and to become a more strategic and valuable partner to our customers. We are realizing the benefits of our strategy as we merge the technical capabilities of some of the brightest minds in storage components, data center systems and the open and composable infrastructures of the future. With our capabilities and ongoing commitment to innovating across the ecosystem, we are uniquely positioned to be a growth brand that delivers strategic value to our customers and shareholders. The Western Digital platform is indeed unique, enabling us to perform well across multiple dimensions, including from a financial, operational and growth perspective.
What does all this mean for our investors? It means an ongoing evolution of the company from being merely a provider of storage components to being a seamless enabler of data infrastructures of the future so that our customers can drive the greatest value. It’s about leveraging the fundamental technology building blocks we have developed to help our customers find answers to the world’s biggest questions, and it’s about developing and delivering products that meet our customers’ needs with the right technology at the right time and at the right cost. So it is an exciting time to be at Western Digital. There is no industry in the world that is not being transformed by data, and we at Western Digital are committed to enabling that transformation.
Looking forward to the rest of the day, we will discuss: one, our efforts to leverage IP and technology to deliver solutions for a data-centric world; two, our advancements in component technology; and three, our progress in scaling the data center systems business from both a growth and profitability perspective. But before I finish, I would like to comment on two additional items. I would be remiss if I didn’t comment on current market conditions. Mike will go into a bit more detail in his section. There is no question that the current market conditions are challenging, both in terms of the demand environment and from a flash supply/demand perspective. Make no mistake, we are keenly aware of the impact these challenges have on our company and our shareholders. I am fortunate to have a highly experienced management team that is well versed in dealing with challenging conditions.
From a market perspective, we will remain nimble as those conditions fluctuate. Internally, we will intensely focus on those things that we can control: enhancing our technology and product capabilities, critically managing our costs and expenses, carefully adjusting our supply to current demand and continuing to allocate our capital in a balanced and intelligent fashion, all while continuing to deliver superior value to our customers. The near-term challenges will unfortunately persist for the next few quarters, but I can assure you that the management team at Western Digital is intensely focused on not only weathering these near-term challenges but emerging as a stronger and more capable company going forward. That then leads me to my last point, our stock price. To our shareholders, and as a significant shareholder myself, I want you to know that I am not at all satisfied with the near-term performance of our stock price. And as I just mentioned, the management team is intensely focused on improving both the near-term and long-term prospects of the company. That being said, I firmly believe that the current stock price is not reflective of the long-term value creation opportunities for the company. Today is an opportunity for us to give you a sense for those long-term value creation opportunities. So as you go through the day, I encourage all of you, as difficult as it may be, to look through the current haze of the near-term environment and consider the long-term value creation opportunities for our company and our stakeholders.
So thank you. And with that, I’ll turn it over to Mike, our President and Chief Operating Officer. So thank you.
Thank you, Steve. Okay. Before I get started, just to give you an idea of what I am going to cover today. I am going to talk a little bit about market context to frame our alignment with that context and our strategies against that. And then at the end, I’ll talk a little bit more in detail about current market dynamics, as Steve suggested.
Okay, this tremendous growth in data has been driven by a few things. Consider the innovation that has occurred across the core technologies that underpin IT compute: processing, storage and networking. To cite a simple quantitative fact, all of the data created over the last 250,000 years could now be processed by the computing ecosystems around the world in a day. Another thing that is happening is data growth. You hear us talk a lot about data growth, but there is also a lot of data out there that is historic in nature. One of the things that we are doing in this new environment, with new capabilities, is reaching back into what is commonly called dark data, lighting it up and using it in interesting ways to create value. A real simple example of this is healthcare. To use a specific case study, in our work with the University of California, San Francisco around cancer research and mammography, they are trying to accumulate 6 million images. At that level of scale, using machine learning and other advanced artificial intelligence techniques, they are able to diagnose more obscure forms of breast cancer.
Now in the current world, in the current construct where you have data siloed within an institution, it is very difficult to make that happen. So what they are trying to do is find new and innovative ways to share data securely. That way, we can use all this historical data, bring it forward and combine it from multiple institutions to get the volume of data needed to do this work, and the type of innovation that becomes possible is unprecedented.
The other trend that we are obviously seeing, we see it in the news every day, is this notion of data and data privacy. So, all of these things are increasingly relevant. They present both challenges and opportunities for us as we look at our products and at partnering in the ecosystem in the future. These are things that will continue to evolve. We’re in very early innings of this data-centric world, and there’s tons of opportunity for us to innovate around this in a way that is differentiated for Western Digital. Underneath all of this data-centric world, you need infrastructure. Western Digital is an infrastructure provider. We have been innovating for 25-plus years. And through that innovation, we are able to provide new architectures and new capabilities that allow us and our partners to deal with the volume, velocity and variety of data and to create value in new and unique ways. So, that’s a tremendous differentiation for our company. A specific example of this new emerging world is the notion of a data marketplace. This deals with what I talked about previously, the sharing of data. This slide represents some joint work we did with Accenture, and it talks about the opportunity for data marketplaces by the year 2030, what they look like and what potential they offer us.
But let’s use another common problem. When we look at our agricultural industry and the movement of products through it to feed our country and our world, there is no real way today to combine the data up and down that supply chain to understand when we have contamination in the supply chain or anything else. So, one of the opportunities for a data marketplace is to securely exchange that data in a way that gives us better visibility throughout that ecosystem. So there are health benefits for the country, and there is of course value in being able to discretely identify problematic products and take them out in a more efficient way.
So what this study tells us is that about 80% of organizations by this time will be able to monetize their IoT data, that is, the ability to share data within their institution in ways that create value directly. And about 70% of organizations will actually do that. You can see the size of this market and the value created with it: about 80,000 organizations will be monetizing their data by that year; the data exchanged per day will be about 12 exabytes; and the revenue from this exchange is estimated to be $14 billion. The total value of the data within that marketplace, as you can see on the slide, is astronomical at $3.6 trillion.
So the data landscape has been evolving. We have been talking about the cloud, the core cloud and the evolution there and the trends that have happened there. We have talked about the growth of endpoints. But what has been sort of the new area for innovation is the edge. We are going to talk about that more today. The opportunity of the edge is really about the need to move intelligence and capability and performance closer to where decisions are going to be made. So, it’s a key element enabling all these new endpoints that are evolving. I’ll talk about that in more detail.
An interesting statistic, and why this is so interesting to our company as a point of focus, is that it’s estimated that by 2025, 75% of enterprise workloads will be processed at the edge. So that’s an area that needs capability and evolution, and it’s an area of tremendous focus for us. There are a few reasons for this. There are some physical limitations. One is the speed-of-light problem, latency. If you need to make a decision in real time, you do not have the time to wait for that sensor data to move from the sensor all the way to the cloud and then back again with an inference and a decision. You have to be able to do it much closer to where the decision needs to be made. There are obviously issues around compliance and privacy to consider, and then there is a cost dimension. It’s not inexpensive to move data around the network, so you want to minimize the amount of movement.
Alright. I will talk a little bit in more detail about each of these areas. The core cloud, the cloud really began by centralizing compute and storage and being able to allocate it in more efficient ways. It was disaggregation of resources. That was the first step. The next step through technology advancement like virtualization is the ability to now provision compute and storage as a service, all of these things leading to more efficient and cost effective deployment of IT resources.
Underpinning those capabilities is the evolution of hardware. You see some examples of this here: the rapid advancement in processing capability over the years from 1990 to the present. You can see that number. Storage volume is increasing at very rapid rates. And then, with the advent of machine learning and other artificial intelligence, you see the booming market for specialized processors like GPUs. All of that, in combination with software evolution and architectural change, has led us to the current version of cloud infrastructure.
There is a lot of debate around cloud. Is it on-prem or is it public? What has now become more clear is the world is going to evolve into a hybrid cloud model. Depending on the workload, depending on the situation for some of the same reasons I have talked about around the edge, because of latency, because of control and privacy and because of cost, the model that’s going to prevail is a hybrid cloud model and ultimately, that’s going to be the winning model. That was endorsed this week when Amazon talked about their Outposts offering, which is an acknowledgment of the need to move the cloud class capability, server and storage closer to those endpoints.
Okay. Moving to the endpoints. Obviously, up to this point in history, most of the data created has been from mobile and PC. That’s been the primary focus of this ecosystem. What we see in the future is the advent of new endpoint technologies. And you can see here some of the capacity and some of the volume of data that’s created by each of those endpoints. Obviously, mobile and PC continue to be big drivers of content creation, but many of the other endpoints are more about machine-generated data. To talk about a few of them: an autonomous car will generate 4 terabytes of data per day, as an example, and think about the creation of 8K video in surveillance, about 1.1 terabytes. So this is about the velocity and volume of data being created by all of these new endpoints. Again, we talked about the traditional endpoint creation: about 3 billion PCs and mobile phones will be out in the world generating data, but this new connected IoT ecosystem is an order of magnitude bigger. So when you think about the potential for data creation, that’s the scale of the IoT infrastructure. That’s the scale at which data will be created on a day-to-day basis, and when you look at the amount of data creation, by the year 2021 it will be 850 zettabytes per year.
Now this data point, you see a number of them published by different people, so let me take a minute and be clear about what this is. This is data generated by all endpoints, including endpoints that today are not connected. Those all represent opportunities in the future to be connected, to collect that data and then to make determinations on how to make that data valuable.

Okay, the emerging edge. Really, the edge is the enabler of this IoT infrastructure. Without the ability to move that capability much closer to those endpoints, we would not be in a position to enable that broader industrial IoT infrastructure. Things like moving fast data, SSD-level performance, non-volatile memory, capabilities like virtualization, and the ability to move artificial intelligence and machine vision all the way down the ecosystem to those endpoints are critical capabilities. Moving that level of intelligence close to where decisions are made is really about enabling autonomy.
So we think about the obvious example of automotive, autonomous vehicles, but this applies elsewhere. Think about surveillance and the ability to make real-time decisions at the edge. All of those things require us to take the same cloud-based architectures, move them outside of hyperscale data centers and move them much closer to where that activity is so we can make real-time decisions. All of this is driven by the same physical factors: latency, bandwidth and privacy concerns, all in the effort to enable this autonomy, and that need drives compute and storage closer to the edge. Again, reiterating this notion: 75% of workloads will actually be processed at the edge in the future.
So when we think about the growth of cloud, we see the traditional core cloud and endpoint markets growing nicely. We see accelerated growth as new applications like software-as-a-service and artificial intelligence are deployed. Those are major growth drivers within the cloud as well as the edge. And then ultimately, this endpoint evolution, moving more and more intelligence both into the edge and into the endpoints themselves, will drive new growth markets beyond that as we create more capability at the edge.
Okay. Now I am going to talk about our markets. Before I do that: 2 years ago, we talked about a series of strategies we were going to deploy as we brought SanDisk together with Western Digital. And as we looked at our strategy by market, we wanted to accomplish a few things. One, we wanted to get portfolio breadth. We felt we were too narrow as we came into the early stages of the combination. Second, we wanted to get customer breadth. Again, we felt we were too narrow across many markets. And third, we wanted to have leading products. We wanted to pick our spots, but we wanted to have leadership products and not be a fast follower. Those were all strategic considerations we have been talking about on our earnings calls from time to time. We have been investing to make these things happen.
So what I am going to do now is go through each of our markets and talk about how we have done over the last 2 years and give you some specifics. Okay. First, let’s talk in more detail around the core data center. So data center infrastructure represents about 37% of IT spend today. Obviously, we are seeing some trends like infrastructure-as-a-service driving the growth here and this is driving massive growth within some of the hyperscale partners that we have.
The growth rate within this market continues to be substantial, around 13% on a revenue basis. Between now and 2023, we estimate that market to grow to $45 billion in size, with exabyte growth of around 39%. And you can see the size of that market again by 2023. We are in an excellent position at Western Digital to capitalize on this market, because we really have expertise across all of the storage underpinnings that service it, in both NAND and HDDs, which enables us to provide a breadth of capabilities and the broadest product portfolio in the industry.
Let me talk a little bit more specifically about where we are. First, capacity HDDs. That continues to be a pillar of our strategy. We lead in that marketplace. Let me just go through some specifics. We have announced our 15-terabyte drive. That product is nearing production. We have also talked about the fact that we are sampling a 16-terabyte product today. That product is based on energy-assist technology. You have heard us talk about that in the past as MAMR. And what we have learned as we have gone through the MAMR development is not only the more traditional effect that people understand, the alternating-current spin torque oscillator effect, which is a tremendous benefit in terms of driving areal density growth, but we have also discovered a number of additional effects.
Steve is going to talk about that in a bit more detail later. But we will be launching our 16-terabyte drive next calendar year with that technology included. And the thing I’ll note is we feel, and we are very confident, that we are going to be the highest areal density provider in that timeframe at volume. And we do that by delivering the 16-terabyte drive with 8 platters, as opposed to competitors that are going to do it with 9. So we continue to be committed to this MAMR-based technology. We have broadened the definition to incorporate these other effects, so we are clear about what it is we are doing. But this product is tracking to our expectations. We expect it will ramp next year. Beyond that, in 2020 and beyond, there will be a 20-plus-terabyte offering. That will also be enabled by energy assist, and it will include multiple of these energy-assist effects. So that technology development continues to deliver on its promise. We continue to be confident about that. But just to be prudent, we continue to make our own investments in HAMR. It’s an important energy-assisted technology, we need to be aware of what’s going on there, and we are making good progress. But we continue to be confident that we made the right choice on productization around these energy-assisted, MAMR-based technologies, and we see a multigenerational benefit in deploying that technology.
Let me talk a little bit more about capacity enterprise and what it represents. When you think about the broader storage industry and exabytes shipped as a measurement, today capacity enterprise represents about 35% of the industry’s exabytes shipped, and that’s flash and disk together. As we project out into 2023, that’s going to grow. So the relevance of capacity enterprise, despite what some people say, continues to be very clear. Again, Steve will talk about this. It’s driven by workload and use case-specific capability. It’s driven by the fact that over this time, given the areal density scaling I’m talking about here, we will be able to maintain a 10x differential in cost per bit. So when you think about building infrastructure, cost at every tier is important, and you have to deploy the right technology for the right job. Capacity enterprise continues to be that. When you look at growth in revenue, today it represents about 9% of the storage industry’s revenue, and it’s going to grow to 17% by 2023. So it continues to be very strategic and very important.
Let’s talk about enterprise SSD. I would be remiss not to note that this has been an area of disappointment for us. So of all the great things that have happened, which I’m going to talk more about in some of the subsequent market segments, we’ve been disappointed about our performance in enterprise SSD. In particular, we’ve been disappointed about how we’ve done in mainstream NVMe.
Alright. That is the volume part of the market. The market is moving there. That’s where the hyperscale providers are when they are buying finished product. And we have not participated in a significant way in calendar 2018. I am pleased to say, as we stand here today, that our in-house design, with an internally developed controller and internal firmware architecture, is this month moving into qualification. So we would expect to be ramping that product line in the first half of calendar ’19, and then we will have subsequent generations coming behind it as we improve our relative position in this space. So this is a lever for us to improve our performance in calendar ’19, and we are on track to do that.
In addition to NVMe, which has been the real focus of our internal development, we have made progress in other product categories. We are in the midst of qualifying our most recent SAS product line across our OEM accounts, and that is doing well. So as we move into 2019, our enterprise SSD portfolio is strengthening. We think we are going to be substantially more competitive in 2019 and beyond, and you will hear a little more from Ganesh later in terms of our progress there. So, lots of progress. It’s certainly an area for us to improve our relative performance in 2019.
Alright. Client compute, so this is the PC space. It’s a very large market, not growing but large. There are some trends here that are interesting that we are trying to capitalize on, and I think we are doing a nice job. One is that in this very large but not growing market, there is some mix shift upward in price, meaning higher-end, more capable systems are becoming more desirable for PC gaming and other uses. Obviously, the market is continuing to trend to laptops and to embedding more mobile-like, instant-on features. So those are trends within the space, a $19 billion market growing at 5% on an exabyte level. We are extremely well positioned to work with our customers and our partners, and this has been an area where, as we have brought on our own PCIe product line, we have been able to gain market share on the flash side as we manage this transition from HDD to flash within the PC space.
So, a little bit more on our strategy. We have talked about how we deliver time-to-market, value-leading products. We really do that with a strategy of developing platforms. I have talked a little bit about that within our enterprise SSD portfolio, but the same strategy persists here in our client portfolio. We have a core platform, code-named Moonshot in our company, that is designed to scale from the entry level all the way up to the performance part of the marketplace, and it’s designed to scale with lots of reuse, both in our hardware across NAND generations and in firmware. What that allows us to do over time is deliver leadership products from a performance standpoint and cost-optimized products where we need them, and do that with efficient development expense. So every time we go to a new product line, we are not having to rewrite 100%, or even the majority, of our firmware, as an example.
So this is fundamental to our device strategy: one, have a very efficient development model so we can take core platforms and then scale them up and down within a market. And what you are going to hear from me in a few more slides is that we take that same platform and leverage it into adjacent markets. So this is about getting the most market access with the right features and functionality in the most efficient way. That’s a fundamental strategy we have been deploying for the last 2 years across these device markets. An example of the result is our WD Black. You have heard us talk about this on earnings calls. This product has allowed us to do a number of things. Number one, it’s based on a 28-nanometer design, yet we are performance competitive with 16-nanometer designs. That speaks to the focus and purposefulness of our internal controller design: on a less capable logic node, we are able to deliver equivalent performance. So as we move to 16-nanometer with the same design, we are going to get that benefit on a relative basis, so tremendous work by our team.
We were also able to expand our customer breadth. As we came together, our flash business was limited to around three large PC customers. We have now been able to expand this to nine. So, going all the way back to our original strategy of asking how we get breadth in our product portfolio: we now have it. We have PCIe coverage, both at the entry level and in the performance segment. That was not there previously. And through that effort, we have been able to get access to a much broader marketplace and participate with a much broader set of customers.
Moving to our direct-to-consumer business. As we have come together as a company, we were able to combine three amazing brands with unique value propositions. The WD brand remains in the market; that is really about preservation and smart storage, so you see us do some things with network-connected products there and some software and services that deliver incremental value. The SanDisk brand continues to be all about freedom and mobility; obviously, form factor, size and scale play into that. And then G-Tech is our creative pro brand, and that is really workflow based. So, all three of these brands coexist in the marketplace. You see the size of this market: $10 billion. It is large. That $10 billion marketplace is really around the storage businesses that we engage in.
On the next slide, I am going to talk a little bit about the capability and scale of our channel. There are opportunities to use that channel in other interesting ways in time. So, as things become more strategically appropriate for us, we can expand our relevant market and use that channel and those brands to give us market access. But ultimately, it is about differentiated products and trusted brands that win customers. And what is important here, as you look at our channel scale: our distribution reach is 550,000 stores globally, and the number of products shipped annually is 330 million. That talks about the scale and reach of these brands. You combine that with the trusted brand value that we have, and that gives us an opportunity. As these categories in retail begin to consolidate, they consolidate around leading brands that have a lot of breadth. Another thing we have been able to achieve, within both brick-and-mortar and some e-tailers, is taking a larger market position, more shelf space, as we brought the companies together. So, tremendous progress there, tremendous relative performance within our direct-to-consumer businesses. Jim is going to talk a little bit more about that in the fireside chat.
Okay, alright. Mobile. By 2023, we are talking about 1.7 billion smartphones with an average capacity of greater than 200 gigabytes, and a confluence of technologies including things like 5G, which is going to drive higher performance and higher-resolution video such as 8K. That requires us to deliver more performance at the storage layer, which, as I will talk about shortly, means greater than 500 megabytes per second write speeds. All of these are areas for us to innovate. The size of this market is very large: $27 billion, growing at 8%, and from an exabyte standpoint, growing about 35% to roughly 330 exabytes, again, all by 2023. Our strategy here is similar to other parts of the portfolio: broaden our product breadth and deliver leadership products.
So, let's talk about an example of that. Our recently announced 3D TLC UFS 2.1 product is the first 3D NAND product shipping in this category. This product is capable of performing in a 5G world, so we are 5G ready today. That is uniquely differentiated. We are the only company capable of doing that and shipping and sampling product with that capability. What have we achieved? We have grown our revenue in this category by 150% over the last 2 years, and we have expanded our customer portfolio within China: the five China OEMs are now customers of ours. And among the broader set of the three largest handset providers, we have two of the three as customers as well. So our mobile participation continues to broaden. Again, it is being done in the same way as I talked about previously. It is a common platform. It is about product leadership up and down, both entry level and performance, and then it is about breadth of customer participation, and we have been able to accomplish that within the mobile segment.
Okay, growing endpoints. This is beyond PC and mobile. So this is automotive. This is surveillance, smart cities, smart homes and smart factories. Over time, an increasing amount of data is going to be generated here and actually stored in both the endpoints and the edge. Principally, these are sensors of one form or another that are driving these marketplaces. The other thing to think about is that these devices, in some instances, are not always connected. They will sometimes be connected and will sometimes have to operate independently, which again drives the need to push storage and compute capability all the way down to the endpoint.
One significant component here, again, back to our strategy: this marketplace leverages our development for mobile and client compute. These are derivative products. They have special requirements, but they are fundamentally based upon the technologies we develop for the scale markets of client compute and mobile. So within this category, we are the first to ship 3D NAND in an automotive-grade application, and we have been certified in that regard. We have a well-positioned product. This has again been an area where we have seen nice growth over the last 2 years. The same principles apply: broaden our portfolio, leverage the technology into more customer engagements and grow revenue.
Coming back to the edge. The edge, as I have been talking about, continues to be about how we take these core technologies that have been developed in the cloud and move them closer to where the endpoints are. The virtualization of the network, the virtualization of the storage layer, all things that are going to be moving closer to the endpoint. Principally, this is the technology, the capability, the part of the ecosystem that really enables these machine-to-machine connections and allows them to perform and deliver the autonomy and real-time contextual decision-making that is required. We see this as a very large marketplace on a go-forward basis, and we have got the flexible platforms and technologies to engage it. So again, it is about repurposing technology that we are developing for other parts of the marketplace and making it appropriate and available to these emerging opportunities.
When we think about the size and scale of what edge computing represents, a recent study by McKinsey talks about it: hardware alone could be over $200 billion by 2025 as we enable the edge. And you will see here, for the first time today, our system products, which Phil will talk about just after me; I have been talking largely about devices and our direct-to-consumer products. In this marketplace, we are situated not only to participate with a broad base of devices products, but we will also be able to add increasing value with our systems and platforms products.
So let me talk about some specific examples of customer engagements that have changed since we have come together as the new Western Digital, including SanDisk. We are now able to elevate our engagement with customers, so we can talk to them on an end-to-end basis. Previously, our businesses were very siloed, focused on a particular design win. We did not see the entirety of our customers' and partners' objectives in terms of how they are evolving the ecosystem. Now, with the breadth of products that we have and our commercial relevance, we have a different kind of engagement.
This is a ridesharing company (you might be able to guess who that is) that we have engaged across their end-to-end infrastructure. If you start on the left, we talk about location services, dispatch and payment. There is specialized infrastructure there. We deploy NVMe SSDs into that infrastructure to deliver the low latency requirements of their database.
And then you look at analytics and batch processing on object storage. Today, that is our capacity enterprise drives; in the future, that is an opportunity for our platforms and systems. Then you look at their machine learning and big data analytics. They are running a different database on top of capacity enterprise drives and caching SSDs. So this is a very different engagement for us. It is an ability to look across all of their infrastructure needs, obviously participate commercially over time, but also have a view into what the requirements are on the longer horizon and be able to develop products that are more purpose-built and differentiated to advantage us in the future.
Okay, here is another end-to-end example. This is a surveillance company. Again, when we look at them from an end-to-end standpoint, endpoints all the way to core, our products cover the broad range. We are putting flash technology into their endpoints. Then there is the first hop to the edge gateway, the first aggregation point, where we put flash products and, in some instances, depending on the scale, hard drives can play there as well. And then as you move back down to those regional nodes, there is room for broader-scale, more capable infrastructure to support the aggregation, storage and deep learning training requirements needed to move back to the edge to do inference. So that end-to-end capability is similar to the ridesharing example. This is a large surveillance customer for us. We are able to see their end-to-end requirements, we are able to engage across them, and we are able to anticipate the needs of the future.
A few statistics on the scale of this: by 2021, an average smart camera will have 200 gigabytes in the endpoint itself. That is in recognition of two things. One is that if it is not connected, it needs local storage. But even when it is connected, there are going to be inferences pushed back from the training down to the ecosystem to allow it to make real-time decisions. So storage and compute are essential within the endpoint. The surveillance video recorder averages about 4 terabytes per unit, and these are widely distributed. So you get an idea and a feel for what this looks like.
Okay, here is a third example: automotive. Again, this is a very large automotive partner of ours. And again, from end to end, you see us participating from the endpoint, which is the vehicle itself, where you are doing capture, inference and storage, all the way to the back-end core, where you are doing the big data analytics, and all the way up and down that chain. So we are participating with our devices products, but we are also participating with systems solutions in this marketplace. We are able to provide storage infrastructure across this ecosystem on an end-to-end basis, and the unique and compelling part that is different is not only the market access and the revenue TAM that we have access to, but the strategic engagement that comes with that.
Again, to give you a feel for the size and scale of this: the average consumer vehicle in this timeframe (and by the way, there is a wide range) carries about 500 gigabytes, and fleet vehicles about a terabyte. And then the edge core is obviously multiple exabytes. The scale of this is tremendous.
Okay, alright. This ecosystem is really all about an end-to-end connection between the core, the edge and the endpoint. The size and scale of this by 2023 is 3.2 zettabytes, and the revenue opportunity within this market is $146 billion, which includes around $110 billion for devices and $35 billion for systems and platforms. So our unmatched breadth and depth of technology across both big data and fast data gives us a uniquely differentiated position in these marketplaces, from devices to systems. Okay, alright.
And before I conclude, I want to spend a little time on current market conditions. Since the quarter began and since our earnings announcement, we have continued to see challenging market conditions on a global basis. The overall macroeconomic volatility remains, and we continue to see challenges in many of our markets, including in Asia. The hyperscale capacity optimization cycle, which includes both technical optimizations as well as inventory run-off, continues, and it is translating for us into in-quarter TAM reductions, which we have continued to see as the quarter has progressed. We do see the hyperscale investment cycle reaccelerating in the second half of calendar 2019. And in mobile phones, we continue to see slowing in that segment as well.
One positive trend in the quarter is that PCs are actually running marginally stronger than expected. And across all of our end markets, given the macroeconomic volatility, there is just a conservative inventory position being taken by customers and our channel partners. On a positive note, moving to the flash part of our market, we continue to feel very positive about the long-term growth rate of flash demand, which we believe will be in the 36% to 38% range on an annualized basis. But given the factors that we have talked about here, hyperscale investment and mobile phones in particular, our current view is that demand for flash will be below that long-term rate in calendar year 2019. But despite these short-term dynamics, as I have talked about here, the long-term growth opportunity for Western Digital remains clear.
So with that, I will conclude, and I would like to introduce Phil Bullinger. Phil? Phil has been with us for about 2 years now.
2 years ago, yes.
And Phil joined us from Dell EMC, and we are going to take the opportunity now to unveil in a little more detail our systems business and the progress we have made there. Thanks, Phil.
Thanks, Mike. Okay, good morning. We have been looking forward to this day for some time. We have been working hard here at Western Digital building a systems business, a strong systems business, another component of the Western Digital portfolio. But maybe some context to begin with. As Mike said, I joined Western Digital approximately 2 years ago, not quite 2 years ago. Before that, for a number of years, I led one of EMC's strongest storage divisions, the scale-up NAS business, running the engineering and operations, really all phases of the business. Prior to that, I was Senior Vice President and General Manager at Oracle, leading the storage business as well as much of the engineering around the development of their storage business. And prior to that, I led the Engenio business at LSI for a long time, which was easily the largest OEM provider of storage systems in the marketplace. So I sometimes get asked the question, you have been in storage a long time (I always say since the Reagan Administration), how does this time compare to other times in your career? Certainly the storage industry, if you look back historically, is marked by technological leaps where we brought new products to market, new ways of doing things, the evolution from traditional client/server computing into the cloud era. There have been a lot of transitions in the industry. But fundamentally, this is the most exciting time in my career and in the storage industry, because the rate and pace at which things are changing, the velocity, the variety, the volume of data, is staggering.
I am continually amazed at the size of the opportunities we are competing for, just the enormity of the data opportunity in front of us. We have had longstanding technological standards in the storage industry that held up for 30 years and are now being replaced, very aggressively replaced, with whole new paradigms in terms of performance and latency and bandwidth and what is possible. And so it is an exciting time, really an exciting time, to be in the storage business, and especially in the storage systems and platforms business. It certainly creates a landscape of tremendous change. There is a fluidity in the market today that encourages new entrants, encourages better solutions, encourages companies that can keep pace with the scale at which things are developing. As many of you know, and of course we have earned it over many years, Western Digital has long been regarded as an innovator and leader in storage technology: at the core fundamental media layer, at the device layer and in client solutions. Many of you, however, are probably much less familiar with the scale and scope of the momentum and capabilities that we are developing in our systems business. So today, we would like to pull back the curtain on that and give you some insight into the markets, the opportunities, our objectives for the business, our capabilities, the progress we are making with customers and how the business is growing.
So we will get into this a little bit. Mike and Steve did a great job of framing what I would call the data universe, this incredible explosion of data that we have. And I think no one can really refute it or argue with it; we all accept the fact that data is growing at unprecedented rates. Steve said, and it is very true, that data has relatively quickly gone from being kind of an artifact of our lives, something that we just deal with and that accumulates. We put it someplace and we store it, and most companies' data protection strategy is just to keep everything forever, because they do not know what they are eventually going to do with it. Data has gone from something that is just an artifact of doing business to the core engine of growth. Data today is what everything pivots around. And the reason we spend so much time as a company thinking about data, talking about data, understanding data is because it is the incredible engine of growth and it is creating tremendous opportunity, for the global economy and certainly for Western Digital going forward.
I would like to narrow the lens a little bit. In this section, what I want to do is make it real in terms of the physicality of how we deal with data, from the core data center through regional and edge architectures out to the endpoints. I want to make it tangible enough that you can see some of the trends we are seeing, some of the things driving change in this marketplace. There are a number of factors changing the way people think about data centers. Data centers used to be largely the bastion of very large core environments that large enterprises built: they built the brick-and-mortar themselves, they managed it themselves, and it was filled with very traditional storage and compute architectures. It was not that long ago that most data existed in row-and-column spreadsheets, on traditional compute and storage architectures, in very traditional data centers. A lot of that is changing, significantly changing.
One of the biggest trends in the market, and it continues, is a tremendous movement of enterprise workloads from that traditional data center, the brick-and-mortar that a company built and manages itself, to some kind of third-party data center. I do not necessarily mean the public cloud. The public cloud is one example of a third-party data center, but there are many, many more examples: colocation facilities, facilities run by companies with a managed service offering, private cloud providers, managed service providers if you just want to pay for a service as you use or consume it, and then, of course, the public cloud providers of all sizes. There is just a tremendous shift of companies choosing not to go build another data center, but to leverage the infrastructure, the resources, the capabilities of companies who do this for a living.
The second thing that we see is that most data center innovation today is not occurring in the public cloud or the hyperscalers, I would say, but at the edge, in these regional and edge data centers. It is the confluence of storage technology, compute technology, networking technology and, certainly now more and more, wireless technology. And as 5G emerges on the scene, this is going to become especially true. These technology universes really meet up at the edge, and that is where we see a lot of the data center innovation happening. Today, I would not say we are at the point where we have micro data centers underneath a cell tower, but that is coming. Very soon, we are going to see data centers start to show up at the bottom of cell towers and then everywhere in between. So most of the innovation in how we think about form factor, performance, capacity, latency, everything, is occurring in the regional and edge architectures and out to the endpoints.
Data latency is now driving data center architecture. Again, not that long ago, you could walk into almost any data center and they would say, there is my Oracle cluster, there is my SAP cluster, here is where I run my Microsoft Exchange applications. They do not talk like that anymore. If you walk into a data center today, they are pointing to data. They are saying, here is my data environment for this, here is my big data analytics environment, here is where I do my real-time analytics. They are not talking about application stacks; they are talking about data. So data is now driving architecture, placement, interconnect. It is largely driving the decisions about where and how and why a company chooses to invest in physical infrastructure in a given spot.
Artificial intelligence, we call it AI. There are a lot of buzzwords; as an industry, we love buzzwords, AI, IoT. But trust me on this one: AI, artificial intelligence, has been for the last few years and will absolutely continue to be the seminal, the most transformative trend in our industry. This idea of bringing cognitive processes and data to every facet of business decision-making, of predictive modeling, of being more prescriptive about how we make decisions today and more insightful about how we make decisions going forward, this whole thing comes together around AI, and it is really what is driving much of the growth of data and certainly data center architecture. There is a direct correlation between the power and capabilities of endpoint devices and the amount of data being stored. If you remember, Mike presented a slide that talked about some of the data generated by endpoint devices. One of the examples he gave is a car. We are seeing more and more driver-assist and autonomous-driving vehicles in development and on the road, and that will continue to progress. I think we are in that turbulent time where we are not quite at one extreme or the other, but we are in transition.
A lot of the data that we see shows a car generating maybe 2 terabytes, 4 terabytes of data a day. But the highly instrumented cars, the ones on the road today that are actually being used as R&D vehicles to capture data, to learn more about how to make decisions regarding following roads and signs and people and traffic and obstructions, these are generating 10 terabytes, 15 terabytes, 20 terabytes of data a day. The car companies today are swimming in data, but they know their ability to compete, even their ability to be in business in the future (it is an existential question), depends on how well they can contend with that data, take advantage of it and make decisions around it. So that is just one example. As 5G emerges, as processor technology gets embedded deeper and deeper into endpoint devices, they generate more data and they drive storage growth.
Finally, the last trend we will talk about on this slide is the cloud's effect on data center architecture. No doubt the cloud, especially hyperscale, has had a tremendous influence on how people engineer, develop and deploy applications and infrastructure in the industry. This idea of scale, elasticity, workload mobility, all of these were really driven by the advent of large public clouds. But that technology is not just the purview of large public clouds anymore. In everything we do, we think about hybrid workflows, we think about application mobility, we think about how to scale applications and storage, how to make them even more elastic. And these technologies are deployed everywhere, even in what we would call traditional enterprise data centers. So the cloud has really permeated the market and how these technologies are deployed.
The next point I want to make is a comment regarding hyperscale public cloud infrastructure versus what we would call more traditional data center architecture. It is important because, as Western Digital thinks about investing in the business that I am leading, data center systems, if you subscribe to the model that the world is all going to the cloud, that everything in the future is going to be hyperscale public cloud, we probably would not invest in this business, right? This business is primarily pointed at infrastructure, platforms and systems built for the non-hyperscale market. You will see through the course of the day, and certainly Mike talked about it, that we participate significantly as a company in the hyperscale market. The data center systems business is pointed at more of the traditional enterprise, the private cloud architectures, the edge data centers where, as I just described, most of the growth is occurring. And there are some fundamental drivers behind the continued investment in those architectures. So the industry is going to find its balance point, right? There has been a movement to the cloud.
And certainly, if you are a contemporary company that was born in the cloud era, you have no IT infrastructure and you are totally dependent on the cloud, that is a great architecture. But a lot of companies are making different decisions today, and in fact we see what has been called in the industry the repatriation of data out of the public cloud back to these architectures and infrastructure. And there are really three drivers behind some of these transitional trends. The first one I would generally bucket as business drivers: things like business criticality, service responsiveness, economics, data security. If something is really fundamentally critical to the existence of the company, if it is considered its most important asset from a data point of view, all of it, or at least a lot of it, is not going to be in the public cloud. It is going to be in infrastructure that we would call a little more traditional. And certainly, economics plays a part. When you get to a certain size of dataset, it becomes very, very expensive to manage in the public cloud.
The second one is workloads. Innovation is happening at a very rapid pace around all-flash architectures. The advent of a persistent storage layer based on transistors is changing the way people write applications. Increasingly, applications are written with the assumption that I can get to any byte of data in literally a couple of microseconds, anywhere I want to find it in a rack, and that is changing the way people think about building data centers. It is no longer sufficient to just build things at scale; if they are slow, the application is not going to deliver the value it was created for. So this workload notion of performance and latency is definitely driving data center architecture decisions going forward.
The last thing is from an architecture point of view. As I mentioned on the first slide, data centers used to be constructed fundamentally from the application layer down. Now they are being constructed around the data layer; it is a data-centric view of the world. I mentioned flash first. There is this idea of data locality; Mike mentioned the speed-of-light problem. If you are capturing a tremendous amount of data at the edge of your enterprise, there is no time to move all of it to the cloud, run your analytics up there and then move the result back to the point where you can make decisions that affect the consumers and creators of the data. You have got to do it right there, very close. So this convergence of compute and data closer and closer together, with wireless technologies, is what is driving a lot of this architecture.
The last thing I will mention, and I will talk about it more when I cover our portfolio, is this idea of composability. For quite a while now, the primary struggle in IT administration and IT architecture has been: how on earth do you build flexible data center infrastructure out of fundamentally inflexible building blocks? We think there is a better way to do it, and we are innovating significantly in this area with what is called OpenFlex. I will talk about it as an architectural initiative of the company in just a second.
So, what is the meta point here that I want to make? It is an exciting market for us. There are a number of reasons the company is investing in data center systems. It is a tremendous opportunity, and that opportunity is largely what compels us to innovate in this area, to bring products to this space, to be a credible, at-scale, relevant competitor in this space and to bring value to our customers. It also, of course, builds stronger, stickier, more resilient customer relationships, where the engagement with the company is not necessarily around the preference of a device purchase, but where we are solving a problem for them at scale in the data center.
It is a large market. You can see how it breaks down: $35 billion total. $18 billion is in traditional all-flash and hybrid storage, part of our portfolio of traditional purpose-built primary and secondary storage solutions, and I will talk about those. The middle $13 billion is in storage servers. This is the hyper-converged and converged infrastructure part of the market, obviously a fast-growing part. We have platforms, building blocks, that enable this part of the market. And then there is $4 billion, but very quickly growing, which we think is going to be one of the most exciting areas of growth going forward: software-composable infrastructure, the part of the market where physical resources can be composed through software very dynamically to address workloads.
Okay, let me move to the next part of the presentation. Our objective is to establish Western Digital as a top-five strategic provider of data center solutions. That is the mission of the business. That gives you a feel for scale, for relevance, for the impact of the business on the company. It starts by focusing on emerging and high-growth markets and workloads. Our goal at Western Digital is not to be all things to all people in this market space. In building DCS, as we call it, data center systems, we are focused on those parts of the market where the puck is moving, the high-growth opportunities in the data center space.
The second thing is that it is fundamentally important that we deliver unique value. Again, our purpose is not to just be another Dell EMC, another HP, another NetApp. The world has those, right? It is our job, our mission, to bring products to market that are very unique, that have unique value and that exist significantly because we can create them and other people cannot. I will talk about what that means in terms of how we engineer these products, but a big tenet of our business is to bring unique value to the marketplace. The third thing, of course, from a business point of view, is that we are absolutely committed to a profitable business growing faster than the market. That is very, very important to us. This business needs to be a growth engine for the company, and that is a core tenet of our objectives.
Okay. The first thing I want to start out with is a slide that you probably expect me to present. And frankly, creating PowerPoint is easy; it is just graphics, right? Execution is the only sustainable advantage in high tech. That has always been true. And it is our opportunity as a company to execute in an area that nobody else can: this idea of silicon-to-system innovation and engineering. The ability to start at the core fundamental technology layer, the media, whether it is manufacturing the aluminum platter that goes into the disk drive along with the head assembly and everything else, or fundamental innovation at the transistor layer. Steve is going to talk about our leadership in this space. And we take that all the way to the end customer experience in the data center.
We are uniquely positioned to do that. It starts at the component level, at the fundamental technology layer, in our NAND technology, in our controller technology. These layers of the technology stack are heavy, heavy in software. You think about Western Digital maybe as a hardware company. We build physical things certainly, but much of the innovation that we focus on is in the software layers, and it starts at the fundamental base layer of persistent storage bits. It moves from there into the hard disk drive space, with heads and media and read channel, controller firmware, the mechanical design of these devices. We build on a technology stack, now moving more into the business that I am responsible for, in our platforms business with devices: the electrical and mechanical design, the firmware, the diagnostics that we develop around these devices. The full expression of this vertical innovation, the vertical engineering, is in the system layer, where we are delivering purpose-built complete storage solutions into at-scale data centers globally.
So what does this mean in practice? As I mentioned, execution is the only sustainable advantage. We have today engineers, for instance, in our primary data center product, our IntelliFlash product, sitting very close to Ganesh’s team designing the next-generation enterprise SSD, not just talking about requirements, we would like it to do this, we would like you to do that, but co-engineering this thing. Talking about how capabilities at the device layer can be expressed in an all-flash NVMe primary storage product to reduce latency, to increase device durability, to increase performance over its lifecycle. Similarly, we have engineers from my team in our factories in Asia, where they are building hard disk drives at vast scale, looking at the test flow for those devices, and where we can, intercepting that flow and grabbing those devices and sort of completing that tuning process in the systems that we actually ship.
That gives us the ability not only to ship a high-quality product, but to innovate in the software layers, taking those devices, for instance, and dynamically, on-the-fly, in-situ in the system, actually reformatting the drive to, for instance, logically depopulate a head. If a head fails in the field, every other storage system in the market will eject that drive. It’s a field service event costing hundreds of dollars. For us, we can logically drop that head out of the system, reformat the drive, and bring it back into the pool as a brand-new drive with no physical access required to the system. So our ability to extend the durability, the lifetime, the performance over the life cycle of the media layer is significant.
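The head-depopulation flow described above can be sketched roughly as follows. All class and method names here are hypothetical illustrations of the idea, not actual Western Digital firmware APIs:

```python
# Hypothetical sketch of in-situ head depopulation: instead of ejecting
# a drive when one head fails, logically drop the head, reformat to the
# reduced capacity, and return the drive to the storage pool.

class Drive:
    def __init__(self, drive_id, heads, capacity_per_head_tb):
        self.drive_id = drive_id
        self.healthy_heads = set(range(heads))
        self.capacity_per_head_tb = capacity_per_head_tb
        self.in_pool = True

    @property
    def capacity_tb(self):
        # Usable capacity shrinks with the number of healthy heads.
        return len(self.healthy_heads) * self.capacity_per_head_tb

    def reformat(self):
        pass  # placeholder for the actual low-level reformat

    def depopulate_head(self, head):
        """Logically drop a failed head and reformat, with no field
        service event and no physical access to the system."""
        self.healthy_heads.discard(head)
        self.reformat()
        self.in_pool = True  # drive rejoins the pool at reduced capacity

drive = Drive("hdd-0042", heads=18, capacity_per_head_tb=1.0)
drive.depopulate_head(7)
print(drive.capacity_tb)  # 17.0 – still serving data, one head down
```

The point of the sketch is the economics: the alternative on every other system is a drive ejection and a field service visit costing hundreds of dollars.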
There are just enormous opportunities in this area. I feel like we’re just scratching the surface. We have tangible advantages we are bringing to market today, and I am extremely excited about what we can accomplish going forward because we are the company of record that’s working from the transistor layer, the magnetic media layer all the way to the end customer experience in the data center. It’s what makes us unique and our customers understand that.
I also want to talk about our capabilities. It’s important to understand, as a data center systems business, what we, Western Digital, can bring to our customers from a sphere of capabilities point of view. It obviously starts with products. And as I said, we have chosen to invest in the growth parts of the market. So whether it’s hybrid and all-flash storage, a very fast growing part of the storage market, or systems and platforms, we want to deliver to customers both complete solutions as well as the building blocks that are increasingly being used at scale with software-defined stacks. We are also invested significantly in cloud storage, or cloud-scale object storage. This is the fundamental architecture of storage that the cloud runs on. We have great technology at the complete system level that builds on that, another very, very fast growing part of the storage market and also extremely well aligned with the capabilities of the company. We move from there into kind of our core technology areas. I’ve talked about silicon to system engineering, this idea of vertical innovation. We also focus significantly in the software domain, system OS and management software. We do develop physical enclosures and systems, yes, but most of our engineering research and development activity is in the software layers of the product. That’s really what defines the personality, the capabilities, the value proposition of what we do.
Just as important as all of that is this idea of vertical integration. I didn’t say innovation, vertical integration. Because we go all the way from the device layer to the end system layer, we are effectively eliminating a lot of overlapping value chains. When the company sells a device to third-party OEMs, there is a lot of duplication of validation, a lot of duplication of design and integration points. When we leverage our own devices in our own systems, we essentially shortcut a lot of that, and we collapse that overlapping value chain into a very, very direct bright line between the manufacture of a device and delivering that to a customer in the data center, and that has great value. It allows us to optimize the supply chain significantly, because it’s largely our supply chain. Customers also know that when they partner with us at the system layer, they are essentially going straight to the source. They are really not concerned if they need another petabyte of flash on Monday because their business just doubled in size or they picked up a new customer, like some of our very large online retailers – I will show you an example of an online one that we have. They are doing 100 million transactions a day on top of our infrastructure. That company wakes up every day and worries about how they are going to get the next petabyte of flash, how they are going to get the next 10 petabytes of flash. They know when they partner with us, they are going straight to the source. We are optimized to deliver capacity on time to our system customers.
The last area is flexible go-to-market. One of the things we have built here in San Jose, on our Great Oaks campus, is something we call our platform integration center. Think about it as kind of a hyper config-to-order integration facility. It looks like a manufacturing operation for systems. What we do is offer a service to our customers where they can mix and match our technology with third-party technology, servers, storage, with third-party software, with their own look and feel to the product. We will assemble, whether it’s at the system layer all the way to a full rack, a product to their specifications. We’ll test it. We will ship it into their logistics channel. We of course incorporate our own media, our HDDs and SSDs, into those products. But it’s another level of flexible go-to-market capability that we offer our customers. And many of the companies you would point to as kind of the darlings of the storage start-up world are actually our customers, using our service here in San Jose to deliver a very tailored solution into their channel. We are essentially not only their hardware partner, but their supply chain partner as well, in a holistic sense of the word.
And finally, of course, the world has changed from a consumption point of view. Customers want choice. Some people still want to buy storage primarily on a CapEx model, which is still the least expensive way to buy storage. But increasingly, people are saying, hey, my business model is that I get paid as I deliver a service, so I would like to pay for my storage as I deliver that service as well. I would like to pay you as I get paid, and we offer those options too, from a flexible consumption point of view.
So here is the portfolio. I will just pause and spend a little bit of time on this. I don’t have a detailed slide for each area of the portfolio, so I will just spend a little time on this slide and give you a feel for the products that we have in our portfolio and bring to market. The first, on the left there, is our platforms business. Platforms for us are building blocks. They are enclosures that have either disk drives or SSDs or a hybrid combination of those, either just as raw capacity, or sometimes we have designs that include server motherboards in them as well, so they are kind of a server-storage, more of a node-based scale-out architecture. We deliver these to market in various form factors, configurations, capacity points. But one of the fast growing parts of the storage market today is this area of, generally speaking, software-defined storage.
What that means is software storage stacks that are less coupled to the underlying hardware. A lot of these stacks are deployed in what I would call the xSP market, the managed service providers, the cloud service providers, the people building at-scale infrastructure. Think hundreds to thousands of racks of infrastructure in their data centers; they are deploying some of these software-defined stacks. And generally, these markets are a jump ball at the device layer, right. They are just trying to mix and match devices, maybe with third-party or white box enclosures, and build a data center out of them. And they are the ones that have to sort of mix and match and integrate and make it all work. What we are providing increasingly with our platforms business is a more integrated option for that.
So we have a number of customers that buy on a quarterly basis a hundred thousand-plus disk drives from us, and they’re building these kinds of data centers. Typically, that pressure, that responsibility for mixing and matching drives with enclosures, falls on them. What we provide to these customers is that integrated solution in our platforms business. So we have some great engineering that has gone into these platforms. We have patented technology when it comes to vibration isolation and thermal management, because we understand the devices better than anybody. We designed them and built them. And so we can deliver this complete solution into markets that heretofore have predominantly been just discrete drive customers. What does it do for us? Well, it creates a stickier relationship. Those devices are not jump balls. Number two, it increases revenue per spindle, revenue per device, for us, and it gives us greater insight into their use case, allowing us to make better decisions for their roadmap going forward. So our platforms business has been growing fast. It’s a natural extension of all the capabilities of the company, and it is certainly an important part of our data center systems business going forward.
An extension of our platforms business is what we call Composable Infrastructure. If you have been tracking the market, HP and Dell and others have introduced what they call Composable Infrastructure, composable servers. These predominantly are modular, building-block approaches to a server architecture that simplify the purchasing experience. They allow customers to kind of mix and match components of the server and bring this all together into an integrated top-level assembly. We think that’s great. We think that’s an important step ahead in the marketplace. It’s a natural evolution from converged infrastructure, to hyper-converged infrastructure, to composable. But we want to go further. We think that’s largely an incomplete vision of where the storage, the data infrastructure market is going. Our view of Composable Infrastructure is this notion of physically disaggregating compute, networking and storage into separate physical resource pools that can be, through software mechanisms only, configured into specific combinations of compute, networking and storage to deliver that capability to a particular workload, and being able to reconfigure that over time.
Again, we are building, for the first time, flexible data infrastructure out of flexible building blocks. If you managed a data center that had 1,000 racks of infrastructure, you would walk into that data center every morning worried whether your infrastructure was still optimally matched to the workloads and the customers and environment that your company was in. With Composable Infrastructure, that worry largely goes away, because you can reconfigure it on the fly. You could completely change the mix of storage, whether it’s high-performance flash or capacity disk, along with networking and compute resources, for different applications. And you could re-provision it tomorrow if you wanted to, again without physically touching the rack of infrastructure.
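As a rough illustration of the software-only composition idea, here is a minimal sketch. All names and numbers are hypothetical; this is not the OpenFlex API, just the disaggregated-pools concept in miniature:

```python
# Sketch of composing a workload from disaggregated resource pools
# purely in software, then releasing it for re-provisioning later.

class ResourcePool:
    def __init__(self, name, units):
        self.name = name
        self.free = units

    def allocate(self, n):
        if n > self.free:
            raise RuntimeError(f"{self.name}: only {self.free} units free")
        self.free -= n
        return n

    def release(self, n):
        self.free += n

compute = ResourcePool("compute-cores", 1024)
storage = ResourcePool("flash-tb", 500)

def compose(cores, tb):
    """Bind compute and storage to a workload, no physical touch."""
    return {"cores": compute.allocate(cores), "tb": storage.allocate(tb)}

def decompose(binding):
    """Return the resources to their pools for tomorrow's workloads."""
    compute.release(binding["cores"])
    storage.release(binding["tb"])

wl = compose(64, 100)   # provision a workload this morning
decompose(wl)           # re-provision tomorrow without touching a rack
```

The design point is that "reconfigure on the fly" becomes an allocate/release call against shared pools rather than a physical re-cabling event.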
We really believe this architectural idea is how a lot of storage, a lot of data infrastructure, is going to be deployed going forward. And if you haven’t looked at it out in the lobby, we have physical examples of products in our Composable Infrastructure and our OpenFlex product line that are the first examples of this in the industry anywhere. So I would encourage you to take a look, get a feel for what these form factors look like. We created something that hasn’t existed before. This is category creation, and the notion of a fabric device. If you think about a disk drive or an SSD, it’s media behind a controller talking to an interface. That interface, heretofore, has been an I/O bus, kind of subservient to the CPU. Going forward, fabric devices are media behind a controller talking to an interface, but that interface is now Ethernet, ubiquitous Ethernet. And the intelligence now exists inside those fabric devices to self-virtualize. In other words, they can take their capacity and divide it up across a number of different servers and compute devices. So it gives tremendous flexibility in how infrastructure is put together. So that’s our OpenFlex Composable Infrastructure. It’s a growth area of the business, and we are excited about products coming to market in the next quarter and growing from there.
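The self-virtualizing fabric device idea can be sketched as a device carving its own capacity into per-server slices. The class, method names and capacity figures below are hypothetical illustrations, not the OpenFlex interface:

```python
# Sketch of a fabric device: media behind a controller on Ethernet,
# intelligent enough to divide its capacity across many servers.

class FabricDevice:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.namespaces = {}  # host -> TB carved out for that host

    @property
    def free_tb(self):
        return self.capacity_tb - sum(self.namespaces.values())

    def carve(self, host, tb):
        """Expose a slice of this device directly to a server over
        Ethernet, no intermediary server owning the device."""
        if tb > self.free_tb:
            raise RuntimeError("insufficient free capacity")
        self.namespaces[host] = self.namespaces.get(host, 0) + tb

dev = FabricDevice(capacity_tb=60.0)
dev.carve("server-a", 20.0)
dev.carve("server-b", 30.0)
print(dev.free_tb)  # remaining capacity, still composable to any host
```

Contrast this with a bus-attached drive, which is subservient to exactly one CPU; here the device itself tracks who it serves.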
Our cloud opportunity is really defined by our ActiveScale business. This is an object storage platform with an S3 interface that has had a decade of continuous innovation embedded in it. This came into the company through the 2015 acquisition of Amplidata. Amplidata was the company that really ushered in the second generation of object storage systems in the marketplace. They pioneered wide and efficient erasure coding architectures. This platform is built for scale. And it is now involved in many of the largest on-premises object storage opportunities in the market. It’s a platform that is well matched to our value proposition and our capabilities in capacity enterprise disk and delivering products at scale.
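To see why wide erasure coding is attractive at scale, a quick back-of-the-envelope comparison against plain 3-way replication helps. The parameters below are illustrative only, not the actual ActiveScale layout:

```python
# Raw bytes stored per logical byte for a k-data / m-parity erasure
# code versus replication. With k data fragments and m parity
# fragments, an object survives the loss of any m fragments.

def storage_overhead(k, m):
    """Raw-to-logical storage ratio for a (k + m) erasure code."""
    return (k + m) / k

wide = storage_overhead(k=18, m=5)        # "wide" code: ~1.28x raw
replication = storage_overhead(k=1, m=2)  # 3-way replication: 3.0x

print(round(wide, 2), replication)
```

The wide code tolerates five simultaneous fragment losses at roughly 1.28x raw capacity, where replication pays 3x to tolerate two: that gap is the economic case for wide erasure coding on capacity enterprise disk.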
Our primary data center product, this is the one with wide applicability; this is the product that fits almost every data center. 95% of the deployments of this product are in virtualized workloads, databases, performance block and file applications. It’s hard to find a company that doesn’t have a use case for this product. It’s our IntelliFlash product. Again, another product that has had almost a decade of continuous R&D invested in its capabilities. Last year, we acquired the Tegile business and brought it into data center systems, and IntelliFlash is our product identity for that family of products. So between these four lines of business, we really do cover a wide spectrum of the storage systems and platforms marketplace, very carefully selected for the growth parts of the market, the dynamic parts of the market, the markets where our capabilities at the device layer are extremely well suited to deliver those capabilities in their fullest expression at the system layer and the platform layer in this portfolio.
As I mentioned, a big part of what we do is software innovation. And I want to emphasize, as you look through the lens of DCS, just how significant our investment is in building core software competencies and capabilities. Whether it’s extreme data durability, and again this is where vertical innovation, vertical engineering really pays off. It is about the bits. Every bit matters. And in our systems and our platforms, with the intimacy we have at the system and platform layer to the design of the devices, we can do this better than anybody else. Data protection and management: a lot of innovation going into how we protect data, how we help customers manage data at scale. Collecting hundreds of petabytes of data is one thing, but making sense of it, making sense of the metadata, being able to index it and search it, these are the things that we are investing significantly in. Simplicity at scale, helping our customers manage this deluge of data, a lot of software innovation there. And finally, it’s very important that we don’t exist as an island. One of the things we have invested a lot in is ecosystem development: independent software vendors, the relationships we have, the partnerships, the certification points of our products with the industry, very critical to the success of our business going forward.
I wanted to talk a little bit about workloads. It helps me in conveying where we are focused, what we concentrate on as a data center systems business. The first one, of course, is low latency applications. These are applications that are defined by very low latency. In other words, real time: real-time analytics, database workloads, commercial high-performance computing. These are business applications where time equals money in its purest sense. And customers will pay for products that actually outperform and deliver a faster response than others. And this is where innovations in NVMe and the work that I mentioned going on at the enterprise SSD level between our system teams and our device teams are paying off significantly. Virtual and container applications: the world today is virtualized in the business data center environment. We work closely with VMware and the OpenStack community around integration with the predominant business workloads and the virtualized infrastructure that they run on.
Cloud-scale storage applications, I will talk about the markets that we are in and some examples of wins. People bringing cloud-scale data sets into on-premises infrastructure, we look at that as one of the most significant growth opportunities that we have as a business. So we are building systems that address things like big data analytics and large-scale digital asset management. We have relationships with some of the large media companies in the industry, where their most important assets, the authoritative digital copies of their media assets, from things that you grew up with as a kid all the way to the most recent TV shows, are stored and protected on our infrastructure.
And then finally, software-defined storage applications. I mentioned that the world is largely turning to this in at-scale service provider environments, whether it’s scientific simulation, build and test applications, Test/Dev, as well as just unstructured data in general, another workload focus of data center systems. And these are spread across a number of markets. So part of our mission in the company is to develop the resources and the expertise in vertical markets, understanding workloads, understanding use cases, understanding how customers are using our products from the device layer up. Some of these markets we are particularly strong in: in the area of automotive, we like to use the example of the automotive manufacturer swimming in data; this is an area of tremendous growth for us at scale. In the area of life sciences, whether it’s human genomic research, some of the largest institutes in the world are using our storage systems now to hold their rapidly expanding libraries of human genomes. In the finance area, quantitative analysis, this is the classic definition of high-performance big data analytics. We are doing a lot of business there. And in retail, I mentioned an online retailer doing 100 million transactions a day. That’s an example of the online retail environment running on scale-out infrastructure from Western Digital.
Okay. A window into the growth of the business; I am going to grab a little drink of water here. I wanted to give you some feel for the scope and scale of the business and the progress we have made. The company has been working on building data center systems for several years now. And over the course of the last 3 years, we have grown the revenue by 17x. So it’s a fast growth rate, and we expect that to continue. It’s growing rapidly, with very nice momentum in the business. We have reached more than 3,000 total customers now. So some people might look at DCS, as part of Western Digital, as kind of a startup business. It’s not a startup business. I would say we are exiting early stage into at-scale operation now: 3,000 customers, 8,500 systems deployed in the market. So we have a nice market footprint that comes with it, and repeat sales momentum in the business, where we are not hunting every single PO that we are winning now. With a larger installed base, we have more consistency and growth in our revenue stream.
Year-to-date in 2018, and these are calendar 2018 statements, so just up until this point, we have shipped more than 3 exabytes of capacity into the marketplace with our platforms and systems. Every quarter now, we are adding about 150 new customers to the business, so our rate of customer acquisition has been increasing as we go forward. And just to give you a feel for the technical capability of the business, we have more than 400 R&D engineers now, most of them in the software disciplines.
Okay. What I wanted to do is give you a glimpse into each one of these product lines in terms of some of the customer wins that we have been able to achieve, because I think it’s important. It’s probably the most tangible way for you to get a feel for the momentum in the business and what we have been able to achieve so far. In the area of our IntelliFlash all-flash array, this is our primary data center product, a product built for performance, for low latency, for primary data center applications. The first example is a major vacation rental company. You can probably guess who that might be. The key value that they were looking for in our product was very high performance and exceptional total cost of ownership characteristics. A very strong company, a very strong customer of our primary data center platform.
Another example is a Formula One racing team. Our IntelliFlash infrastructure provides the high-speed analysis capability that they depend on, not only after the race, but in the race itself. As you know, Formula One is a highly digital sport now in terms of the analytics, the metrics, the real-time data coming from the cars, the micro-tuning that they do to improve the capabilities and the performance of the machine. Our IntelliFlash systems are at the heart of one of the leading F1 racing teams. And finally, an example would be one of the major league sports franchises. This is a professional basketball team. They run all of their in-game fan experience capabilities off of IntelliFlash. So high performance, very dependable, and it’s built on all-flash. I gave three examples here, and they are pretty diverse. It’s a good example of the diversity of customers that we have in our primary IntelliFlash business. That product really does serve just about everybody in terms of a performance application, whether it’s block or file, with a lot of data services to go with it. Most companies use IntelliFlash as their primary central storage asset. It’s what they would point to when they say, that’s our most important data; that’s what we depend on in real time to run the operations of our business. So we take that responsibility very seriously.
The second line of business that I will talk about is the ActiveScale business. Again, this is our S3-protocol cloud object storage platform. Just some examples of wins in this business. The first one is in the bio-imaging, the human genomics arena. We have 60 petabytes today, growing quickly. This is one of the largest, if not the largest, genetic research institutes in Europe. We are adding to this with other customers now as well, but the application of storing and accessing and running the analytics on top of human genomic data on object storage platforms is a great match. And we have a platform that can scale. This particular deployment is a three-geo scale-out implementation, where we have systems in three geographically separated sites that are all consistent with each other, so that researchers can access the same data irrespective of what site they are in. You can write data to any one of the locations and it’s immediately available in all three locations, and it’s resilient enough that they could lose an entire data center. You could completely eliminate one of these three sites and the data would still be accessible to everybody and consistent.
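The resilience claim, losing one of three sites with no loss of access, falls out of how erasure-coded fragments are spread across sites. A small sketch with illustrative numbers (not the actual ActiveScale geometry) shows the condition:

```python
# A (k + m) erasure code needs any k of its n = k + m fragments to
# reconstruct data. Spread the n fragments evenly across sites; the
# layout survives losing a whole site if no site holds more than m.

def survives_site_loss(k, m, sites):
    """True if losing any one site still leaves at least k fragments."""
    n = k + m
    per_site = -(-n // sites)  # ceiling division: worst-case site
    return (n - per_site) >= k

print(survives_site_loss(k=10, m=8, sites=3))  # True: 6 per site, 12 left
print(survives_site_loss(k=10, m=4, sites=3))  # False: lose 5, only 9 left
```

So a three-geo deployment is not three full copies; it is one wide code whose parity budget is sized so an entire data center can vanish and every object stays readable from the surviving two sites.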
The second example is an emerging automotive manufacturer. This is a great example of the leading edge of driverless technology, driver-assist technology. We are at 30 petabytes today, again growing very, very quickly. It’s one of the preeminent design wins in the industry this year. We competed with everybody to win this one. But the company turned to us because of our expertise in analytics. We were the ones that actually helped them architect their end-to-end big data analytics workflow on top of our products. And we were incredibly responsive. Their comment to us was, you guys work just like we work. If we need something, we are going to get it done today, we are going to get it done now, we are going to do it right, and we are going to bring the right expertise to the table. No endless series of meetings; we just get it done. And so we won this because of our technological capabilities in analytics and our responsiveness as a partner.
The final one is a hedge fund, the second largest hedge fund in the world. They have more than 140 petabytes of our storage. This is where they do all of their quantitative analysis. Our architecture is the fundamental storage layer for the data that they consider to be most important. This is a rapidly growing business. Every quarter, they are purchasing more and more storage. A very successful company, a very successful relationship. The last business I want to give you some customer vignettes on is our platforms business. Again, these are the building blocks of the storage market when it comes to software-defined technologies. The first one is an example I have given several times, a multi-petabyte e-commerce website. It’s just staggering how fast these guys are growing. It’s an international company. You would know the name: 100 million online transactions a day. They recently had a series of 4 days of selling activity where I think they sold 100,000 washing machines in 1.5 days. The scale of customer activity on this site is tremendous. We actually put engineers in their data centers during that period of intensive activity, not only to learn the workloads, but to ensure that everything went well. And it was flawless, just completely flawless, in terms of our ability to underpin that kind of activity.
Another customer of our platforms business is a company that does online gaming. This is another business that’s just exploding. It’s growing quickly. The gaming market is enormous. I am not a gamer, but for people that are, it’s a significant investment in physical infrastructure in your residence, a very high-performance PC, to take advantage of the performance of these games. Well, this company is changing that paradigm, delivering that kind of performance and latency over a WAN connection. So you don’t have to buy a high-performance device. You could do it from your iPhone, if you want. And we are the storage layer that underpins this company and their tremendous growth. The final example is a very large wireless provider, one of the largest wireless providers in the United States. They run a lot of their infrastructure on our storage platforms, particularly their backup and archive capabilities. So in this example, a little more of a pedestrian example of storage, but one where we are working really closely with the storage application vendors to provide this capability.
Okay. Just to wrap up then. As we approach customers, they look to us for certain things. It’s our job to create very innovative solutions, frankly, solutions that are disruptive to the current landscape and environment, but they look to the company for certain values and capabilities. They expect us to deliver better products, because we do have this capability of full-stack innovation. The breadth of our portfolio matches, primarily, where they are trying to solve problems today, the most dynamic and fast growing areas of the on-premises storage landscape.
Vertical integration: they know that working with us is, I wouldn’t call it exactly this, but the ultimate white box play. We have a direct connection from the manufacture of the device to the end customer experience in the data center, and that includes this notion of supply chain ownership. The assurance of partnering with us means we are largely in control of the entire value chain in delivering that capability for them. Our market reach, the fact that the world trusts us with their data, that’s one of the hallmarks of Western Digital, and of course our financial strength. We are going to be here a long time, and we are investing significantly in this business. So hopefully that gives you a bit of a window into data center systems and the progress we are making. It’s an exciting time to be here at Western Digital in this business.
So with that, I think we are moving to a break, right? I think I don’t know if it’s the next slide. Yes. So we are going to take 15 minutes and then we are going to come back with the panel discussion. Thank you very much.
Okay. If everyone could please take their seats, we are going to go ahead and get started. So as I mentioned earlier, we are going to try something a little bit different today. As everyone here in the room was registering to attend this event, there was a little section where you could type in the key questions or the key topics you would like to see discussed. What I did was consolidate all that information down and bring up these individuals who we will ping questions to, so these are real questions from those in the audience. What we are going to try to do is go across, ask each presenter here a question or two, and then we will turn it over to Siva to continue with the day.
So let’s start with Mark. Mark, can you give us a little bit of your background before we start in the Q&A?
Sure. My name is Mark Grace. I manage what we refer to as the Devices business, which is what you would probably traditionally think about as Western Digital. It’s our commercial hard drive, SSD and card business, anything we ship onward for further integration, whether it’s to our own emerging businesses or our commercial customers, and it represents about 75% of our company’s revenue. From a background standpoint, I am in my 35th year in the IT hardware industry in general, about 15 years at first with IBM and in other parts of the IT hardware industry, and then about 20 years ago I started in the storage industry. So I have been in the storage industry for 20 years. My genealogy coming to this meeting today is through the HGST part of our history, and I have been in these kinds of market-facing roles for about 10 years now.
A - Peter Andrew
Okay. So clearly, you have been around the HDD industry for quite a long time, but with the SanDisk acquisition, you added a leading flash portfolio to your overall product line. Can you help explain to us why having both HDD and flash together has benefited your business?
Yes. Mike touched on a lot of this. I was thinking as he was talking with you guys how to add to that. And so from this Devices business, from our big hard drive and SSD and devices portfolio, I think there are a couple of factors to think about, and then I would just add to what you heard about earlier. One is that these businesses have been extremely complementary to each other. Even if you say three businesses, the HGST business, the Western Digital hard drive business and then the SanDisk business, the focus of those companies was slightly different. They had strengths in different areas and customer relationships that were not completely overlapping; in other words, each company brought depth in terms of customer relationships to the party as we integrated. And all three of those companies brought particular supply chain and/or customer-facing strengths. So I think the businesses, if you get below just the rotating magnetic storage and flash underlying technology, brought many other dimensions of strength to the resulting company. There is another aspect that you might think about, which is just how nicely they have come together over the last several years. We undertook to bring these companies together. There were a lot of synergies we expected from the business. One of our fundamental design premises was to bring one face to the customers, to be technology agnostic as we dealt with our customers. And we have largely succeeded now in bringing all of those customer-facing functions together in a very efficient and very complementary manner, while learning best practices from each other in supply chain, in technical support, in service, or whatever. I think the last point is harder to measure than those.
The last point is about our ability to build customer intimacy with those core customers that we value in our business. Take data center customers as an example: our relationships have become more than one-dimensional, more than a commodity hard drive supplier relationship, if you will. Even though we are not quite at the point where we talk about lots of revenue in our enterprise SSD space, we are deeply engaged with these customers in terms of value propositions, special use cases and their own priorities. Take the PC space: we are the well-rounded, one-stop shop for storage solutions in the PC space. That has tremendously helped us manage our business priorities up and down as the mix of storage technology in the PC world has changed, and I think we are right on track to manage it properly. And then lastly, some of these other markets that are emerging, that are tremendously exciting and sometimes much more interesting to talk about, such as the case studies Mike mentioned in surveillance or gaming or in-home entertainment or the connected home, provide us opportunities to stretch our wings and talk with these customers about a whole range of offerings up and down the ecosystems those markets are creating. So it's been tremendously exciting, and it's built the breadth and depth that Mike talked about.
Moving on to the next question, one of the key things I get asked quite a bit from this community here is given the current dynamics in the flash market, where or what are you seeing from an elasticity perspective?
Yes. If you are into this, this is tremendously exciting; you have got some PhDs upstairs who are studying elasticity all day long for us. What I would say about elasticity is, first of all, the straightforward answer is yes, we are seeing price elasticity in the marketplace today. We are seeing new demand being created. I would say two things. One is that price elasticity has been a feature of our industry for decades. We built this business, this industry, around continually bringing increased value each year at better economics. It creates a virtuous cycle of adoption and a virtuous cycle of economies. So we built this industry on essentially a long-running price elasticity equation. In the shorter term, price elasticity is a combination of those long-term new applications being developed and short-term incremental business that can be garnered inside of decision horizons. So in this current year, we are seeing short-reaction elasticity in the PC space, in terms of both mixing up to higher capacity points as prices enable that, as well as some acceleration of the hard drive to flash substitution model that's been going on and that we have anticipated for some time. We have also seen some elasticity in the mobile phone space, but these things take time. Time is the governing factor in terms of elasticity. It's not something you fertilize and the whole thing pops up right away. These are things that happen over time, through multiple platform cycles and through multiple innovation cycles. So we are seeing it, and time is part of the equation. If we outpace the market's ability to react on its time cycle, that's where we end up in a little trouble, but we are seeing a reasonable amount of price elasticity of demand, particularly in those most sensitive areas.
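The elasticity Mark describes can be put in simple arithmetic. A minimal sketch, with hypothetical numbers (not Western Digital data): price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price.

```python
# Illustrative only: price elasticity of demand, E = (%change in Q) / (%change in P).
# All figures below are hypothetical, chosen just to show the calculation.

def price_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) elasticity between two price/quantity points."""
    pct_q = (q1 - q0) / ((q1 + q0) / 2)  # percentage change in quantity
    pct_p = (p1 - p0) / ((p1 + p0) / 2)  # percentage change in price
    return pct_q / pct_p

# Example: flash price per gigabyte falls 20%, bits demanded rise 30%.
e = price_elasticity(q0=100, q1=130, p0=1.00, p1=0.80)
print(f"elasticity = {e:.2f}")  # |E| > 1 means demand is elastic
```

With these numbers the result is about -1.17, i.e. demand grows proportionally faster than price falls, which is the "virtuous cycle" the industry has relied on.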
Okay, thank you. So let’s move over to Jim. Jim, can you please give us a little bit of insight into your background?
Okay. I run the Client Solutions business, which is all of our business in products sold directly to end users through our channels, whether e-tail, retail or various other customers. I have been with Western Digital for 13 years. When I started, the business was about $80 million a year; it has grown to its current state, a multibillion dollar business. Prior to that, I was with Maxtor, where I launched the first meaningful external storage in the industry, followed very quickly by the well-known Maxtor OneTouch. And I was with NEC prior to that in the compute and display areas. I started my career in retail in New York, and I think that's given me a real understanding of consumer behaviors and what drives their passion to engage with these products and solutions.
Okay. So let’s follow on with Mark’s question where we talked about having both HDD and flash within your portfolio. How has that combination of those two technologies enabled your business to be successful in the consumer channels?
So it's the combination of the technologies, because the business is evolving very quickly. The basis for all the brands mostly came from compute, the add-on storage for computers, and then from digital cameras; that formed the core base, which today, by the way, is pretty resilient. We have got movement in external HDD. We have got the add-on of external SSD. But the opportunity with IoT, with all these connected devices and mobile devices, is even bigger. And with the technology, we are technology agnostic. In the past, when I just had the portfolio of hard drives, it was kind of limiting what we were trying to accomplish; we were trying to shoehorn technology into a solution the customer wanted, but it was not a perfect match. Now we are totally agnostic to whatever technology is needed for that solution. And you combine that with our knowledge in networking, wireless and our software, and we are in a really good position. We are very relevant to our channel because of that. They want to engage more. When we look at it two years in, since we combined all three brands, each brand on its own has kept growing. They came from a very broad worldwide footprint, with the addition of some channels for WD and some channels for SanDisk. So in consolidation, as we do more and more relevant things, our retailers, e-tailers and channel partners want to engage more with us.
Okay. So what about looking out into the future, what are some of the things or the new emerging solutions your group is working on bringing to market?
So as I mentioned, with the movement toward solutions, consumers want to do more with their content. Before, it was about preserving and storing it, but now they want to do more. They capture more, it's richer, it's more emotional, and they want to share it, they want to preserve it, they want to know where it is, so there is a big opportunity for us to engage more. Now, the key here is to really understand what they are doing. So we are focused intensely on what their major problems are and on solving them, and really the big piece of it is what we can do with software and with services. And we are really proud that people are engaging. We are getting a lot of four to five star reviews on our software and services. And just think of the opportunity, because we have 330 million customers every year. We have already had 25,000 downloads of our applications, and we have, on a weekly basis, 2.5 million active users. So at that scale, we have a great platform to build on further. As Mike mentioned before, we have a great platform to engage more and do more for the consumers.
Okay, that’s great. Thank you. So let’s move over to Dennis next. Dennis, can you give us a little bit about your background?
Yes. So I will probably talk a little bit longer on my background than the other guys. I have been in the industry for 39 years, 37 of those in the hard drive business; I took a 2-year hiatus to do a startup company in a non-related business. I started back in 1979 at a company called Dysan Corporation as a line worker at 18 years old, so I learned the technology from the ground up. Five years later, I was recruited to a startup; I was the 14th employee at a company that was later acquired by Seagate Technology and became what they called Seagate Magnetics. I was a big part of the team that built and ramped the Fremont manufacturing facility for Seagate. In 1990, I met Bill Watkins and he asked me to come over, so I joined Conner Peripherals and helped them ramp the MINT technology, which was what they purchased from Domain Technology. Six years later, Seagate came and bought Conner, and I was back at Seagate. During this timeframe, the '80s were all about building manufacturing in the U.S., and the '90s were about transitioning that manufacturing over to low-cost countries. So I spent a couple of years in Singapore, built the facility there and ramped it. I came back to the U.S. and was recruited by Hitachi in 2005 to help with the integration of IBM and Hitachi. That went pretty well, bringing in a little bit of energy; you had two conservative companies, Hitachi and IBM, so I came in with a little more aggressive approach. It worked really well and we saw some great results. However, Hitachi overall wasn't really turning the business around, if you will, and I had an opportunity to go do a startup, which I took, in a non-related business, the lighting business.
And then I got a call in 2009 from HGST, with the new management team, Steve Milligan and the team, joined that team as Vice President of Media Operations. In 2011, I took on HDD operations and head operations. And then recently, in the last couple of years I have picked up HDD product development. So my current role is Global Operations with HDD R&D.
Yes. And that's one of the key things I wanted to follow on. The question was, you have a somewhat unique role: you are responsible for Global Operations for both flash and HDD, plus you are Head of HDD R&D. So can you talk about how having visibility into both sides, flash and HDD, has helped you?
Well, I think it takes a lot of the mystery out of things. When you are just an HDD team, you wonder what flash is doing, what the capability is there, what you should be doing from a product perspective. But after integrating with SanDisk, now we have visibility into the capability and the products, where flash fits more appropriately, and the speed at which it can enter those markets, so we can actually make the appropriate investments or disinvestments, timing them better with confidence, rather than playing a guessing game.
Okay. So next Ganesh, can you please give us a little bit of insight into your background?
Sure. Good morning. My name is Guruswamy Ganesh. I have been in the semiconductor industry for 30 years. I started my career designing microprocessors for Advanced Micro Devices, and then went on to do system-on-chip designs for Motorola across multiple market segments, from networking to wireless to automotive. And then I went off to work at SanDisk; that was my first experience in storage. One of the things I have seen in my career is that I could predict some trends. Around 2012, 2013, I could see data was becoming the fulcrum of innovation, so joining SanDisk was a great opportunity for me to see how data was transforming the industry in multiple market segments; I had seen that happening on the networking, wireless and automotive side when I was working at Motorola and Freescale. I have been with SanDisk for 5 years now, and when WD acquired SanDisk, I got the opportunity to head all flash product development; I have been responsible for all flash product development from consumer to client, to mobile, to enterprise. It gives me a good idea, from a semiconductor perspective, of how the storage industry is evolving and how we can benefit our end customers with our vertically integrated technology of controller and firmware, pretty much a system on a chip in the controller space as well.
Okay. So kind of following on the other questions, you obviously came from the flash side of the house, how does being part of the broader business, including HDD, how has that helped you in your day-to-day operations?
Yes. It was a pretty interesting journey. When we were part of SanDisk, we thought, okay, flash is going to rule the world. At my first leadership meeting, I figured out that it's going to take a long time, because the amount of storage that exists in the world is so large that for flash to enter that space will take a long time, in terms of both the capacity and the volume of data being generated. So we started to clearly see the trends: flash is going to play in a unique space in fast data, accessing data much faster, whereas the hard drive, with its capacity, is going to be more about the cold data or archival data, which is a much larger volume of data. And as we saw both of these, we could also understand what our enterprise customers wanted. When we were part of SanDisk, we didn't have deep reach into enterprise hyperscale customers. With WD acquiring us, WD had a history of huge relationships with enterprise customers, so we could actually see the pain points, what you need to architect from a solution perspective, and build those kinds of solutions for our enterprise customers.
Okay. So let’s really address this question head on, can you comment on how we are executing on the internal NVMe enterprise SSD roadmap, given the tight time lines that Mike just laid out a few minutes ago?
Yes. So we did have a hiccup. As I have said, the enterprise team was formed by multiple acquisitions on both sides of the company. SanDisk acquired multiple companies trying to develop enterprise products, and so did WD. So we had multiple cultures and we had those IPs, but as we started integrating the IPs together, we could see that things were not integrating well, and we had some hiccups there. That's probably behind us now, and we will overcome it. As Mike said, starting this month we will be in qualification with some of our key enterprise customers with PCIe NVMe products. And we believe we have a unique product offering that will excite our customers.
Okay. And here is a question that Mehdi brought up to me the other day, so this – Mike, you and Mark might want to tag team this one, but can you comment on how you are really going to differentiate your NVMe enterprise SSD versus the competition?
Yes. So I will share this with Ganesh; he can do a lot better job on the bits and bytes. But I would say the differentiation is going to come in two parts. One is, I mean, we are very excited about the actual product. It's going to leverage our NAND technology all the way to our experience with the system interface very, very well; more to come on that. But the second piece to think about in terms of differentiation is the differentiation we have as a company among these customers. I would say, generally speaking and almost uniformly, the customers are anxiously awaiting us to enter this market. We have ongoing discussions about when and how fast we will be able to take a part of that market space. We continue to be engaged with all of these same enterprise customers with what I would argue is the world's best forward-looking and current capacity hard drive portfolio on the planet. And these customers have, over a long period of time, come to understand the kind of company we are, the kind of partner we are. We seek to be the most transparent, responsive, highest quality provider in the space; that is part of what we try to build as our reputation in the market. These customers know us for that, and they know that this kind of support, relationship and expectation on the product will pervade the flash side of the business too. So I would say there is differentiation at the company level in the way we approach the market, and we are very excited about the product itself.
Just adding on to Mark, I think our architecture is extremely modular and scalable. We believe we can serve from low capacities all the way to high-capacity enterprise markets. And we do have a PCIe product in the market, but it's the previous generation; what we are talking about is the Gen 3 version of it. We believe we definitely have low latency and very good QoS, and we believe power is going to play a key role. A lot of enterprise customers are very sensitive about power. I think we have a very attractive high-performance, low-power solution that we can take to the customers.
I will just back Ganesh up on that. At the time of the integration, we undertook two areas where we said we were going to develop the fundamental IP, the firmware and the ASIC technology, ourselves. One was a platform primarily centered in the client compute space, and one was a platform for the enterprise space. Last year, Ganesh's team delivered the client platform. It has won awards right and left, we are competing extraordinarily well with it, and it's extensible into the generations in front of us. That credibility now carries into the enterprise space. And I am very confident we will take the part of that market that we have lined up, in terms of data center customers and traditional OEMs, quite assertively in calendar '19.
Okay. So let me go ahead and wrap up the fireside chat. Again we just wanted to get some relevant timely questions addressed to the broader management team. So there will be another Q&A session after Mark’s presentation where we will have mic runners so everyone in the room can also ask their questions. But with that, let me transition it over to the next speaker. We have Siva Sivaram, EVP of Silicon Technology and Manufacturing.
Good morning. So this is your mid-morning electrical engineering graduate seminar. By the time I am ready to administer the test at the end of the session, you will be able to tell me who or what an Oersted is. You will tell me how to measure an electron volt, and we will be able to easily transition from an angstrom to a nanometer and back. Alright, that's the objective today. Now, in all seriousness, we do want to talk technology, both the hard drive and the silicon technologies.
First and foremost, I want to make sure we establish for you our leadership in both HDD and flash technologies. You will see why we claim this mantle of technology leadership, why it is important to us and what we are doing continuously to maintain it. The second is what Ganesh was talking about: the kind of vertical innovation platform we have, so that we can take these technologies and deliver them as solutions to the customers. It is very important that technology leadership is not just for the sake of technology leadership, but for delivering value to the market and to the customers. And then there is what Dennis Brown was talking about: the manufacturing muscle, with the agility that comes with it. The platform and solutions that Ganesh is creating, how do we get them into the customers' hands at the right time, with the flexibility that is needed in a market like this? And then we will talk about the unique, built-in structural advantage that we have in flash with the joint venture with our partner, Toshiba Memory; we have a long-standing partnership in flash manufacturing that provides us with some unique advantages.
The net of all of this is this: repeatedly, we will show you technology leadership, but you will also see how we deliver that technology leadership into the customers' hands at the right time. A technology by itself is not enough. It has to arrive at a time when the customer and the market can derive the maximum value out of it. So you will see that as a thread running all the way through. And of course, you are talking to a company where each of the companies that came together has traditionally been a pioneer in the field. The original hard drive was invented in one of our companies. The first system flash was introduced in this company. The first multilevel cell, the first helium drive, and so on and on. Across the entire storage space, you can go back and see that every seminal advancement was introduced in one of the companies that constitute Western Digital today. And of course, Steve talked about those 14,000 active patents; we continue to be an IP leader with a very highly valued intellectual property portfolio, and we continue to grow it further as we go along.
There was a lot of talk today, both from Mike and from Phil, about what's going on in the data center, what's going on with data, this data engineering that is going on. It's not just happening outside; it is happening within this company. We are eating our own dog food first. Our business, whether in development or in manufacturing, is a prime archetype of how data is transforming us: testers and workstations in multiple places on the manufacturing lines feed streams of data into one of Phil's beautiful babies, the object store where we keep our own information, with a Hadoop cluster sitting on top of it, analyzing data with Hive queries and real-time flows of information. All of this is happening within the company, and it is transforming the way we do our development. So I will show you one example. On the left side, you see these three rings. Those are memory holes that we create. When I was a process engineer a couple of years ago, I used to sit there with a ruler and a slide rule and measure every last one of them: okay, how do I optimize the structure with these multiple layers? These days, hundreds of thousands of pictures, dark field and bright field, get fed into a convolutional neural network. It is learning by itself and coming back to tell me what's wrong with my own memory holes. Orders of magnitude, orders of magnitude improvement in the pace at which we are developing technology. We are a prime example of everything you heard today about how data is transforming our own development.
So with this, let me go to what we call our technology leadership foundation. Let me start with HDD. People have talked about this for a long time: perpendicular magnetic recording, the workhorse of the industry, has been producing for a long time now, but it is running out of steam. The magnetic coercivity is increasing, and the magnetic field strength needed to flip the bit, the Oersteds needed to flip that bit, is getting harder to deliver. So we need some new technology, and that's where energy-assisted technology comes into play.
And what we have done, as Mike talked about, is develop a platform. The platform on top is what is now starting to be sampled to customers. MAMR-assisted technology is a very complicated technology that will come to you in multiple phases. Today, we are starting to sample the 16-terabyte, 8-disk drive, 2 terabytes a disk, unbelievable technology that's already going into the marketplace. Of course, this will continue to go further as we continue to develop the technology, this 15% a year growth in areal density. For that, we are going to be inventing, discovering and adding features, and that's how MAMR and energy-assisted technology will be delivered to you. 20 terabytes by 2020, Mike already announced it. It sounded like a nice sound bite to come back and say 20 terabytes in 2020. We probably will do better than that. Dennis is nodding yes. So this is already happening in hard drive technology.
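As a back-of-envelope check on the "20 terabytes by 2020" remark: compounding a 16-terabyte drive (sampling in late 2018) at the roughly 15% annual areal density cadence mentioned above, and assuming drive capacity scales directly with areal density, lands comfortably above 20 terabytes.

```python
# Illustrative arithmetic only: compound the 2018 drive capacity at the
# ~15%/year areal density cadence cited in the talk, assuming capacity
# scales one-for-one with areal density.

def projected_capacity_tb(tb_now, annual_gain, years):
    return tb_now * (1 + annual_gain) ** years

cap_2020 = projected_capacity_tb(tb_now=16, annual_gain=0.15, years=2)
print(f"projected 2020 capacity: {cap_2020:.1f} TB")
```

The result is about 21.2 TB, which is consistent with the comment that they "probably will do better" than 20 TB in 2020.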
Let me switch to the solid state side of it. The charge trap cell: it sounds like something a hunter would set out somewhere. A charge trap cell is not an easy thing to think about. This is something we started using in 2013. You see the vertically integrated 3D NAND, and inside it is a very conventional cell. What you are seeing is a standard blocking oxide plus tunnel oxide, charge trap storage cell that has been built many times over. Our innovation was to integrate it in a vertical structure, and the beauty of it is that when the cell becomes cylindrically confined, that cylindrical confinement focuses the electric fields and gives us an enormous new advantage in building multilevel cells.
And I want you to stay here for a second. This cell was introduced for the first time in volume manufacturing in 2013. By the end of 2019, it is projected to be the highest volume device ever shipped by mankind, period. Not transistors, not resistors, not diodes, not capacitors, not DRAM cells; this charge trap cell, which is barely 5 years old, will be the highest volume device ever shipped by mankind. And Western Digital is the leader in this device. I will tell you why this device is so important. Traditionally, you have an SLC cell: it stores the charge, you flip the bit, you get a 0 and a 1, a very simple device. But this device, because it's cylindrically confined, has enough margin that it can easily do two bits per cell. So 00, 01, 10, 11, those are the four states that are needed, including the erased state, to get your two bits per cell, and the distributions are still broad enough that you can retrieve the two bits easily. For 3 bits per cell, you need 8 states. For 4 bits per cell, you need 16 states. Don't expect 5 bits per cell for quite some time; that needs 32 states, and it's still not that good. But what you are seeing is that because you have that good a control on the threshold voltage, the electron volts we were talking about, that threshold voltage distribution, I can trade the top cell, the single-bit SLC, for very, very high endurance; I can get 0.5 million, 1 million cycles of endurance. Or I can trade it off for very short access times; I can get sub-1-microsecond read access times. Now we are talking about something interesting. So what we are seeing is that this cell is very, very versatile, and this versatility allows us to productize it in unique, interesting ways.
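The state counts Siva quotes follow directly from the arithmetic of multilevel cells: storing n bits per cell requires 2**n distinguishable threshold-voltage states. A quick sketch:

```python
# Storing n bits in one cell requires 2**n distinguishable
# threshold-voltage states (including the erased state).

def states_needed(bits_per_cell):
    return 2 ** bits_per_cell

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
    print(f"{name}: {bits} bit(s) per cell -> {states_needed(bits)} states")
```

This reproduces the progression in the talk: 4 states for two bits, 8 for three, 16 for four, and 32 for the five-bits-per-cell case he says is still some way off.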
So, what have we done with this cell in the last 5 years, since 2013? From our internal BiCS1, a 24-layer which we never showed to anybody outside, to a 48-layer, which we shipped minimally in volume, to the 64-layer, which is the darling of the industry everywhere and the highest volume product shipping right now, to being the world's first at 96 layers. The world's first 96-layer: we introduced that product into the marketplace about a year ago. Since that time, we have been ramping it in volume, and I will show you in a minute that BiCS4 today is the lowest cost bit in the world, bar none, the lowest cost technology in the world, because of the ramp and the volume. And of course, we are not stopping. We will have the next generation coming. Just last night, I saw the International Solid-State Circuits Conference proceedings coming up in February, and there, under the Western Digital name, is the 128-layer circuits-under-the-array paper, very nice to see. So we are taking technologies further and further. And 128 layers with circuits under the array is an interesting idea. People say, oh, I have circuits under the array already. This is where I want to talk about technology leadership versus system solution products.
And I want you to spend a second watching this perspective of BiCS4 being built up. You have about 1.7 trillion of those memory holes in a single wafer. And look at it when it goes vertical, how deep that structure is. Just as a technology, it is mind-boggling that something like this could be created this fast and in high volume. As I said, this today is the highest volume device being produced, and I do want to be a bit of a showman today and actually show you how it looks. A lot of questions get asked: here is this 128-layer stack of a device, and I wanted you to see how deep an aspect ratio that is, and why there is a thing glowing green in the middle. That is an innovation. You could etch the whole thing in one go, as one big deep hole. But if you do it all in one go, let's say a one-degree change in how vertical the etch is will blow up your die size. So it is actually a feature to come back and say, hey, I can build all the way up to here, move the cranes up on top and build the next section of the skyscraper. So, 2 storeys versus 1 storey, people ask which is better: the 2-storey approach allows you to tightly control your die size.
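A rough illustration of why etch verticality drives die size, using assumed (hypothetical) etch depths rather than actual BiCS dimensions: a one-degree tilt shifts the bottom of a memory hole laterally by depth * tan(1 degree), spacing the layout must absorb, and splitting the etch into two tiers halves the drift each tier sees.

```python
import math

# Hypothetical numbers for illustration: a 6-micron single-tier etch
# versus two 3-micron tiers, each with a one-degree verticality error.

def lateral_drift_nm(etch_depth_um, tilt_deg):
    """Lateral offset at the bottom of a tilted etch, in nanometers."""
    return etch_depth_um * 1000 * math.tan(math.radians(tilt_deg))

single_tier = lateral_drift_nm(etch_depth_um=6.0, tilt_deg=1.0)
per_tier = lateral_drift_nm(etch_depth_um=3.0, tilt_deg=1.0)
print(f"single tier drift: {single_tier:.0f} nm, per-tier drift: {per_tier:.0f} nm")
```

With these assumed depths the single deep etch drifts about 105 nm at the bottom versus about 52 nm per tier, on the order of the hole pitch itself, which is the "build the next section of the skyscraper" argument for the two-storey approach.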
So, let me go back to talking about where it is all headed. The innovation in 3D NAND is continuing dramatically, generation after generation. Whether we put circuits under the array or next to the array, whether we do two tiers or one, it is all geared toward one thing: when do we deliver the right solution to the customer at the right time to produce the lowest cost bit, the highest performance bit, the highest endurance bit. That's the only thing that matters. In the end, what matters is what value I can deliver to the customer, and when. All of these technologies have history; for instance, circuits under the array: we produced about $400 million of revenue with circuits under the array in 2002. The technology has been here for a long time. So we will introduce it in 3D NAND at the right time, in the right place.
And I want you to come back to this idea of low latency flash. In this graph, on top in yellow, you see what is happening with DRAM. DRAM cost per gigabyte is not going down anymore. The device is not scaling anymore, so the cost reduction has stopped. Hard drive, on the other hand, as Dennis was talking about earlier, routinely gives a 15% areal density improvement. It is going down steadily. NAND is matching it step for step, but it is still 10x more expensive. As fast as NAND cost is coming down, hard drive keeps pace, whereas DRAM does not. That’s where we introduce low latency flash. Because that charge trap cell is so powerful, we can come back and say, aha, I can use it for other applications beyond mainstream TLC. This device is able to bridge the gap between the two. And because it continues to scale, low latency flash now starts to take on uses that have traditionally been reserved for DRAM. You can see why that’s the case: low latency flash can give you access times, as I was saying, sometimes under a microsecond, but still 10x cheaper than DRAM. On the other end, an X4 device is now starting to approach hard drives. So this charge trap device gives you a breadth of usefulness that we are uniquely positioned to exploit – our expertise in productizing everything from low latency flash all the way to very high density flash in X4. That’s our unique strength.
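To make that tiering argument concrete, here is a minimal sketch. The cost and latency numbers are hypothetical placeholders chosen for illustration, not Western Digital figures; they only encode the relative ordering described above (DRAM roughly 10x NAND per gigabyte, NAND roughly 10x HDD, latency rising as cost falls, with low latency flash bridging the DRAM/NAND gap).

```python
# Illustrative memory/storage tiers. All numbers are made up; only the
# relative ordering (cost falls as latency rises) mirrors the talk.
TIERS = [
    # name,                cost $/GB, access latency (microseconds)
    ("DRAM",                 8.00,      0.1),
    ("low-latency flash",    0.80,      1.0),
    ("mainstream TLC NAND",  0.25,     80.0),
    ("QLC (X4) NAND",        0.18,    120.0),
    ("HDD",                  0.02,   4000.0),
]

def cheapest_tier(max_latency_us):
    """Pick the lowest-cost tier that still meets a latency budget."""
    candidates = [t for t in TIERS if t[2] <= max_latency_us]
    return min(candidates, key=lambda t: t[1])[0] if candidates else None

# A workload with a ~microsecond latency budget no longer has to pay
# DRAM prices -- low latency flash satisfies it at a tenth of the cost.
print(cheapest_tier(2.0))      # low-latency flash
print(cheapest_tier(0.5))      # DRAM
print(cheapest_tier(10000.0))  # HDD
```

The point of the sketch is the middle row: with these assumed numbers, any workload whose latency budget sits between DRAM and mainstream NAND resolves to the bridging tier.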
Let me switch from the component technology to how we convert these component technologies into products. When HDDs and SSDs came together, and when WD and SanDisk came together, as both Mark and Ganesh talked about, there was a lot of knowledge sharing. How do we work with the customer? What does the customer need? How do we qualify with a customer? That knowledge intrinsically became part of our development methodology. In manufacturing, a lot of good habits from the very high volume manufacturing we have done in both hard drives and flash are coming together. And of course the supply chain – the scale and complexity of the supply chain, when we put them together, leverages the volume and the reach of the supplier base. Together, we create what we call our vertical innovation pyramid: whether it is a controller, assembly, firmware, test, or system integration, these come together as vertical innovation. This is what we call a platform.
The synergies of hard drive and flash are in many ways reflected in how fast we can develop and deliver these to customers. This vertical innovation pyramid becomes the core of our platforms. So when Ganesh develops a platform for a retail, client, enterprise or mobile customer, he uses all of those elements to create platforms. The underlying memory – whether it is BiCS3, BiCS4, BiCS5, BiCS6, whatever it is – feeds into this. Whether it is X3 or X4 or low latency flash, it feeds into it, and on top of these he creates flagship products – flagship products such as the WD Black we were talking about earlier, or the new NVMe enterprise drive that is going out. But what is more interesting is that right out of them follows a whole series of additional enhancements: an automotive product or a surveillance product falls out naturally, with a small amount of effort, on top of these flagship products. This is the integrated platform – the way we take a technology and deliver it as a solution.
So, I was talking about the flash nodes – when do we introduce them and why? The 15-nanometer 1Z technology was our workhorse, the world’s best 2D NAND technology, period. That 15-nanometer 2D NAND, from 2015 on, has been the benchmark you measure against: the lowest cost die, the highest performance technology. When we introduced the 48-layer, the world was talking about 24-layer and 48-layer and claiming leadership in 3D NAND. We always maintained that it was not the right time to introduce it – 3D NAND did not yet make economic sense compared to 2D NAND. We waited. We waited until 64-layer became lower cost than 2D NAND, and then we introduced it across the board: multiple platforms, everything we were just talking about – retail, client, enterprise, mobile – everywhere, we went with 64-layer. Today, BiCS4, 96-layer, as I was telling you, is in high volume. It is the cheapest bit in the world.
So now we turn our minds to productizing BiCS4 across the board. But you can see the dilemma here. If I were still on 64-layer and had an X4 – which is what people are doing; people are starting to productize X4, QLC, on 64-layer – I can make a much better product with an X3 on 96-layer: cheaper, higher performance. So a premature X4 is not yet the right solution for the customer at the right time. We do expect X4 to be a very important technology, but more on the 96-layer and 1xx-layer technologies than on 64-layer. We introduced X4 at a conference a couple of years ago, but it is not the right time yet. So just as with circuits under the array, or X4, or the two-tier structure, we introduce it at the right time for the customers.
Which brings me to this graph. I was a bit nervous showing this graph, even though the facts are the facts: we have the lowest cost bits, and the rest of the industry is a good 20% higher than us in cost per bit. I put this graph up, and I do have to come back and say, this is what we run the business for. You can go back and compare and say, hey, on a 64-layer, on a 256-gigabit, on an X3, where are you? It doesn’t matter. The reason we are the lowest cost bit in the industry comes down to one thing: when the right technologies are available at the right cost, we ramp like crazy. We converted over to 64-layer, and from 64-layer to 96-layer, when the technology was ready and the cost crossed over – very aggressively. We have some structural advantages as to why we can do it so fast, and I will talk about those later. But today – last year, this year, and we will continue to be next year – we are the lowest cost bit in the industry on average, across all the bits that we ship.
So this brings us to the productization I was talking about. 1Z: broad, full-spectrum productization. 48-layer: not so. It was not the right technology; it was not the right cost structure. 64-layer: on the acquisition, Steve Milligan said to the entire company, there are only three priorities in the company – 3D NAND 64-layer, 3D NAND 64-layer, 3D NAND 64-layer. Which is exactly what we did: 64-layer, all the way from 16 gigabyte to 32 terabyte, a full 50-plus product lines. And what are we working on now? Taking this platform Ganesh was talking about – the one that can take you from 16 gigabyte, 128 gigabit, to 64 terabyte across all platforms: retail, mobile, client, enterprise. That is what’s happening right now; the 64-layer to 96-layer transition is at full bore. On 96-layer today we have the lowest cost, and we lead the industry in the conversions. We have, bar none, the lowest cost leading edge into which we are converting.
Alright. I am going to stop here and talk a little bit about the industry – the supply/demand dynamics and the capital that we talk about a lot, which has a direct bearing on where costs are headed. Capital intensity in 3D NAND: this is not just raw CapEx. Raw CapEx to go from 2D NAND to 3D NAND, and through successive generations, is much higher. But when we switch from one generation to another, the number of bits per wafer is also growing. So what’s shown here is the capital needed to produce an additional 1% of bits. This is roughly the range of industry estimates. It is high, and if you squint it is starting to level off a little bit, but it is still substantial. Successive generations of 3D NAND cost more to produce that extra bit. Capital intensity is growing – growing substantially as we go from 2D NAND to the first generations of 3D NAND, and then from 64-layer to 96-layer to the 1xx layers. Others have talked about this; I have just normalized it to the additional bit growth.
But more interestingly, this additional CapEx that is going in is not producing additional wafers. What’s happening is this: the industry floor space – clean room space – is growing at a healthy 12% to 14% year-over-year. However, the number of wafers produced is barely changing: a 1% CAGR. There are no new wafers. This is all about conversion. So the prior page I showed is really conversion, not greenfield, because there is not a lot of greenfield 3D NAND coming up. When greenfield does come up, there are corresponding wafers going offline. So net-net, there are not a lot of new wafers coming up. Even for ourselves: in the last 2 years we have brought up Fab 2, we are halfway into filling up Fab 6, and in a couple of years we will be bringing up the new fab in Iwate. With all of that, the floor area goes up, but not the number of wafers. The combination of these two – no additional wafers, plus the capital intensity – is what drove industry CapEx. Industry CapEx, when the big 3D NAND conversions happened in 2016 and ’17, got out of hand: everyone was doing the conversions but not quite getting the bits out of them yet. We got ahead of the curve – we produced 64-layer well ahead of everybody else, and then everybody else caught up. Now you see the intensity of it, and so you start to see some stabilization of CapEx; but because the capital intensity is so high, the bit growth rate continues to come down. We did have a peak here reaching close to 45%.
But over time, because of the capital intensity, and because overall industry CapEx as a ratio of revenue is roughly fixed, you are starting to see a stabilization of bit growth. And given the 38% to 39% demand growth that Mike was talking about, with supply growing at 35% to 37%, this is what’s going to lead to a bit more normalization of the supply/demand balance over the next few years.
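A back-of-the-envelope sketch of that supply/demand point, using the growth rates quoted here (demand around 38% to 39% a year, supply around 35% to 37%). The starting index of 100 is an arbitrary base assuming supply and demand begin in balance, not an industry figure.

```python
# Compound the quoted growth rates and watch the demand/supply ratio.
# Rates are the midpoints of the ranges given in the talk; the base of
# 100 is an arbitrary index, not an exabyte figure.

def compound(base, rate, years):
    """Grow an index at a fixed annual rate."""
    return base * (1 + rate) ** years

supply_index = 100.0
demand_index = 100.0

for year in range(1, 4):
    supply_index = compound(supply_index, 0.36, 1)   # midpoint of 35-37%
    demand_index = compound(demand_index, 0.385, 1)  # midpoint of 38-39%
    print(f"year {year}: demand/supply = {demand_index / supply_index:.3f}")

# Demand compounds a couple of points faster than supply each year, so
# the ratio drifts above 1.0 -- the "normalization" he describes.
```

Even a 2- to 3-point gap between the two growth rates, compounded for a few years, moves the balance noticeably toward undersupply, which is the mechanism behind the normalization claim.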
Alright. Given this set of dynamics, what are we doing with our own factories? As you know, Dennis was talking about our global footprint of factories. We have traditionally been a manufacturing powerhouse around the world, whether in hard drives or in flash. If I leave out the Western Hemisphere – our Fremont wafer facility and the Brazil contract manufacturer – all of our concentration is here in Southeast Asia and East Asia. The wafer fab, of course, is in Yokkaichi in Japan, where we produce 9,000 wafers or so per day, plus what our partner produces on top of that. Our SSD plant is in Penang, and our retail and mobile plants that produce flash products are in Shanghai. Then add the rest of our manufacturing footprint – at least the big factories: the media plants in Penang, the heads in the Philippines, and the drive and head facilities in Thailand. Put all of them together, and this is a very, very strong, powerful and agile manufacturing footprint around the world.
And I want to spend some time on our Yokkaichi manufacturing facility. This is one mammoth integrated fab complex. A little over $30 billion has been invested in this factory between our partner and us. And unlike most fabs, it operates as one integrated fab. A wafer starting here in Fab 4 might end up in Fab 6 halfway through processing. A piece of equipment sitting idle in Fab 2 will immediately get a wafer out of Fab 3. If you go there, it is like shopping carts in Costco on a sale day – they are zipping around. And I want you to get a feel for the scale of these plants. The number of wafers coming out of it in one year, if I just stacked them one on top of the other, would be taller than Mount Fuji. I just want to make sure you get an image of the scale – and each of those wafers has over 1.7 trillion of those memory holes. And because of the proximity between the fabs and the development center, this fab complex is where we end up transferring and converting very, very rapidly when the next generation node is available. Through this unique partnership we have had with Toshiba Memory Corporation over 19 years, 14 generations of technology have been developed, all the way from a gigabit chip to now a 1.3-terabit chip – a 1,300x growth in that same 100 square millimeters of silicon over 19 years, generation after generation. And you can see, like clockwork, every 18 months or so, 50% of all bits get converted from the prior technology to the next generation technology. This gives us some amazing scale advantages that others don’t have, because about 40% of the world’s flash comes from this site – 40% of the world’s flash, between us and TMC together. We both leverage each other’s scale.
Combined efficiency of equipment, labor and learning all happens because it is a much larger facility than either of us would have individually.
On technology leadership, the two companies are pioneers in the area. Our partner invented 3D NAND; we invented MLC, TLC, QLC. Together, we have created a technology powerhouse, and we share our IP between us. But even more important, we don’t just share IP – we share cost. For half the R&D expense, you get twice the product development. They do designs, we do designs; the IP is shared. There is big leverage in the fact that the fabs are right next to each other and the development site is in the heart of the fab. I don’t have to develop on one continent and then transfer to another continent. I don’t have to have a mother fab in one country and then transfer out to the next country, which does not speak the same language. Everything is done in the same place. But what is most important is the diversity. The two of us come at the same problem from two different directions; often, we look at the markets differently. In the end, when they go yin, we go yang, and we come back to where we produce the technology leadership. So this gives us a structural advantage in the way we ramp flash.
So, to summarize what I have been talking about for the last 30 minutes: technologically, the lowest cost NAND, the highest areal density, and the highest density hard drives, period. Technology leadership: we lead the world in the productization of 96 layers, and the lowest cost per bit comes from there, with the broadest portfolio of products. The charge trap cell, which I lovingly explained, has a versatility – you would be surprised how broad its applications are. The vertical innovation platforms we have developed can take this technology and deliver it as a product with the highest value to the customer. And there is the structural advantage that is built in because of our joint venture with Toshiba Memory.
With that, let me invite my partner in crime, Martin Fink, who is hiding over there, to come and tell you where he is going to take all this and the interesting new architectures he is going to build with it.
Thank you, Siva. Am I on? Okay, good. I wish I had a professor like Siva when I went to school. It’s always captivating to hear Siva speak. You might have thought you were done with technology after Siva, but you have to tolerate it for just a little while longer. So let me set this up for you so you understand how I think about the notion of technology within Western Digital. I have been here for just about 2 years now. I had spent over 30 years at HP, I had lived through all of the transitions and separations at HP, and I had retired. In fact, people were questioning, did you really retire? And I said, yes. No, I am not kidding. I have put my house up for sale. I am getting on a plane. My grandkids are in Colorado. I am out of here. And then what happened was I started a conversation with the team here at Western Digital, and what was interesting is that the conversation wasn’t, hey, Siva is working on BiCS4, 5, 6 and 7 – can you come help him build BiCS17? That was not the conversation. As you just heard from Siva, he doesn’t need help with that. We have the best teams and the best capabilities in the world to build that technology. What the team did say is: the industry architectures are changing, and we need to start to think about how data fits in these new industry architectures. And I had just spent the past 10 to 12 years of my life basically thinking about data center architectures and how that world changes. In the end, it was just too compelling – too interesting a carrot for me to continue my trek to Colorado. I took the house off the market, said, sorry, honey, you’re going to have to fly to go see the grandkids, and decided to stay here and work on this. And so that’s why we talk here not about a next generation of NAND, but rather about technology architectures. You have seen a version of this slide already from some of the presenters.
Basically, I like this construct of big data and fast data because it’s a simplifying construct. While we can talk about all sorts of varieties of data, it’s always good to have a simplifying construct. Big data is about scale – about amassing massive amounts of information. It’s largely what we as an industry have been doing for the past 10 to 15 years, since the early days of Hadoop, for example: doing analytical work. More recently, over the past few years, we have seen more of this construct of fast data, where the idea that all the data gets shipped to some data center in the cloud doesn’t actually fit. It doesn’t work all that well. My favorite little example: if you are driving an autonomous vehicle and you are doing processing, the idea of saying, hey, I just saw six pedestrians on the road, let me ship that data to the cloud, let me wait for the answer to come back – okay, should I hit them or not? – that model just doesn’t work for fast data. Fast data is about computing close to the data, and about the immediacy of being able to make decisions.
And so that’s how we think about the idea of architecture around data. Now, the reality is that, from a technology industry perspective, we have spent our entire lives working with what we call general purpose architectures. All of the processing elements we think of today, whether they are Intel or ARM or those kinds of things, essentially fall into this category of general purpose processing. They are a lowest-common-denominator effect, and that’s not a bad thing, because for a lot of years we were able to leverage that for more and more workloads. But the reality is that we have reached a saturation point, I will call it – and I will show you that in a second – where the idea that we are going to be able to continue to use these general purpose architectures to solve all of these next generation problems, from analytics to machine learning to AI, doesn’t really work. And you say, what’s he talking about? Well, think about this, because you have seen evidence of it already. If you pay attention to the industry, you will have heard of Microsoft designing FPGAs to optimize a specific machine learning algorithm. Google introduces the TensorFlow processing unit, or TPU, to optimize their machine learning workload. Last week, I think it was, Amazon announced more processors for their machine learning workloads. What that’s telling you is that what is generally available as general purpose is not meeting the needs.
And the GPGPU is a special case. You might have heard of NVIDIA, and NVIDIA started more as a gaming graphics processor company. And people said, hey, that’s pretty cool, because I can use that for machine learning – it’s actually better suited for my machine learning, because of the vector math kind of thing that graphics processors do. So this GPGPU thing falls in the middle: GP means general purpose graphics processing unit, so it falls in the general purpose category, but it’s also very focused on one specific thing, which is vector arithmetic. What that means is that it is good at doing one thing, but it also comes with all sorts of extra baggage, because it really is trying to solve this broader problem. So why is it that we got to the point where the general purpose world doesn’t work for us anymore? Well, think back in time. If you have been around the industry through the ‘80s and ‘90s, it was all about clock speed, right? When you bought a processor, say from Intel or AMD or whoever, you essentially went for your maximum megahertz back in those days. My first computer at home was 4.7 megahertz – yes, megahertz. And we cranked that up, and cranked that up, and around 2002-ish we hit a ceiling in the 3 to 4 gigahertz range.
So turning up the clock ran out of steam, but hey, some creative people said, let’s go multi-core. So now what we are going to do is solve the problem in parallel: rather than make each core go faster, we are just going to use lots of them. The analogy I use to explain multi-core, if you are not familiar with this stuff, is basically to say: instead of having one airplane take you from San Jose to Denver very, very quickly, all I am going to do is use multiple planes to get more people to Denver. But every plane goes the same speed. That’s essentially what the world has done. And so now there is no more clock speed, and you can’t keep scaling the multi-core thing forever. So the only thing available is to think about an architectural paradigm shift. We have to think about the architecture differently. Now, I should stress, it doesn’t mean the general purpose world is going away. This is not an either/or thing; we are not trying to replace things. We are basically saying: when we think about data – our customers’ data – as the center of the universe, the general purpose world is not fulfilling the need our customers are expressing going forward. And so that’s why we think about this data-centric era, where things are going to be more purpose-built and solve very specific problems. But you will also notice I put that line at today, to say it’s already happening – and that’s why I gave you examples of how it is already happening today.
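The airplane analogy can be sketched in a few lines: adding planes (cores) raises throughput, but no single trip gets any shorter. The trip time and seat count are made-up illustrative numbers.

```python
# Multi-core as airplanes: more planes move more passengers per trip,
# but every plane still flies San Jose -> Denver at the same speed.
TRIP_TIME_H = 2.5        # hypothetical single-plane flight time
SEATS_PER_PLANE = 180    # hypothetical capacity per plane

def per_trip_latency(planes):
    """Time for one passenger's journey -- unchanged by fleet size."""
    return TRIP_TIME_H

def passengers_per_trip(planes):
    """Total throughput scales linearly with the number of planes."""
    return planes * SEATS_PER_PLANE

print(per_trip_latency(1), passengers_per_trip(1))   # 2.5 180
print(per_trip_latency(8), passengers_per_trip(8))   # 2.5 1440
```

That is the ceiling he is describing: parallelism buys throughput, not latency, which is why the next gains have to come from changing the architecture rather than adding more of the same cores.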
And we basically think about this construct along the same big data/fast data lines. Now we are going to start getting a little bit more into geek land. On the left side, with big data, what we have been doing so far is this notion of shipping data to where the compute happens to be. We basically have a huge volume of traffic on the Internet, or within data centers, to send all of our data to the big compute engine to do all of the processing. On the right-hand side, with fast data, what we are dealing with is the notion that the CPU has been the center of the universe. The design and architecture of the servers and systems we buy today are all built on the notion that the CPU is the center of the universe. And we think we need to make our customers’ data the center of the universe. So when we think about how this model changes, we think, hey, let’s bring the compute close to the data. And then let’s rethink the architecture so that, rather than the CPU being at the center of the universe, we think about memory – the customers’ data – being at the center of the universe.
So let’s talk a little bit more about composability. Phil introduced the construct of composability, and I’m going to take it a little bit further. What Phil was talking about with OpenFlex and what we have done with composability is the top part of this picture. Now, internally I have used an analogy to help people get through this, because with composability, if you are new to the compute world, if you are not a super geek, you kind of go, okay, what’s he talking about? So let me build an analogy that, at least so far, people have resonated with. The reason I came up with it is that when people think about composability or composition, music tends to come to mind – it’s just natural that they think about music. If you think about music compositions, they are really made up of five elements: notes, volume, tempo, keys and instruments. That’s it. All music – whether you are a classical music person and you love Bach, Tchaikovsky, Beethoven, or you are a Lady-Gaga-all-day-long person – all music is really composed with those five things. And what the composers, the songwriters, do is mix and match these things in order to create these beautiful works of art.
Now, imagine if I went to a music writer, a composer, and said: every time you crank up the volume, you must increase the tempo. You have no choice – you increase the volume, you’ve got to go faster. Music wouldn’t be so great anymore. Well, now let’s translate that to geek land. Servers are composed of processors, memory, fabrics and storage. And the world we have lived in has been this constrained world where, if you want more memory, you need to buy more CPU. If you want more storage, well, you are also going to need to buy more CPU. The CPU is the center of the universe, and these things are constrained and locked together. So when Phil talked about composability – the top part of this composability picture, the data fabric – what he is saying is that our OpenFlex is breaking down that barrier, this idea that things are locked together, so that you can mix and match the amount of storage, compute and fabric or networking you need to optimize for your workload. It’s your data, your workload, your application – you should be able to optimize it. But there is one problem. Given today’s technology, the one part that Phil – smart guy – cannot do is the bottom part of this picture. He cannot compose main memory. It is still locked to the CPU. Today, that takes the form of an Intel CPU with the memory attached through an interface you might have heard of, called DDR4. And Intel controls how much memory you can attach. If you say, I want more memory, Intel will say, great idea – here is another processor to go with it.
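A toy sketch of that constraint: in a conventional server, memory and storage scale only by buying whole nodes, CPUs included; in a composed system you draw each resource independently from a fabric. The node shape and resource names below are illustrative, not an OpenFlex API.

```python
# One fixed server SKU: resources come bundled, whether you want them or not.
FIXED_NODE = {"cpus": 2, "memory_gb": 256, "storage_tb": 16}

def conventional(memory_gb_needed):
    """You want memory? You get CPUs and storage bundled with it."""
    nodes = -(-memory_gb_needed // FIXED_NODE["memory_gb"])  # ceiling division
    return {k: v * nodes for k, v in FIXED_NODE.items()}

def composed(cpus, memory_gb, storage_tb):
    """Each resource is requested independently over the fabric."""
    return {"cpus": cpus, "memory_gb": memory_gb, "storage_tb": storage_tb}

# A memory-hungry workload: 2 TB of memory, but only 2 CPUs' worth of compute.
print(conventional(2048))    # {'cpus': 16, 'memory_gb': 2048, 'storage_tb': 128}
print(composed(2, 2048, 4))  # {'cpus': 2, 'memory_gb': 2048, 'storage_tb': 4}
```

The conventional path buys eight nodes' worth of CPUs and storage just to reach the memory target; composing lets the volume, tempo and instruments move independently, which is exactly the music analogy above.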
Now, for those of you who also follow the DRAM industry, the one good thing, I will say, that we have been living with is that this memory interface I call DDR4 is a pretty solid industry standard. So when you want to buy memory for your server, you say, I am going to call Micron, I am going to call Samsung, I am going to call SK – I am going to have some choice, some price competitiveness, etcetera, because it all connects to DDR4. Now let’s imagine a scenario where Intel said, I have a new memory connector – I will just make up a name, DDRT. And they said, if you want to connect to my great new processor, you must connect to this new Intel DDRT bus, and the only thing that can attach to this new DDRT bus is Intel memory. Couldn’t possibly happen, could it? Well, guess what, folks – that’s exactly what is happening. So we are extending into this world – whether it’s the DDRT thing from Intel, NVLink from NVIDIA, HyperTransport from AMD, or others – of proprietary interfaces that limit your ability to connect processors together and to connect memory to processors. And we are passionate about unlocking that world for our customers so that we can maximize how they can use data. So if they say, I want one processor and petabytes of memory – because, remember, Siva just talked about how we can do low latency flash and create petabytes of main memory – why shouldn’t I be able to do that? That ratio of compute to memory should be up to me as a customer, as the owner of the data, to figure out as the optimal mix. And that’s the architectural construct that we are trying to change.
So, what are we actually doing about that? Well, it turns out that this morning I was at the Santa Clara Convention Center, where the RISC-V Summit is being held today. RISC-V is a completely open instruction set architecture for compute, and I was giving the keynote speech this morning – my big challenge today is not to mix up my speeches between the two venues, and I will be heading back there this afternoon. But we made a set of announcements. Last year, I announced that Western Digital was going to transition all of the processor cores that we use in our controllers – the stuff that Ganesh builds – to RISC-V over a period of time. And when we did the math, it turns out that we ship about 1 billion processor cores a year. So we said, we are going to transition all of those. This year, at the RISC-V Summit this morning – and if you check your newsfeed, you will see there was a press release – we actually announced our first RISC-V core. We call it SweRV. RV is for RISC-V; We is for two things, Western Digital and “we” as in collaborative, working together and sharing; and the S – swerve – is about swerving around general purpose architectures to the purpose-built world. So we announced our first RISC-V processor core, and we announced that it would be completely open source.
So, in effect, think about this as doing to the processor world what Linux did to operating systems. We are at the very early stages, so think of this as Linux in 1999 – this is not happening in a week, folks. We are in the Linux 1999 timeframe, but we are committed for the long term, because you have seen the charts that say we have zettabytes of data coming, and we cannot make sense of how the existing architectures are going to solve our customers’ problems with that volume of data. Something has to change, and we are happy to take a leadership role in doing that, but we are going to do it in a completely open, industry-standard way. And so we welcome the industry to download the source code to our core. The other thing I announced – remember, I just said this memory interface world is all proprietary depending on who you go to – we said, well, guess what, we are going to deal with that one too. So we announced a completely open source memory fabric, a coherent fabric – what is called a coherency fabric. I will be happy to give you a whiteboard conversation on coherency if you are interested. We announced that as completely open source as well.
Now, let me give you an interesting data point. When we started with RISC-V and developing our own cores, rather than acquiring them from outside, we said, okay, we are on a marathon, not a sprint, so we need modest goals to get going. We said, let’s just try to achieve parity with the cores we are using today – we ship 1 billion cores a year, and we are learning something new by putting this together, so let’s just basically achieve parity with what we have. That was essentially the goal we set for ourselves. Here is what happened: a 30% improvement in power consumption, a 40% improvement in performance, and a 25% reduction in footprint – Version 1.0. Not bad, right? So how was that possible? Let’s go through a couple of the reasons. One is that we assembled a pretty smart team that, combined, has about 500 years of processor design experience. So having a strong team is clearly very important.
Now the question you should be asking is: Western Digital is not a processor company, Western Digital is not known to be a processor company, so why does somebody who has decades of processor development experience come to work at Western Digital? All of these people could easily get jobs at Apple, Qualcomm, Intel, ARM, you name it. Why would they come to Western Digital? And if you go talk to them, you don’t have to ask me, you go talk to them, what they would tell you is this: this was the only opportunity in the industry to not just turn the crank. If they went anywhere else, all they would do is turn the crank on Version 72 of the processor in whatever family they were on. To actually have the opportunity to design something brand new from the ground up that fundamentally alters the architecture of computing was an opportunity available absolutely nowhere else, and that’s why we were able to assemble that team. The other reason we were able to do this goes back to the very foundation of all of this: special purpose versus general purpose. When we are acquiring cores, great cores, nothing wrong with them, from third parties, they are general purpose. They are trying to solve a problem for many of their customers using one set of IP. When we set out to design our core, we were solving our problem, or more precisely, our customers’ problem: how do we optimize the data for our customers? And because we weren’t constrained by putting in all of this extra superfluous stuff that we didn’t need, this was the end result.
Now, there is a bubble on this slide that I normally never include, but because of the audience, I included it. And it’s the cost bubble, okay. There was zero part of the decision for us to adopt RISC-V and change architectures that was motivated by a cost element. At no point in the decision did we say, oh, it’s going to be cheaper, so let’s go do that. But at the same time, we do have to be responsible with shareholder dollars, with investor dollars, and we have to do this responsibly. Well, it turns out there are a couple of things that come out of that. First of all, the reason we are doing all of this in open source is because we don’t want to take on the full burden. If you use the operating system analogy, we don’t want to do a full UNIX stack top to bottom at $250 million a year of development. While we are very much involved in the development of the Linux kernel and we have teams that develop a lot of storage drivers for the Linux kernel, we don’t develop all of Linux. So we don’t pick up the cost of the entire thing. We did some initial work. We are seeding the industry. We are seeding the ecosystem. But we don’t want to have to pick up this massive cost of development long term. And it turns out the other thing is, right now, this is costing us probably less than 1% of our overall platform investment. So play a disaster scenario, which is a good thing to do: hey, Martin, all your notions about architecture are not going to work, none of this RISC-V stuff is going to pan out, nobody in the industry is going to come play. Play every possible disaster scenario. The reality is, from where we sit, we have achieved these numbers and these results for our customers at a cost profile that is very manageable.
So we are quite happy. Obviously, we are not aiming for a disaster scenario, we are aiming for a leadership scenario, and the industry is coming on board. By the way, a quick stat: last year at the RISC-V Summit, the audience size was 400-something, and this year it was over 1,000, which is why it had to move to the Santa Clara Convention Center. So the ecosystem is coming on board and we are getting a lot of traction. All of our trajectory, all of the data that we have on the health of RISC-V, is up and to the right. I lived through the entire life of Linux, and this is happening at a much faster rate than Linux did, which is surprising to me, because software typically can move a lot faster than hardware, but open source was a new construct back then. This is happening at phenomenal speed. So with that, hopefully, you get a sense for how we think about data, how we think about our customers’ data, and the importance of thinking about the architectural paradigms that are going to need to shift over the next 5 and 10 years in order to allow our customers to fully monetize and take full advantage of their data.
So with that, I am going to turn it over to probably the one thing you really, really, really wanted to get today, which is our CFO, Mark Long to talk about the finances of the business. Thank you very much.
Thank you, Martin. Good morning and welcome everyone. As we covered throughout the day, we believe Western Digital is fundamental to an increasingly data-centric world. In this final presentation, I will discuss our financial profile, capital allocation and capital structure as well as offer some historical perspective on the NAND flash industry. Specifically I’ll cover these four main areas: first, our leadership in data infrastructure through the industry’s broadest product portfolio, leveraging our technology strengths and delivered through relentless operational execution; second, the compelling long-term growth opportunities for our company; third, our financial model designed to deliver long-term profitable growth while enabling the company to navigate periods of market volatility; finally, our capital allocation strategy, with its focus on shareholder value and returns. I will also highlight the optimization of our capital structure and explain the operating discipline and efficient capital investment framework.
We have built a robust platform with multiple levers to create long-term shareholder value while enabling us to address the cyclical aspects of our business. There are strong secular growth drivers across the majority of the end markets we serve. We have strengthened our balance sheet and paid down approximately $6.3 billion or 40% of our debt since the closing of our SanDisk acquisition. We also recently paid down $500 million in our revolving line of credit. Shareholder returns have been one of our top priorities, and in the last 12 months, we have returned over 80% of our free cash flow to shareholders in the form of dividends and share buybacks. That takes into account mandatory debt pay-downs.
As you have heard today from each of the presentations, the evolution of the data-centric economy creates massive opportunity for our company. Over the last decade, Western Digital has generated superior shareholder returns against the S&P 500. Our strategy has been to position the company to capitalize on the long-term opportunities and create compelling shareholder value, recognizing that we must also navigate periods of short-term volatility. The upward trend of this chart demonstrates the successful implementation of this strategy. We have the scale, technology engine and portfolio breadth required to meet the evolving needs of our customers and partners. We are fully vertically integrated in both hard drives and flash products. And just as importantly, we’re able to offer unique technical expertise and architectural insights across both data infrastructure technologies.
I would like to spend a moment describing how we, as a management team, have significantly diversified our revenue base to focus on strategic, high value products. As we discussed 2 years ago, we have transformed from a company with a significant dependence on client PC hard drives in fiscal ‘13 to a far more diversified business today, with client PC hard drives representing only 14% of our total revenue during the last 12 months. We have significantly expanded our flash portfolio, with flash representing now approximately 50% of our total revenue during the last 12 months. And as I just referenced, approximately 60% of our total revenue today is coming from high value products versus 27% in fiscal ‘13. This is a result of our focus on strategic growth markets and high value applications.
Now, let me provide you with an update on our progress towards the strategic and financial goals we set during our Investor Day 2 years ago. As you can see in the slide, for fiscal ‘17 and fiscal ‘18, we have achieved and in many instances, exceeded our targeted long-term financial model. We have successfully integrated both HGST and the SanDisk acquisitions. We have realized our near-term synergy targets for both transactions and remained on track for our long-term targets. We’ve continued to allocate our capital in a balanced way through de-leveraging, dividends and share buybacks. We have continued to reduce our gross debt, bringing the total leverage ratio below 2x. Over the last 12 months, we have paid $594 million in dividends and bought back $1.2 billion in stock. In addition, we have optimized our capital structure, significantly reducing our interest expense, enhancing our financial flexibility and improving our liquidity. While we continue to believe in the compelling long-term opportunities in the data infrastructure industry, the current industry dynamics create near-term operational and financial volatility, which our management systems and operating model are designed to mitigate.
With respect to our long-term financial model, it’s a target model reflecting how we expect the company to perform in most market environments. At times, we will operate above the model as we have done periodically over the last few years. And at times, we will operate below the model. Currently, the market environment is such that our near-term financial results are expected to be below the model. However, the cyclical aspects of our business are well understood and our strategy and operating model are designed to enable the company to successfully navigate through the down phases of the cycles and be well positioned for leadership when we enter the up phases.
Let me begin with some historical perspective on our NAND flash business that builds on some of what you heard Siva talk about earlier. The chart shows the past two NAND flash cycles in addition to the current one that began in the first quarter of this calendar year. What’s represented through our fiscal fourth quarter of ‘16, the dotted line, which marks the SanDisk acquisition date, is SanDisk flash revenue and gross margin on a standalone basis, while the balance of the chart shows the combined Western Digital and SanDisk flash business. You will note that the dotted flash revenue trend line over this long-term period demonstrates an upward trajectory. With our portfolio breadth, we entered this cycle with nearly twice the scale that SanDisk had when it last navigated the cycle as a standalone company. As you can see on the chart, we recorded $2.6 billion in fiscal second quarter ‘18 flash revenue versus the $1.3 billion recorded by SanDisk during fiscal third quarter ‘15 and the $1.4 billion reported during fiscal first quarter ‘12. By combining both hard drives and flash, we have tempered the impact of the NAND flash industry’s periods of volatility as a result of more stable hard drive gross margins. This enhances our financial model resiliency.
Although the NAND flash industry exhibits cyclical volatility, the industry has continued to become more economically rational as it matures. We believe one of the strongest barometers of the industry’s long-term economic health is its return on investment. As you can see, over the last 10 years, the industry has delivered a healthy upward ROI trajectory in spite of its periods of short-term volatility driven by transitory supply/demand dynamics. The other key barometer is elasticity of demand. Historical evidence demonstrates that as prices normalize, the NAND market has not only exhibited demand elasticity in existing segments, but has time and again enabled new opportunities and new market applications.
With this deeper understanding of NAND industry dynamics, I would like to return to our total addressable market at an aggregate and sub-segment level. Client devices includes both hard drive and flash products for PCs and consumer electronics, and flash solutions for mobility. This also includes high-growth flash applications such as the Internet of Things, autonomous vehicles, AI and machine learning. Client solutions is our branded retail flash and hard drive business. Data center devices and solutions includes our data center and enterprise hard drive and flash products as well as our platforms and systems business. In the next few slides, I’d like to offer some further insights into each of these end segments and how they contribute to our overall business opportunity. To put it all in context, we serve large, growing markets. They are forecasted to total $111 billion for our core business and $35 billion for our data center solutions business by fiscal ‘23. We have the portfolio breadth and depth to participate across all major sub-segments of this market, which enables our long-term revenue growth, operational efficiency and cash flow generation.
Let me highlight some of the important aspects of these markets. Client devices has a $57 billion TAM in fiscal ‘23 with a 4% CAGR. Flash is expected to grow at twice that rate, or 8%, to $51 billion, while the hard drive TAM is expected to decline at a 13% annual rate to approximately $5.6 billion as hard drives transition to flash, mainly in PC applications. All of which is factored into our product and operating plans. In client solutions, we have a large TAM of approximately $10 billion in fiscal ‘23, experiencing a slight decline of approximately 2% on an annual basis. With extensive worldwide distribution and leading consumer brands, we generate strong cash flow from this segment and continue to build on our leading position across the segment. Data center devices is expected to have the strongest growth during this period, with double-digit CAGRs in both hard drives and flash. The market is expected to be approximately $45 billion by fiscal ‘23. I’d like to highlight that capacity enterprise hard drives remain a key engine of growth for our company and one in which we’ve demonstrated technology and product leadership, with the industry’s first helium-based hard drives and the recent announcements relating to our energy-assisted recording technology. Finally, for data center solutions, we see the $35 billion TAM in fiscal ‘23 as a compelling up-the-stack growth opportunity for us. This is another segment where we believe we’re positioned favorably, thanks to our vertical integration and vertical innovation advantages, as Phil described earlier.
Overall, we expect a $111 billion TAM in fiscal ‘23 with a 6% CAGR, with flash growing at 8-plus percent to $84 billion and hard drives growing at 2% to $27 billion, again, primarily driven by capacity enterprise. To enable greater understanding and modeling, I’d like to offer some additional commentary on each end segment, particularly the key trends for our business. In client hard drives and solid state drives, we expect increasing flash penetration in desktop and notebook applications as well as a slight decline in PC units over the next 5 years. We expect average flash capacities per unit to increase to approximately 700 gigabytes by fiscal ‘23. In consumer electronics hard drives, we expect a growth opportunity in surveillance. In mobility, growth is primarily driven by the substantial increase in average capacity per smartphone unit, which is expected to reach 200 gigabytes per unit by fiscal ‘23, while unit growth is expected to be a modest 3% per year. The 5G rollout and the proliferation of smartphones in certain developing regions like India could provide additional tailwinds to this business. And finally, for other embedded flash, we expect significant demand for NAND flash in various applications from surveillance and security, automotive, industrial IoT and gaming.
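As a quick sanity check on the aggregate figures above, a hypothetical back-of-the-envelope sketch (not company-provided math) can recombine the fiscal ‘23 segment TAMs and apply the quoted growth rates with the standard compound-growth formula:

```python
# Back-of-the-envelope check of the fiscal '23 TAM figures quoted above.
# CAGR = (end / start) ** (1 / years) - 1

def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1.0 / years) - 1.0

flash_f23 = 84.0   # flash TAM, $B, fiscal '23 (quoted, ~8% CAGR)
hdd_f23 = 27.0     # hard drive TAM, $B, fiscal '23 (quoted, 2% CAGR)
total_f23 = flash_f23 + hdd_f23

# Implied fiscal '18 starting points, discounting back 5 years at the quoted rates.
flash_f18 = flash_f23 / (1.08 ** 5)
hdd_f18 = hdd_f23 / (1.02 ** 5)
implied_total_cagr = cagr(flash_f18 + hdd_f18, total_f23, 5)

print(total_f23)                     # 111.0, matching the quoted $111B total
print(round(implied_total_cagr, 3))  # ~0.063, consistent with the quoted ~6% CAGR
```

The segment TAMs sum exactly to the quoted $111 billion, and the blended growth rate implied by the two segment CAGRs lands at roughly 6%, matching the quoted aggregate.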
For our client solutions business, we expect slight declines in both retail hard drive and flash-based products as more consumers get comfortable with the cloud for their personal storage needs and as average smartphone and personal compute device capacities expand. However, that decline in retail TAM is offset by an 18% growth rate in removable flash cards, which go into high-growth IoT applications.
For enterprise hard drives, the growth is primarily driven by capacity enterprise. We expect the TAM to expand from $9.9 billion in fiscal ‘18 to $18.8 billion in fiscal ‘23 for a 14% CAGR. For enterprise flash, another one of our strongest growth opportunities, we expect significant demand from the continued transitions to the cloud and the major infrastructure build-outs required for AI and machine learning. We expect approximately 10x growth in PCIe Enterprise SSD bit demand from approximately 12 exabytes in fiscal ‘18 to 135 exabytes in fiscal ‘23. For data center systems, the proliferation of big data and fast data applications will fuel our ongoing growth. We also believe hyper-converged infrastructure customers are progressively seeking a reliable, cost-effective, white box alternative as they move towards the composable infrastructure vision of the future.
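Two of the growth claims above are easy to verify with the same compound-growth arithmetic. This is a hypothetical sanity check; the dollar and exabyte figures are the ones quoted in the transcript:

```python
# Check the capacity enterprise and PCIe enterprise SSD growth figures quoted above.

def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1.0 / years) - 1.0

# Capacity enterprise hard drive TAM: $9.9B (fiscal '18) -> $18.8B (fiscal '23)
cap_ent = cagr(9.9, 18.8, 5)

# PCIe enterprise SSD bit demand: ~12 EB (fiscal '18) -> ~135 EB (fiscal '23)
ssd_multiple = 135 / 12      # 11.25x, i.e. the "approximately 10x" quoted
ssd_cagr = cagr(12, 135, 5)  # roughly 62% per year over the 5-year span

print(round(cap_ent, 3))      # ~0.137, consistent with the quoted 14% CAGR
print(round(ssd_multiple, 2)) # 11.25
```

Note that the $9.9B-to-$18.8B path works out to about 13.7% annually, which rounds to the quoted 14% CAGR, and 135/12 is slightly above the "approximately 10x" stated.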
Our strategy has always been to focus on growing the company profitably. We’ve grown our revenue 4x since fiscal ‘07 through a series of acquisitions and strong operational execution. We’ve successfully transformed from a pure hard drive company in fiscal ‘07 to a global leader in data infrastructure, with over $20.5 billion of revenue in the last 12 months. Our non-GAAP operating margins have expanded from high single digits in fiscal ‘07 to the mid-20s in the last 12 months. As you can see in the slide, while the long-term trajectory of our revenue is upward, we have achieved this growth by navigating periods of near-term volatility.
Let me provide you a brief overview of our value creation from combining hard drives and flash. Since the acquisition of SanDisk and until fiscal ‘18, we’ve grown our revenues by 63%, our non-GAAP operating margin by 102% and our free cash flow by 74%. Our strategy of combining hard drives and flash has delivered greater profitability to our stakeholders and the opportunity for continued long-term value creation. Our strong financial performance has been driven by sound execution. We accelerated our revenue growth as we entered into a stronger NAND flash market following the SanDisk acquisition. As you can see in the middle section, we did this through prudent organic investment, with a focus on research and development and product innovation while diligently managing our overall operating expenses. This has resulted in strong returns on invested capital for our shareholders. In the current environment, we expect our non-GAAP OpEx as a percentage of revenue to be slightly higher than our long-term financial model. We are focused on aggressively managing our expenses and investments without compromising our market leadership and our ability to serve our customers.
Next, I would like to discuss our cash flow generation capability on both a levered and an un-levered basis. During fiscal years ‘17 and ‘18, we generated adjusted, or un-levered, free cash flow of $2.6 billion and $2.7 billion, or approximately 14% and 13% of revenue, respectively. Now regarding capital expenditures, as we explained 2 years ago, we have an efficient capital investment model. As we stated in our long-term financial model update during our fiscal fourth quarter ‘18 earnings, we target a cash CapEx range of between 6% and 8% of revenue. Hard drive CapEx trends below this range, and flash CapEx trends slightly above it. Finally, as we guided in our recent Form 10-K for fiscal ‘18, we expect fiscal ‘19 cash CapEx to be between $1.5 billion and $1.9 billion.
Another area I would like to revisit from our last Investor Day and highlight again is the efficient cash CapEx model in our joint venture with Toshiba Memory. At a high level, the JV CapEx is funded from three sources: one, direct cash investments from us and Toshiba Memory; two, third-party equipment lease financing; and three, JV cash flow generated from selling wafers to Western Digital and Toshiba Memory. The JV’s wafer sales to us have two cost components, fixed costs and variable costs, both of which are reflected in our COGS. By reducing wafer starts through the recent actions we described on our last earnings call, we eliminate the variable costs, resulting in cash savings. However, we still have to pay the fixed costs to the JV. The fixed costs for the wafer output we plan to reduce will be taken as a GAAP accounting charge.
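The fixed-versus-variable mechanics described above can be illustrated with a small sketch. All numbers and names here are hypothetical, purely for illustration; the transcript does not disclose the JV's actual cost split:

```python
# Illustrative model of the JV wafer cost mechanics described above:
# cutting wafer starts saves the variable cost per wafer, but the fixed
# cost attached to the reduced output is still owed to the JV (and is
# taken as a GAAP accounting charge). All numbers are hypothetical.

def wafer_cost_impact(wafers_cut, fixed_cost_per_wafer, variable_cost_per_wafer):
    """Return (cash_saved, fixed_cost_still_owed) for a wafer-start reduction."""
    cash_saved = wafers_cut * variable_cost_per_wafer  # variable cost eliminated
    charge = wafers_cut * fixed_cost_per_wafer         # fixed cost still payable
    return cash_saved, charge

# Hypothetical example: cut 10,000 wafer starts at $1,500 fixed / $2,500 variable.
saved, charge = wafer_cost_impact(10_000, 1_500, 2_500)
print(saved)   # variable cost avoided (cash saving)
print(charge)  # fixed cost still owed, taken as a charge
```

The point of the sketch is simply that the cash saving scales only with the variable component, while the fixed component becomes a charge regardless of output.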
As a management team, we continually focus on the external factors that may have the greatest impact on our business, operations and financial performance. Understanding these external factors helps us better operate the business and develop our future plans. This slide provides a list of eight external factors and indicates whether we believe the potential impact could be positive, negative or both. Many of these factors have both long-term and short-term implications. As these factors change over time, we will provide commentary regarding how they are impacting our business and, where relevant, how we plan to mitigate any resulting risks.
Since the closing of our SanDisk acquisition, we have successfully executed on our capital structure optimization strategy, which is focused not only on de-leveraging but also on lowering our borrowing costs through a series of debt transactions, repricings, prepayments and pay-downs. Although this effort is ongoing, in just 2 years these highlighted events have resulted in a reduction of approximately $470 million of annual cash interest expense and a reduction of approximately 3.6% in our weighted average borrowing rate, which has been accomplished against a backdrop of rising interest rates. And finally, we have reduced our gross debt by $6.3 billion since the close of our SanDisk acquisition. The resulting impact on our business is greater flexibility and liquidity, critical for navigating near-term industry volatility and for positioning us for long-term profitable growth.
With respect to our capital structure today, we have a strong balance sheet, with liquidity of $6.5 billion as of September 28, including $4.3 billion of cash and equivalents and $2.25 billion of un-drawn revolver capacity. Our debt is now $10.8 billion on a gross basis and $6.5 billion on a net basis. We have significantly reduced our cost of debt in spite of a rising rate environment. Specifically, our effective interest rate in the most recent fiscal first quarter of ‘19 was 3.8% versus the 5.6% recorded when we first closed the SanDisk acquisition. We will continue to optimize our capital structure over time.
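The balance sheet figures quoted above are internally consistent, which a quick arithmetic cross-check confirms (figures as stated in the transcript, in $B):

```python
# Cross-check the liquidity and net debt figures quoted above ($B).
cash = 4.3         # cash and equivalents as of September 28
revolver = 2.25    # un-drawn revolver capacity
gross_debt = 10.8  # gross debt

liquidity = cash + revolver   # quoted as $6.5B of liquidity
net_debt = gross_debt - cash  # quoted as $6.5B of net debt

print(round(liquidity, 2))  # 6.55, i.e. the quoted ~$6.5B
print(round(net_debt, 2))   # 6.5
```

Cash plus un-drawn revolver comes to $6.55B, in line with the quoted "approximately $6.5 billion" of liquidity, and gross debt minus cash reproduces the $6.5B net debt figure exactly.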
I would like to share how we view overall capital management on an end-to-end basis. It starts with our robust business model with strong operational and financial discipline, our efficient capital investment framework and strong balance sheet, which allows us to invest in key areas of our business and return capital to stakeholders. Capital management is a priority for our company. This scorecard shows how we have performed over the last 12 months. We generated approximately $2.2 billion of free cash flow or 11% of our revenue. Our days inventory outstanding of 87 days is elevated due to a combination of excess flash inventory and some strategic hard drive inventory we need to maintain as part of our Kuala Lumpur site closure. As we mentioned earlier, we have sufficient liquidity and have reduced our debt balance after our recent $500 million revolver pay-down to $10.8 billion. We will continue to focus on de-leveraging and maintaining strategic flexibility. Our cash CapEx is near the high-end of our business model range due to investment in our flash business. We are in active discussion with our JV partner to manage our investments given the current market environment.
Also as I mentioned earlier, we have returned a significant portion of our free cash flow to our shareholders in the form of dividends and buybacks. Just as we stated 2 years ago, we continue to be disciplined in our capital allocation, focused on long-term shareholder value creation. Our priorities include investing in next-generation technologies, products and solutions, many of which we have heard about today; staying committed to our quarterly dividend; executing our recently announced $5 billion share buyback program; optimizing our capital structure and continuing to de-lever; and finally, conducting mergers and acquisitions to acquire key technologies, teams, market access, intellectual property and everything necessary to achieve our strategic objectives.
Here you can see the results of our disciplined capital allocation. Since the start of our fiscal ‘13 year, which was the first full year after the acquisition of HGST, we have done the following. We invested approximately $22 billion organically in our business in terms of both cash CapEx and operating expenses, the majority of which was in research and development. We delivered approximately $2.6 billion in dividends to our shareholders and bought back approximately $3.8 billion of our shares. We also paid down approximately $7.1 billion of debt. And we consummated almost $17 billion of strategic M&A, primarily through our transformational acquisition of SanDisk. Just as it has since the beginning of fiscal ‘13, management’s capital allocation strategy going forward will remain disciplined and balanced.
In conclusion, we believe we have a compelling platform for shareholder value creation based on a robust business model vertically integrated in both hard drives and flash and designed to generate long-term profitable growth; our positioning in attractive growth segments, coupled with our scale, product portfolio breadth, technology leadership and the depth of our customer relationships; our ability to operate efficiently, identify and pull all appropriate levers in the current environment while investing judiciously in long-term growth; our much stronger liquidity and balance sheet today relative to 2 years ago and our continued focus on further optimization; and finally, our disciplined capital management and allocation.
Thank you. Now we are going to bring Steve and Mike back up to the stage and move to Q&A. Thanks.
Alright. Fire away.
Thank you. It’s Mehdi Hosseini from Susquehanna. I have two questions, one for the team. A couple of times during the presentation, you talked about improving longer term prospects, especially if you look at the second half of calendar ‘19, but I haven’t heard anything about how bad it is going to get in the first half. So could you provide any update as to how you see NAND supply/demand and the weakness in nearline trending into the first half of ‘19? And then one follow-up for Martin. You did highlight the challenges in unlocking the CPU and GPU, but I didn’t hear how you are actually going to implement it or how you are going to break up the constraint you have. I think in the longer term, that’s very promising, very inspiring. But I still don’t know how you, as a memory solution provider, are going to go unlock a CPU and GPU lock.
Okay. Let me talk to the market dynamics question, talking about flash first. Our view, just to reiterate, is that the demand profile on a long-term basis is in this 36% to 38% range. For in-market demand this calendar year, we think there are a few headwinds that are going to affect the front half of the year. A combination of that and, candidly, dealing with the inventory overhang that we will collectively, as an industry, be bringing in will take the demand growth rate for calendar ‘19 below that range. It is a seasonal market in general, so it will be stronger in the back half and weaker in the front half, but we do see the demand side of flash being below that 36% to 38%. Moving to the hyperscale and cloud build-out and the capacity enterprise story: yes, as we said on the earnings call, that consumption phase or investment phase is going to be somewhat less in the first half of the year. We do have clear indications that the investment cycle reaccelerates in the back half of the year, and so that’s our current planning. So we think we will have a much stronger second half of calendar ‘19 relative to capacity enterprise, and that will apply to flash as well as those guys begin to reinvest.
You said the exabyte will be flattish in the first half. Is it still tracking to flat?
So exabyte growth year-over-year is roughly flat here. And remember, that was a nearly 100% growth rate on the prior year-on-year compare. So yes, it’s in that same range. It’s not deteriorating, but it’s going to be a slower investment cycle in the first half of ‘19.
Let me – actually, I am going to take a stab at the second question, and then you can correct me where I am wrong. Part of the thing is that we believe, and feel free to correct me if I misstate this, that system architectures need to change in order to move to a data-centric world. Now, let’s be honest. We are not going to do that all on our own. We see RISC-V as an opportunity for an ecosystem, a RISC-V ecosystem if you want to call it that, to invest in something in a collective fashion that allows system architectures to move to a data-centric world. We also get a secondary benefit from that by very inexpensively redesigning the cores that we use internally in our products, which we talked about earlier, and we can optimize that performance for us. We are not trying to break the CPU or GPU lock by ourselves, and we are not trying to do it on our own, but we do believe that system architectures need to adjust, and we are trying to catalyze that through RISC-V, through a very low level of investment for us that also provides us with a direct benefit that has nothing to do with moving to a data-centric world.
So, let me just add to that, to reinforce, and I should have said this. One is we have no intent of going into the processor business and becoming a processor vendor. For us, processing is a means to an end, not the end in and of itself. And so that’s what’s different. To answer your question, there are a couple of different parts. Everything Steve said is exactly right: we are building out an ecosystem. And the work that we are doing right now is looking at what we can do to foster that ecosystem. One of the things we are working on internally right now is a platform, codenamed Houdini, and that platform is a memory-centric platform that will allow you to have a RISC-V processor, an ASIC, an FPGA and anybody else who wants to come play along share one memory fabric through that OmniXtend fabric that I talked about. And in fact today, at the RISC-V Summit, if you want to go to the Santa Clara Convention Center, we are demonstrating the OmniXtend fabric talking point-to-point between two nodes. The other way in which we are doing this, which Steve didn’t mention, is through investment in startups and other companies. A good example is we invested in a company called Esperanto. Esperanto is building an NVIDIA-style graphics processor. And so we don’t need to go build our own GPU if Esperanto is building a GPU and we can just use that. That’s the beauty of this sort of open source model and why we are serious and why we said we are open sourcing everything we do as it relates to RISC-V. So, that’s how we are going about it, okay.
Hi, Amit Daryanani, RBC Capital Markets. Thanks a lot for hosting the event. It’s really helpful. I guess two questions for me, one on the NVMe SSD roadmap. I understand that, for a host of reasons, you guys were behind and you will have a product in early ‘19. But maybe help me understand why a customer would want to choose the Western Digital solution when there is an equally good solution from Samsung and others. So what’s so different about what Western Digital will do, beyond pricing? The second part: you guys talked a lot about near-term headwinds, near-term softness, but I don’t know that you updated the December quarter guide at all. Do we take that to mean that the numbers that are out there are perfectly fine and safe? Just a comment or a suggestion otherwise would be helpful. Thank you.
Alright. Let me handle the PCIe question for enterprise. So I think a couple of things. One is that the enterprise marketplace is underserved. Although there are 5 or 6 flash players, depending on how you count, there are only a few that are serving that marketplace, full stop. So, it’s an underserved part of the market in general. Even as that changes, we’re very confident in our architecture, because this is not a standardized marketplace. Increasingly, it’s becoming more customized. So when you look at the platform I talked about, our ability to efficiently deliver customized solutions to meet specific workload requirements, we think we are coming in at a very good place. So the capabilities that I talked about, that Ganesh talked about, will be deployed here. So this is not like the old days of enterprise systems. We are seeing very specific customized requirements coming down to us. Our ability to efficiently meet those requirements will be important. So we have a platform to do that and we are going to invest heavily to create that capability. The other thing that’s happening, because of that specialization, is that you can’t service every single end market requirement. So the way we’ll evolve this – and it was referenced a little bit in my talk, but also by Mark and Ganesh – is we are going to have more, call it, joint development work, because again of the specialization. So people are going to partner up, because this is not about the industry saying, here is the standardized product for 2 years from now; everybody, go develop that, and whoever gets there first wins. It’s not like that in this marketplace. So it’s increasingly customized. Our ability to do that efficiently off a strong platform is a competitive advantage. So underserved, number one. Number two, we think we have got a great platform that can meet those needs over time.
And our customers want us there, too. That’s the other thing. So I’ll take the second question, on the guidance. So we are not updating guidance. Let me explain why, so that everybody understands it, just to be clear. Typically, in calendar Q4, which we are in right now, December represents – I don’t know what the exact number is, but clearly anywhere between 40% to 50% of our quarterly business. I mean, it’s a big month. And so we are sitting here December 4; we still have a lot of the month left, a lot of the quarter left. So that’s one consideration. The other thing is Mike’s chart, where he highlighted different elements – I don’t think there were any yellows; it was either red or green, either positives or negatives, right. The intent of that was to say, if you go back to when we set guidance, what’s kind of changed? In other words, what’s maybe gotten a little bit better, what’s become a little bit more of a headwind? That was the intent of that. And to be clear about that, what it does show is more of a negative bias. It just shows more of a negative bias. And I don’t think that, frankly, should be a surprise to anyone when you look at what other companies have said and what’s going on in the marketplace. And so there is a slight negative bias to those numbers. But to be clear, we are not updating our guidance at this point for the current quarter. Did I say that the right way? Did the lawyer give me the wave that everything is okay? He gave me a not-so-reassuring nod, well.
Thank you. Wamsi Mohan, Bank of America/Merrill Lynch. Thanks for all the details you shared today on your long-term models. I was wondering, just philosophically, right, if you just step back – I mean, we heard from Professor Siva that you have got the lowest cost bit at 96 layers. Why then should you be one of the players that is cutting wafer starts? Why would it not come from the marginal cost player in the industry, and you could use this as a lever to take market share? I have a follow-up.
Well, I can’t comment – I will comment on that, and you guys can chime in too if you would like. I can’t comment on what our competitors are doing. The only thing that I can do is comment on what we are doing. And the reality of it is that whether you look at our inventory levels, which Mark talked about being elevated, or if you look at our planned supply prior to the cutbacks versus the demand that we’re seeing at prices that we consider to be acceptable, we’re not seeing it. So we believe that we need to cut our supply. I can only explain that from our perspective. And I feel like it’s absolutely the right decision, but I can’t comment on what our competitors are doing. And oh, by the way, having gone through this in the drive industry at certain points, whether it was the Maxtor guys back in the day or others, I can’t always explain what my competitors do. What seems to me to be an obvious thing – they make different decisions, and that’s fine. All we can do is manage what we control, and that’s what we’re doing.
I think, just to add to what Steve said, I mean, we see what our planned output rate would be. We also see what the end consumption rate is. In a time like this, the sufficiency ratio is quite important. This is a bit of a closed system, right. And so our view is – and Mark talked about this – this is an industry that’s coming through a maturing phase. We are going to do our part in matching our output to what we see end market demand is going to be for us, and we don’t think about this on a contribution margin basis. We think about it in terms of long-term return on capital, and that’s the way we are running the business.
Thanks for the color. And as a quick follow-up, I was wondering if you can comment on the worry amongst some investors that you guys might be losing flash market share, based on the fact that some of your competitors are shipping multi-chip packages with both DRAM and NAND combined, versus you guys potentially not addressing some of that market. So, if you could just comment on whether that’s actually accurate or not, and whether you think that creates any structural impediments for you? That would be very helpful.
Yes. So if we just go to sort of the facts of it, we lost a little bit of share two quarters ago. And last quarter, we actually gained bit share. So yes, absent a specific investment in that area, we do have a structural disadvantage, but we think we have enough diversification in our portfolio to work around that. So from a bit share standpoint, no, I don’t think we are in a situation where we are systematically losing bit share.
[indiscernible]. I want to go back to the question about industry inventory. Where in the supply channel is the inventory? Is it in chips, dies, wafers, modules, systems – or every place? That’s one question. The second question is, you mentioned that there is a negative bias to what’s happening in the industry, but at the same time, hopefully, Western Digital has also started cutting costs deeply – hopefully faster than the negative bias. That’s it.
Alright. Let me take the inventory question. I think, unfortunately, sort of everywhere is the right answer. So, we see customers that are holding inventory, because as we came out of a long-term constrained position, they took on a more aggressive bias in terms of loading up their own inventory, so they are consuming it. Certainly, manufacturers have inventory; we obviously showed that on our own balance sheet. So, it’s really the whole supply chain making an adjustment.
So on costs and expenses, we are obviously taking very aggressive action to manage our costs and expenses, no question about that. And we will continue to see progress in that regard. I do want to make one comment clear from an expectation standpoint as it relates to investors. The first thing is – and I don’t know how this will be received, but this is the truth – that given the downward bias in terms of flash pricing that we have seen recently, there is no way we can cut our costs and expenses rapidly enough to offset that. So, if there is a notion that we are going to be able to do that, it is incorrect. The other thing is that we are going to tackle our costs and expenses intelligently and not compromise the long-term future of the organization. That is not a smart way to run the railroad. And again, if there is an expectation from an investor standpoint that that’s what we are going to do, it’s not correct. But we absolutely will do everything that we can to intelligently manage our costs and expenses down to help offset some of the weakness that we are seeing.
Right. So you will see, as we have talked about, the OpEx trajectory come down and reflect this period of volatility. But again, the majority of our OpEx is R&D. And while we are taking a close look at that and making sure we are focused on the right investments and the right projects, we are not going to put ourselves in a position where we lose our great technology advantages or our great productization capability, or where we’re not prepared to push forward as a leader coming out of the cycle.
So I’ll use one example just to give you a little bit of flavor on it. One of the things that we have announced is the closure of our Kuala Lumpur facility, the hard drive manufacturing facility. First off – I don’t want to minimize this; I mean, everybody says they have a hard job, right, and I recognize that – but it’s not easy to close a factory. There is a lot that goes into that. And you have to transition that manufacturing capability, that production capability, to other facilities. So that takes a little bit of time, and there is risk associated with that. Given what we are seeing from an overall market dynamic, we are looking at the opportunity to accelerate that closure and that transition faster than what we previously expected. We haven’t really committed to anything on that, so that’s an example of where we will go look at something. But we have to recognize that if we go too fast, we could screw it up, too. So we have to be careful when we look at things. That just speaks to an example of where we are looking at accelerating something, but we also have to understand that when we do that, there is a flipside that you have to consider as well. In other words, you do add increased risk that we have to manage.
Yes. And I guess the last point is, there have been players in the industry – and we’ve seen this in the past – where the move to cut OpEx in a draconian way to react to the cycle has resulted in the loss of the ability to continue to fund the right projects and the right products. So, we want to make sure we don’t make that mistake, because that actually proved far more harmful than the short-term benefit they got from the OpEx reduction.
Karl Ackerman from Cowen. I had two questions, please. You guys talked about 36% to 38% supply growth in NAND, I think, next year, and it sounds like that’s going to be here for a while. But to what extent does wafer supply growth play into that 36% to 38% bit growth trajectory for your company specifically? I know you said overall industry wafers are flattish and growth will be driven by conversions and tech transitions. But I think maybe you are a little bit further along in transitioning from planar to 3D NAND than some of your peers. So perhaps the industry dynamic of flattish net wafer adds might not apply as much to you. Your thoughts there...
Yes. Let me just clarify that. 36% to 38% is end market demand for flash. We have talked for a long time about this 35% to 45% growth rate on the supply side, and Steve talked about that on his chart. So in 2018, it was in the sort of mid-40s, right, and now we are kind of watching. Obviously, we have taken actions. We are watching what others are doing relative to what’s going to happen this year. So our action is really trying to match our supply with our expected end market demand. So that’s the way I would characterize that.
Got it. I guess, as my follow-up, Mike, I think you talked earlier about being more vertically integrated in SSDs as you progress toward NVMe. I am curious how you think about using merchant versus in-house controllers across other SSD interfaces, particularly SATA or SAS? And if so, how should we think about the OpEx from here if the choice is to move toward those in-house solutions? Thank you.
So for what we think are our strategic and fast-growing markets, we are almost exclusively in-house, and we are already there. So the OpEx you see from us today is enabling us to get there. As we look at other, maybe more niche, markets, we will evaluate whether we want to use a third-party partner to prosecute that opportunity. So we will really augment what we think is the core in-house IP development. That does a lot of things for us. Obviously, product differentiation is one. But ultimately, we can get to a very clear time-to-market advantage, meaning being able to launch products very close to the node ramp. So, all of those things are where we are advantaged by internal development.
Hi. Steve Fox with Cross Research. Just one question. So from an OpEx standpoint, you have given a lot of great updates on how the data center solutions roadmap has progressed over the last year or two. If you are still on a slippery slope in terms of, say, where the bottom of the cycle is, what’s the commitment to continuing to invest, continuing to do R&D with a customer? And is there some sort of near-term offset, with revenues accelerating further on sort of this optionality on the core technology?
Are you – which area are you referring to?
The data center solutions, so where you...
So Phil’s area.
Yes, where you had a $35 billion TAM and you talked about the 4 million product.
Well, Phil’s first goal – and Phil knows this – is to get off the payroll, okay? And he is very, very close to that. And so now, Phil, close your ears, because I still want to make sure you get off the payroll. But the drag from his business is insignificant at this point, alright? And so we believe it’s a good long-term investment for us. Several quarters ago, I couldn’t have said that, but at a minimum, we want to get that business to a breakeven level. And like I said, at this point, the fact that it’s not there yet is insignificant to our current financial situation. So that positions us to be able to make the right long-term investments, to take advantage of not only the product opportunity and the market opportunity, but to benefit financially from that as well.
I think we have time for one more.
Great. Thanks for taking my question. Christian Schwab from Craig-Hallum. So I was confused on the internal controller conversation. So you are moving to NVMe with your own internal controller. You have had a couple of partners for an extremely long period of time. Does that mean that you are trying to move away from working with them to develop your own controllers, or are you putting in more of the content and they’re still fabbing those chips for you?
Yes. So let me just be clear. Within enterprise NVMe, that’s an internal controller. That’s well known. Our partners understand that. There are partners that provide us IP, but it’s still our design, and so on. And then ultimately, how we do the fabbing of that depends on the particular part – same situation for client, same situation for our embedded products. Now, there are a number of other products that we have that are in more sort of niche markets. For example, our SATA client SSD continues to use an external controller. We don’t see that as an emerging growth segment, hence that choice. The last point I’ll make is on the enterprise side of things: our Intel joint development plan goes on. So that’s a controller we did jointly with them. So that gives you, hopefully, a little more clarity on where we are with our controller investments.
And he was talking about flash obviously not hard drives.
Yes. And hard drives are all on the traditional model, where we have partners that we work with in that area.
Correct. And then I guess my last question then. Steve, we have been through a lot of cycles together.
What is the timeframe that you would logically expect to be back at the midpoint of your long-term target, give or take a few pluses and minuses?
Well, anything I say can be wrong. I mean, let me tell you – I am going to answer your question, but it’s going to be a non-answer. What we have been trying to execute to is that the market will begin to normalize, where margins begin to kind of start to get a bit better, as we move into the back half of next calendar year. Now the challenge that we have is that the demand environment has incrementally gotten a little bit more negative. The signals are a little bit more negative recently – some of the smartphone volumes, all that stuff; I have to be a little careful, because customers don’t like us talking about them. And so that could conceivably push things out a little bit. It’s a little hard to say, because it is a dynamic environment. But when we talked about the wafer start cuts that we were doing, it was to get ourselves into effectively a balanced supply/demand situation as we exited the June quarter. I don’t know what others are going to do, but that’s what we were trying to orchestrate from our perspective. And we are going to continue to try to dial in the supply and demand to the extent that we can. Now we are operating with these big, giant mega-fab kind of things, so you can’t dial it in maybe as much as we historically have been able to in the drive business; it’s a little bit more challenging. So that’s the best I can do, Christian.
I am supposed to wrap it up. Well, anyway – sorry – thanks to everybody for coming. We appreciate your interest in our company. We appreciate you all being here, and we are going to now exit for lunch. So you go out that way and then to your left. Alright, thank you, everybody.