NVIDIA Corporation (NVDA) Management Presents at Arete Tech Conference (Transcript)

NVIDIA Corporation (NASDAQ:NVDA) Arete Tech Conference March 3, 2021 2:45 PM ET
Company Participants
Simona Jankowski - VP, IR
Conference Call Participants
Brett Simpson - Partner and Co-Founder, Arete Research
Ian Buck - VP & GM, Accelerated Computing
Unidentified Company Representative
…NVIDIA. Ian, great to have you join us today. We really appreciate your time. We also have Simona Jankowski. I am going to pass it over to Simona to read out the forward-looking statement. So Simona, over to you.
Simona Jankowski
Thank you, Brett. As a reminder, this presentation contains forward-looking statements and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business. Back to you Brett.
Brett Simpson
Thanks, Simona. So Ian, maybe to set the scene here in the market: I guess 2020 was a breakout year strategically and commercially for NVIDIA, and for your division specifically. You launched the A100, you finalized the Mellanox transaction and launched DPUs. There is obviously the proposed acquisition of ARM. There were the China entity list restrictions affecting some sales. It was obviously just a crazy year.
It's clear, at least to investors, that we're in this giant innovation phase for compute right now. Can you maybe just break down what you see in front of you running this division? What do you see over the course of 2021, 2022? I think we can all recognize we're still early in the adoption curve of accelerated compute, but what's going to define this year for you after a crazy 2020?
Ian Buck
It was a crazy 2020, and certainly from where I am sitting, NVIDIA launched a whole new architecture, the Ampere GPU, or A100. We launched it from our homes, in fact Jensen from his kitchen. While other companies slowed down or failed to execute, I think we thrived in that environment. We were already a virtual, global company, we knew how to do what we do, and we launched it from a kitchen virtually. A100 has been a great success for us, and it continues to be so.
In fact, the market today is only just now really getting to enjoy the benefits of what Ampere can do for businesses. It's a 20X improvement over our previous architecture. This is the kind of improvement that we make generation over generation. We were able to achieve that because we look at it from a whole-stack perspective, not just the chip and what the chip can do. Obviously the chip is incredibly important, and Ampere added new technologies like TensorFloat-32 and a whole new Tensor Core architecture, and on the gaming and visualization side it also added new capabilities that are amazing.
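(For the technically inclined: TensorFloat-32 is exposed through ordinary framework switches. Below is a minimal, hedged sketch of how TF32 is typically enabled in PyTorch on an Ampere-class GPU; the flags are standard PyTorch settings and the matrix sizes are arbitrary, not anything Ian references here.)

```python
# Minimal sketch: enabling TensorFloat-32 (TF32) math in PyTorch on an Ampere GPU.
# These are standard PyTorch switches (defaults vary by release); this is
# illustrative, not NVIDIA's own benchmark code.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # use TF32 Tensor Cores for matmuls
torch.backends.cudnn.allow_tf32 = True         # and for cuDNN convolutions

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b                                      # runs in TF32 on A100-class hardware
print(c.shape)
```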
But we optimized all the software stacks as well. So what we're seeing right now is the adoption of A100, targeted first and foremost at the AI and cloud markets and broader data centers. The ability to train some of the world's largest models, we've done that now on Ampere with A100. It also, of course, sees success in HPC and scientific computing.
Our strategy is to build one GPU, one platform that's highly leveraged, and then invest in different vertical capabilities, markets and libraries to take advantage of that one GPU architecture. So what I see moving forward is, first, AI obviously continues to grow. It's software, writing software, that is defining the next-generation applications that you and I interact with in the cloud, that are used for business insights and other things. It's a totally new technology, something the world is still learning how to use even from a first-adopter standpoint, and you see that in our business. It grew a lot originally in the hyperscalers, the Googles and Facebooks and Amazons and the other parts of the world who have the people, the technology and the knowledge to go invent it and build it from scratch. That is now evolving.
So the cloud is starting to consume more and more of our GPUs as the rest of the world learns how to use this technology, either directly or through NVIDIA's software partners that we're working with. That's on the training side for sure, and also now increasingly in the enterprise. These are companies who are now understanding how they can apply this AI technology to their business and their decision-making, for things like understanding the data they're ingesting, understanding product adoption, understanding what to recommend to their customers, and also changing how they interact with their customers, particularly here in COVID where it's a whole new world of video chat, which is what we're doing right now.
Brett Simpson
This is a largely technical audience, and I wanted to spend a bit of time on the market and the strategic opportunities you see in the next couple of years. Maybe first of all, we've seen a lot of research breakthroughs in AI in the last 12 months, model sizes are going crazy and algorithms too, but it still looks like we're in the applied research domain with a lot of these breakthroughs. I think your colleague Bryan was saying that maybe on a five-year view it's possible that companies could be spending $1 billion in compute time just to train a single language model. So I guess, what are the practical applications we're going to see from all these breakthroughs in model size?
Ian Buck
People are still finding the limits of AI, for sure. I think one of the areas where it started was computer vision, and that was the basic question of what is in this picture, and there I think we're largely mature. This was the starting point of AI six, seven years ago, asking that question, and I think we're pretty good at that. We're getting pretty good also at identifying different parts of the picture, putting boxes around them, even identifying individual pixels, what constitutes my face versus the background, and you're starting to see consumerization of that technology.
When Bryan talked about these huge models, the billions of parameters that you hear about, that's in new areas, particularly natural language processing. That's an area where we're going beyond just computer vision, which, if you think about it, bugs and cats and dogs and every intelligent being has some level of computer vision, even with pretty small brains.
Language and language understanding, however, is a much broader and bigger problem and challenge. You not only need to understand what I am saying right now, but also what I mean, my intent, and then take action on it, respond to it and communicate back to me in the same way that I am communicating with you. That's a much more intelligent problem, but the opportunity there is very large. This is how we interact as humans, it's how we're now going to interact with computers, and it's how much of our data and our interactions with businesses actually happen, through language, understanding and documents. Things like chatbots or recommender systems or sentiment analysis, or understanding a dialogue and making a decision, a call to a doctor's office, a help request to your financial advisor, understanding the meaning and intent.
We have technologies like virtual assistants or chatbots that want to have this kind of understanding to improve the user experience and, in the end, make it better for customers. So that's where conversational AI is a big push right now. We call that the general area of conversational AI, and the market is huge. It's 500 million support calls a year, there are 200 million meetings going on, there are 200 million smart speakers in the world. And obviously in the era of COVID everything is digital, so it's all going through a digital medium, a scenario where AI could help transcribe the call, understand it, summarize it and capture actions while you have people talking about those things.
Those are the big models. The other area is recommenders. Recommenders, as I think about it, are how you interact with the Internet now. Search really is a recommendation; when you go to Amazon or you're browsing a website to buy something, they're recommending things to you. When you visit your newsfeed, they decide what you see, and you see obviously the current implications of that. Those models not only have to present something that's attractive to you, something you're likely to buy, they also have to moderate content at the same time.
Here we have a huge data problem. Think about the amount of data across every user of social media and every product, as one massive matrix. So those models tend to be very large, and certainly the data they're trying to capture is very large and getting even bigger. So there we're seeing significant growth in the challenge of AI, and it's also where people are buying this infrastructure, because it goes right to the bottom line of their economics, so they're highly motivated.
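(As a rough illustration of why these models get so large: the user-item interaction "matrix" is never stored densely; instead each user and item gets a learned embedding, and the embedding tables grow with the user base and the catalog. The toy model below is a generic sketch with made-up sizes, not any hyperscaler's recommender.)

```python
# Toy embedding-based recommender sketch (illustrative sizes, not a production model).
# The dense user x item matrix would be enormous, so users and items get learned
# embeddings and a score is just their dot product.
import torch
import torch.nn as nn

class DotProductRecommender(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # grows with the user base
        self.item_emb = nn.Embedding(n_items, dim)   # grows with the catalog

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                  # (batch, dim)
        v = self.item_emb(item_ids)                  # (batch, dim)
        return (u * v).sum(dim=-1)                   # predicted interaction score

model = DotProductRecommender(n_users=100_000, n_items=50_000)
scores = model(torch.tensor([0, 1, 2]), torch.tensor([10, 20, 30]))
print(scores.shape)  # torch.Size([3])
```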
One of the examples there, I guess in conversational AI, is Microsoft Office. I don't know if your audience knows this, but if you turn on grammar checking, Microsoft Office is actually using an AI model based on BERT, the same kind of AI model used for language understanding. So it's finding the grammatical errors, highlighting them and then offering corrections. That model got so complicated, just trying to understand language, that they had to move it to a GPU. So today that runs on NVIDIA GPUs and is going live in Office 365. It's in beta now, and it will do highly accurate grammar correction for you by running all the tensors on a GPU, running the AI in real time.
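(Microsoft's production model isn't public, so as a hedged sketch of the shape of that workload, here is a BERT-style token-classification model run on a GPU with the Hugging Face transformers library. The checkpoint and the two "ok/flag" labels are placeholders for illustration only.)

```python
# Sketch of a BERT-style grammar-flagging workload on GPU (placeholder model).
# The classification head here is untrained; this only shows the shape of the
# computation, not Microsoft's actual Office model.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "bert-base-cased"                              # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)
model = model.to(device).eval()

text = "She go to the store yesterday."
inputs = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits                   # (1, seq_len, 2)
flags = logits.argmax(-1)[0]                          # per-token "ok" / "flag"
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, flags.tolist())))
```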
Brett Simpson
Interesting. Maybe just switching gears a little bit, Ian, in terms of the adoption trajectory of the technology. You mentioned computer vision was the six, seven years ago thing, we've kind of solved that and we're moving on to the next thing. But when we do surveys and talk to CIOs, AI is now coming up as a big spending item, and yet the technology still feels quite nascent outside of maybe the top 50 hyperscalers who are developing next-gen models and doing all the recommenders that you laid out. How many organizations do you think are actually using accelerated compute today at real scale?
Ian Buck
Yeah, I think the adoption curve follows, like I mentioned, the people who have the capability to use it. If you think about the adoption curve, or how it's going to be consumed, there are two parts to it. There's the training part, which is developing the AI model for your use case, and then there's the deployment part, the inference, deploying AI live in a service and the infrastructure to do so.
The adoption curve followed the people. First you needed the people that understood data science, understood the machine learning ops flow, and had the capability to understand what's been published in the AI community and apply it to the business. So within the Fortune 500, Fortune 5000 and others, that is still relatively small. Where we see the traction is in them going through a partner, an ISV or a startup, and the startup ecosystem is quite rich, finding the right partner that understands or has the particular technology and working with them to apply it to their particular business. Most of the use cases outside of the hyperscalers start that way, and I think you're certainly seeing the startup activity in AI right now; it makes a lot of sense.
A lot of smart people with some great ideas that could be applied to many different possible business use cases. At that point you develop that service with that partner, probably with your own data scientists in the company, and then you need to turn on the deployment, and that growth will come -- that's a different growth vector. One is training and developing the model, which of course scales with the number of people, the number of problems and the size of the models. You can multiply those through and you get the size of the training opportunity.
On the other side you have the inference and deployment. That is all about the data. If a service is running and there's data coming in, there's an opportunity for AI to optimize, and that will scale with the amount of data and the size of the business. There's an opportunity to have a GPU in front of every one of those connections to run and execute the AI at the latencies required for that service, whether it be interactively in a newsfeed, where you click and we've got to do hundreds of thousands of inferences in milliseconds, or a voice conversation, where we have to maintain around a 20 millisecond inference latency just to keep up with the number of utterances you hear when I talk.
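(A hedged sketch of what checking a model against a latency budget like that ~20 millisecond figure looks like in practice; the model and batch size below are arbitrary stand-ins, and a real deployment would add batching and an optimized runtime such as TensorRT.)

```python
# Sketch: measuring average GPU inference latency against a budget (e.g., ~20 ms).
# The model is a stand-in; real services batch requests and use optimized runtimes.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
model = model.to(device).eval()
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    for _ in range(10):                 # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()        # GPU kernels run async; sync before timing
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    ms = (time.perf_counter() - start) / 100 * 1000

print(f"avg latency: {ms:.2f} ms (budget: 20 ms)")
```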
Brett Simpson
I guess this is obviously going to take time, and there aren't a lot of data scientists out there who understand this, which is why we see a lot of companies prototyping today, taking to the cloud and trying to figure out what they're going to do with the technology. But what sort of path do you see to corporates building industrial-scale AI, and is this going to be more of a cloud hyperscale play, providing services and libraries, in your view?
Ian Buck
So I think a lot of it starts in the cloud. Obviously it's easy to take that first step in the cloud from a budgetary standpoint, where you pay by the hour, and certainly from an NVIDIA standpoint we make sure our architecture is available and the platform is everywhere. We work with all of the major clouds to make sure they have the latest A100 GPUs, so all their customers can use them, or they're inferencing on T4 GPUs. At the same time we make sure all the major OEMs have access to our technology, and we offer the same software at the same time, so customers can consume it however they want.
Adoption tends to start where your data is. If your data all resides in the cloud, it tends to start in the cloud, or if it needs to be on-prem in your own data centers for privacy or security reasons, people start there too. That usually is the kicking-off point for where things get started. They can do experiments in the cloud, but when they get to real scale, they start where the data is.
As this scales up, certainly people look at their cloud bills and they decide whether they want to do it on-prem or in the cloud. In many cases I also see diversification: people want to manage risk, so they want to make sure they can work with multiple different cloud providers and have on-prem capability as well, to manage the risk of where they want to run different services. So it's not necessarily being locked into one particular cloud, or purely on-prem, or hybrid. I think people are naturally going to want to do both, and it will vary from industry to industry for sure.
Brett Simpson
And just talking about the role the cloud really plays here: from a silicon perspective, on one hand they're buying GPUs, the Googles and [indiscernible] and the others, and on the other hand some of them are developing their own chips and software. So how do you see those internal chip efforts developing? They obviously don't have the resources and the reach that NVIDIA has. So I'd love to get your thoughts on these internal silicon efforts.
Ian Buck
Well, sure. I think first off, it's great that everyone is coming to accelerated computing. The AI use case is where compute can turn into software, which can turn into these business opportunities, and we work with all of them. One of the things that is unique about NVIDIA is that, because we are at the forefront of a lot of this technology and we're a full-stack company, we're releasing not just the GPUs that everyone can get access to and start using, but also all the software stack that comes with them. We work tightly with our friends at Google on TensorFlow, with PyTorch, and with many other stacks, essentially all the stacks, as well.
We also release our own containers and our own optimizations and make them freely available through our container registry, so you can download the latest certified, tested, validated AI software that works whether it be in the cloud or on-prem. We're the only company that's working with all the other AI companies, including all the hyperscalers, and we take all that learning, all that knowledge, all those engagements, whether it be a recommender system, a conversational AI system, or AI for video chat, and that informs us.
We help the customer, we make our products and our libraries better, then we make it better for them and incorporate it back into our platform. Those incorporations keep improving and optimizing our frameworks and the vertical stacks that we optimize, and that same feedback goes right back into the hardware teams and the architecture. We're benchmarking, tuning and testing ourselves on how we're doing, and those teams also see how they can improve the architecture and make new investments in the architecture itself, whether it be the compute, the cache or the memory, and define the next generation.
As someone who sits in the middle of that, I can see how quickly that happens at NVIDIA. As a result, we're releasing new containers every month for these different workflows and different frameworks, as well as new architectures every year now, to continue to innovate, because we're certainly not done. This is an area of rapid innovation, which is great for companies. They get on that train, on that bandwagon, and then they can ride the wave: if their programs and interfaces are at a high enough level, when NVIDIA comes along with the next version of hardware or software, they just see that 20X that they saw with Ampere, and that will repeat itself over and over again. So everyone is investing, and I think it makes total sense; AI is the platform where you can just write software. From where we stand, we're making our platform available everywhere to all those customers, so they can benefit and we can benefit from it, and what that mostly advances is the adoption of AI inside the broader enterprise.
Brett Simpson
And then just on the cloud specifically, we've seen in cloud compute a big concentration amongst a few big players, and with AWS, they talk about 10,000 customers for machine learning today. Does this become a concentrated services market that is dominated by a few public cloud players, and if not, do you see a large opportunity to sell pods or systems to corporates and build your own enterprise channel?
Ian Buck
Yeah, I think a couple of things. One is that there's definitely not one way to consume AI, and notably the services that are going to stand up will be different. AI is a capability, so services will stand up to serve that capability and be fine-tuned. This is not just one software program that runs them all; the models are highly different and the uses are different. Some are offline, some are streaming, some are ensemble-based, and their latencies are widely different.
So I think you'll see a wide variety of different services and capabilities, and everyone will compete on those verticals and those capabilities in the market. In terms of hybrid and on-prem, that of course will be a decision people make based first and foremost on how much they consume, their utilization, and where their data is. On the SuperPOD side -- pods, for your audience, are the ability to put together multiple GPUs in a system and create an AI infrastructure for a broad data science team -- I think as AI gains more adoption, people will want that infrastructure internally.
First off, it allows them to optimize their infrastructure for their workload and their model size. Some models can train on up to thousands of GPUs, some are more optimized for embedded use cases and may be a different size with different scaling. That scaling capability is something you can get today on-prem, and we're starting to see more of it in the cloud, but the cloud also offers a lot more diversity of architectures, and the hyperscalers have to make their own decisions on when to make that investment and at what scale they want to deploy something.
We are seeing more and more options in the cloud, which is also very exciting because it offers different places and price points for customers to choose from. In the end it boils down to a rent-versus-buy decision, and that just comes down to the economics, which is the same conversation I think we've always had between cloud and on-prem. AI will be no different there, except there are differences in how you want to configure the machines, certainly for training, in terms of how you put them together, going all the way down the silicon stack to providing a high-speed storage solution that's tightly coupled to your compute, which is usually very important for training at scale. One of the reasons we build our own SuperPODs and make them available to our customers is that some of those capabilities are, at their base, supercomputers, which we can offer to the community.
The clouds are now looking to figure out how they can participate in that as well, and we are seeing that happen with the cloud providers, which is exciting, along with some of the different storage options. Obviously it's a different challenge for them to deploy that at scale across multiple regions, make it available for rent and make the economics work, but I think you're going to see a lot more diversification in instance types and capabilities across different clouds, so people can get access to different tiers of the technology, all connected together.
Brett Simpson
Interesting. And then, you mentioned [indiscernible], Ian, and I wanted to touch a little bit on this infrastructure investment that NVIDIA is making. I look at the CapEx budget at NVIDIA and it's going up, running at over a billion dollars or so today, and I know you're investing in campuses, etcetera. But does NVIDIA see an opportunity to offer AI services to enterprises? We see the GeForce Now business model at a very early stage, but is there an opportunity or wider strategy to be in the services business yourselves?
Ian Buck
The reason you see us building [indiscernible] and building our own infrastructure is, first off, to be successful in AI you have to be a practitioner. You have to understand the technology intimately to understand how to advance it. Like I said, it's not just one chip or one core. The problems that people want to tackle today, the capabilities, are super exciting, but you need to think about the data center as a whole. You can't just think about programming one chip; these models are too big to fit on a single GPU, or even in some cases a single server. You have to think about the entire rack or data center, certainly to train them in any reasonable time, which is at most two weeks -- you really don't want to go beyond that, because the data science teams are also extremely busy.
You need to do multi-node training at scale. So now I am thinking about the entire data center, and you can't do that on paper. In order to be successful in AI you have to be thinking at that scale and doing the engineering -- we build it and we do it. We do it to make our products better, for sure. We also create our own products for broad markets, and we've chosen a few; self-driving cars is one of them. A huge portion of the infrastructure we're using today has been for our self-driving car initiatives, for NVIDIA DRIVE and the work we're doing with our self-driving car customers to give them a turnkey solution, to partner with NVIDIA to deliver a self-driving car capability.
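(On the multi-node point: below is a minimal, hedged sketch of data-parallel training with PyTorch DistributedDataParallel, launched with torchrun across nodes. The model, step count and cluster endpoint are placeholders, not NVIDIA's internal training stack.)

```python
# Minimal multi-node data-parallel training sketch with PyTorch DDP.
# Illustrative launch (run on each node):
#   torchrun --nnodes=2 --nproc_per_node=8 \
#       --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                  # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for step in range(10):                           # toy training loop
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                              # gradients all-reduced across nodes
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```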
So we had to build the whole pipeline, from the data to labeling, to the training, to the simulated environment where we run the car in simulation and crash it a hundred times without ever hurting anyone, and see what works and doesn't work before we ever put it in a car. So we have to build out that infrastructure, and as a result we also learn about our products and make them better. We also have our own research teams, which specialize in different areas and are learning how to advance the field themselves, and they teach our product teams a lot.
In terms of services and what we can do -- this is not just one market or one use case, right. So our strategy is to make our AI platform available to every different startup, hyperscaler, OEM and enterprise so they can consume it and develop their own capability. Remember, we're at the start of a new form of computing and people are figuring out how to deploy these different use cases. For some markets, we do choose to go vertical and advance it forward. We've done it in self-driving cars. We've also done it in conversational AI, like I mentioned: we have a software stack called Jarvis, which comes with pre-trained models that can do speech recognition, some language understanding and also text-to-speech. We've done it for recommender systems: we've published a software stack, which is a baseline capability for doing large recommender systems at scale, informed through our engagements with the hyperscalers; we call it Merlin.
We also have a video collaboration stack called Maxine, which is used for improving the experience we're having right now, I presume, applying AI technologies to things like noise removal, green screening, super resolution, etcetera. So we'll choose some areas and really help advance the AI field forward, and it usually doesn't go all the way to a turnkey solution. It usually goes to enablement, and the rest of the market can then take it from there, deploy the technologies and get the benefits of it.
Brett Simpson
And can I just ask, if we look back at the V100 cycle for the data center division, it was a bit lumpy. You had good times and you had slower times, and I guess it takes time for customers to digest what they're buying before they need the next amount of compute. Is that just the behavior that we're going to see with A100, or do you think it will be a different type of market this time around?
Ian Buck
It's getting faster every time, and I think it's because the market is maturing. What we saw with V100 three years ago was that that wave of AI required some time for people to absorb and understand the technology before the next ramp. I think the software has matured, the ways to go to market have matured, so it's going much faster for us.
We certainly see some digestion as we do a product transition. Obviously, people want to refresh a large fleet at a hyperscaler, and that does happen, but the experience we've had from V100 to A100 is much faster. In fact, we grew our data center business while many other companies saw some downturn, and I think that's because of the rapid adoption of AI, the observation of that 20X and the need to go to the next platform, really pulling the product forward and making sure it gets into the market and is available as quickly as possible.
So I think those transitions are happening faster. There is always going to be some transition, which we have to manage, and we certainly spend a lot of time focusing on it. We prime the pump and make sure that everyone gets their platforms and their technologies ready early, so people understand it. It's one heck of a market, and everyone is ready to absorb it. Compatibility is a big part of it, making sure all the frameworks and all the AI software stacks are ready to go on day one. We don't wait for people to test our hardware; we port the software ourselves ahead of time, in simulation and with early versions of hardware, so that when it's ready to launch, we have all those stacks ready to go, and I think we've improved significantly since V100.
Brett Simpson
Excellent. And maybe just on the competitive dynamics you see ahead, you have a dominant franchise, particularly in training, but also in accelerated compute inference you're doing great. If you look two to three years out, how sustainable does your market share position in accelerated compute look to you? There are a lot of competing products that are taking a lot longer to get to market, but if we look two to three years from now, what's your perspective on market share?
Ian Buck
A lot changes in two to three years. If you think about AI two to three years ago versus today, it continues to evolve and grow, and the models and the things we're talking about are wildly different. We used to talk about ResNet [ph], and that's now kind of table stakes; conversational AI and natural language processing is where a lot of the opportunity and growth is coming from, and we're well established there already. That's one part of it. The technology of AI is evolving really rapidly, and it's why it's so important to be a practitioner of AI in order to keep up with the trends, because understanding what a model can do and actually doing it is an art and a craft, and you learn a lot about systems engineering, computer architecture, interconnects, storage systems, the whole data center.
So if I look at the two- or three-year timeframe, and really it's happening now, it's about thinking of the data center as a computer, because that's really what these new data centers have been designed to do. They're designed to train networks at scale, to give a data science team the throughput and the tools to develop those natural language or conversational agents that go right into those use cases, and the problem sizes they're trying to solve are huge: it's every product, every user and every data point. So that's what we're optimizing for now, and NVIDIA has shifted into a company that's thinking about the data center as the computer, and that includes not just our GPUs but the systems, the networking, increasingly the CPUs, and how they all fit together in a broader software stack and workflow, how to manage teams of data scientists that are doing different services and developing those things, and then turning around and flipping that infrastructure over and applying it to inference.
We mentioned inference: three, four years ago the vast majority of inference was done on CPUs. Today there is more GPU compute for inference than there is CPU compute in the hyperscalers, if you just add up the flops and the capability. When we designed A100, we started seeing this trend where the models were getting bigger, the capabilities wanted to get smarter, and the CPU could not keep up in terms of executing those models at the latency necessary for the NLP models or the conversational agents that we talked about, particularly in text-to-speech, recommenders and some of the newer models.
As a result, people started to point GPUs at that workload. We saw it with our T4 GPU, which is the one used in all the hyperscalers today for inference. With A100, we made the architecture excellent at both: it's a great training GPU, but that same Tensor Core also has all the operations necessary for inference, the reduced-precision capabilities like INT8 and FP16. We also have this capability called MIG, multi-instance GPU, where the GPU can split itself up into seven separate GPUs that are each presented to the system, so people can take the same GPU used for training and run inference use cases on it, and each of those single slices is two to three times faster than our previous GPUs.
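(As a hedged illustration of the reduced-precision point: the sketch below simply casts a placeholder model and its inputs to FP16 on a GPU in PyTorch. Production inference would more likely go through TensorRT or Triton, with INT8 calibration, which this does not show.)

```python
# Sketch: FP16 inference on a GPU in PyTorch (placeholder model).
# INT8 deployment normally involves a calibration/quantization step via TensorRT,
# which is beyond this illustration.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
model = model.to(device).eval()
x = torch.randn(16, 512, device=device)

with torch.no_grad():
    if device == "cuda":
        model = model.half()   # cast weights to FP16 to use Tensor Core paths
        x = x.half()
    y = model(x)

print(y.dtype, y.shape)
```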
So we're seeing that adoption: you need to be good at both training and inference. Having a workflow that allows you to do both and seamlessly go from training to inference is really important, and then being able to run all these new models, in areas like conversational AI and recommenders, that's super key.
Brett Simpson
And maybe just switching gears a little bit, Ian, can we get an update on the ARM situation? If there's an update on the ARM acquisition, that would be great, but I'm also interested in what you do with this asset from a data center perspective. We can see the rationale of NVIDIA having a general-purpose CPU in its arsenal long term. Do you develop server architectures based on ARM's general-purpose architecture? How do you see the ARM opportunity in the data center?
Ian Buck
Well, certainly the data center is the new unit of computing. So looking at how we can advance, and how NVIDIA can advance the data center, is certainly my focus and what I can talk to. I think from an ARM standpoint, this creates a premier computing company for the age of AI. It combines NVIDIA's leading AI computing platform with ARM's CPU expertise and helps position ARM, NVIDIA and ARM's entire ecosystem of customers for the next wave of computing, the age of AI. And of course AI is also powering the Internet of Things, which is thousands of times bigger than the internet of people at this point. So that makes it very exciting.
We will certainly expand ARM's IP licensing opportunities. It allows us to offer NVIDIA's technology to large end markets including [indiscernible], and it turbocharges ARM's server CPU roadmap by helping invest in that roadmap, advancing ARM, moving faster and accelerating its adoption in the data center, at the edge, in AI and of course in IoT. It also expands NVIDIA's computing platform from where we are today, about 2 million developers, to over 15 million developers. So that's very exciting. Customers are very excited and the partners are very excited. The opportunity is great, and we can't succeed unless ARM's customers succeed.
Brett Simpson
And I guess you see it as important to have CPU capability for the data center in-house. Is that how we should think about it? Particularly now that CPU scaling has slowed so much, does the CPU become more fundamental to your solutions going forward?
Ian Buck
I think we always needed a fast CPU. Amdahl's Law is still very much alive, unlike some other laws. I can make an incredibly fast GPU, but if we don't accelerate the entire workflow, the entire problem -- if I accelerate 80% of the solution infinitely fast, I'm still only 5X faster, stuck at 5X. That's why you have to think about AI, and accelerated computing in general, at data center scale, by investing at data center scale in the CPU, DPU and GPU, all the components, parallel and serial, and the IO and networking.
Only then can you really get the 20X speedups that we talked about. You have to think about the entire data center, and that's the innovation canvas we're talking about, in order to achieve the next generation of performance, to see AI continue to advance itself, so we can continue to allow these breakthroughs to happen and turn them into business opportunities with all the world's enterprises.
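(Ian's 80% / 5X point is Amdahl's Law; a quick sketch of the arithmetic, with illustrative numbers:)

```python
# Amdahl's Law: if only a fraction p of the work is accelerated, overall speedup
# is capped at 1 / (1 - p), no matter how fast that fraction becomes.
def amdahl(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl(0.80, 1e9))   # ~5.0  -> accelerating 80% "infinitely" still caps at 5X
print(amdahl(0.96, 1e9))   # ~25.0 -> 20X+ requires accelerating nearly everything
```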
Brett Simpson
Excellent, very interesting. Well, I think we're out of time, Ian. I just want to say we really appreciate your time, great discussion. We could have gone on a lot longer with many other questions, but we really appreciate you coming on the event today and spending your time with us.
Ian Buck
No problem. Anytime. Thank you.