Seeking Alpha

Hewlett-Packard Company (NYSE:HPQ)

Technology Briefing: Big Data Conference

March 5, 2013 14:00 ET

Executives

Colin Mahony - Vice President and General Manager, Vertica

Nitesh Sharan - Investor Relations

Analysts

Katy Huberty - Morgan Stanley

Operator

Yes, hello good afternoon. My name is Maya and I will be your conference operator today. At this time, I would like to welcome everyone to the HP Big Data Technology Briefing. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. (Operator Instructions) Thank you. Ms. Katy Huberty, you may begin your conference.

Katy Huberty - Morgan Stanley

Thanks, Maya. Good afternoon, everyone. This is Katy Huberty, Head of Enterprise Hardware Research at Morgan Stanley. And I am very excited to host this HP Technology Briefing event focusing on Big Data and HP’s Vertica solution with Colin Mahony, Vice President and General Manager of Vertica. In this role, Colin is responsible for aligning Vertica’s analytics capabilities with clients’ Big Data needs. Prior to his current role, Colin was responsible for product management, marketing, and business development at Vertica. Before joining HP Vertica, Colin was a Vice President at Bessemer Venture Partners, focused on investments primarily in enterprise software, telecommunications, and digital media. He has also worked at Lazard Technology Partners and the Yankee Group.

In addition to Colin, Nitesh Sharan of Investor Relations at HP is joining us for this call. The purpose of today’s call is to give everyone on the line more context around the theme of Big Data, discuss the market opportunity, help better understand HP’s Big Data analytics platform, and walk through specific use cases. The format of today’s call will start with a presentation by Colin, followed by a question-and-answer session.

And before we begin, let me just mention that this presentation may include some forward-looking statements that involve risks and uncertainties and assumptions. For a detailed disclosure, please refer to slide 2 in the presentation. And with that, I will turn it over to Colin.

Colin Mahony - Vice President and General Manager, Vertica

Well, thanks a lot, Katy. It’s great to be here, and thank you, everyone, for joining us today. The topic of Big Data comes up a lot. I think you are certainly reading a lot about Big Data and information management, and we at HP Vertica of course take a lot of questions around what it is, what it means, how it impacts our clients, how they can use it to their advantage, and certainly how it affects HP.

I think that as you look back at some of the tectonic shifts that have occurred in the industry, certainly in the tech industry, this is as big as any other shift. And I am comparing this to mainframe to client-server, and client-server to web. And really, I think what’s happening is a confluence of a lot of different things. Mobility is taking over. Smaller devices are generating more information. We are certainly living in a social era, if you will, where people are sharing a lot more information, personal information about their activities and what they are doing.

And then also sensor data is really taking off. No longer is all this information generated by just our fingers; it’s generated by machines. And what we all want to do is basically harness and leverage the power of this information, so that we can make better decisions as an organization and as a business. And really it’s about getting a holistic view of many different types of data. I think the old world order was certainly primarily structured data: data that had been cleansed over several months as it came in, that a lot of people got to take a look at to make sure it was accurate and applied to the business.

And frankly, a lot of people were throwing out information that they either couldn’t handle because of volume or just felt wasn’t very valuable. And now what you have in this renaissance is all different types of information, e-mails, audio, video, text information, word documents, other types of productivity documents. And certainly just as much, if not more transactional data as we have ever had. So, Big Data to us at HP is really that tectonic shift that represents on the one hand a new computing paradigm and on the other hand an ability to incorporate a lot of different data types.

As I mentioned, we compare this very much to some of the other shifts that have happened, and as you can see, it’s very different. It’s really a perfect storm of a bunch of different activities that are happening, whether it’s social, whether it’s mobility, whether it’s cloud computing, and just an insatiable demand for information. We talk a lot about the bits of data, and really, to me, the bits of data are black sand. What people really care about is the gold. They want the nuggets of information that can help change the outcome of a decision, and ultimately change the outcome of a business. They want to be able to leverage the information at the right time to make a better business decision.

Now, I often get questions about, well, what’s a petabyte, or what’s an exabyte, a yottabyte, a zettabyte, a brontobyte? All of those terms probably sound pretty foreign. And what I like to remind people is that back in 1983, when most of us had our first personal computers, I still remember, my first personal computer had a floppy drive, and that floppy drive was all it had for storage. Then I got a second floppy drive, and I didn’t have to change disks as often as I had to before. That was pretty much parallelization for me. But then I got a 20 megabyte, that’s 20 megabytes, hard drive. And I thought back then, wow, I am going to be able to store every single piece of information, every Bank Street Writer document, every game that I’ll ever own for the rest of my life on that hard drive.

And at the time I had no idea what a gigabyte was; if I had known what a gigabyte was, it would have cost me about $1 million back in 1983. And so we look at these terms and we look at our decimal system, and many of these terms seem very foreign to us, but the reality is it won’t be long before we are talking about these sizes of information. And I think that’s great from the perspective of people who are experiencing better services and better products as a result of it, and businesses who can make those better decisions because of it.

Now, as I mentioned, if we had to generate all this information simply by our fingers on the keyboard, that would be a tough proposition. And fortunately, we don’t have to do that. In fact, we have a project here called the Central Nervous System for the Earth, where we are doing a lot of very innovative research on sensors and sensor data: what they can pick up, and how we can take that information not only from the point at which it is measured, but then move that data to a place where we can collect it and run the analytics and statistics. This type of value proposition is huge for so many different industries. Obviously, oil and gas is a very natural use case where this would be applied: you have equipment in places that are difficult to reach, and being able to pick up something that’s going wrong can make a huge difference. In fact, it can not only prevent catastrophic failure, but from a predictive maintenance standpoint, just maintaining equipment can make the life of that equipment last so much longer.

Telecommunications firms are another example of who uses this information, being able to pick up the quality of service at the most minute endpoint possible and look at all the atomic detail: how is the call quality, where do we need to add more capacity, which customers are experiencing poor call quality? How can I either address their service or perhaps incentivize them to stay through some other means? Being able to capture that information, and again combine some of the new sources of information with the traditional information, is huge from the perspective of opportunities for all different types of organizations.

Well, obviously, the market opportunity for Big Data is very large. And for the purposes of IT spending and projects being deployed around Big Data, regardless of which market research firm or which analyst you read, they have all got it in the double-digit billion dollar range with very solid growth. And I think that’s one of the best parts about analytics. Analytics is not an island on its own. Analytics are permeating every aspect of computing: servers, storage, networking. Everything is becoming more intelligent because of analytics. And then certainly you have the traditional data warehouse market and some of the traditional analytic applications markets that will continue to grow standalone. But our belief, and this has always been the belief of Vertica since its inception, is that analytics must be everywhere. They need to be embedded. They need to be deployed in some cases as standalone applications, but they really need to integrate from the bottom of the stack, starting with the hardware, all the way up to the software application layer.

So, a little bit more about Vertica: why we started, and what we thought the most important tenets would be to solve this. The first thing that is unique about Vertica, which our Co-Founder Mike Stonebraker, a longtime database legend, was adamant about, is that you need to offer speed, blazingly fast analytics, because people in this day and age simply refuse to wait. The notion of an overnight batch window has gone away. Data is coming in 24/7, and people want to be able to get at that data all the time while it’s coming in.

The second core principle was scalability. We knew about the volumes of data that were coming. I can’t say we knew back in 2005 about the Big Data craze that we are in right now, but all of the core principles and drivers were intact then, and they have only accelerated since. Scalability is critically important. Something that we have always shared with HP is an open architecture: being part of open standards and letting people leverage the ecosystem that’s out there, an ecosystem that’s been built up over the years and will continue to build up. We believe that is core to our principles as a company and fundamental to the design of the technology. It always has been with Vertica, whether it’s ODBC, JDBC, leveraging open source statistics like R, or the Hadoop platform. This has always been very core to what we do.

And then simplicity: you really have to make this easy to set up, to scale, and to run. Otherwise, people aren’t going to want to learn it. If you ask somebody to learn something completely new and different, they won’t do it. If you give somebody incredible performance in something akin to what they already know, they will adopt it and embrace it and certainly take it to the next stage.

And then finally, related to the scalability and the performance, is optimized data storage. We have a lot of technology, and I will talk a little bit about it on the next slide, but suffice it to say, so much of what we do is have our technology optimize the way that data is laid out for the analytics that are going to be run against that data, without having to perform all sorts of unnatural hacks to get it into the right format. So, we have a lot of technology, as you might imagine from a technology company, but a lot of what differentiates Vertica is that this is a purpose-built platform.

Mike Stonebraker, the technical founder who I mentioned, founded INGRES, founded Postgres and Illustra, and ultimately was the CTO of Informix. So, Mike certainly knew the traditional platforms that are out there, but Mike’s belief was that you cannot just keep adding things like columnar or MPP onto databases that were never designed for that workload; they were designed in a different era, decades ago, for a very different workload. So, what Vertica has built its platform on is columnar technology: columnar storage, columnar execution, and clustering, meaning a full shared-nothing scale-out model.

You simply add more servers as you need to add more data. Compression and encoding reduce the footprint, and the compression also speeds up query performance. And then finally, back to the notion of no overnight batch windows, we allow you to continuously load data while you are querying, as opposed to the traditional database model, which is: stop the database, lock it down, load your new data, then allow people to query. And then from the simplicity standpoint, we have something called the Database Designer, which can automatically do in about four minutes what might take a DBA four months to lay out the data.
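Run-length encoding, one of the simpler encodings a column store can apply to a sorted column, illustrates why the compression described above speeds up queries rather than just shrinking storage. The following is a minimal illustrative sketch in Python, not Vertica's actual implementation; the data and function names are hypothetical:

```python
# Illustrative sketch of run-length encoding on a sorted column.
# Not Vertica's implementation; names and data are hypothetical.

def rle_encode(column):
    """Collapse runs of repeated values into (value, run_length) pairs."""
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

def rle_count(encoded, predicate):
    """Answer a COUNT-style filter directly on the compressed pairs,
    without expanding them back into individual rows."""
    return sum(run for value, run in encoded if predicate(value))

# A sorted "state" column, in miniature:
states = ["CA"] * 5 + ["MA"] * 3 + ["NY"] * 4
packed = rle_encode(states)                       # [("CA", 5), ("MA", 3), ("NY", 4)]
ca_rows = rle_count(packed, lambda s: s == "CA")  # 5
```

Because a sorted column tends to contain long runs of repeated values, a count or filter can be answered from the (value, run_length) pairs directly, touching far less data than a row-by-row scan would.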

And then finally, we didn’t just build this platform to have a database platform; we built it so that people take advantage of it, so that people run analytics in it. And we continue to add more and more analytics libraries to do clickstream, geospatial, and time series analytics, and pattern matching, as well as the software development kit, the API layer that we offer in the platform, so that our customers and our partners can write any other applications they want, sentiment analysis being one example.

Well, I get a lot of questions about Hadoop when it comes to Big Data, and so I thought I would spend at least a minute or two describing what Hadoop is and really how we leverage it. Vertica and Hadoop are very complementary. Hadoop is an open source clustered file system. The best way to think about it is a bunch of file systems daisy-chained together so that they look like a single file system. So, think about a single C: drive on your computer, but spread across many servers, allowing people to store any type of information. This notion of collecting vast amounts of information, whether it’s on Hadoop or any other file system, is really taking hold, and Hadoop is one of these lower-cost, grassroots initiatives in the open source community that has taken hold.

It was never really meant for interactive querying, though; it was never really meant for interactive analytics. It’s really designed as a clustered file storage system for servers, and you can do some processing on it with Hadoop MapReduce. Vertica was the first analytic database company to actually come out with connectors to Hadoop, so you can seamlessly move data back and forth. You can explore information using Vertica’s analytics engine natively on Hadoop, and then when you want to run some interactive analysis, you can load it into Vertica. The other nice thing about Hadoop is that you can store unstructured information. So, as an example, using some of our other assets here in HP Software, like Autonomy, you can actually go through file system information, audio files, video files, and text files, and pull out some sentiment about those files, so that you can then eventually load it into a more structured analytical engine like Vertica.
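The division of labor described here, bulk files processed in a batch pass and the structured result loaded into an analytic engine for interactive queries, can be sketched in miniature. This is a hypothetical illustration (Python, with `sqlite3` standing in for the analytic database and an in-memory string standing in for a file on the distributed file system), not HP's connector code:

```python
# Hypothetical pipeline sketch: a raw log "file" is filtered in a batch
# pass (the role MapReduce plays in the description above), and the
# structured result is loaded into an analytic store for interactive SQL.
# sqlite3 stands in for the analytic database; all field names and
# values are made up.
import csv
import io
import sqlite3

raw_log = io.StringIO(  # pretend this sits on the distributed file system
    "2013-03-05T10:01:00,ok,42\n"
    "2013-03-05T10:01:02,error,0\n"
    "2013-03-05T10:01:05,ok,17\n"
)

# Batch "processing" step: keep only successful events, properly typed.
events = [
    (ts, status, int(value))
    for ts, status, value in csv.reader(raw_log)
    if status == "ok"
]

# Load the structured result into the analytic store and query interactively.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (ts TEXT, status TEXT, value INTEGER)")
db.executemany("INSERT INTO events VALUES (?, ?, ?)", events)

(total,) = db.execute("SELECT SUM(value) FROM events").fetchone()  # 59
```

The point of the sketch is the hand-off: the file store holds everything cheaply, while the analytic engine holds only the cleaned, structured slice that interactive queries need.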

So, overall with this Big Data phenomenon, we see the pipeline of information changing: you are grabbing information from more sources, you are trying to do more with that information, not only from a storage perspective but from a management and analytics perspective, and then you want to act on that information and get it to the right people. So, all of the technology is great, but I always believe that you have got to have great technology and a great business model, and the great business model is only proven by the customers and their own proof points in terms of what we can actually do. So, I want to highlight just a few examples, by industry vertical, of some of the things that we can do that were really difficult to do before. The first example is financial services. Financial services firms use Vertica for a number of different use cases. One is figuring out what true risk is: being able to pull detailed data about consumers or customers and apply a methodology and a model that takes into account the true risk, not just a cohort risk, but down to the full level of detail based on their activities, as an example.

And so we can take advantage of the full book depth of tick store data, as well as a trading application in the financial services industry, and be able to bring in weather data or other types of, say, freight shipment data. We can take any number of columns; it doesn’t matter to us how many data types or data fields there are. We can manage it all. Another example is telco. We can tell telcos, here are all your customers who have two months or less on this particular handset. And then we can tell them, by the way, these are the customers of that group that have had three or more dropped calls on average for the last six weeks. And then, by the way, wouldn’t it also be great to figure out which of those have had a negative interaction with your company through web, e-mail, chat, or over the phone, through Autonomy and some of the analytics that we can do there? So, really allowing the telcos to reduce their churn and market to and target those particular customers to retain them.

And then finally, just one more example, maybe, is online companies, web gaming companies, companies like Zynga. We give them the ability to run their business 100% based on analytics. They know who the influencers are and who they should market more to. They know how to improve the gaming experience and the online experience based on A/B testing that runs continuously. They know who is using particular aspects of the game more than others.

And the reason I like to highlight the gaming industry is that oftentimes I think the gaming industry is a harbinger of what’s to come across broad industry sectors. What they are doing today, the industry overall in the analytics space will adopt tomorrow. So, whether it’s companies like Cardlytics that are doing very advanced marketing analytics for credit card companies and merchants, and dramatically, by orders of magnitude, increasing performance, or it’s an online gaming company, or a healthcare company like Blue Cross Blue Shield, or others, there are just so many things that analytics can drive with a very concrete ROI to change the outcome. That’s why so many people are interested in Big Data broadly across the sector. And I think that’s why it’s so important for us at HP, whether it’s Vertica or the other teams at HP on the hardware side, the services side, and other groups within software, to focus on it and really enable our customers to gain the value out of it that they are looking for.

Katy Huberty - Morgan Stanley

Great, thank you, Colin. That was a wonderful overview. What we are going to do now is Maya if you want to give the listeners’ instructions on how to ask a question. And then as we poll for questions, I’ll start with a few and we’ll come back out and see if there are any questions from the lines. Maya?

Question-and-Answer Session

Operator

Thank you. (Operator Instructions)

Katy Huberty - Morgan Stanley

Okay. Just to start the discussion and extend what you have already laid out Colin. As you know, analytics isn’t a new term. Companies have looked at structured data and gotten value out of that for a couple of decades as you mentioned. Can you talk about what the typical customer environment looks like when you go in? Are you replacing an EDW or are you finding new datasets and serving as an incremental solution within those customers?

Colin Mahony

Yeah, that’s a great question. So, you are absolutely right, there is an ecosystem that’s been there. There are traditional enterprise data warehouses that are very well entrenched throughout a number of industries. So, our approach is never to walk in the front door and say, time to get rid of your data warehouse, because we just don’t believe that’s a good idea or a good strategy on our part. What we try to do instead is surround it and go after the analytic use cases, what are often referred to as the killer queries in businesses, the ones they would love to be asking and would love to get answers to. They have just never been able to, either because the volumes of data are too large, or because the amount of analytics and the complexity of what they need to do against it are too challenging. So, we love going into those environments, and we say, here we are, tell us about the application or the use case, and let us show you what we can do. And the nice thing about that is that many of these apps don’t carry the 12, 18, or 24 month sales cycle that a traditional EDW does.

We can get in; I can show up with HP DL380 Gen8 servers and blow a customer away with the analytics that we can do on that environment, whether it’s large scale or even just a proof point at small scale. And then, ultimately, the customer sees the value in what we can deliver, they see that performance, and we start getting more of those applications in the account. If you look historically, we have several examples of customers where we start in the periphery and then slowly move into the core. But I would still say we are not necessarily trying to replace that core. I think one of the things that’s happening in the market, too, is that the core model, the hub model of an EDW, is just breaking. Waiting months and months before you have perfect data to act on is great for compliance, reporting, and some other things, but it’s not so great if you are running a business and you really need to move at pace with alacrity. You want to get the data in; maybe it’s not perfect data, and maybe it’s not completely cleansed, but if you have enough of it, you can start seeing trend lines that can help you make decisions. And so we want to get into all opportunities, but absolutely we can integrate with existing EDWs. We certainly augment them, we accelerate them; we are not trying to just rip them out.

Katy Huberty - Morgan Stanley

Great. And then, everybody is talking about Big Data right now; there is a long list of companies that would like customers and investors to think they are at the center of the trend. So, let’s talk a little bit about the competitive landscape. When you get to proof of concept, are you typically going up against the traditional EDW, or against a company that is trying to turn Hadoop into an analytics platform, or against some of the other appliance models like a Greenplum or Netezza? Maybe break those three down and talk about whether you see them and how you compete against those companies.

Colin Mahony

Yeah, that’s a great question. In some of these environments, we see all of the above, and I actually think the great news now is that if you had asked this question a year ago, I think I would have given you a little bit of a different answer. But I think the industry has become more educated, and organizations are more educated as to, okay, Hadoop is good for this, a real-time analytic database is good for this, and a data warehouse is good for that. But the reality is, we do bump into all of these in accounts, sometimes as competitors, sometimes as complementary products, if you will. We are certainly primarily competing in the class of IBM Netezza and EMC Greenplum that you mentioned, and we certainly go up against Teradata. We go up against Oracle. And again, I think it’s just finding the right tool for the right job.

And the buyers of these platforms and these technologies are getting smarter about, well, I know this is good for this and that’s good for that. But there still is uncertainty out there. Some people think of Hadoop as a database, which it is not, and sometimes customers just have to try it and realize what it’s good for and what it’s not good for. And so there have been opportunities where my organization has been told, we’re going to go with this Hadoop strategy to build this thing, and then they come back 6 to 9 months later and say, actually, it’s good for storage, but it’s not great as an interactive analytic database, so we still think we need that. So, there is an education happening, and as you would imagine with any sort of high-velocity market transformation like Big Data, it’s taking time for people to figure it out.

Katy Huberty - Morgan Stanley

Okay, Maya, do we have questions on the line?

Operator

And at this time, we have no questions.

Katy Huberty - Morgan Stanley

Okay. Let’s continue on the competitive discussion, because there are a few questions from investors that came in on the webcast. You just talked about the different buckets of competitors; there is a specific question on SAP’s HANA.

Colin Mahony

Yeah.

Katy Huberty - Morgan Stanley

And just comparing and contrasting Vertica’s approach versus the in-memory approach from HANA?

Colin Mahony

Yeah. So, I actually, I mean, I applaud SAP’s approach with HANA. It’s columnar, it’s in-memory; a lot of the core principles that Vertica is built on, we share with SAP. Obviously, SAP and HP are great partners. Vertica and HANA don’t really bump into each other and compete in the market, and I think part of that is that where HANA is playing, as you would imagine, is in SAP accounts, from my perspective, and in that very robust ecosystem that SAP has. And so much of the Vertica market is a different market: it’s sensors, it’s telco network data, it’s clickstream data, it’s gaming data, and I don’t think that’s a market that SAP is particularly going after. So, again, I applaud many of their core design principles, because in many ways they echo the same approach that we took.

But specifically to your question on memory, that’s probably the biggest difference: HANA is all in-memory, and Vertica uses a combination of memory and disk. Part of it is the volumes of data that Vertica customers are putting into our platform; oftentimes we are talking about petabytes, and putting petabytes in memory right now is an expensive proposition. Will we get there? Someday; it definitely looks like that’s the trajectory we are on. But right now, because of the types of customers that we are going after, with all these different types of sensor and log data, etcetera, it just wasn’t practical for us to keep everything in memory. Whereas I think with SAP HANA, if you look at most of the data that’s in there coming from SAP applications in that environment, it makes a lot more sense.

Katy Huberty - Morgan Stanley

There is another question here on the webcast about Informatica, which is a partner of yours.

Colin Mahony

Yeah.

Katy Huberty - Morgan Stanley

But the question is that your description of how you hook into the Hadoop functionality sounds similar to what Informatica provides. What type of overlap do you see in the actual usage of Hadoop versus Informatica? Are you seeing customers try to link those two together?

Colin Mahony

Yeah, I mean, I think, so if you take Hadoop, what people do is they store the information and they can process the information. Now, the dirty little secret about Hadoop is that it actually takes a fairly sophisticated parallel programmer to get a job done on Hadoop; it’s not the easiest thing in the world. So, I would never say to somebody, you can replace your traditional ETL, or Informatica in this example, with Hadoop, because you can’t. It can do processing jobs very well in parallel, and I think what Informatica and others have realized is that Hadoop really needs some adult supervision and some better interfaces, while really trying to leverage Hadoop to do some of the processing. But there is no question that from a data pipeline perspective, we see Hadoop on ingest into Vertica, having done some processing. So, it sits in a similar place to where you would see Informatica, but I would never say that Hadoop and Informatica, as an example, are really like comparisons, because there is just so much more you have to do on Hadoop. And I think Informatica will treat Hadoop like any other clustered file system that it works with.

Katy Huberty - Morgan Stanley

And you mentioned the columnar functionality as a differentiator for Vertica, a number of the traditional companies have also announced similar architecture. Could you just compare and contrast Vertica’s expertise versus some of the other options?

Colin Mahony

Yeah, I mean, obviously at sort of 30,000 feet, we can all say, well, we are columnar. When Vertica first started, we were talking about the columnar technology and all of our competitors were basically pooh-poohing the whole thing. And then all of a sudden, two years later, they all came out and said, actually, we have columnar now, it’s part of our product. And what we know is that it’s one thing to sort of add columnar onto what you already have; you gain the marketing traction that that’s going to provide. But for the real workflows that our databases are having to handle today, social graphing, really complex joins, bringing disparate data together, you can’t just add that onto your traditional, decades-old platform. You’ve got to build a purpose-built platform from scratch. And that’s why Mike Stonebraker was so adamant in the early days of Vertica that we not just go and use an open source technology and then try to add on, because all of the modularity of Vertica, all the performance of joins and analytics, all of that is tied very closely into the columnar aspect. And when you take an approach of just trying to tack it on, you will get some benefits, but you really won’t see the key benefits that give you the kind of orders of magnitude performance improvement.

Katy Huberty - Morgan Stanley

Okay. And just to follow-on that discussion because I think a lot of investors are confused on this point in the market, what data sets work well in a columnar data store versus in a row-based data store?

Colin Mahony

Yeah, so any data set that works well on a row store will work well on a column store.

Katy Huberty - Morgan Stanley

Okay.

Colin Mahony

It’s really the job you are trying to do that dictates it. So, as an example, Vertica does not want to be an OLTP, online transaction processing, engine. If you are doing a lot of updates, changes, and deletes to your data, if you are an airline reservation system or you are taking orders, whatever that business is, OLTP is not something we want to go after. So, for that workload, we would walk away. But for anything that’s an analytic workload, any data structures that you have will work very well in Vertica.

And one of the questions I get a lot of times is people come up to me and say, you are columnar, but when I look at Microsoft Excel, I like my rows and my columns. And it’s funny, because if you are using Excel against Vertica, you will see the exact same Excel that you would always see. The rows and columns aren’t changed at the visual layer; this is all just done under the covers. And the way to think about it is, in an analytics workload, if you ask a question like, show me every male who lives within 100 miles of San Francisco between the ages of 30 and 60, we know that you are asking for three or four columns of data: age, address, and gender, really three types of data. So, by storing each of those columns together on disk, we know that we can get that information fast and return it to the user. That’s really the essence of columnar. And also, by adding more columns, we are not taxing the system the way a row store would, where any query you run has to go through a full table scan of all columns and all rows. But from a data structure and interface standpoint, you would never know the difference.
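The San Francisco example above can be sketched to show the mechanical difference: a column layout lets the query read only the columns it names. This is a minimal Python illustration with hypothetical data, not Vertica's storage engine:

```python
# Illustrative sketch of a column layout; not Vertica's storage engine.
# Data and field names are hypothetical.

rows = [  # the same table in row orientation: one record per person
    {"name": "Ann", "gender": "F", "age": 45, "miles_from_sf": 20},
    {"name": "Bob", "gender": "M", "age": 35, "miles_from_sf": 80},
    {"name": "Cal", "gender": "M", "age": 70, "miles_from_sf": 10},
]

# Column orientation: each attribute stored contiguously, aligned by position.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# "Every male within 100 miles of San Francisco, aged 30 to 60":
# only three of the four columns are ever touched by the filter.
matches = [
    i
    for i, (g, a, d) in enumerate(
        zip(columns["gender"], columns["age"], columns["miles_from_sf"])
    )
    if g == "M" and 30 <= a <= 60 and d <= 100
]
names = [columns["name"][i] for i in matches]  # ["Bob"]
```

A row store answering the same query would drag every field of every record past the CPU; the column layout reads only gender, age, and distance, which is where the savings grow as tables get wide.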

Katy Huberty - Morgan Stanley

Okay.

Colin Mahony

Other than that it would be much faster in a column store.

Katy Huberty - Morgan Stanley

Sure. Just extending the competitive discussion, we talked about EDWs, we talked about some of the confusion in the market around Hadoop not really being a database, and somebody asked about SAP HANA. What about the other appliance models like Netezza or Greenplum?

Colin Mahony

Yeah, so I think we are competing with those folks, and there is definitely demand for appliances. But this is one of my favorite things about being part of Hewlett-Packard: prior to this, as an independent software company, we didn’t know a lot about hardware, but we should have known more about hardware, because in this industry it’s critically important. What’s nice about HP and these appliances that we have already announced is it’s the best of both worlds: it’s purpose-built software on industry-standard hardware that people know, unlike Teradata, which is proprietary, or Netezza, which is proprietary. Oftentimes people in the industry refer to those as sort of proprietary refrigerators. There is a comfort level in IT organizations, especially, in getting the experience of an appliance, which HP Factory Express is great at delivering, but also knowing that you are not locked in. If you need to change your hard drive, you can change your hard drive. If you want to scale this thing out as your volume grows, you can scale it out. And so we try to deliver the best of both worlds.

Katy Huberty - Morgan Stanley

Okay, one last question from the webcast: from a technical standpoint, what avenues and licensing costs exist to download, install, configure, and learn about Vertica?

Colin Mahony

Yeah, it’s a great question. So, we have an acronym that we have had since the beginning of Vertica that we refer to as DISGO, and DISGO stands for download, install, setup, go. And it is our belief that all software needs to be handled that way. In other words, yes, the appliances are absolutely critical, especially when you get into these larger environments. But if you look at the software industry right now, there is a freemium model taking place, where people want to experience it and try it, and it has to be easy for them to do that. So right now, you can go to vertica.com, and you can find us on HP’s site as well, and download our Community Edition, a free version of Vertica. You can have up to three server nodes running it and a terabyte of raw data, and get the full Vertica experience with that.

And what we found is that when people try us, they really like it, and then that model ends up driving more demand for people to make the enterprise purchases. And I really believe that the DISGO model applies to the entire software industry. I certainly think that SaaS companies, software-as-a-service companies, have sort of reinforced this model even more, because you can just sign up for 30 days on a site and try it out there. And by the way, we are doing a lot around the Cloud as well. So, we always want to make that process simpler. And I think it’s critically important whether we are running petabytes or just a terabyte.

Katy Huberty - Morgan Stanley

You mentioned the Cloud, and Amazon’s new service has received a lot of press and questions from investors. How do you think about what analytics will be done in the Cloud versus on-premise, and how big of a market opportunity is that really?

Colin Mahony

Well, first of all, we have run on Amazon EC2 for several years. And our experience is that a lot of the smaller customers and smaller companies like to experiment up there. They like to test things out up there. But one thing about analytics is, if you think of data as being the gold, people are going to be very careful about where they put that information. And so I actually think private cloud is going to be a much more prevalent cloud deployment paradigm for analytics and data, because you are not going to let your crown jewels walk out the door.

It’s one thing if you use salesforce.com and that data is already up there and you know it’s protected, but uploading a bunch of your other data is different. First of all, it’s tough to upload big data, as we all know, because of the public pipes. But secondly, I just think there is a lot of concern around security and whether my data will be breached. So, I think there is a market for what Amazon announced with Redshift, but it’s a lower-end market, I think, with developers who are building things on EC2. And again, you can get the same from us if that’s what you want to do, but I think the core market, the vast amounts of information for companies, will be protected either on-premise or in a private cloud.

Katy Huberty - Morgan Stanley

Okay. So, the core market for Cloud would be smaller customers and then datasets that are already in the Cloud?

Colin Mahony

They are already in the Cloud, and that gets tricky too. It has to be in the Cloud and sort of co-located in the same facility, which a lot of people miss; they tend to think, oh, it’s the Cloud, so it’s just going to work well. Your analytics Cloud piece might be in Portland, Oregon, and your other Cloud piece might be in another country, or maybe in the same country but a completely different data center. And so you have got to make sure you have the right connections and throughput. So, I do think it’s going to take some time before we can get to that enterprise caliber up there. But in the meantime, I think a lot of people with very small datasets, tens of gigabytes, maybe a terabyte or two, and in some fewer cases larger, are the ones experimenting up there. And we have had customers that start using Vertica on the Cloud, on Amazon EC2. They get it to a certain size, and then they say, I want better performance, I don’t want to use the shared-services model up there, or, we are getting a lot of data now, we think it’s time to pull it into the core operations that we run. And so we have seen that a lot as well.

Katy Huberty - Morgan Stanley

Okay, makes sense. Maya, why don’t we reach out and see if there are any questions on the line before we wrap it up.

Operator

(Operator Instructions)

Katy Huberty - Morgan Stanley

If there are no questions, we will be respectful of everybody’s time and wrap it up there. Colin, thank you very much for the insights, and have a great day everybody.

Colin Mahony - Vice President and General Manager, Vertica

Yeah, thank you everyone. Thank you, Katy.

Operator

And this concludes today’s conference call. You may disconnect.


Source: Hewlett-Packard's Management Discusses Technology Briefing: Big Data (Transcript)