
Red Hat, Inc. (NYSE:RHT)

February 20, 2013 11:30 am ET

Executives

Karin Bakis

Sarangan Rangachari

Karin Bakis

Hello, and welcome to Red Hat's Big Data and Open Hybrid Cloud Webcast. Red Hat Vice President and General Manager for storage, Ranga Rangachari, is our host today. We'll begin our webcast with a presentation by Ranga and then a question-and-answer session. [Operator Instructions] Okay, let's start the presentation and we'll see Red Hat's Safe Harbor statement. I'll just pause here for a moment. Okay, and now I'll turn it over to Ranga Rangachari and he'll go through the presentation.

Sarangan Rangachari

Thanks, Karin. Good day, everyone. Thanks for joining us on the webcast today. We have an hour, but I want to set aside quite a bit of time for Q&A, since I anticipate quite a few questions from the audience. The agenda covers 3 things: first, some of the enterprise requirements for big data; then Red Hat's direction and how it centers on the open hybrid cloud; and last but not least, since community-driven innovation is the core of everything we do here at Red Hat, how big data and community-driven innovation come together.

So before I get into the enterprise big data requirements, a little bit on the market size projections. This chart is from IDC, and if you had seen it a couple of years ago, the numbers would have been a lot smaller than they are today. The primary reason is the tremendous explosion of interest in the big data market over the last 12 to 24 months. You can see that a market that was just shy of $5 billion a couple of years ago is projected to grow to north of $20 billion, at a very substantial annual clip. So our belief is that the market, where it is today, is ripe for the disruption that open source brings to the table.

Touching a little bit upon the customer requirements. If you look at the big data space today, it is primarily dominated by open source projects. So we at Red Hat, given our long legacy of hosting and stewarding open source projects, have what I think is a catbird seat, with a very good perspective on how customers are thinking about big data and what open source means in the big data space. We have talked to a lot of customers around the world over the last few months, and I've written down the top 3 requirements that come back over and over again.

The first one is that today, people often associate big data with Hadoop. They think big data is just Hadoop, or Hadoop is just big data. But big data is more than just Hadoop. To most customers, it could be audio files, video files, logs, machine-generated data. All of those things need to be managed differently, because the explosion of data is really more than what IT can handle today.

To put things in perspective, one of our large customers in the financial services industry has 60 big data projects going on in their enterprise today that they can count, and only a portion of them are Hadoop-related. A number of the other projects use NoSQL technologies, things like Cassandra and Mongo. So, just to reaffirm, big data is more than Hadoop, and when we look at our approach today, you'll see that we have taken a very broad view of the big data market, not just come up with something that's specific to Hadoop.

The second requirement that comes up over and over again from a customer's perspective is that most big data projects today are in what I would call a proof-of-concept phase. If you follow the natural progression, things move from a proof of concept, to a pilot, to production. Before organizations are ready to move from a proof of concept into a pilot and a production phase, they really need enterprise-class reliability, stability, performance, scale and manageability, because absent those fundamental capabilities in their infrastructure, they have a tough time sustaining their big data projects as they move from pilot to production.

The third, which is very important, is interoperability and compatibility with existing infrastructure and development tools. Organizations don't want to start another silo: an infrastructure and a development platform that doesn't talk to the rest of the environment. Ultimately, as you go further up the stack with big data, it's all about how you get data from multiple sources and present it in a cohesive, consistent fashion. So the 3 things to remember are: big data is more than Hadoop; customers are looking for true enterprise-class reliability and performance; and, last but not least, interoperability and compatibility with existing enterprise management and development tools.

So when we looked at the big data space -- and I'm going to talk about this as we go through the slides -- Red Hat has a lot of products in its portfolio today that are, what I would call, big data ready. Customers can start to deploy big data applications today, if they choose to. But when we looked at the overall architectural view of how customers need to deploy and be successful with big data, it boils down to 3 essential elements, reading this slide bottom up: first, does the infrastructure you have really scale, and is it secure; second, the application platform you're going to build your applications on; and last but not least, the analytics layer -- the specific applications you build to run analysis on your big data workloads.

So let's double-click on 2 of those: the infrastructure piece and the application platform piece. Look at the big data infrastructure piece, starting at the bottom left corner of the slide with Red Hat Enterprise Linux. A little background here: the Linux Foundation reported in January 2012 -- so the figure is a year old -- an estimate that 72% of big data workloads run on Linux. That shows the affinity, the propensity, for big data workloads to run on top of Linux, for the exact same reasons customers use Linux, and specifically Red Hat Enterprise Linux, to run their business-critical infrastructure and workloads. The second part is that Linux today is the primary platform for a majority of cloud-based applications; an industry analyst report from October of last year estimated that 67% of machines running on AWS use Linux. The reason Linux, and Red Hat Enterprise Linux specifically, is ready for big data workloads is that it was designed for them: distributed processing and scale-out workloads are absolutely critical elements of this space. Managing tremendous data volumes and intensive analytic processing requires an infrastructure designed for high performance, reliability, fine-grained resource management and scale-out storage.

So with the combination of Red Hat Enterprise Linux and Red Hat Storage, we have customers actually running big data workloads today across multiple verticals: financial services, media and entertainment, and the Federal or government sector. Where we are taking this -- and this is one of the highlights of today's webcast -- is that over the last year or so, we've had an add-on module, if you will, for Red Hat Storage, which is a Hadoop-compatible file system. That module has been part of gluster.org, the open-source community around Red Hat Storage. What we are announcing today is the intent to move the Red Hat Storage Hadoop-compatible file system module to the Apache community. If you look at Hadoop, the community is predominantly centered around Apache, so this gives customers another avenue to take advantage of Red Hat Storage to run their Hadoop workloads, specifically through this module. I'll talk a little bit about what customers are telling us are the advantages of this approach. We are also innovating in Red Hat Storage in some of the areas that are absolutely critical for customers moving workloads from proof of concept to pilot to production: things like high availability and snapshotting. Those are some of the features that traditional storage management systems have had to provide, and that customers need in order to move into big data workloads, especially as data volumes continue to increase exponentially.
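
To make the plug-in model concrete, here is a minimal sketch of how a Hadoop job can be pointed at a Red Hat Storage volume through a Hadoop-compatible file system plug-in. The plug-in class and property names follow the gluster.org glusterfs-hadoop project, and the server address and paths are illustrative; treat this as a sketch rather than a supported configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlusterFsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the Hadoop-compatible file system shipped with Red Hat
        // Storage, following Hadoop's fs.<scheme>.impl convention. The class
        // name comes from the gluster.org glusterfs-hadoop project.
        conf.set("fs.glusterfs.impl",
                 "org.apache.hadoop.fs.glusterfs.GlusterFileSystem");
        // Point the default file system at a Gluster volume instead of HDFS
        // (hypothetical host and port).
        conf.set("fs.default.name", "glusterfs://storage-server:9000");

        // Code written against the FileSystem API runs unchanged on either
        // back end, which is the point of the compatibility module.
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/bigdata/input"))) {
            System.out.println(status.getPath());
        }
    }
}
```

Because MapReduce jobs only talk to the FileSystem abstraction, swapping the underlying store becomes a configuration change rather than a code change.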

The other area we are working on around Red Hat Storage is support for a multitenant architecture. This becomes very critical, especially for public cloud providers: the same infrastructure running Red Hat Storage can support multiple tenants and multiple users.

Where all of this is leading, from an infrastructure standpoint, is to the open hybrid cloud. We've talked about the open hybrid cloud in a couple of other venues before, but let me say a little about what we mean by it: the ability for enterprises to create workloads on private clouds internally -- across physical, virtual and private cloud environments -- and move those workloads seamlessly to a public cloud and back, without having to retool their applications.

When we talk to our customers, they see this as a very refreshing approach, because it gives them the flexibility to run a workload wherever they want, and to scale it out, without compromising on risk and the other requirements that go with a true enterprise IT workload.

The other element is the application platform piece. Here, as I mentioned earlier, we have a lot of products available at Red Hat today that we believe -- and our customers tell us -- are big data ready. Let me highlight a couple of areas. One is our application platform, the JBoss middleware platform. Today, we have a number of products that customers can use for big data integration and analytics without having to learn new products or acquire new skill sets. This is a huge advantage for our customers, because they don't have to create new silos or worry about training a whole new set of people. It seamlessly allows them to move their application developers onto big data workloads.

We can also transport events from a variety of sources into Hadoop quickly and reliably with messaging; process large volumes of non-relational data at in-memory speeds using Red Hat JBoss Data Grid; and identify opportunities and threats in big data through pattern recognition using our BRMS, the business rules management system.
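
As a rough illustration of the in-memory piece, here is a minimal sketch using the Infinispan API on which JBoss Data Grid is built. The default configuration and the event payload are placeholders; a real deployment would define clustered, replicated caches rather than rely on defaults.

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class DataGridSketch {
    public static void main(String[] args) {
        // A cache manager with default settings; production systems would
        // supply an XML or programmatic configuration for clustering.
        DefaultCacheManager manager = new DefaultCacheManager();
        Cache<String, String> events = manager.getCache();

        // Hold high-velocity event data in memory before it lands in a
        // longer-term store such as Hadoop (payload is illustrative).
        events.put("event:1001", "{\"source\":\"web\",\"action\":\"click\"}");
        System.out.println(events.get("event:1001"));

        manager.stop();
    }
}
```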

So here again -- Red Hat Storage, Red Hat Enterprise Linux, and JBoss -- the current shipping products are big data ready, and customers are using them today to run some of their big data workloads. Specifically, where we are taking the application platform is the Apache Hive connector. For those of you not aware of the Hive project, it's essentially a data-warehousing layer for Hadoop workloads. The Hive connector, which is currently in tech preview, is going to become a fully supported product. With Enterprise Data Services -- probably [ph] version 3 -- that product allows customers to virtualize data stored in Hadoop and other file systems, HDFS among them, along with traditional data sources like relational databases and spreadsheets.
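
For a sense of what that data virtualization looks like to a developer, here is a minimal sketch against the Teiid JDBC driver that underlies JBoss Enterprise Data Services. The virtual database name "sales", the host, the credentials and the table names are all hypothetical; the point is that one SQL statement can span a Hive-backed table and a relational one.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataVirtualizationSketch {
    public static void main(String[] args) throws Exception {
        // The Teiid driver exposes a "virtual database" (VDB) that federates
        // multiple back ends behind a single JDBC endpoint.
        Class.forName("org.teiid.jdbc.TeiidDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:teiid:sales@mm://dv-server:31000", "user", "password");
             Statement stmt = conn.createStatement();
             // One query joins a relational CRM table with a Hive-backed
             // clickstream table (schema and column names are illustrative).
             ResultSet rs = stmt.executeQuery(
                 "SELECT c.name, h.click_count "
                 + "FROM crm.customers c "
                 + "JOIN hive.clickstream h ON c.id = h.customer_id")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        }
    }
}
```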

So what we're talking about here is the ability for organizations to take not a siloed approach, but a unified approach to how they build out their infrastructure and how the application platform is defined.

As for our plans around the application development platform, there are a couple of areas I want to highlight. One is NoSQL and Mongo interoperability. In the Hibernate community, we have support for the Object/Grid Mapper, which you may see written as OGM. It helps increase productivity by simplifying interoperability with NoSQL technologies like MongoDB, using the standard Java Persistence API. This lets enterprises leverage the capabilities and skills they have today while working with the emerging data sources that big data workloads generate. The other area is OData, a web access protocol. This is in support of the OASIS open data standard, or OData (OASIS is another standards group). What OData essentially does is bridge existing infrastructure to more modern, lightweight mobile applications and lighter architectures with RESTful interfaces. So here's another proof point of the ability to seamlessly extend to emerging workloads, both at the infrastructure layer and at the application platform layer.
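
The OGM point is easiest to see in code. Below is a minimal sketch of standard JPA code persisting to MongoDB through Hibernate OGM; the persistence unit "bigdata-pu" is assumed to be defined in persistence.xml with the Hibernate OGM provider and its MongoDB back end, and the entity itself is illustrative.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

// An ordinary JPA entity; Hibernate OGM maps it to a MongoDB document.
@Entity
public class Measurement {
    @Id @GeneratedValue
    Long id;
    String sensor;
    double value;
}

class OgmSketch {
    public static void main(String[] args) {
        // "bigdata-pu" is a hypothetical persistence unit configured with
        // Hibernate OGM's MongoDB back end in persistence.xml.
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("bigdata-pu");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Measurement m = new Measurement();
        m.sensor = "turbine-7";
        m.value = 42.5;
        em.persist(m); // The same JPA call a relational developer already knows.
        em.getTransaction().commit();

        em.close();
        emf.close();
    }
}
```

The developer-facing API is unchanged from relational JPA; only the persistence configuration points at the NoSQL store.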

The road, even for the big data application platform, leads to an open hybrid cloud, right? All of the products under the JBoss umbrella that are available on premise today are also available on OpenShift. OpenShift now comes in 2 flavors: the online version and the enterprise version. The first product available on OpenShift is the JBoss Enterprise Application Platform, which can now run on premise, hosted, or in hybrid cloud environments. So you see the recurring theme: anything we do around big data, on the infrastructure side as well as the application side, is about helping customers run their applications on premise, in a hybrid environment, or in a public cloud. The open hybrid cloud theme runs through every one of the projects we do around big data, because our view from the market, from talking to customers, is that big data could be one of the killer apps for the open hybrid cloud. I'll talk a little later about the typical use cases that let people move workloads seamlessly back and forth.

So how do the Red Hat infrastructure pieces and the application platform tie together to really help customers on this journey toward an open hybrid cloud? This slide tries to show that, and I don't want to go into its individual pieces. But as a simple example, looking at the bottom right, somebody could build a private cloud based on OpenStack that includes Red Hat Enterprise Linux and Red Hat Storage, and have a similar stack on a public cloud running OpenStack, with the exact same components. The key is that once customers develop applications -- whether on JBoss or some of our other middleware -- because of the way the applications and the infrastructure are designed, they can move those applications from on premise to a public cloud, or from a public cloud back on premise, without retooling them. Why that's really important in the big data space: let me highlight 2 use cases. Most of the proofs of concept we see in the marketplace today run in a public cloud, for the simple reason that IT often doesn't have the resources on premise to build out 20-, 30-, or 50-node clusters. So they use a public cloud, try it out, test it, and once they're ready to bring it back in-house, our whole notion of orchestrating these workloads makes that possible through an open hybrid cloud architecture. The flip side of the coin is that customers could have their entire workload running on premise in their private cloud, but at the end of the year, the end of the quarter, or during the Christmas season, they might want to move that workload out to a public cloud for

[Audio Gap]

there again, nothing changes. The entire application stack moves to the cloud, and things work in a seamless fashion. You've heard about our products, Red Hat CloudForms, and the recent acquisition of ManageIQ; those are the individual pieces that really help us fulfill that vision.

So I've talked a little bit about the architecture, the infrastructure piece, and the application development platform piece. Now let me shift gears and talk about how we intend to take the solution to market. There are broadly 2 or 3 dimensions to it. One, we are already working with hardware and software ecosystem partners in the big data space, and you'll see us come out with solutions over the coming months and quarters, during the course of this calendar year. But more importantly, what different customers are asking us for is a more prescriptive approach to how they should build out their infrastructure in support of big data workloads. So we intend to publish certified reference architectures that essentially become a cookbook for organizations to be successful with their big data initiatives. And in parallel, we will work with integration partners to help deliver these solutions to customers. So it's a virtuous cycle of innovation: we engage with customers, work with ecosystem partners, develop a set of architectures -- almost a rinse-and-repeat approach. We've talked to a few of our customers about this, and they really like it. They think we're on the right track, because they realize that no 2 solutions are alike, and where Red Hat can help is with certified reference architectures that give them a high probability of success as they build out their infrastructure for big data.

So underneath the covers, our belief today is that the breadth and depth of the products we have is unparalleled, in our view, in terms of what we can offer the market.

So let me just touch upon a couple of areas, right? And we have multiple products here. I think you know about most of these products, but obviously, during the Q&A, I can answer any questions.

But let me just touch upon a few of those. Red Hat Enterprise Linux, I don't think, needs any introduction; we have customers today running Red Hat Enterprise Linux for their big data applications, as I mentioned earlier. Red Hat Storage is being used for big data workloads, and with our support for the HDFS-compatible plug-in, you'll see us start to support a lot more Hadoop-type workloads. JBoss doesn't need much of an introduction either; it provides development tools and runtime environments for big data applications. Red Hat Enterprise Virtualization is a fundamental, critical component of our open hybrid cloud strategy. And OpenStack, OpenShift and CloudForms are all solutions that help customers start from a private cloud and move to a public cloud, or start in a public cloud and move the workloads back in-house, really going down an open hybrid cloud deployment model.

Now, in addition to these individual products, which we think are very robust -- toward the end of the presentation I have a slide from one of the analysts who thinks we're absolutely on the right track -- we will obviously complement the product portfolio with an ecosystem of partners and integrators to help deliver solutions to our customers. The simplest way to think about big data is this: big data is data, and data has got to be stored somewhere. So even though big data is all-encompassing, from Red Hat's perspective it's centered around our storage product offerings today. The reason, as I mentioned earlier, is that if it's big data, the data has to be stored, and Red Hat Storage is a robust, scale-out storage software platform that can really support the volume and the scale-out nature of big data workloads.

Let me touch upon how some of the features available in Red Hat Storage today lend themselves very well to big data workloads. Customers think of storage in terms of a few elements: how easy it is to store and access my data, how easy it is to scale, and how extensible it is. I don't want to go into each one of the bullet items here, but I'll touch on a couple of things. One, Red Hat Storage today provides a unified layer for file and object access, and what we're doing now is augmenting that with support for big data and Hadoop workloads. What that means for end users is that, I think for the first time ever, they can create a unified storage platform that is pretty much agnostic to the data types it's dealing with. Contrast that with some of the proprietary vendors we've seen in this space. They offer 3 different, complete solutions: you want file access, here's one box for file access; you want object access, here's another solution for object access; you want big data, here's a third solution. They are essentially making the problem worse for the customer, because you end up with multiple silos and multiple skill sets. We've taken an almost diametrically opposite approach: create a unified layer so that organizations can leverage their skills and their strengths. An example of that is that Red Hat Storage is POSIX-compatible. In simple terms, what that enables organizations to do is this: as long as their applications maintain POSIX compliance, those applications can be moved to a public cloud without any rewrite -- a huge savings in time and dollars when you think about the kind of application that would otherwise need to be rewritten.
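
A small sketch of what POSIX compatibility means in practice: ordinary file I/O runs unchanged whether the path below is a local disk or a mounted Red Hat Storage volume, so nothing needs to be rewritten to move between them. The mount point /mnt/rhstorage and the file name are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PosixSketch {
    public static void main(String[] args) throws IOException {
        // On a POSIX-compatible volume this is just a directory path; the
        // application cannot tell (and does not care) what sits beneath it.
        Path logFile = Paths.get("/mnt/rhstorage/logs/app.log");
        Files.createDirectories(logFile.getParent());

        // Standard reads and writes, no storage-specific API calls.
        Files.write(logFile, "event recorded\n".getBytes());
        for (String line : Files.readAllLines(logFile)) {
            System.out.println(line);
        }
    }
}
```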

The other aspect is that with Hadoop specifically, compute and storage happen on the same node, right? This whole notion of data locality -- knowing exactly where the data resides -- has been supported in Red Hat Storage pretty much since day 1. So we believe the solution's feature set today, and the direction we're heading, really help customers with a single, unified, scale-out software platform that lets them manage storage regardless of whether it holds files, objects, or big data.
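
To see the data-locality point in code, here is a minimal sketch using the standard Hadoop FileSystem API: this is how a scheduler learns where each block of a file physically lives, so compute can be placed on the node that holds the data. A locality-aware storage back end answers this query with real host names; the path is illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalitySketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status =
            fs.getFileStatus(new Path("/bigdata/input/events.log"));

        // Ask the file system which hosts hold each block of the file;
        // schedulers use this to run map tasks next to their data.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset() + " on "
                + String.join(", ", block.getHosts()));
        }
    }
}
```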

We vetted this approach with some of our leading customers, as well as with some of the industry analysts whom we respect a lot, because they have a much broader view of the market than we do -- especially with big data, because it's just emerging. They believe, and we believe too, that Red Hat is well positioned to bring unique value to the big data market, specifically in 3 areas. One, the solutions we have today, and the solutions that come down the road, will all be enterprise-ready, so customers can feel comfortable that they don't have to stay in a test phase for long; they can move their deployments to pilot or production. Two, it gives them complete flexibility: there doesn't have to be a different solution for Hadoop and a different solution for NoSQL; a single solution stack is flexible enough to address different types of workloads. And last but not least, the whole approach of big data on an open hybrid cloud is truly a big boon to our customers, because now, I think for the first time ever, they can move workloads back and forth without having to retool their applications.

So it looks like I'm right on time. In summary -- you can go through this list, it's pretty straightforward -- as I mentioned earlier, we truly believe that big data can be, could be, will be one of the killer apps for the open hybrid cloud, and we are well positioned today to deliver flexible, enterprise-ready big data solutions. And we will continue to innovate with the community and with our ecosystem partners, and as those efforts come to fruition, we'll continue to keep you all updated.

Question-and-Answer Session

Karin Bakis

Okay, thank you, Ranga. We'll now begin our question-and-answer session. [Operator Instructions] Okay, here's the first question. It sounds like there might be a lot of integration needed on the customer's side. Can you comment on that?

Sarangan Rangachari

No, I mean, if the solution came across as disparate and disjointed, you'd be absolutely right: customers might end up having to do the integration. But our approach is fundamentally different. One, if you go back to the slide where we talked about our solution components, there is really no need to stitch those individual pieces together; every part of the stack seamlessly works with the others. That's the technical part of the answer. The other part -- and this is one of the reasons why, in addition to working with ecosystem partners in this space, we will work with integration partners to help customers -- is that no 2 customer problems are alike. So absolutely, from the customer's standpoint, there is potential for an integrator to play a role in connecting the dots between what the technology solution stack provides and the business problems the customer is solving.

Karin Bakis

Okay, next question: why has Red Hat decided to take the Red Hat Storage Hadoop plug-in from the Gluster community and put it in the Apache community? What is the significance?

Sarangan Rangachari

Well, the significance is twofold. One, most of the center of gravity around Hadoop is in the Apache community. So we felt that the best way to foster innovation -- to help the Apache Hadoop community keep innovating, not just today but on a going-forward basis -- is to give developers access to the plug-in from the same source as Hadoop itself. Now, this doesn't mean that gluster.org, which is where the plug-in lived before, stops innovating; we simply felt that the place where the plug-in can maximize innovation is the Apache Hadoop community.

Karin Bakis

Okay, here's the next one. How do you define big data?

Sarangan Rangachari

Anything that's not structured data, roughly speaking. Analysts have different ways to talk about this. You've heard some analysts talk about the 4 Vs -- volume, velocity and a couple of other attributes -- and yes, that is one way to look at it. But our view is that big data is fundamentally about the underlying type of data, which is unstructured or semi-structured. That's one way to look at it, at least from a technology standpoint, and it contrasts very much with the typical structured databases people have been used to over the last 20 years or so.

Karin Bakis

Okay, here's the next one. Is Red Hat advocating moving the data to and from public and private clouds, or just the application processes? How will data access work?

Sarangan Rangachari

So the thing is, we can't advocate to customers how they should run their business, right? We provide the solutions that help them run their business most effectively. To that end, we have some customers that run their applications in a public cloud and then bring them back in-house, and we have other customers running on premise who want to use the public cloud as a burst capability. So we're pretty agnostic on that. But the fundamental thing is, regardless of where they start and where they end up, our open hybrid cloud approach helps customers move workloads back and forth with minimum disruption.

Karin Bakis

Okay, here's another one. You say that big data solutions and applications run best in an open hybrid cloud. Why is that?

Sarangan Rangachari

Well, here's the simple reason; I think I mentioned this earlier. What we've seen from a majority of our customers is that when they start any of these big data projects, it usually begins as a proof of concept, typically in a public cloud. Then, once it reaches a certain scale and scope, they want to bring that workload back in-house into a private cloud environment. Customers can take either what I would call the vanilla hybrid cloud approach or the open hybrid cloud approach. The open part is the key differentiator from our standpoint: it gives customers complete flexibility in terms of what part of the stack runs in a public cloud and what runs on premise, without having to worry about the individual pieces.

Karin Bakis

Okay. Please describe your target markets: company size, regions, industries.

Sarangan Rangachari

Yes. So today -- and this is not a scientific analysis by any stretch of the imagination -- from a vertical market standpoint, we see a lot of interest across multiple verticals, what we refer to internally as FIRE: financial, insurance, retail and entertainment. Those are the 4 key verticals, in addition to government agencies, where we see a lot of initial interest and initial traction around big data workloads. Like any other new technology, our belief is that over time it will cascade into other verticals and smaller organizations. Now, that's just with Hadoop. Around NoSQL and some of the other areas, we're also seeing a whole new class of Web 2.0-type applications, with a lot of workloads built on top of NoSQL and that type of infrastructure.

Karin Bakis

Great. We've gotten a couple of questions about partners. Tell me more about the big data partners you are working with, and intend to work with.

Sarangan Rangachari

Yes. As I said, right now we are working with a list of both hardware and software partners. Stay tuned: as we firm up definite dates and times, we'll absolutely make those partnerships public.

Karin Bakis

Okay. I think we have time for 1 more question. It seems like Red Hat solutions are doing big data already; what new capabilities and technologies will be added to the product lines in the future?

Sarangan Rangachari

Okay. As I said, we are at the beginning of a journey; that's one way to think about this. Today, if customers want to deploy Hadoop workloads, or any other type of big data workload, they can use Red Hat assets -- specifically Red Hat Enterprise Linux, JBoss and other solutions -- to start deploying those applications. But as I mentioned earlier, on both the application side and the infrastructure side, we have some very specific plans to add more capabilities, at the infrastructure level and at the application development platform level, to help customers get to a place where the open hybrid cloud really starts to make a difference in their business.

Karin Bakis

Great. Well, thank you. That concludes our Big Data and Open Hybrid Cloud Webcast. We appreciate you joining us today, and have a good one.

Sarangan Rangachari

Thank you, folks.


Source: Red Hat, Inc. - Special Call