Oracle's CEO Presents at Big Data Primer - Financial Analyst Webcast (Transcript)

Oct. 8, 2013 | Oracle Corporation (ORCL)

Oracle Corporation (NASDAQ:ORCL)

Big Data Primer - Financial Analyst Webcast Conference Call

October 08, 2013, 12:00 PM ET

Executives

Shauna O'Boyle - Senior Manager of Investor Relations

Andy Mendelsohn - Senior Vice President, Database Server Technologies

Analysts

Brendan Barnicle - Pacific Crest

Operator

Good day, ladies and gentlemen, and welcome to the Oracle Big Data Primer Webinar Call. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session and instructions will follow at that time. [Operator instructions] As a reminder, this conference call is being recorded.

I would now like to introduce your host for today's conference, Ms. Shauna O'Boyle. Ms. Shauna, you may begin your conference.

Shauna O'Boyle

Thanks. Hello everyone, and thank you for joining us today as part of our ongoing educational speaker series hosted by Oracle. I'm Shauna O'Boyle, Senior Manager of Investor Relations, and today is Tuesday, October 8, 2013. Joining us today is Oracle's Senior Vice President of Database Server Technologies, Andy Mendelsohn; and Equity Research Analyst, Brendan Barnicle of Pacific Crest.

Today, Andy will be discussing Big Data. However, he will not be discussing any data that is not already publicly available. At the conclusion of Andy’s presentation, we will turn the webcast over to Brendan who will moderate the question-and-answer portion of the call.

However, you may submit questions at any time during the presentation by typing your question in the Q&A box in the lower part of your screen. Please keep in mind that we will not comment on business in the current quarter.

As a reminder, the matters we'll be discussing today may include forward-looking statements and as such are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically the most recent reports on Form 10-K and 10-Q, which identify important risk factors that may cause actual results to differ from those contained in forward-looking statements.

You are cautioned not to place undue reliance on these forward-looking statements, which reflect our opinion only as of the date of this presentation. Please keep in mind that we are not obligating ourselves to revise, update, or publicly release the results of any revisions of these forward-looking statements in light of new information or current events. Lastly, unauthorized recording of this conference call is not permitted.

I would now like to introduce Andy Mendelsohn.

Andy Mendelsohn

Thanks Shauna. Good morning, everybody. So I’m going to give a very short, about 15-minute chat about Big Data and then we’ll take questions.

I thought I'd first start by talking about what Big Data is. All kinds of people have been talking about what Big Data is; there are statements about the 3Vs or 4Vs of Big Data, and all the various new start-up companies that have anything to do with information are branding what they do as Big Data.

So what is the real kernel of what's going on here? Well, the real kernel is that what we're talking about here is all about analytics. People have been doing analytics with information systems for 30 or 40 years, and what we're talking about here is moving to the next generation of those analytics, which we're calling Big Data analytics.

And there are two key transformations going on in the industry that are driving the Big Data trend. Number one is really about different kinds of data. Traditionally, analytic systems have looked at data from a company's operational systems; for example, if you're a big retailer, you're looking at data from your retail sales, the sales information for products at your retail stores. If you're a telco, you're looking at your call data records, which record all the phone calls, how long they lasted, what the charges are, and things of that sort.

So that's analytics on your operational data, and what people are talking about with Big Data is looking at more kinds of data than they've traditionally been looking at. Some of these data sources are from inside the company: things like documents, more unstructured data, voice, video, and as we move to the Internet of Things, people are looking at sensor data sources as well.

And there is also, of course, a huge amount of data on the Internet, and companies are asking: is there some way I can extract useful information out of social media? Can I find information about my customers, and what they are saying on social media about my company? Are they saying something good? Are they saying something bad? There are a lot of bloggers out there saying all kinds of things that are potentially of interest to companies.

So people are very excited about the possibility of looking at this data, getting more information, especially about their customers, and using that to raise the potential revenue of their companies by better marketing to their customers, up-selling them, etcetera. So I think that's the first big thing: moving from operational data sources to these broader internal and Internet sources.

The next big thing going on is that new kinds of analytics are being done. Traditionally, people did what they call OLAP analytics, just slice-and-dice of data, mostly historical data from the operational systems, and over the last few years we've been moving to more predictive analytics, data mining techniques, for example, to look at what's going to happen in the future. And as we move to these broader kinds of data sources, people are looking at doing analytics against those kinds of data sources: text, spatial.

Graph analytics has become very popular as you look at social networks; people want to run queries about who is a friend of whom, and based on that they do some kind of interesting marketing or targeted advertising, etcetera. We've also been moving to new kinds of analytic tools. R has become very popular as a development tool for doing analytics processing, and of course we've been moving in-memory in a lot of places to do in-memory analytics, in-memory databases, etcetera.
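As a toy illustration of the friend-of-whom queries he mentions, here is a hedged sketch in plain SQL over a hypothetical edge table; dedicated graph engines use their own traversal languages, but the shape of the question is the same:

```sql
-- Hypothetical edge table: friends(person_id, friend_id).
-- Find friends-of-friends of user 42 who are not already direct
-- friends, a typical seed list for targeted advertising.
SELECT DISTINCT f2.friend_id AS suggested_contact
FROM   friends f1
JOIN   friends f2 ON f2.person_id = f1.friend_id
WHERE  f1.person_id = 42
AND    f2.friend_id <> 42
AND    f2.friend_id NOT IN (SELECT friend_id
                            FROM   friends
                            WHERE  person_id = 42);
```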

So, I think those are the two big key drivers of what's going on in Big Data. And so, what are we doing at Oracle to deal with this new world? Well, Oracle of course has for many years been the market leader in analytics, and we have the world's leading data warehousing technology and BI technology, with our database and our BI products as well.

And so, as we move forward into the Big Data space, we are of course trying to build a platform and a set of solutions that deal with Big Data. We need to be able to acquire, organize, discover, and analyze Big Data. And as we move forward, in addition to the Oracle massively parallel relational database for doing analytics against Big Data, we're also now moving to utilize Hadoop as a platform in our Big Data environment. We've also come out with Oracle's NoSQL database, which is of value in the ingest phase of Big Data, and of course we're continuing to evolve our Big Data tools as well.

One of the big things we're doing at Oracle that I think is very important in the Big Data space is that we are in the business of delivering what we call engineered systems. These are combinations of hardware and software that are integrated together to deliver a great off-the-shelf experience for customers, where they can order our hardware and software and get great time to value, because they can very quickly build systems.

And we've done that in the Big Data space. We have our Big Data Appliance for running Hadoop processing and our Oracle NoSQL database as well. We have Oracle Exadata, which is a great platform for doing massively parallel analytics. And then of course we've added Exalytics, which is our platform for our BI tools and our Endeca Data Discovery tool.

And finally, on the top, Oracle is of course in the applications business, so we are also in the business of delivering horizontal and vertical applications and solutions to our customers, and we'll continue doing that on top of this stack of Big Data technologies as well. So that's one of our key strategies; now let's move a little closer to the products we're actually delivering.

So in this picture, we're showing, of course on the left side, the data sources that we talked about: all kinds of data sources, both operational and new kinds of data sources like documents and blogs and information off the Internet, etcetera. And what we're showing that's a little different now is that in the past, you would take this information, put it into various files, and do staging and ETL operations to transform the data before moving it into a data warehouse.

What we're showing in this next-generation platform is using Hadoop and Hadoop's HDFS distributed file system as a way of ingesting large amounts of this data. Then we use the Hadoop MapReduce platform to do some batch processing and ETL transformation against that data, sift through the data, and look for interesting tidbits, before we use our Big Data connectors to load it into the Oracle Exadata data warehouse.

We also show Oracle's NoSQL database here as well. NoSQL databases are also very good at ingesting information rapidly and doing some operations against it, and then that data can also be moved into a data warehouse.

On top of this basic platform of Hadoop and the Oracle Database, we have our analytics engines. In the database we have our advanced analytics capability, which includes predictive analytics and R processing. We're also making these kinds of analytics available on the Hadoop platform as well.

And then finally, on top, we show Oracle's business analytic tools, like our BI EE tool and our Endeca Data Discovery tool. Those tools can be run against data both in the Hadoop HDFS file system and in the Oracle Database to deliver visualizations and analytics against the data.

Okay, let's go to the next slide. As I mentioned, a big part of our strategy is our engineered systems, so in this next slide I just show how those engineered systems fit into the Big Data solution we just talked about.

Of course, the Big Data Appliance is our engineered system for running both Hadoop and the Oracle NoSQL Database. We are, by the way, the first vendor producing an engineered system optimized for running Oracle NoSQL, or any other NoSQL database for that matter.

Exadata, of course, is our platform of choice for running big, massively parallel data warehousing and doing your interactive analytics, and Oracle Exalytics is our engineered system for running our BI analytics foundation, which includes BI EE, the Endeca Data Discovery product, and also our Essbase engine as well.

Okay, let's move on. What I want to do at this point is drill down a little bit on how Hadoop and Oracle databases relate to each other, because I think there's a lot of confusion out there about what Hadoop is good for and what a massively parallel relational database is good for. And the key thing to understand, and what we're doing in this platform, is that in order to have a Big Data solution, you need both.

If you talk to even the people who were the biggest early advocates of Hadoop, who are now trying to do analytics, what they've decided is that Hadoop is a great platform for ingesting large amounts of data at very low cost per terabyte, and it's a great platform for doing some analytics on that data, but it's more batch-processing analytics. So what does that mean?

Well, it means that if you have a data scientist sitting in front of a terminal, asking questions of the data, trying to understand the business and trying to come up with great ideas for raising revenue, better marketing, advertising, etcetera, he wants interactive response. He wants to send in a query and get a response back in a few seconds.

That's not really what Hadoop was designed for. Hadoop is designed to crank out scalable batch-processing execution, and you'll get maybe tens of minutes or an hour of response time on those kinds of queries. What these users want is snappy response, and that's what you need a massively parallel relational database for, and that's what these guys are doing. They'll use Hadoop for ingestion and for doing some big batch-processing analytics against Big Data.

But then they'll move a subset of the data that they want to do further analytics on into their massively parallel relational database, in this case Exadata, and that's where they'll do their interactive analytics against the data, using of course the rich SQL language that comes with Oracle.

The other thing to note is that, although there are some SQL tools available in the Hadoop environment, they are very primitive and raw: Hive, Pig, etcetera. And then of course you can code in Java as well. But on the massively parallel relational database side of the world, what people do is code in SQL for the most part. SQL is a very expressive and productive language; a couple of lines of SQL is equal to hundreds of lines of code in Java.

It's also much more efficient in processing, so it's much faster and requires far fewer computing resources to get a given job done. So people also like the fact that relational databases are just much more productive environments than Hadoop is today.
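As a concrete, hedged illustration of that productivity point (the orders table and its columns here are hypothetical), a query of the following shape replaces what would be a full mapper class, a reducer class, and driver boilerplate in Java MapReduce:

```sql
-- Hypothetical schema: orders(customer_id, order_date, amount).
-- A few lines of declarative SQL that the database parallelizes
-- automatically; the equivalent hand-written MapReduce job runs
-- to hundreds of lines.
SELECT   customer_id,
         COUNT(*)    AS order_count,
         SUM(amount) AS total_spend
FROM     orders
WHERE    order_date >= DATE '2013-01-01'
GROUP BY customer_id
ORDER BY total_spend DESC;
```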

We also mentioned R. R is, of course, something we've made available as a statistical programming and predictive analytics language on the Exadata platform, and we're now also making it available on the Hadoop platform as well.

Let's go to the next slide. This is a slide taken out of my keynote from [inaudible], so you're all welcome to go to oracle.com and take a look at the keynote I did on Oracle Database 12c. What this shows is the end result of an example we went through. Let's say you want to do a very common example that people talk about in Big Data, which is looking for fraud in a banking system of some sort, and we wrote the application two ways. We wrote it using Java on Hadoop using MapReduce.

We also wrote it using a SQL extension we call SQL pattern matching, which is a new part of the SQL language that we've implemented in the 12c version of our database. And we measured two key metrics here. One is how many lines of code it takes to solve this problem, and what you see here is that it took over 650 lines of code using Java MapReduce versus, I think, on the order of 15 lines of code using SQL. So, number one, SQL is much, much more productive than having to code at a much more primitive level, in this case using Java MapReduce.
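For readers curious what that pattern-matching SQL looks like, here is a hedged sketch of the 12c MATCH_RECOGNIZE clause applied to a fraud-style pattern; the transactions table, its columns, and the specific pattern are illustrative, not the actual demo code from the keynote:

```sql
-- Illustrative only: flag a burst of three or more small transactions
-- on an account immediately followed by one very large transaction.
SELECT *
FROM transactions
MATCH_RECOGNIZE (
  PARTITION BY account_id
  ORDER BY txn_time
  MEASURES FIRST(small.txn_time) AS burst_start,
           large.amount          AS large_amount
  ONE ROW PER MATCH
  PATTERN (small{3,} large)
  DEFINE small AS amount <  100,
         large AS amount > 10000
);
```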

We also wanted to show that we can run SQL in a very high performance, massively parallel fashion. In this example, the run time of the SQL version of the analytics was also much, much less: under 10 seconds, while on Hadoop it was over 17 seconds. And we actually ran the SQL on, I think, a couple of processors, while Hadoop was on something like an 18-node cluster.

So the key message here is that relational databases are constantly raising the bar and getting faster and faster, and I think people who believe they can just put a little SQL engine on Hadoop and catch up in a couple of years to what relational databases have built over the last 20 years for high performance, massively parallel SQL queries are being a little optimistic about how soon they'll reach parity.

Okay, so let's go to the next slide. Here I just want to mention one thing we're doing in Database 12c around in-memory processing. For analytics, relational database vendors have been producing very high performance, massively parallel engines for many years that are very good at crunching through terabytes and terabytes of information.

But there's been a sort of breakthrough in the last few years around column-store technologies, and in particular in-memory column-store technologies, for making analytics even faster, and we just announced at OpenWorld a few weeks ago that we're adding this in-memory column-store technology to Database 12c. This is going to give us another big leap forward in analytic processing in the relational database, in this case the Oracle relational database. This, of course, is going to be very exciting for customers doing analytics against Big Data and data warehousing.
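As a hedged sketch of what that looks like to a database user, the in-memory option as it later shipped is enabled declaratively, per table, without changing applications; the sales table here is hypothetical:

```sql
-- Hypothetical example: populate a fact table into the in-memory
-- column store (syntax per the option as released after this
-- announcement); the row-format table on disk is untouched.
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Analytic scans like this one are then served from the columnar copy.
SELECT prod_id, SUM(amount_sold) AS revenue
FROM   sales
GROUP  BY prod_id;
```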

So again, we're raising the bar on what relational databases can do here. We're not standing still; relational databases are moving forward very aggressively further into the analytic space. And again, people who are building SQL engines from scratch have to think about adding even more technologies than they are currently planning in order to match the capabilities of these relational databases.

Okay, so let's go to the next slide. What are some of the key differentiators of what Oracle is doing in the Big Data space versus other competitors? I think the big thing we are doing here is giving customers an integrated platform, an engineered platform. A customer can just come to Oracle and say, okay, I want a Big Data platform, and they would order Oracle's Big Data Appliance and Oracle's Exadata platform.

Those two engineered systems can be very easily integrated together. We have hardware integration, using InfiniBand networking technology across both platforms, which makes it very efficient to move information back and forth. And then we have software integration, what we call our connectors, that ties the Hadoop platform together with the Oracle database platform.

For example, one of the connectors is a SQL connector that lets Oracle SQL reach out into Hadoop HDFS and run SQL queries against the HDFS data. Another example is a loader connector that very efficiently moves data from HDFS into the Oracle Exadata database. And then we have, of course, our whole array of technologies in Exadata that make it a great platform for doing Big Data analytics.
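To give a feel for the SQL connector he describes, here is a minimal sketch of an external table of the kind the Oracle SQL Connector for HDFS sets up; the table name, columns, directory objects, and location files are all illustrative, and in practice the connector's own tooling generates a definition like this:

```sql
-- Illustrative external table over HDFS data. A preprocessor shipped
-- with the connector (hdfs_stream) pipes HDFS file contents into the
-- database at query time; all names here are hypothetical.
CREATE TABLE weblogs_ext (
  log_time VARCHAR2(32),
  user_id  VARCHAR2(64),
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR osch_bin_dir:'hdfs_stream'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('weblogs1.loc', 'weblogs2.loc')
);

-- Once defined, ordinary SQL runs directly against the HDFS-backed data.
SELECT url, COUNT(*) AS hits FROM weblogs_ext GROUP BY url;
```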

The next big differentiator, which I just mentioned, is that we are adding very high performance in-memory columnar processing to our already very powerful relational database engine, and that's going to make Oracle an even more outstanding interactive platform for doing analytics against Big Data.

Another big part of what we are doing here is that Oracle has a huge ecosystem around it of developers and ISVs and SIs who know and love the Oracle platform. There are huge pools of skilled consultants who know how to manage the platform and how to develop against it. We are building on top of that as we go into the Big Data space, and finally, we're giving a complete solution to our customers.

If you buy our Big Data Appliance, you buy Exadata and the connectors between those two. If you have any problem of course you just call up Oracle. We support you top to bottom, from hardware to software with any issues you have and of course we can provide you all the consulting help you need as well on top of that.

Okay, and then let's just close. Of course, we have huge numbers of customers using our Big Data platform; I'll just mention a couple here. UPMC is the University of Pittsburgh Medical Center, one of the leading medical research centers in the fields of genetics and other health sciences.

They are a huge Oracle customer for Big Data and a big user of Oracle Exadata. Next, SoftBank, a big telco in Japan and a huge data warehousing user; they have also recently been moving into the U.S. They actually moved all their warehousing technology off Teradata onto Exadata several years ago, and they are a very successful customer.

Thomson Reuters is one of our big Big Data customers; they are using the Big Data Appliance and Exadata in their Big Data processing. StubHub, part of eBay, does ticket reselling; they are using Oracle's R technology that I mentioned earlier for doing statistical analysis and predictive analytics.

And with that, I think we’ll move on to the Q&A session.

Question-and-Answer Session

Shauna O'Boyle

Thank you, Andy. Before I turn the call over to Brendan for the question-and-answer portion of the call, please let me remind our listeners that you can submit questions at any time during the presentation by typing your question in the Q&A box at the lower part of your screen. Brendan?

Brendan Barnicle - Pacific Crest

Thanks so much, Shauna, and thanks, Andy. Andy, as a follow-up to some of those customer references you just gave, I was wondering where you're seeing your customers most frequently use your Big Data solutions right now, if there is kind of a most frequent use case?

Andy Mendelsohn

Yeah, I think the most popular use cases these days are in financial services and telco. The big banks are definitely very interested in a lot of what we are talking about. They are very interested, of course, in looking at data from social networks to see if they can get information about their customers. They're also interested in seeing whether they can use that data to help them with fraud detection.

Telcos are another big vertical where we see a lot of interest. A big problem in telco, of course, is customers leaving one telco to go to another; they call this churn. Churn analytics is a big part of what they're doing, and they are actually looking at using graph analytics for that kind of processing.

Brendan Barnicle - Pacific Crest

Great. And looking at your presentation, clearly Oracle is leveraging both your software and your hardware business. Do you think the Big Data software advances you’ve made can accelerate the hardware side of the business?

Andy Mendelsohn

Yes. So I mean the biggest part of our business here right now on the hardware side, of course, is Exadata. Exadata originally was being used almost 100% for Big Data kinds of analytics problems, big data warehousing problems, and that's still about 50% of all the Exadatas being used in this space. We also, of course, are pretty successful with Exalytics, our great platform for our BI tools.

And our Big Data Appliance is a more recent addition, and we're doing reasonably well selling hardware into that space as well. And one of the big things here that I think is worth emphasizing: once customers use these engineered systems, they almost always buy more.

There is always some kicking of the tires the first time around, but we see customers who are very happy with these products and become very large repeat customers.

Brendan Barnicle - Pacific Crest

Great. As you mentioned in your presentation, there are a lot of vendors in the Big Data space, and they talk about different approaches to working with data than what we've seen in the past. Do you think these new approaches are ultimately going to be replacements for existing technologies, or just supplements?

Andy Mendelsohn

Yeah. I'll talk about two of the main things people discuss in this space: there is Hadoop and there are NoSQL databases. I'll start with Hadoop, since I think that is the real core of what we're talking about here in Big Data. The key thing to understand is that what we are doing today is an extension of what people have been doing for many years; it's just the next generation of analytics. So, in the past, before Hadoop existed, what did people do? They would use file systems as staging areas for ingesting large amounts of data that they would eventually process and bring into their data warehouse.

So, Hadoop is really replacing that. HDFS is a much more scalable file system, and it's replacing the old traditional file systems people might have been using in their analytics initiatives. And Hadoop has the added benefit that not only is it a good low-cost file system, a good place for ingesting information; it also has the MapReduce batch processing engine for doing some analytics there.

The place where people start getting confused is when they say that because there are some simple SQL analytics on HDFS, suddenly they don't need massively parallel relational databases anymore, and that's where they get confused.

What I was trying to explain earlier is that you need the massively parallel relational databases to give interactive response time for your data scientists to do their Big Data analytics. Hadoop is great, but it's really replacing the use of file systems and ETL engines in middle-tier platforms; it's not a replacement for MPP relational databases. And because these relational databases are constantly raising the bar on what's expected of them, for example with this new in-memory column-store technology we've been adding, it is not clear to me that any approach of adding SQL on top of Hadoop is going to catch up to them anytime over the next decade or so.

So, I think that concern is very overblown. I think the best way to look at it is that Hadoop and relational databases are very complementary. NoSQL databases are an interesting area. Again, this is a technology that has been around for years and years. On the mainframe 30 or 40 years ago it was called the Indexed Sequential Access Method; now they are called key-value stores. And these technologies have been used in conjunction with relational databases for many years.

They're not really Big Data technologies in the sense that you can do analytics against them. They don't support SQL; that's why they are called NoSQL. So they're not really suitable for BI or analytics. But they are suitable for ingesting information, just like Hadoop is good at ingesting information into a file system.

NoSQL databases are also good for ingestion as part of this Big Data story. You can get information out of them, but like I said, they're just key-value stores; they're not really massively parallel analytic engines. So that opportunity is there, but it's a smaller part of this whole Big Data space.

And I think I'll leave it at that. There are hundreds of other vendors, but I think those are the two key ones to look at here.

Brendan Barnicle - Pacific Crest

Now, one of the Hadoop distribution vendors that you guys have been working with is Cloudera, and you mentioned them in your presentation. Why did you choose Cloudera over a couple of the other distributions that are out there?

Andy Mendelsohn

Cloudera certainly is the largest and most mature of the Hadoop distributions out there. They also have a very large and mature support organization that works with us in supporting our customers. And certainly at the time we chose them, I think they were the clear choice, and they have been a very good partner with us as we've gone to market around Big Data.

Brendan Barnicle - Pacific Crest

As we step back from all this, Andy, and you've kind of looked at this over years of experience on the database side, what do you see as the biggest barriers to adoption, both for your technology and more generally for Big Data technology?

Andy Mendelsohn

Well, there are these two key parts of our platform here: there is Hadoop and there is relational database technology. On the relational database side, I think the barriers to adoption are pretty low these days. Like I said earlier, there's a huge ecosystem of people who know how to manage Oracle databases, who know how to write SQL queries against them, and who know how to use tools that automatically generate SQL, and there are solutions and whole tech stacks' worth of stuff there to make it very easy.

On the other side of the world, however, the Hadoop side, there are significant barriers to adoption. Number one, there isn't a lot of expertise in IT organizations on how to run Hadoop, which is why I think our engineered system for Hadoop, the Big Data Appliance, is going to really resonate with our customers there, making just the initial deployment of the technology easy.

The other big issue around Hadoop is that if you want to write analytics, you can start out saying, okay, I'm going to do some Java coding and write MapReduce using Java. Well, that is a skill set that is not very plentiful; there aren't a lot of developers out there who know how to do that. So people have started building some tools on top of Java MapReduce, things like Hive and Pig, that give you a little higher-level programming paradigm, sort of like a simple subset of SQL.

Again, that's good, because most of our customers will use those kinds of tools rather than try to write Java MapReduce, and that helps a little bit. And moving forward, of course, that work is going to continue. Oracle is working on expanding our SQL engine capabilities against HDFS data as well, to make it easier for customers to use the Hadoop platform.

And then, moving up the stack, there's not a lot of tooling or solutions that sit on the stack today on the Hadoop side. That really needs to improve to make it a much easier part of the platform to adopt. Of course, at Oracle, we will be working on those kinds of solutions as well.

Brendan Barnicle - Pacific Crest

So the last question from me, Andy, and maybe you sort of answered this with the previous question: where is the biggest opportunity for Oracle in the whole Big Data market?

Andy Mendelsohn

Yeah. You know, I run the Database Group, and we see this whole Big Data space as a huge market opportunity for us. Over the years, traditional BI and data warehousing has been a huge part of our business. We see Big Data as just an acceleration of that business, and certainly our Exadata engineered system is really leading the way there. All of the customers, I think, who have big Oracle data warehouses on non-Exadata platforms these days are moving to Exadata as they refresh those platforms. So that's really moving forward.

As I mentioned earlier, we are continuing to innovate very aggressively in this space with our new in-memory column-store technology. So I think that's again going to raise the bar on what's expected of a massively parallel SQL engine, which is going to be very hard for people in the open-source space to keep pace with.

So I think the relational database and Exadata, of course, are huge opportunities for us. The Big Data Appliance, as customers start adopting Hadoop, is going to be another significant opportunity for us moving forward. And in the BI space, of course, all our BI tools are moving into the in-memory analytics space, and our Exalytics engineered system is the platform for doing that. That's another big opportunity for us.

And then lastly, I did mention our Oracle NoSQL database. We do see a lot of interest in NoSQL, not necessarily just in what we call the Big Data space, but in the general data processing space; a lot of web developers especially are very interested in using NoSQL databases.

We have a very strong NoSQL offering, and we're busy right now making sure all of our enterprise customers know that if they are considering NoSQL, they should include the Oracle NoSQL database product in their evaluation. We think we're going to do very well as people run actual competitive POCs using our technology versus the other popular NoSQL technologies out there. So I'd say those are the key opportunities for us in Big Data.

Brendan Barnicle - Pacific Crest

Great. That's helpful. Andy, thanks so much for your time today; really appreciate it. Shauna, I haven't had any questions come in. Are there any that you would like to ask before we close things up?

Shauna O'Boyle

No. I think at this point, we'll go ahead and wrap up. We would like to thank everyone for joining us today. We'd also like to extend a very special thank you to Brendan for moderating the Q&A portion of today's call and asking the questions most asked by investors.

If you have any follow-up questions, please contact the Investor Relations team here at Oracle. This concludes our call.

Operator

Ladies and gentlemen, thank you for participating in today’s conference. This does conclude the program and you may all disconnect. Everyone, have a great day.

Brendan Barnicle - Pacific Crest

Thanks Shauna, thanks Andy.
