Seeking Alpha

Datawatch Corporation (NASDAQ:DWCH)

Introduction to Information Optimization Conference Call

January 24, 2013 7:30 PM ET

Executives

Tim May – Senior Manager Presales Asia Pacific

Olivia Jam – APAC Partner Enablement Director

Tim May

Welcome to this Friday morning brief, where we will be discussing an Introduction to Information Optimization. My name is Tim May, and I will be leading you through the WebEx today. I am responsible for pre-sales in Asia Pacific for Datawatch. So, without any further delay, I will begin going through today's presentation.

But first, just a couple of housekeeping issues. To start with, I assume you have been able to call into the WebEx without any problem from your phone. Also, importantly, today's session will be in presentation mode only, which means attendees are muted. All questions will be answered at the end, so as we move along, please type your questions into the chat area, and at the completion of today's presentation I will try to answer as many of them as I can.

I'll also turn it over to Olivia: if you have an important question, you can raise your hand. There should be a little icon there for you to raise your hand, and I will try to take those questions at the end as well, okay.

As for timing today, the presentation will take approximately 30 to 40 minutes, and then we should have time for some questions at the end.

Okay. So, I'll begin now by talking about information optimization. Just a brief agenda of what we're going to talk about today: I'll give you an introduction to who Datawatch is and what we do. I will give you an overview of what we mean when we talk about information optimization, and particularly what that means for data variety and for BI as we know it. I will discuss what we mean by filling the content blind spot in your data landscape. I will also go through the Datawatch solution and what it is from a product point of view, give you some examples of how we're using it in different industries with different case studies, and give you a short demonstration as well, so you can see how the product works.

Datawatch as a company has been around for some time. We are listed on the NASDAQ and based in Chelmsford, Massachusetts, which is very close to the city of Boston. We are pioneers of what we call information optimization, and as I mentioned, I will discuss what this means in a moment. We have offices across the globe. We have the biggest presence in the US, we also have a big presence in Europe and obviously here in Asia-Pacific. We have a very strong partner network across the globe, and here in Asia Pacific we have been focusing our attention on building out a partner network over the last six or seven months.

We have over 40,000 customers worldwide, with 99 of the Fortune 100 companies using our software. So we have a very, very wide spread of clients who are actually using our software currently. In terms of which industries we are very strong in: traditionally financial services, and in particular banks, have been our strongest industry vertical, with most of the major banks globally using our solutions. We are very strong in healthcare, in the United States in particular. And more recently we have a lot of very, very good retail and manufacturing solutions, as well as government, okay.

To give you an idea of who our customers are and what industries they are in: as I mentioned, we are very, very strong in the banking and financial services area. Citibank in particular has over 6,000 users of our software. We also have a very good spread across government institutions, healthcare institutions, retail, manufacturing and also technology. You can see in the picture that Microsoft is using our technology in its call centers, using our software to integrate data in the call centers.

Looking at who our customers are in the Asia Pacific region, we have again a very, very diverse spread of customers. We have a strong presence in Japan, where some of the large banks are big users of our software; a lot of the large banks in Singapore, for example, are big users of our software; and Cambridge, based in Singapore, is another big user of our software.

And looking at other areas, we recently won a big account based in Japan, providing retail- and wholesale-level solutions for their business. It is a very, very important solution for the customer because of the range of value it provides to them, and it is also an important solution for us.

Okay, now I'll move on to discussing what information optimization is. So, when we talk about information optimization, what we're really saying is that we deliver data variety to BI. Information optimization solutions allow organizations to deliver any data to every application across the entire enterprise, to improve business processes and deliver analytical insight.

So, the key is the concept of variety. Traditional BI, as you may be aware, takes data from structured data sources only. When we're talking about adding data variety to the picture, we're talking about bringing in all the other data sources in the organization that traditional BI cannot access. This could be information residing in reports such as Excel files, PDF files or text files, for example. Currently, traditional BI cannot access these documents, and needs to rely on accessing data from a relational database.

What we're saying is that you don't need to stop there: we can supplement the information in your data warehouses and relational databases by supplying the semi-structured or unstructured data, complementing the existing investment in your BI, to give you a much, much broader picture of all the data across the organization.

To explain this concept a little more: when we talk about big data, we're really talking about the three Vs of big data. First, data volume, which obviously means the size of the data; we're very familiar these days with talking about terabytes' worth of data. Second, data velocity, the speed of data; we are also familiar with processing data in near real time.

And the third V, which we may not be so knowledgeable about, is data variety. And this is really where Datawatch has made a name for itself. So, when we talk about data variety, we're talking about structured data, which means data in existing relational databases, i.e. your existing BI. And we're also talking about semi-structured data, which could be data residing in all your existing reports. So, all of the reports sitting in what we call data silos across your organization, sitting in Excel files or text files or PDF files or log files or EDI streams, for example: we can now bring all that additional information together and use it to complement the structured data that you may already be utilizing within your organization.

Okay. So, to give a little more context to this and discuss what we call the information optimization challenge: within your organization you may have BI data, you may have trusted operational data and other sources like your CRM systems or your existing ERP. And you may have third-party data, which is data you are buying in or integrating from outside the organization, for example data from vendor invoices, or market research data from outside third parties. So, what we're saying with the information optimization solution is that we can bring all of this data together in what we call the information optimization platform, so you get the whole story, allowing you to leverage all of this data and supplying the information you need to make better business decisions.

Okay. Just to add a little more to the idea of what big data means. Most people and organizations, even at a non-technical level, are familiar with ERP data. This was obviously one of the big concepts in IT 10 to 15 years ago, where we could develop solutions to manage data across the organization. ERP was supplemented by CRM later, and now we talk about data that's on the web or available in other sources. And when we combine all of this data together, we're really talking about what we call big data. And at Datawatch we are able to leverage all this data from all of these different sources to really give you visibility of this big data across the organization.

Now let's talk about what this means for BI, looking at the different vendors in the BI landscape at the moment and where we sit in relation to them. So, we've talked about structured data, and the players that work with structured data you can see on the left here: for example Qlik, for example SAP, for example Oracle. What we mean again by structured data is that this particular software needs to access relational databases to enable you to leverage the information.

Sorry, I’m going to pause the presentation because I think all of the attendees are having audio problems. Olivia.

Olivia Jam

Yes.

Tim May

Can you hear me? Because the others can't.

Olivia Jam

I recommend that people dial in to the line and listen in.

Tim May

You can only dial in, that's the only way you can access it. No one could hear us; everyone is chatting to me saying, we cannot hear, we can't hear.

Olivia Jam

(Inaudible).

Tim May

Well, I’m not sure I can hear you as well. Hello, are you able to hear me?

Olivia Jam

If you're unable to hear us, can you raise your hands please? There is a little box next to your chat where you can actually raise your hand. Okay, so that I can see: if you're having problems hearing, can you raise your hands?

Tim May

I can see your chat message, yeah.

Olivia Jam

If you don't have a problem, can you just type in the chat and let us know. So, I assume we don't have any problems. Tim, is anybody raising their hands or asking you in the chat?

Tim May

A few chatted individually to say they can't hear. I've got about six or seven saying they can't hear us.

Olivia Jam

Okay.

Tim May

I guess it's the dial-in they're using from their side that's the problem. As you know, it's not facilitating the dial-in number; that's also our problem. I think the problem is, I mean, I'm using the Skype dial-in with the Singapore number, because there is no Philippines number.

Olivia Jam

When you log in, there should be a list of numbers they can actually choose from?

Tim May

Yeah, but there is nothing for the Philippines, it’s been recording the whole time.

Olivia Jam

Tim, could you continue first, for all the people that are actually dialed in from other locations? And then we can try and reschedule. We have a presentation next week and we will look into that problem, rather than holding up everybody else.

Tim May

Right. Right, can we put it in the chat box to everyone that I'll continue?

Olivia Jam

Okay, yeah.

Tim May

Okay, sorry. Okay, I'm very sorry for the audio problems. So, where Datawatch fits is in what we call the content blind spot. This is the area between structured and unstructured data, where we are working to take existing reports that reside across the organization and integrate them into existing business intelligence software solutions if you have them; or, if they don't exist already, we are able to provide the analytics on top of this semi-structured, structured or even unstructured content to give you the analytical insight you require. So, we call this semi-structured area the content blind spot.

To look at it from another angle: with existing BI, as we know, we can access structured data sources to give you a view of the data already residing in the relational databases in your organization. This was fine, except, as we know, we're taking only structured data. So, what about data in unstructured sources such as machine data or log files? Or what about data in semi-structured sources such as existing reports, PDF files or EDI streams?

So, what we are able to do is what we call taking off the BI blinders: enabling you to access data from all of these sources across your organization and bring it together into your existing business intelligence, allowing you to leverage all of this additional information. It could be data coming from routers, switches or sensors; it could be data coming from invoices from outside your organization, mainframe reports or other external data sources. Okay. So, this is the idea of taking off the BI blinders: allowing your BI to leverage all this additional information, as well as the data it currently accesses.

Again, to look at it from another angle, and what this means for BI. In your existing business intelligence, you may currently be looking at six different dimensions. And that's fine if all the information you need is in those six dimensions. But what if you need more dimensions? What if you need to understand this by priority as well? What if you would like to bring in the distribution partner, what if you would like to take in the invoice number, for example, and what if you would like to take in a date range? These may be data sources that are not in your existing BI framework, data that cannot be accessed by your existing BI.

So, what we are able to do is take those existing dimensions that you are looking at with your existing BI, add in all the other dimensions that you may not be looking at, and really give you the ability to combine all those different dimensions and perform all of the analytics you might expect when you can leverage all of the different information across your organization.

What we'll do now is move on to our product platform, and the products we use to deliver what we call information optimization. Essentially this comprises three products, which together make up information optimization. On your left you see we have Monarch Professional, which is used to capture and transform the data residing in those other data sources: the data in those Excel files, PDF files or text files, for example.

We have Data Pump, which is used to move the information captured from those reports to wherever it needs to go, for example into an existing data warehouse, or into a consolidated report which can then be distributed across your organization. And then we also have Enterprise Server, which works to house all of the data and all of the reports you are capturing from those different data sources, and allows you to do analytics on all these different data sources across your enterprise.

To look at it again from another angle, you may have data coming from all these different trusted data sources across your organization. Again, the concept here is a variety of data, so the data could be coming from many different sources.

With Monarch Professional we are able to model and transform this data into the form you require. And then with Data Pump we can push this information into, for example, existing BI or data warehouse solutions, or push it into Enterprise Server to enable analytics capability across your organization. The key concept with Enterprise Server is to think of it as taking data from every part of the organization, from the marketing department to sales, to finance, to human resources for example, and being able to analyze all of that data in one holistic view, again enabling you to have data visibility across the entire enterprise.

Within Enterprise Server, we also have the ability to do the kind of complex analysis you would expect from, for example, using Excel, and also to distribute this information anywhere it needs to go within the organization. So this whole process of capturing the data, modeling it, filtering it, integrating it, automating it, storing it, sharing it and then analyzing it is what we call information optimization.

So, looking from another angle at what we can do with information optimization to leverage the existing IT infrastructure you may already have: you may have data residing in databases that is pushed into an existing data warehouse, and then you may have a BI environment being used to capture, represent and visualize the data that sits in that data warehouse. That's fine, as long as those underlying databases contain all the information you need to analyze. The chances are they don't cover all the information in the organization, so you are left without the full picture.

What we are able to do is supplement the information in that data warehouse with all these other information sources: the PDF files and log files and mainframe reports, existing spreadsheets and existing database information, or other sources you may not currently be able to access. With the information optimization platform, we can capture all of these different data sources in their original form and push them into your data warehouse, okay. This enables you to leverage your data warehouse, and to store all the additional data sources that you are not currently storing in your data warehouse.

We can then take this information out of your data warehouse and distribute it in all the different directions you may need: you can push it to Excel or PDF, you can push it to another database, you can push it to a content management system or an FTP site, or you can distribute it by e-mail or RSS, okay.
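The fan-out described here can be pictured in a few lines of code. The following is only an illustrative sketch, not Datawatch's Data Pump: the row data is invented, and CSV and JSON merely stand in for the Excel, database and feed targets mentioned above.

```python
import csv
import io
import json

# Hypothetical consolidated rows taken out of a warehouse. The real
# targets (Excel, PDF, FTP, e-mail, RSS) are stood in for here by two
# portable text formats rendered from the same data.
rows = [
    {"region": "APAC", "sales": 120.0},
    {"region": "EMEA", "sales": 95.5},
]

def to_csv(rows):
    """Render the rows as CSV text, e.g. for an Excel-style target."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = to_csv(rows)       # one rendering of the data...
json_text = json.dumps(rows)  # ...and another, from the same source
```

The point of the sketch is simply that one extracted dataset can be rendered into as many delivery formats as there are destinations.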

So, the key concept here is that if you have an existing data warehouse and infrastructure, you can do so much more with it: we can leverage that existing infrastructure to enable you to store and analyze a lot more information, the information that resides in all these different data sources, and then distribute it as needed across your organization.

To look at it again from another angle, and to see how we can use the information optimization solution: again, we have data coming from a variety of different source types, everything from PDF, text files and Excel files, to other databases, to information that may be coming in from HTML formats. We are able to capture this information, combine it, manipulate it and transform it, and push it into an Enterprise Server environment to store all of this information in an intranet portal, enabling you to use all of this information across the entire enterprise. You can apply all of the business rules you would expect from using Excel, for example, to perform your analytics.

We can then distribute the information and visualize it as might need to be done across your organization. We also have a concept called data lineage, and this is very important. This is the concept of being able, with one click, to drill down to the original source document. And this is incredibly important from an auditing point of view. The ability to access the source file from which the information shown comes is a very, very highly valued tool. And as I mentioned before, we can distribute it across the organization as it needs to be.

So, when we talk about the ROI and the business case for a client: the business case really is that we're able to improve resource efficiency by optimizing the processes. That means automating a lot of the copy, paste and re-key functions currently being performed; we don't need to do these anymore, because we are able to access the original source file the data came from and automate that extraction and data movement to wherever it needs to go.

And of course, once we have all the information, we are able to maximize revenues and cost efficiencies by making timely, data-driven business decisions, because we have all the information available.

Okay, the information optimization solution is available in the cloud as well as on premise. We have a hosted solution delivered as a web service, where users can pay a monthly fee and access all the information they need without the additional expense of a software and hardware investment. We also, of course, have an on-premise solution as well.

So, in terms of summarizing what this means to an organization: what information optimization means to all the familiar roles in an organization is really any data for everyone. At an individual level, we enable mining and personal discovery of data at the desktop level, for example.

We also create coordinated actions for workgroups and smaller departments within the organization, particularly with Data Pump, to automate processes being performed at a department level. And then we can take this up to the entire enterprise, accessing all of the data sources across the whole organization to enable analysis at the enterprise level. We have the solution available on premise and in the cloud.

This is really what we mean by any data for everyone. Okay. Now, how are the solutions being used in specific industries? For example, in the financial services industry, we have the challenge of a huge number of brokerage statements being generated every month at a brokerage. What they are looking to do is bring all of these brokerage statements together and then enable self-service BI analytics for the individual brokers, who can then log into the Enterprise Server environment and have information on their particular transactions available for them to analyze.

In the technology area we have Logica, a large technology firm based in Sweden. This is a very similar usage case to the one I previously mentioned, whereby Logica is able to take data from its clients: ingesting all the different data sources from an organization, housing them in Logica's cloud solution, and then enabling the client to log back in and perform their own BI. So this is really BI as a self-service, whereby a client can essentially outsource their BI to Logica, who is using our solution to bring all these different data sources together and allow the client to access and perform their own business intelligence analytics on their data.

In the financial services industry, again, a very similar solution, where we are bringing in data from different sources. Brokerage statements again: bringing these brokerage statement data sources together, consolidating them, generating a consolidated file, and moving that file of data to an internal system, where the file created from the different data sources is ingested into their internal system.

A mortgage example has two data sources from two very distinct places: on one hand we have application information, and on the other hand we have credit rating information, coming from two very, very different data sources. Okay. So what happens is the mortgage company essentially has to reconcile these two data sources to come up with rating information on individual applications. So what we are able to do is automatically take those two different data sources, perform the reconciliation, and then provide the output the client is looking for.

In the manufacturing industry, for example Adidas: for Adidas what we are doing is bringing customer and financial data together from very different sales markets across Europe and across the world. So again, taking data, for example, from an SAP-based financial system, combining it with CRM data from different markets, and then creating a consolidated view of all that data.

To look at a business process optimization example: this is a more generic example, specific to the automotive industry, where we are doing a significant amount of work. The example is that you may have different departments across your organization, for example marketing, sales, after-sales, and finance and administration departments, each creating processes to govern the activities within those departments.

At a lower level, it may be the CRM, inventory, logistics, sales force, stock, service and accounting functions. Okay, so the processes within the departments are typically controlled at the department level. And what happens is that, when systems are implemented to support or perform the tasks required by these different departments, we often end up with systems which are designed to meet only the needs of one specific process and one specific department. For example, the marketing department implements a CRM system which is only able to perform the activities required by the marketing department. We may have a logistics system or an inventory management system which is available to the logistics department. And we may have an ERP which, if we are lucky, covers the areas of sales and maybe after-sales and service, but no more. And then we have a finance back-end system.

So the problem with all of these systems is that, without integration between them, we have what we call data silos developing, where lots and lots of data resides in Excel or other resources, because we can't move it across the organization, okay, and we cannot combine the data from these data sources efficiently across the organization.

The real point here is: with this sort of environment, how can the whole story be understood? How can you understand all the information in your organization if you don't have integration between these different systems? Now, the information optimization answer to this problem is that we don't need to integrate those systems at an infrastructure level. What we can do with the information optimization platform is essentially capture all of these different data sources, no matter where they may reside. Okay. Whether it's data coming from the CRM system, whether it's data coming from those PDFs or text files or Excel files that may have developed, whether it's coming from the SAP system for logistics and inventory management, or the enterprise resource planning system, or the finance system. We can bring all of that data together and give you the whole story in an analytics environment.

Again, we have the concept of data lineage to show you that, when the data is brought into an analytics environment, we can then show lineage back to the source the data came from, to enable that auditing functionality.

That is essentially an overview of who we are, what we do and how we do it from the product point of view. What I'd like to do now is give you a short demonstration of how the Monarch products work to capture data from existing reports, and how the Enterprise Server environment looks to provide the analytics component for all of these different data sources.

So, Monarch Professional is really the engine behind all of the technology. Monarch Professional, as we discussed before, enables you to capture the data from any data source. So for example, we may have a text file coming from an SAP system, which we can ingest into Monarch very simply, and with a single click on almost any level of information in that report, we can bring all of this information into the trapping environment.

Okay. Now, this is really the secret sauce behind how we capture information with Monarch. What we are looking to do is find a common character (in this case it's a decimal point) which appears on every line of the report; then, due to the consistency of that character, whenever this report is run again, we will capture all of the data every time. This is what we call creating a trap, okay. So once we have created a trap to capture information at the detail level of the report, we can select the appropriate role, in this case detail.

We can highlight the fields of information we are looking to take into our analytics environment. In this case we are taking information which shows the media type, the quantity of the purchase, the description of the purchase, and the label number. Now, we have to do the same for headers and footers, so again we are highlighting, in this case, the contact name, which you see at the header level. Again we create a trap to capture all of the information and define the field names that we're looking to take into the analytics environment. So in this case, by using the trap word "Contact", Monarch is going to look down this report and, every time it sees the word "Contact", take that line.

So you see what we are able to do: from a semi-structured text file, by creating trapping templates for all the information in that file, we are able to bring all of that information into a data view. This has a look and feel very close to what you may be used to in Microsoft Excel. So we've been able to take data that was not in a structured form and, as you can see, bring it into a structured environment, enabling us to perform analytics on the data.
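The trapping idea described above can be sketched with ordinary regular expressions. This is a toy illustration of the concept only, not Monarch's actual implementation; the report layout, trap characters and field names below are all invented for the example.

```python
import re

# A hypothetical report fragment: a "Contact" header line followed by
# detail lines that all contain a decimal point (the trap character).
report = """\
Contact: ACME Pty Ltd
10034  Widget, small      12   4.50
10035  Widget, large       3  11.25
Contact: Beta Corp
20017  Gasket set          7   2.10
"""

records = []
contact = None
for line in report.splitlines():
    # Header trap: the trap word "Contact" identifies header lines;
    # the value is filled down onto the detail rows that follow.
    if line.startswith("Contact:"):
        contact = line.split(":", 1)[1].strip()
    # Detail trap: a decimal number on the line identifies detail rows;
    # the match then carves out the highlighted fields.
    elif re.search(r"\d+\.\d+", line):
        m = re.match(r"(\d+)\s+(.+?)\s+(\d+)\s+(\d+\.\d+)$", line)
        if m:
            records.append({
                "contact": contact,
                "label_no": m.group(1),
                "description": m.group(2),
                "quantity": int(m.group(3)),
                "price": float(m.group(4)),
            })
```

Because the trap keys on a consistent character rather than on fixed positions, re-running it over next month's copy of the same report captures the new data with no changes.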

So we have all of the data fields we defined in the traps previously. From here we are able to do all of the summaries, calculations and business rules that you would expect to be able to use in Excel. For example, we have been able to calculate totals. We are also able to do a chart representation of all the data. This can also be exported to Excel very easily, or to any other target you may require. Okay.

Next, Monarch Context. This is the concept of data lineage which I explained before. With data lineage via Monarch Context, what we are able to do is add a digital signature to indicate that the export you have done to Excel is a true representation of the original source data you extracted from the file. We are also able to give you the context to drill back to the original source document.

So for example, when I have exported to Excel, I can take any field in this extraction and, with a single click, you can see that a digital signature is applied, so it is verified as representative of the source document itself. And with a single click I am able to bring up the report where that information came from, at the exact line, okay. So this is the concept of data lineage, or as we call it, Monarch Context, which gives you traceability back to the source data.
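One way to picture the lineage mechanism is to store, with every extracted record, the source file, the line number and a digest of the raw line; the drill-back then just re-reads and verifies that line. This is a minimal sketch under that assumption, with invented file names and rows; it is not how Monarch Context is actually implemented.

```python
import hashlib

# Hypothetical extracted rows; in a real system these would come from a
# trapping step like the one sketched earlier.
source_file = "sales_report_2013-01.txt"
source_lines = {
    4: "10034  Widget, small      12   4.50",
    5: "10035  Widget, large       3  11.25",
}

def with_lineage(line_no, values):
    """Attach lineage metadata: source file, line number, and a digest
    of the raw line so an export can later be verified against it."""
    raw = source_lines[line_no]
    return {
        **values,
        "_source": source_file,
        "_line": line_no,
        "_signature": hashlib.sha256(raw.encode()).hexdigest(),
    }

row = with_lineage(5, {"label_no": "10035", "quantity": 3})

def drill_back(record):
    """One-click drill-down: return the original report line, checking
    that the stored signature still matches the source."""
    raw = source_lines[record["_line"]]
    assert hashlib.sha256(raw.encode()).hexdigest() == record["_signature"]
    return record["_source"], record["_line"], raw
```

The digest is what makes the lineage auditable: if the source report were altered after extraction, the verification in `drill_back` would fail rather than silently return a different line.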

So in summary, we can take this one report, extract all of the information at the detail level, and create a summary of all that information, with exception highlighting to show values outside predefined ranges. We are able to cross-tabulate the information, do graphing of that information, and even build tables and charts of all that information as well.

Okay, so that is a short introduction to how Monarch Professional works. Again, Monarch Professional is the technology we use to capture all of the data. From Monarch Professional, we are able to leverage automation at the enterprise level to house all of the data and enable analytics to be performed.

In the same context, let me explain how the server works. With Enterprise Server we have login credentials and access control for user roles and permissions. So you may have fifty users across your organization at the enterprise level, and all of those fifty users can be set up to view and perform analytics on the data where they have permissions.

You can see that all of the reports have been ingested into the enterprise server solution. We are taking in the different sales reports every day, and in this demonstration you can see two different types of reports being ingested.

These reports are ingested automatically every day. The Monarch Professional technology is used to capture the information from the reports and push it into the enterprise server automatically, so that analytics capabilities are available whenever a user needs them.

We are able to view the native report, to see how it looks at the original source level, okay. In this case we have ingested five reports, shown here as they looked before Monarch. They could be in any format: PDF, Excel, text files, or log files for example. We are able to bring them into an analytics environment which we call the dynamic table view. This environment enables us to apply all of the different business rules we may be used to in Excel. We can also export the information from the dynamic table view into other formats, for example Excel or PDF, either as detail or as a summary; in this case I have exported a summary of the information we extracted.

We can perform the calculations we would be used to in Excel. For example, I am adding a column calculated from the sales data I have extracted; you can see we have created a new column which represents the new calculated field. Okay. So again, to show you the idea of data lineage, I can select any row in this table view environment and with a single click bring up the report the data originally came from. Okay. This makes auditing very easy.
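[A calculated column of this kind can be sketched as deriving a new field from each extracted row. This is a hypothetical illustration; the 10% commission rate and the field names are assumptions, not values from the demo.]

```python
# Illustrative only: adding a calculated column to extracted rows,
# akin to the new calculated field created in the demo.
rows = [
    {"item": "A", "sales": 200.0},
    {"item": "B", "sales": 50.0},
]

for row in rows:
    # Derived field: an assumed 10% commission on each row's sales.
    row["commission"] = round(row["sales"] * 0.10, 2)

print(rows)
```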

Another very strong value-added feature is the ability to join different data sources together. If you are familiar with existing data warehousing structures, you will know that to join two reports you typically need to ingest both of them and structure them first before the join is possible. What we are saying is that you don't need to do that anymore: we can take two reports in their original source formats and join them in the enterprise server environment. Okay. So for example, I have one report coming from an Excel source, and I also have another report in a different format. We look for a common field between the two reports; in this case we have a label number which is common to both.

And then in the enterprise server environment you can see that the two reports have been joined together. This enables you to join data from very different sources, for example data coming from the sales environment joined to data coming from the finance environment on a common field such as customer number, and then to analyze those two data sources in a single environment.
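[The join described above can be sketched in plain Python as matching rows from two extractions on a shared key. This is only an illustration of the concept; the "customer" key and field names are assumptions for the example.]

```python
# Hypothetical sketch: join two report extractions on a common field,
# as in the sales-to-finance join described in the demo.
sales = [
    {"customer": 1, "order_total": 300.0},
    {"customer": 2, "order_total": 120.0},
]
finance = [
    {"customer": 1, "balance": 50.0},
    {"customer": 2, "balance": 0.0},
]

# Index the finance rows by the common field, then enrich each
# sales row with the matching balance (an inner join).
balances = {row["customer"]: row["balance"] for row in finance}
joined = [
    {**row, "balance": balances[row["customer"]]}
    for row in sales
    if row["customer"] in balances
]
print(joined)
```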

We also have dashboarding functionality in the enterprise server environment, very similar to the chart I showed you before in Monarch Professional. These dashboards are very easy to build and give you the visualizations you would expect from any BI front end. From these dashboards we can drill back to the table view line the data came from, again enabling data lineage. Okay.

So that concludes the demonstration, and that essentially concludes the presentation we had for you today. I would typically take questions now, but I think we are still having some audio issues for the Philippines, so I will just talk to our other panelist, Olivia, and see whether we can take any questions.

Hi Olivia you there?

Olivia Jam

Yes, Tim, I am here.

Tim May

Okay.

Olivia Jam

So we haven't been able to resolve the problem in the Philippines, so we will repeat this webinar next Friday for everybody. For those who couldn't hear us, we will repeat it, and that has been communicated in the chat box.

Tim May

Right, so I don't think we will be able to take questions today because of the audio issues we had. So I will wrap up for today. Thank you very much for your attendance. If you do have any questions, you can contact us and we will be more than happy to answer them via email. Okay, that will conclude the meeting for today. Again, thank you very much for your attendance, and we would be happy to hear from you again soon. Thank you.

Question-and-Answer Session

[No Q&A session for this event]


Source: Datawatch Corporation Management Hosts Introduction to Information Optimization (Transcript)