Datawatch's Management Hosts Information Optimization Conference (Transcript)

Feb. 4, 2013 | About: Datawatch Corporation (DWCH)

Datawatch Corporation (NASDAQ:DWCH)

Information Optimization Conference Call

January 31, 2013 7:00 pm ET


Tim May – Senior Manager Presales Asia Pacific

Tim May

Just to test the audio one more time. Please let me know if you can hear me. Well, if you can hear me, please send me a short chat message to say that you can hear me again. Okay, good. It seems that most of you can hear me and some of you, a couple of you, are having problems. So I’m going to get started again.

Okay. So today, we’re going to be talking about Information Optimization. Just a little bit of an agenda. So I’m going to give you an introduction about who Datawatch is and what Datawatch does as a company. We will talk about understanding data variety and the content blind spot. I will talk to you about the Datawatch Solution, to give you a bit of background about the product. I will also give you an overview of some interesting industry solutions and case studies, and there will also be a short demonstration at the end, so you can understand a little bit more about how the technology works.

Okay, so Datawatch today. Datawatch is an American company based very close to Boston. We are a NASDAQ-listed company and we have been around for some time. We are pioneers of what we call Information Optimization, and obviously today I will be telling you about what that means. We have offices all across the world; in particular we have an office in Singapore, which services the APAC region, and we also have an office in Australia, which services the Australia and New Zealand markets as well. So we have a very good presence in the region.

We have over 125 resellers and 75 partnerships worldwide, and we have over 40,000 customers worldwide. 99 of the top Fortune 100 companies are using the technology, and 497 of the top Fortune 500 companies are using the technology. In terms of the industries where we are very strong, traditionally we’ve been very strong in financial services, in particular banks. We are also very strong in the healthcare industry, and we’re beginning to develop solutions for manufacturing, retail and especially government.

So you can see some of our customers here, and you can see that we have a very diverse range of customers across a range of different industries and different verticals. More regionally, you can see that in the Asia Pacific region we also have a very diverse selection of customers shown here. We have a lot of the banks based in Japan and in Singapore, as well as the Philippines. And particularly, you see at the bottom there, we have Audi in Japan, who is one of our bigger customers and who is using the technology extensively across their wholesale and retail automotive reseller operations.

Okay, so now I am going to give you an introduction into what we do and what we mean by this concept of Information Optimization. So Information Optimization delivers variety to Big Data and BI. The key word here is variety. Information Optimization solutions allow organizations to deliver any data to every application across the entire enterprise to improve their business processes and deliver analytical insight. So what we are really saying is that we can take a variety of data from a variety of systems and bring it all together to enable organizations to analyze all of this data together.

Okay, to give you a little bit more background about the concept. So if you’re familiar with the idea of Big Data, Big Data is a trend that’s floating around the IT world these days that you’re probably familiar with. So you’re probably familiar with the concept of Big Data as data volume, so a large number or a large volume size in terms of data. And you’re probably also familiar with data velocity, and particularly the concept of data being near real time, so we want real-time information.

So maybe you’re less familiar with the concept of data variety. And this is probably the most important aspect of the Big Data 3Vs. With data variety we’re talking about structured data, unstructured data and semi-structured data. And this is really where Datawatch is placed very well. This is our strong suit: bringing all of this data together, whether it be structured data, semi-structured data or unstructured data, we can bring this all together and allow organizations to analyze all of this data together and make better business decisions.

Okay, so what do we mean when we are talking about all of these different data sources? To give you an idea about what the Information Optimization challenge is: maybe, for example, in the organization we’re talking about, we have BI data, which may be coming from a data warehouse and feeding into existing BI applications. Maybe we also have trusted operational data, which may be coming from an existing ERP application or CRM application or a financial application. And maybe we also have a lot of third-party data, which may be, for example, invoices coming from outside vendors, or it may be market research data that the organization is purchasing from a third party.

So all of this other data is not typically stored in data warehouses or in operational systems. The Information Optimization challenge for us is to bring all of this data together into what we call our Datawatch Information Optimization platform, and what this does is allow you to get the whole story, meaning be able to understand all of the data in your organization, bring it all together and be able to analyze all of this data together.

Okay, when we talk about Big Data again, what we are talking about is all of the data. So you’re probably familiar with ERP data, which was a very, very strong trend maybe 10 to 15 years ago. Then came CRM data, to add the concept of customer relationship management on top of ERP, and then of course the focus became accessing web data as well. And when we put all of this data together, now we’re really starting to talk about Big Data, okay. So Big Data really means multiple sources coming from anywhere within or outside the organization. So really there is a variety of data, which we are trying to bring together.

Okay, so to give you a little bit of appreciation for the different players in the BI landscape and where Datawatch fits in particular. On the left side you see structured in terms of data type. Most of the traditional BI players play in the field of structured data, so, for example, QlikView, SAS, SAP and Oracle. So really when we’re talking about structured data, we’re saying that for these applications to work, they need to be supplied with data which is sitting in an existing relational database or data warehouse. So that data needs to be sitting in a nice columns-and-rows type of structure.

On the other end of the scale, when we look at unstructured data, we’re familiar with the Googles and Adobes of the world, which are really taking data which may be sitting out there on the web. Where Datawatch fits in is the area that we call the content blind spot. The content blind spot means all of the semi-structured data which is sitting in the organization.

So, what do we mean by semi-structured data? We’re talking about data which may be sitting in existing Excel reports or HTML files or log files or XML files or PDF or invoice data. Any data which is sitting there in an existing report form is what we call semi-structured data, and this is really the content blind spot, because this data that is sitting there in those existing reports is not being analyzed at the moment. So, what we are able to do at Datawatch is take all of that semi-structured data, combine it with data which may be sitting in your existing structured data source or your existing BI application, then also combine it with unstructured data, and really begin to bring all of that data together and allow analytics over the entire spectrum of data which is in the organization. So, this is what we mean by the content blind spot.

So to stay on the theme of BI and what Datawatch means, and what Information Optimization means to BI, I will talk about a concept we call BI blindness. What BI blindness means is that if you have an existing BI application, it’s taking data from structured data sources, so, again, taking data from your existing data warehouse. So, if you are taking data from your data warehouse, how much data is there in relation to all of the data that may be in your organization? Maybe, if you are lucky, 30% to 40%.

So, what about all of the other data that’s in your organization? What about the unstructured data which may be sitting in log files or other machine data sources? What about the semi-structured data, all of this other data which is sitting in existing reports or PDF files or EDI streams? Unstructured data may be coming from routers or sensors or switches or off the web or other data sources, and semi-structured data may be coming from invoices, mainframes or other external data sources. So when we say we’re taking off the blindness, what we mean is that we are allowing existing BI applications to essentially take in all of this extra data to complement the existing structured data sources that they are leveraging already.

So the BI application that you are using, whether it be QlikView or Tableau or something else, is able to leverage so much more information across the organization, and that’s really where Datawatch is able to complement existing BI. If you do not have existing BI, that’s also not a problem for us, because we also provide the interface to be able to analyze and visualize all of this data as well.

Okay. So, so far I’ve told you about what we do. What we do is this concept of Information Optimization. So hopefully, over the previous slides which I’ve just shown you, you have got an idea of what Information Optimization means, and the key word to remember when we’re talking about Information Optimization is variety. The variety is in bringing all of these various sources of data together.

What I’m going to talk to you about now is really how we do it, so more on the product side. If we are talking about bringing all of these different data sources together, what is the tool that we can use to do it? The tool that we use is what we call the Information Optimization suite, and this really consists of three parts. We have what we call Monarch Professional, which acts as the engine behind the Information Optimization suite and allows you to capture and transform all of the data which may be sitting out there in existing report formats or in other formats.

We also have Data Pump, which acts to distribute and automate the delivery of the data that we’ve captured with Monarch Professional. So if you are using Monarch to capture the data, we use Data Pump to essentially automate and distribute the data where it needs to go.

And the third part is what we call Enterprise Server, which acts as a repository for all of the reports and all of the data which you have mined from all of these different data sources. Okay. So, when you put them all together, we are able to transform data, we are able to distribute data, and then we’re able to optimize the analytics of that data.

To explain, I think, in a little bit more detail. So, you have data coming into the organization, or sitting in the organization, in a variety of different data sources. Again, this is the concept of data variety: data that is sitting in existing reports, or data that is sitting in existing data warehouses or other databases. So you have all this different data from different data sources. With Monarch Professional we’re able to capture, model and begin to transform this data into the format or into the output that you need, so you can analyze all of this data together.

Then, with Data Pump and Enterprise Server, we are able to integrate all of these different data sources and automate it, so we can automatically begin to pick up all of this different data. So if you have report data which is coming in on a scheduled daily basis, you can begin to automate the data extraction process, and we can deliver that data to where it needs to go. For example, if you do have an existing BI application, we can push that data to the existing BI application, or if you don’t, we can push it into an Enterprise Server environment, where you can do quite advanced analytics on all of the different data and you can visualize all of the different data, for example by using dashboards or other visualizations.

So, the key value of Enterprise Server is the analytics layer: the ability to do relatively complex analytics on all of the different data that you are able to extract from all of these different sources. So, for example, you can join different data together, maybe data from CRM joined with data from finance. We can do these joins on the fly. We can also filter data, we can search for data, we can do relatively complex calculations with data, and we can distribute it and save it, or push it into the format that you may require. So, hopefully, this gives you a little bit of an idea about what we mean by the Information Optimization platform and what products are applied here to create the Information Optimization platform.
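To make the on-the-fly join and filter idea concrete, here is a minimal sketch in plain Python. This is purely illustrative, not anything Datawatch-specific; the record fields (customer_id, name, outstanding) are invented for the example.

```python
# Hypothetical records extracted from two different sources.
crm_records = [
    {"customer_id": "C001", "name": "Acme Corp"},
    {"customer_id": "C002", "name": "Globex"},
]
finance_records = [
    {"customer_id": "C001", "outstanding": 1200.0},
    {"customer_id": "C002", "outstanding": 0.0},
]

# Join CRM data to finance data on a shared key, as the analytics
# layer would do on the fly.
finance_by_id = {r["customer_id"]: r for r in finance_records}
joined = [
    {**crm, "outstanding": finance_by_id[crm["customer_id"]]["outstanding"]}
    for crm in crm_records
    if crm["customer_id"] in finance_by_id
]

# Then filter, e.g. keep only customers with an outstanding balance.
with_debt = [r for r in joined if r["outstanding"] > 0]
```

The same pattern extends to calculated columns and searches over the joined result.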

To give you an example of how it might work in terms of your existing IT infrastructure. So possibly you, or your IT department, have an existing data warehouse. What the data warehouse is doing is taking in a variety of different databases and structuring them in a way so that they can then be utilized by an interface, for example an existing BI application, which is great if all of the data that you want to analyze is sitting in the database. So what if it’s not? I have mentioned before that typically somewhere between 10% and 30% of company data actually sits in a nice data warehouse. So you’re talking about a huge amount of data which is not in your data warehouse. This could be data which is sitting in PDF files or in XPS reports, or maybe in another electronic content management application.

What about all the data which is sitting in Excel? I know most organizations have a huge amount of data which is sitting in Excel spreadsheets, or in other database applications like Access. And what about other databases from other systems in the organization which are not integrated into the existing data warehouse? What we’re able to do, for example with the Information Optimization platform, is take all of these different data sources, and as new data sources are added in the organization, we can take them as well. We can insert all of this data into the data warehouse very easily, then combine it with the data which is in the data warehouse already, extract that out and then distribute it across the organization as it is needed.

For example, we can export it into formats like Excel, PDF, or maybe into another Access database or CSV files; we can push it into another database directly, or we can update a content management system, or push it to an FTP site, or email-distribute it to whoever may need the information within the organization. So really what we’re talking about is being able to leverage more and more value out of the existing IT infrastructure by being able to supply more and more data sources to it.

To give you another example: maybe you don’t have an existing data warehousing infrastructure, and maybe you don’t have an existing BI application. That’s fine as well. What you do have is probably a variety of different data sources, right? You may have all of your text-based, PDF-based, XPS or any other report formats sitting in your organization. You may have lots of spreadsheets, database files or text files sitting there as well. You may also have existing databases or applications which are generating data, and of course there is everything which may be sitting out there on the web, or in HTML file formats, which you’re also wanting to integrate.

So, what we are able to do is bring all of that data together, and with Data Pump we can combine, manipulate and transform that data and push it into Enterprise Server. In Enterprise Server, you have a secure, compliant archive of all of that data, and it acts as an Internet portal to enable the right people in your organization to combine this data, join this data and perform the sort of business rules that they would expect to be able to perform to analyze all of that data together. And then they can visualize it in a range of different ways. You can create charts. You can create dashboards. You have built-in analytics to be able to manage all of this data as well.

What is very important is that we have a concept called Data Lineage, which essentially enables you to trace back to the source file where the data came from. So, if you are pulling all of these different data sources together into one application so you can analyze all of that data together, there may be a concern of not knowing where the original data came from. That’s okay, because with one click we can trace back to the source file and literally pop up the file that the record came from, in the source file format. So this is a really, really cool function for anyone who is looking to create an audit trail for the data which may exist in the organization.
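The data lineage idea can be sketched very simply: every extracted record carries a pointer back to the source report and line it came from, so one lookup brings the original back. This is a conceptual sketch in plain Python, not Datawatch’s implementation; the report name and field layout are invented.

```python
# A hypothetical source report, held as lines of text.
report_name = "invoices_2013-01.txt"
report_lines = [
    "INV-001  Widget     250.00",
    "INV-002  Sprocket   125.50",
]

# Extract records, attaching a (file, line number) lineage pointer.
records = []
for line_no, line in enumerate(report_lines, start=1):
    invoice, item, amount = line.split()
    records.append({
        "invoice": invoice,
        "item": item,
        "amount": float(amount),
        "source": (report_name, line_no),  # the lineage pointer
    })

def trace_back(record):
    """Return the original source line for a record (the 'one click')."""
    name, line_no = record["source"]
    return name, report_lines[line_no - 1]
```

Because the pointer travels with the record through joins and exports, the audit trail survives however the data is recombined.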

We can also distribute the data as you need, whether it be by email, creating other exports in PDF or Excel format for example, or directly printing, sending it to a printer. So, this is an example of the sort of infrastructure that we may use if you do not have any existing data management infrastructure sitting there. So really we can provide a full landscape for Information Optimization within your organization.

Okay, the business case or the return on investment behind the Information Optimization platform is really quite compelling, because on one hand we have the ability to leverage resource efficiency improvement via business process optimization. And what that means is the ability, specifically using Data Pump, to automate processes which may be being done manually at the moment.

So, for example, if you have people in the organization who are copying and pasting data from one source to another source, or rekeying data into an application from another source, what we can do with Data Pump is essentially eliminate all of that extra work of rekeying or copying and pasting information, so you’re able to extract much more efficiency out of the resources in your organization. Then, of course, once you get all of the data together, you’re able to make better business decisions in terms of maximizing revenue or achieving cost efficiency, because you have all of the data together to create Information Optimization.

Okay, just moving on, so I’ve explained to you a little bit about the infrastructure and the product make-up of the Information Optimization platform. This is just to explain that we have an on-premise solution, so we can install the Information Optimization platform into a server, an on-premise server environment, or we also have a cloud offering to host the Information Optimization platform in the cloud, be it your own organization’s cloud or a third-party software-as-a-service type of application, for example with Amazon Web Services.

So really, in summary, what we have is the ability to achieve personal information discovery when we’re talking specifically about using Monarch Professional at a desktop level. We can then take that concept and do coordinated actions for workgroups and multiple users, and then leverage organizational effectiveness across the whole enterprise with an enterprise-level Information Optimization platform.

Of course, as I mentioned before, the solution can be on-premise or it can also be in the cloud, okay. So hopefully, what I’ve been able to do so far is explain to you what we do in terms of Information Optimization and bringing all of these different data sources together across your organization and how we do it in terms of the different components of the Information Optimization platform and what they do to work together to create this integrated data landscape, which can then be analyzed.

What I will do now is give you an overview of a few key industry solutions that we have done, to give you an idea about how Information Optimization can be applied to different industries and different clients, to allow them to achieve more value-added information analysis. So, for example, in the technology and financial services industry, in particular with Broadridge, which is a large U.S. brokerage firm: what they were looking to do is bring together over 200 million pages of transaction reports per month.

So we were able to bring all of their information together and put it into a consolidated Information Optimization solution, which enabled self-service analytics for individual clients. Specifically, what can happen is that individual brokers can log in to the Information Optimization platform which Broadridge is hosting, and those individual brokers can see or perform the analysis on their individual transactions, where previously that was not possible simply because of the volume of data that was sitting out there.

Another example, specific to the financial services industry, is with Standard Chartered. What Standard Chartered is using us for is to bring together over 500 broker statements per week. These broker statements come in a range of different formats and different sizes and shapes, and what we are able to do is bring those different brokerage statements together, consolidate them into one file, and then with Data Pump move that processed and consolidated file to the location where Standard Chartered wants it to go. So really we’re able to consolidate and aggregate broker statements in a range of different formats and move the consolidated file to the location where it should be.

In the mortgage industry, with AHMSI, an American mortgage firm, this is really a reconciliation solution. On the one hand you have customer loan applications, and on the other hand you have customer credit rating data. What we are able to do is take these two different data sources and essentially combine them and allow the client to do a reconciliation between a loan application and the customer’s credit rating. Essentially that eliminates a huge amount of manual work and also allows them to comply with regulatory requirements, which require credit checks to be done on individual applications. This next example is a little bit more detailed and is specific to an automotive solution that we rolled out quite recently. So, with this solution, with an automotive company located in Japan, you can see the processes at a company level are divided into marketing, sales, after-sales and finance administration processes. At a lower level, you have CRM, inventory logistics, sales force, parts, service and accounting and controlling processes.

So, the processes within this organization, and within a lot of organizations, are governed by department structure. The different departments in an organization have different processes, and of course those different departments have different data and system requirements. What that means is that with all those different systems, it’s very difficult to get a consolidated understanding of all of the data within the organization. When you start to overlay the systems on top of those processes, what you get, for example, is a CRM application which is specific to the marketing team or CRM department. You may have a logistics system or application which is specific to the inventory management and logistics department. You may have an ERP which covers, if you’re lucky, the sales and after-sales business units, and of course you most probably have an accounting and finance solution which is really used by the back office.

So with all these different systems in place, what happens is that data silos develop, because there is no integration between the different systems. Okay, and this is a problem, because when you want to move data from one application to another part of the business, you end up having to copy or extract data out of these applications, and most probably put that data into Excel to enable you to transfer data across the organization. So what happens over time is that you get more and more and more data which is sitting in Excel, or sitting in PDF files, or sitting in text files within the organization.

So really, how can the whole story be understood when you have data sitting in all of these different systems, and then you also have data sitting in all of these Excel, spreadsheet, PDF or text file formats as well? The Information Optimization answer to this problem is to bring all of that data together and enable you to get the whole story, okay. So, we are able to bring data directly from the individual systems, and then bring data from spreadsheets, or bring data from PDF or text files, all together, consolidate it and then provide the analytics environment you need to enable you to get the whole story, okay.

So from what I’ve shown you so far, I hope you can understand now what we do, or what it means when we’re talking about Information Optimization, how we do it, and a few individual examples of what it can be used for within organizations.

The last part of my presentation today is to give you a short demonstration of how two of the products work. So, what I’m going to do is show you how Monarch Professional works to capture and transform data, and how Enterprise Server works to consolidate all this data together and enable an analytics environment across the whole organization.

So, we’ll start with Monarch Professional. Monarch Professional essentially can be a desktop application, but it also acts as the engine to all of the technology, all right. So, at a desktop level, what you can do with Monarch is ingest a report; for example, in this case it’s an ABAP report out of SAP in a text file format. So simply, you can ingest that file very easily. You can then click on a line of the report to highlight a level of detail in the report, okay.

Once you have done that, you will be brought into a screen that looks something like this, which is what we call a trapping environment, okay. In this screen, what we are trying to do is look for a character on the line that we chose which is common along or through every line of detail in that report.

So, for example, if I select a decimal point, you can see that that decimal point occurs on every line of detail of the report. What that will mean is that once I confirm this model, when this report is rerun, the technology will be able to essentially model all of that data again and again and again, somewhat automatically. So we can extract the data defined by this trap automatically.
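The selection step just described can be sketched in a few lines of plain Python: pick a character that appears on every detail line (here, a decimal point) and use it to pull out those lines whenever the report is rerun. This is only a conceptual illustration of the trap idea, not Monarch’s actual engine, and the report text is invented.

```python
# A hypothetical text report: two header lines, two detail lines,
# and a footer. Only the detail lines contain a decimal point.
report = (
    "ACME MEDIA SHIPPING REPORT\n"
    "Contact: J Smith\n"
    "DVD   10  Action Pack   89.90\n"
    "CD     5  Jazz Set      24.50\n"
    "End of report\n"
)

def detail_lines(text, trap_char="."):
    # Keep only the lines on which the trap character occurs;
    # these are the report's lines of detail.
    return [line for line in text.splitlines() if trap_char in line]
```

Because the selection is defined by the trap rather than by fixed line positions, the same rule keeps working when tomorrow’s report has more or fewer detail lines.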

Okay. Once I have set that trap, I define the fields which I want to extract from this report. So again, in this case, I’m taking the media type, the quantity, the description, the label number and some pricing and amount information. I select the row as a detail row, meaning that it’s the most detailed level; it’s the most granular level of detail on this particular report. I can do this manually or by auto-defining the fields with one click.

I need to do the same for headers and footers which occur on the report. So, for example, you see the contact name, in this case, is a header which is above those lines of detail. So I take the contact and follow the same process. I set my trap this time as “contact”, meaning that the technology will look for the word “contact”, and then, if I define the field next to “contact”, it will extract that field every time.

I set the row as an append row, which means it acts kind of like a header. It looks down the report until it finds the next field which matches my trap, in this case “contact”, and then it takes the field next to that trap. And then, when I generate a data view, I am taking all of the data which I defined in those trapping templates and bringing it here into a, let’s say, much more structured view.
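The header-carrying behavior just described can be sketched as follows: the word “Contact” acts as the trap for a header line, and the value next to it is carried down onto every subsequent detail line until the next “Contact” header appears. Again, this is a plain-Python illustration of the concept with an invented report layout, not the product’s implementation.

```python
# Hypothetical report lines: header rows interleaved with detail rows.
lines = [
    "Contact Alice",
    "DVD 10 89.90",
    "CD   5 24.50",
    "Contact Bob",
    "DVD  2 19.90",
]

rows = []
current_contact = None
for line in lines:
    if line.startswith("Contact"):          # the trap matched a header
        current_contact = line.split()[1]   # take the field next to it
    else:                                   # a detail line
        media, qty, amount = line.split()
        rows.append({
            "contact": current_contact,     # carried down from the header
            "media": media,
            "qty": int(qty),
            "amount": float(amount),
        })
```

The result is the structured data view: every detail row now carries its contact, even though the original report stated it only once per group.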

So what we are able to do is take a semi-structured or unstructured report like that text file and, with a few simple clicks, create some templates which are able to capture all of the data off that report and bring it into this data view. And with the concept of automation, as I mentioned before, once I have defined the data model which will capture all of that data when that report is rerun, I can automatically extract that data and take it off that report and into my analytics environment, okay.

So, I am able to create summaries and calculations, very similar to what we can do in Excel in terms of analytics, all right. I am also able to create chart representations of that data, okay. And I am also able to export the data to Excel, for example. So, remember the concept I talked about earlier called data lineage. In Monarch we call that Monarch Context. What that means is that when I am exporting data to Excel, I can select context to apply and also a digital signature to apply. So when the data is exported into Excel from that original report, you can see down here in the bottom left, I have a digital signature applied, which is telling me the data has not been changed or essentially has not been touched since it came from the original report.
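The digital-signature idea is worth a quick sketch: sign the exported data with a hash, and any later change to the data will fail verification. This is a generic illustration using a SHA-256 hash, not Datawatch’s actual signing scheme, and the exported rows are invented.

```python
import hashlib

def sign(rows):
    # Serialize the rows deterministically and hash the result.
    payload = "\n".join(",".join(map(str, r)) for r in rows)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical exported data, signed at export time.
exported = [("DVD", 10, 89.90), ("CD", 5, 24.50)]
signature = sign(exported)

# Later, re-signing the untouched data reproduces the signature...
assert sign(exported) == signature

# ...while any edit to the data produces a different one.
tampered = [("DVD", 99, 89.90), ("CD", 5, 24.50)]
assert sign(tampered) != signature
```

That is the essence of the guarantee the speaker describes: the signature tells you the exported data has not been touched since it left the original report.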

And with Monarch Context, if I am curious about where this record came from, what the source data was, with a single click I can bring up the report which the data came from, okay. So you can see the technology is able to highlight the line of the report which that data field or that record came from. So that’s the concept of Monarch Context, or Data Lineage, that I was talking about earlier. Okay.

So in summary, I can take this one report and extract the details from it; put it in a summary format. I can even do exception highlighting to look for certain values which are outside preset parameters. I can cross-tabulate it. I can group it and provide subtotals, and I can even do pivot tables, all from this one semi-structured report, and even do charts. Okay.

So that was a little bit of an overview of how Monarch Professional works. So, again, Monarch Professional is available as a desktop product for an individual to use for Information Optimization at an individual level. If we really want to start to leverage all of the information across the entire organization, though, we really need to start thinking about an enterprise solution.

And Enterprise Server is the tool to enable us to do that. Enterprise Server still works with Monarch as a front end to capture all of the data, so all of the logic I showed you in that previous short demo still applies to Enterprise Server. Enterprise Server is essentially the environment that we can use to take the analytics to a higher level for all of the data from all of the various sources across the organization.

So with Enterprise Server, essentially you have a log-in screen, which you can configure to represent the roles of different users in the organization. For example, if you have data going into Enterprise Server from the marketing department, from the sales department and from the finance department, maybe your COO or CFO would like to have access to all of that data; that’s fine. Or maybe an individual in the marketing department should only access the marketing data; that’s also fine. So you can configure specific user IDs based on their permissions and roles.

So once you’re in Enterprise Server, you can see all of the reports that have been ingested and the different times that they were ingested. So, for example, if you’re taking one report, in this case the shipping report, into Enterprise Server automatically every day, the report will be available in Enterprise Server automatically with a date stamp of when it was ingested.

Okay. So you can view again the native report, how it looks in its original format. So remember again that this report is being ingested into Enterprise Server on a scheduled, automated basis. Okay, so this could be data in any source format, so it could be a PDF, Excel file, text file, different log files or any other format that you may have. Okay, with the interactive data grid, we are able to analyze the information exactly as I was able to do in Monarch Professional in the data view. So now we have the data coming from the semi-structured or unstructured report in a structured format, which enables me to do analytics.
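To give a feel for what turning a semi-structured report into structured rows involves, here is a minimal sketch using a regular expression over a few lines of an invented fixed-format shipping report. The layout, field names and pattern are assumptions for illustration; Monarch uses its own trap-based modelling, not this code.

```python
import re

# A few lines from a hypothetical fixed-format shipping report.
report = """\
SHIP DATE   ORDER NO   CUSTOMER            AMOUNT
01/15/13    536017     Big Shanty Music    172.32
01/16/13    536039     Bluegrass Records   322.70
"""

# A trap-like pattern: date, order number, customer name, amount.
row_pattern = re.compile(r"(\d{2}/\d{2}/\d{2})\s+(\d+)\s+(.+?)\s+(\d+\.\d{2})$")

rows = []
for line in report.splitlines():
    m = row_pattern.match(line)
    if m:  # header and non-detail lines simply don't match
        rows.append({
            "ship_date": m.group(1),
            "order_no": m.group(2),
            "customer": m.group(3),
            "amount": float(m.group(4)),
        })
```

Once the detail lines are captured as records like this, the analytics in the data grid (sorting, filtering, summarising) become straightforward.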

I am able to export this data out of the interactive data grid. For example, I can export the summaries or the detail. I am able to perform calculations, okay. So in this case, I’m adding a column which is 10% of the sales amount. So I can create calculation columns essentially as I would be able to create them in Excel. For anyone who is familiar with Excel, and most people are, the functionality is essentially very similar, okay. So here you can see I’ve created a new column which represents a calculated field based on the amount column.
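The calculated-column idea above (10% of the sales amount) can be sketched like this; the rows, the `commission` column name and the rounding are hypothetical choices for the example, not Monarch's formula syntax.

```python
# Hypothetical extracted rows; "amount" mirrors the sales amount column
# in the demo, and the 10% column is the calculated field.
detail_rows = [
    {"order_no": "536017", "amount": 172.32},
    {"order_no": "536039", "amount": 322.70},
]

# Adding a calculated column, much like entering =A2*0.1 in Excel
# and filling it down the sheet.
for r in detail_rows:
    r["commission"] = round(r["amount"] * 0.10, 2)
```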

So this concept of data lineage: as I showed you before with Monarch Context, we have the concept of Data Lineage in Enterprise Server as well. So I’ve ingested all of these different records from different reports and I’m curious to know where this data came from. So I select the row I’m interested in, and with one click I can pull up the original report where that record came from, okay. So again, it’s a very powerful tool in auditing, because if you can see the source file, you are able to very easily verify the accuracy of the numbers.

Okay, so another really important piece of functionality in Enterprise Server is the ability to join datasets together. So if you are familiar with the concept of joins, maybe from the BI world, you probably know that typically, if you want to join two different data sources together, you need to essentially push both into your data warehouse and have them structured in the same way before you can join those datasets together. What we are saying with Enterprise Server is that you don’t have to do that. So essentially we can take two reports, or three reports, or more than that, in different formats, so one might be in a text file format, one might be in a PDF, and one might be in Excel, and on the fly join those reports together to enable you to do analytics across all of that data together.

So for example, to create a join, maybe I have one report which is sitting here in Excel, and I have another report which is in text file format, and you can see I have a common field across both reports. And this is really the key, knowing what the common field is, because that is the condition to join these two datasets together. So once I know my common field, I can bring these reports together in Enterprise Server, and you can see I have joined them together. So now I’m able to do analytics across both of these data sources, okay.

So we have the visualization in Enterprise Server as well. So you can do dashboarding of all of the different data that you are able to ingest, okay. You have dashboard links, which means you can drill down on individual dashboard entries to get more detail as well, okay. So that’s really an overview of how Enterprise Server works and how Monarch Professional works, and that essentially brings to an end the presentation I have today.
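The join-on-a-common-field idea can be sketched as follows. The two row sets stand in for data extracted from an Excel report and a text report; the records, the `order_no` key and the inner-join behaviour are illustrative assumptions, not how Enterprise Server is implemented internally.

```python
# Two hypothetical extracts: one from an Excel report, one from a text report.
# "order_no" is the common field that serves as the join condition.
excel_rows = [
    {"order_no": "536017", "customer": "Big Shanty Music"},
    {"order_no": "536039", "customer": "Bluegrass Records"},
]
text_rows = [
    {"order_no": "536017", "amount": 172.32},
    {"order_no": "536039", "amount": 322.70},
]

# Index one side by the common field, then combine matching rows
# (an inner join: only order numbers present in both sides survive).
by_order = {r["order_no"]: r for r in text_rows}
joined = [
    {**left, **by_order[left["order_no"]]}
    for left in excel_rows
    if left["order_no"] in by_order
]
```

The point of the demo is that neither source had to be loaded into a warehouse first; once both are extracted into structured records, the common field is all the join needs.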

Question-and-Answer Session

Tim May

So what I would like to do is escape out of this and see if I have some questions. So, does anyone have any questions? You can either type them to me in the Q&A section or type directly into the chat box here. So, I see I have one or two questions. Okay, so the first question is: do I need an individual user license for Monarch to operate Enterprise Server? And the answer is essentially no.

So, with Enterprise Server, you don’t need an individual desktop license for each user as you do with Monarch. Enterprise Server comes with a group of licenses which can be applied to different named users across the organization, without needing to purchase a license for each person across the organization.

Okay, I am just looking at the time. I think, if there are no more questions, I will wrap it up for today.

So, if you would like any further information, or you do have any specific questions which you would like me to answer, please email me directly. There is my email address there, and I can answer any questions and give you any further information individually. You can also contact my colleague who was not on the call today, Olivia. She is the Director of Partner Enablement, for the partners who may be on the call, so you can also email her. You can also download a free trial version of Monarch from our website, which gives you all the functionality of the Monarch desktop product for a 30-day trial.

So, with that I would like to say thank you very much, and thank you for your attendance, and I hope to see you again next time. Thank you.
