
Executives

Timothy J. Stultz – President, Chief Executive Officer and Director

Ronald Kisling – Chief Financial Officer

Kevin Heidrich – Senior Director – New Business Development

William McGahan – Director of OCD Technology

Lars Markwort – Chief Executive Officer – NANDA Technologies

Analysts

Srini Sundararajan – Oppenheimer

Nanometrics Incorporated (NANO) Analyst Day March 15, 2012 10:00 AM ET

Timothy J. Stultz

Well, good morning everyone and thank you so much for joining us this morning. We’re pretty excited about this. This is the first Investor Analyst Meeting that we’ve had in almost five years, and we’ve brought together some pretty exciting thoughts. I hope it will be worth your time and give you some insights into areas that many of you have heard us speak about many times, so we brought some of those smart folks here.

Before I start, I want to just remind everybody that we may make some forward-looking statements. They will be made with the best information we have at hand. However, they may not turn out the way we hope and the way we think, and for any further information, I direct you to our website where we have our filings and our other disclosures that may be relevant.

So, let’s jump and look at the agenda. I am going to spend a few moments in the beginning really setting the stage for the talent that’s going to speak to you after this. I want to talk a little bit about our strategy for driving the business and growing the business which has rewarded us with some nice growth and performance over the last several years. But behind that, I really wanted to give our folks the opportunity to talk about those drivers in a lot more detail than we’ve had the chance to do previously.

So we’re going to talk about OCD, we’re going to give some nice information on our new acquisition of NANDA with the SPARK platform and inspection, and we’re also going to talk about wafer-scale packaging.

One part of the agenda is set up so that there will be a brief opportunity for some direct Q&A, technical questions on the presentation, right after each presentation, but afterwards my colleagues will join me at the table and we’ll do a Q&A session where we can answer the more general ones. So unless you have a very specific question about a slide that you want answered at that moment, I encourage you to use the panel session to dive a little more deeply into the topics of interest.

So I am going to address three things here. One is the secular growth driven by technology changes and industry demands, which plays a major role in our business opportunity. I’m going to talk about how we’re entering new markets through our R&D investments as well as acquisitions, and a little bit about our success in gaining market share through competitive wins, which is clearly a key metric that we measure ourselves by.

So, I’ll start with the first one, which is secular growth and technology changes, and we’ll talk about the drivers of the industry. Most of you here are very familiar with the ongoing shrinks in the technology, and the shrinks ultimately drive more mask levels of processing, and more mask levels in the end turn out to be more process steps, and more process steps in the end turn out to be more wafers being measured. In the end what really matters to us is that, if you have more process steps and you have more wafers being measured and you make more measurements, you need more tools.

And so the bottom line is that the growth in demand for process control tools, and particularly metrology and inspection, is increasing at a rate much greater than the number of steps that are fundamental to the shrinks taking place. So we get a geometric relationship between the demand for our tools and the advances of technology.
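To make that compounding concrete, here is a purely illustrative arithmetic sketch in Python; the step counts and sampling multipliers below are invented for illustration and are not company or industry figures.

```python
# Illustrative only: if each node transition adds process steps AND each step
# is sampled more intensively, measurement demand compounds faster than
# either factor alone. All numbers below are hypothetical.
steps_per_node = {"node N": 400, "node N-1": 480, "node N-2": 580}
sampling_multiplier = {"node N": 1.0, "node N-1": 1.3, "node N-2": 1.7}

for node, steps in steps_per_node.items():
    demand = steps * sampling_multiplier[node]
    print(f"{node}: {steps} steps x {sampling_multiplier[node]:.1f} sampling "
          f"-> relative measurement demand {demand:.0f}")
```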

The second one that we’ve talked about is the displacement of other traditional technologies by OCD, OCD being non-destructive, high speed and giving multi-dimensional information. It is not only displacing CD-SEM but some other technologies as well, and we will have a nice presentation talking in more depth about where OCD is positioned, what OCD does differently and why it has superior performance capabilities over other technologies.

The third area is the three-dimensional aspect of devices. Most of you are familiar with the FinFET device that was announced by Intel a while back, and now other companies are also going to these three-dimensional structures. The key point here is that we’re no longer dealing with planar features. We’re looking at vertical features that have a lot more parameters of interest. You’re going to see some really nice graphical models of the types of things we’re actually measuring with our OCD in these three-dimensional structures.

Along with the three-dimensional transistors are the three-dimensional memory devices. This is an example on the right-hand side of what’s called BiCS, which is Toshiba’s three-dimensional memory cell. There are also many cells being developed, known as VNAND or vertical NAND devices, in Korea. We are deeply engaged in all of these developing applications, tools and tool modifications to address these exciting areas. The main thing is, it’s three-dimensional, it’s not planar, and so the traditional tools do not address the needs for metrology and inspection.

And finally there is the packaging area. Packaging is going three-dimensional; we’ve talked about this before. It’s the form factor that drives it, it’s the performance, it’s the power and battery consumption that drive the benefits of going to three-dimensional packaging. And we’ve got a nice presentation where we will talk a little bit more about where we are involved in three-dimensional packaging, whether you are stacking chips or putting chips side by side on what are called interposers, and so on.

But all of these are important factors that are behind what’s driving our business: the OCD, the displacement, the shrinks, the tighter process tolerances, the increasing complexity of the devices and the three-dimensional features. So a couple of slides that we’d like to share, and this one is the adoption of our OCD into production. So this is not a market play. This is a Nanometrics deployment of OCD into production.

What’s important is not only that there is a high growth rate, almost doubling annually, but take note that the highest growth is in the smallest technology nodes, the 2x and 3x nodes. You will see that growth there is double what it is in the other nodes, which intuitively makes sense; everything we are telling you is aligned with what’s happening.

So as the technology nodes get smaller and the process tolerances become smaller, the need to make more measurements, to do more across the wafer and to better control the process increases. And so you’ll see this continue to increase, and we’re already working on the 1x nodes.

Importantly, too, OCD is not just a single technology or a single device-type technology. You’ll see that we’ve got applications not only in logic, but in memory, in DRAM as well as in NAND, and even in hard disk drives, which use OCD technology for looking at process and process control in hard disk drive manufacturing.

So the other area that we’re going to talk about in a little more detail today is wafer-scale packaging. In 2009 we acquired a product line from Zygo, which we call the UniFire; it has played a key role in this area, and we’re trying to give a little more clarity as to where we fit into packaging. We’re not so much in traditional packaging, flip chips and putting things onto PC boards.

Really, it’s when they are packaging the devices either on what they call an interposer, which is a piece of silicon where they put two chips side by side, still using small micro-interconnects, which is shown here in an image of one of them, or when they stack the devices one on top of the next. And in this case we’re looking at through-silicon vias. Again, we’ve got more slides and some more content that we will be sharing later today.

All in all, it’s amazing technology and an inevitable technology, driven, as we said, by form factor, performance and battery consumption, and so the materials and equipment industries that are positioned in this area are benefiting from these transitions. And we have a couple of products that address this area.

The next topic is entering new markets through R&D investments and strategic acquisitions. So the first point is to remind everybody that we continue to invest in our product lines; we consistently introduce next-generation products across our entire portfolio, whether it’s our flagship product, the Atlas OCD, where we just introduced the Atlas II in the fourth quarter of last year.

Our software, which we call the NanoCD suite, we continuously improve; it’s used for the modeling, it’s used for the analysis and control, and we’ll be sharing more on that today.

Our UniFire, where we continue to increase its capabilities to address new applications, and now recently the SPARK. Our R&D investments continue to be made to make sure we remain competitive and have a differentiated capability to win in a competitive environment.

So, a little bit about in-line process control. This is a slide that reminds folks that our core business is based on optical metrology, optical metrology being fast, non-destructive and suited for in-line use. In-line is important: it has to be fast enough that it can be used between each of the process steps to make sure you control the sequence. It’s non-destructive; it’s basically light in and light out, and the interpretation of the change in that light as it interacts with the surface, measuring either structural features or material characteristics.

We package it in a variety of automated platforms that are appropriate for the semiconductor environment, data storage and so on, and we also have the software, the NanoCD suite, which we will explain more about. We just recently added to our optical technologies with the acquisition of NANDA, a small firm in Germany that is now obviously part of the Nanometrics family, adding the SPARK platform and the inspection capability. Up until this time, we’ve been basically a pure-play process control metrology company.

With the acquisition of NANDA and the incorporation of the SPARK platform, we now move into inspection, and that’s the other leg of the story in terms of process control. For us, we sell into multiple markets, and the reason we call that out is that we really don’t need to know in any major detail whether the device is going to be a memory device, or a logic device, or a solar cell, or an LED.

What we need to understand and help with is how you control the properties that dictate the performance of those devices. With LEDs and solar cells, in solar you put light in and you get electricity out, and in an LED you put electricity in and you get light out; in many ways that’s how you can look at it. What you really care about is: what are the critical features? What are the dimensions? What are the film thicknesses that need to be maintained in order to get the desired performance and yield and reduce the manufacturing costs?

So the common value proposition across this entire product area is that customers use our tools to develop their next-generation products. We’re deeply engaged with all of our major customers developing their next-generation devices. We’re working very closely with them to improve yield, which is a key metric for the profitability of our customers, and ultimately to reduce their manufacturing costs.

So just a couple of things about the NANDA acquisition, and we’re very fortunate that Lars Markwort will be talking about this product platform shortly. I want to remind everybody what our criteria for acquisitions are; we get asked this question often. The first thing is adjacency. We’re not looking to acquire companies just to put revenues on the top line that don’t really leverage what we’re doing. We want it to be adjacent, to be compatible with our core capabilities.

The second thing is we want to expand our served markets. Our goal is to increase the served markets, and if we increase the served markets and meanwhile gain market share, then we can grow the business at a rate faster than spending, faster than the industry overall.

The third one is to leverage our synergies. We have a global infrastructure; we have sales, service and applications all over the world. When we find a company like NANDA that’s got a great product, that’s highly differentiated, that has customer interest, typically the barrier they run into is the customers saying, we like what you have, but you are not ready for 24/7, you don’t have the deployment of the service and applications groups. We bring that to the table. We can say we can solve that problem; we’re a healthy company with the people in place to support the products, and we help them overcome that barrier.

And finally, it needs to be differentiated; we don’t look for, and we are not excited about, me-too products. We want to be a market leader. In order to be a market leader, you have to be able to gain market share. In order to gain meaningful market share, in a market where the barriers to entry are tough and the switching costs are very high, you need to have a highly differentiated platform that offers a value proposition to your customers that they are not realizing with their current offerings.

So right across the board, NANDA hits all four of those, and their SPARK platform basically met all of our criteria, and that’s why we got so excited about bringing this business into our family.

So what does the NANDA acquisition, and more importantly our entry into inspection, do for us? Typically we have been describing our business as process control metrology. This is showing the metrology total available market, running about $1.4 billion, and we’ve segmented it into the different areas, whether it’s films, OCD and so on.

The part of that market that we serve with our products is about $977 million. So close to two-thirds to three-quarters of the market we actually serve with our current products, and we are doing pretty well in that space. But that can become a limit. With the acquisition of NANDA and incorporation of the SPARK platform, we now enter into inspection.

Inspection is actually a larger market in total than metrology. We don’t address it all; as shown right here, out of the $1.6 billion to $1.7 billion, we have about $300 million that we address with our current products and product pipeline. That’s an important part, though, because it amounts to approximately a 35% increase in our served markets.

So we have taken our served market from under a billion to about $1.3 billion with the acquisition of the SPARK platform, allowing us to now participate in an important area where we can continue to pursue our goal of growing faster than the rest of the industry and expanding our market share.

So, just a little summary of the history of our acquisitions; I’ll hit a few of these right at the top. We’ve had a series of acquisitions. We started with the core Nano business; around 2005-2006 we acquired Accent Optical, which got us into the overlay and materials business. We acquired Tevet, a small company in Israel with fiber-optic metrology, primarily for the solar business. We acquired the UniFire product line, which addresses wafer-scale packaging, and now we’ve acquired the SPARK product line.

In most of these cases, with the exception of Accent, when you look at Tevet, UniFire and SPARK, we were really looking at product technologies, at ways to expand our product platform and add to our core competencies. So if we step through this, the idea is increasing our footprint in the fab.

So with every acquisition, we are able to bring a greater number of solutions, a more comprehensive set of solutions, to our customers. And you’ll see that they all go into the fab; they allow us to have conversations with metrology and inspection now, with lithography, with CMP, with the interconnect folks. So in the last couple of slides I just want to talk about our position and performance against the industry.

Many of you have probably seen the announcements of the CapEx spending plans for 2012. On the left, we’ve shown the customers, Samsung, Intel and Hynix, which last year were 10%-or-greater customers for us. All of them announced increases in spending. Up until about a day or so ago, all the ones on the right-hand side had announced decreases in spending, but TSMC has upped its number, and I think they are now talking about raising it by about another billion, to $7 billion.

And so that would be a move up in terms of year-on-year spending. But what’s really important is that if you look at our top three customers, again Samsung, Intel and Hynix, we’re well positioned with the big spenders. Clearly, each of them spends twice as much as anybody else, and together they represent over half of the spending in the industry. Very key, very important to make sure that the people who are spending the money are the ones using most of your tools.

The last thing I want to talk about is secular growth. Wafer fab equipment grew roughly 10% last year, year-on-year; process control grew almost 20% year-on-year; OCD grew almost 30% year-on-year if you look at the spending; and our OCD business grew 40% last year, year-on-year. Clearly that supports our story and our thesis that we’re serving the growth markets and gaining market share, and this allows us to continue to outperform the industry.

One of the slides that we’ve shown historically is capital spending relative to our revenues. No segmentation, just taking straight CapEx against our revenues, and you’ll see that the green-yellow curve shows that our portion of the spending continues to increase year-on-year. And with the exception of the acquisition of Accent, where we actually acquired some revenues back in the 2005-2006 timeframe, all of the other acquisitions, all the other activities and the growth in revenues were a result of organic growth. So rolling that up, what is our strategy? Addressing markets that are growing faster than spending and general secular growth, addressing new markets and gaining market share.

And with that I’m going to turn it over to the folks who are going to do a deeper dive. The first one will be Kevin Heidrich, who is going to talk about the OCD market drivers and opportunities. Thank you very much, and I think you’re going to enjoy this presentation.

Kevin Heidrich

Thank you, Tim. I’m excited to be here in front of everybody today; I haven’t had a chance to do this type of forum before. I’ve done a lot of IR slides for Tim and I’ve seen this story go out many times, but never out in front of everybody. So it’s a great opportunity for me.

I’ve been at Nanometrics now for five years and I’ve really enjoyed seeing all these new products come to market that we’ve developed organically, as well as looking at the opportunities for the inorganic acquisitions that we’ve driven. Prior to my five years at Nano, I was in R&D at Intel for a decade working on numerous aspects of process development, from the 130-nanometer node down through the 32- and 22-nanometer nodes, before I left to join Nanometrics. And it’s given me a lot of insight into some of the key drivers in process control metrology that we are serving today.

So to expand on what Tim said around OCD technology and where that fits into the market, I’ll put some context together on how we see OCD fitting in against other technologies. We’ll talk about how it serves the various markets, specifically the end-device types of markets as well as the process steps through the fab, and then that will lead into a detailed explanation of the technology, where my colleague Bill McGahan will speak and give you a little bit of a science lesson.

So first off, OCD is really an inline technique and, as Tim said, it’s non-destructive. If we step through what people have done historically for metrology, it’s really the right benchmark to compare against and gives you the perspective for how the technology is deployed. So first off, here in this graph, we see atomic force microscopy, really a profiling technique, and it kind of serves as the baseline: a technique which gives customers inline information, but it’s relatively slow and the information is relatively limited. So there is a finite amount of information customers can get for process control.

This gives a helpful bit of information, and every customer we have also has an AFM. What you’ll see is that these techniques are all complementary, but there is real leverage from OCD.

If we look at inline techniques for real process control at the speeds of production, OCD is complemented again by CD-SEM technology. CD-SEMs, or critical dimension scanning electron microscopes, have been around for many, many years and have been the workhorse for a long time. We can see in this case that we get really good two-dimensional information, but we get no information about what’s happening below the surface, and that has advantages and limitations.

If we look for that next level of information, we want to see what’s happening throughout the device, real 3D information. Historically, customers have used cross-section SEM, FIB/SEM and most recently transmission electron microscopy, or TEM, to do real forensic or biopsy-type analysis of their devices. Looking at the same device from the upper left to the lower right, we suddenly see a tremendous amount of information below the surface, and we can see what’s happening to the devices in process. This is an incredibly rich dataset, but again, it’s limited in how much data somebody can get: it’s hours of sample preparation to get one data point, and it’s a destructive technique. Once you’ve taken the wafer out for TEM analysis you can’t put it back inline, so it’s slow feedback.

Comparing that to OCD, we have an optical technique now where we can build a model of this structure, and using the light into the sample and the algorithms that Bill will explain in a few minutes, we can build a rigorous picture of what’s happening inline.

If we look at this, it gives us the chance, inline, to get the same type of 3D information that we would get out of a high-resolution TEM. We have a way to do this non-destructively and we have a way to do this very fast; these are measurements that take a few seconds in production lines. So we believe that the value of OCD is driven by the data richness, the speed to information and the 3D information we get across the flow.

So, if we talk about how OCD is deployed across the fab, this is a graphic, the same race track as in Tim’s previous example, of kind of how we see our customers. All of our customers really run a fab process where, in this race track, you might have 20 to 40 passes around the loop depending on what type of device they’re making.

And if we look at this from a very high-level point of view, we see that there is really a transistor formation segment, an interconnect formation segment, and then the litho patterning that goes into both. OCD fits into the transistor space given the fact that we have these very complex device architectures coming in now, no longer planar transistors, and you’ll see some examples of that. We see many features that are becoming very important that are buried features, re-entrant features, or things you can’t see from inline CD-SEMs and planar views.

And we’re also starting to see cases where critical dimension is sort of the monitor that we use, but it’s really films on surfaces or other complex materials, where we’re looking at replacement gate materials for these advanced logic devices, (inaudible) materials for DRAM devices and such, where we are looking at materials and material properties in addition to just the height or width of something. So the transistor segment really drives a lot of applications for OCD.

In interconnect, we see again a deployment of OCD. We’ve seen increasing metallization layers as we stack up all these next-generation devices. Memory devices, whether NAND or DRAM, have increasing metallization to deal with the density of the cells. Logic devices are up to 10 to 12 metal layers in the standard fab process. So we see critical dimension metrology for the metal CD, for the depth, and increasingly for the aspect ratio or the profile control in these complex steps. As densities of interconnects go up and up and up, it’s the width plus the depth measurement which is critical, so there is OCD in the interconnect space.

And then lastly, the classical deployment of OCD is across the patterning loop, which is lithography and etch. In this case, the OCD technology is really driven by the classical scaling that we all know about, the node-over-node pitch shrink that goes on for every technology. But increasingly, in addition to the pitch shrinking, we see complexity of devices. Some people have heard about computational lithography, OPC or optical proximity correction, where you have more complex lithography techniques to drive more complex and smaller device scaling.

And increasingly, OCD is deployed not just to measure the dimension of something as seen from the top, but also the profiles and sidewall angles, and Bill will show some examples of that. So we see OCD increasingly in complex transistor formation, driven by the scaling of interconnects and, lastly, the continued drive for smaller features in the patterning space.

If we step back, now that we’ve talked about every fab for our customers, it’s normally the same by those three definitions: transistor, interconnect and patterning. But we’ll take a minute now and look at it by device type. As Tim said, OCD is adopted, in that graph of process recipes, by all of our end customers across all device types, so I’ll spend a few minutes talking about how it fits into each of those. First we see an example of a DRAM structure. This is a model that came out of our OCD modeling engine, just to give you a perspective of what it is we’re measuring, and this is a buried gate transistor structure. What’s really important in this structure is the features at the bottom of this etch. So we see here we have a buried feature, and what you’re trying to control is the critical dimension of something that’s below the surface, and OCD is unique in enabling this.

The DRAM structures have historically been very high-aspect-ratio structures, and consequently DRAM makers were among the first adopters of the technology. In the DRAM process world, there are over 40 process control steps. They measure many parameters per step and, as I stated, many of the parameters are vertical features.

In the NAND case, NAND is really the scaling leader. We have many customers, and all of our customers are in 2x production now: 28, 27, 22. We have several customers running 1x production this year, with rapid adoption of the 1y and 1z nodes for NAND. So they’re really the pitch-scaling leaders. There is a simpler process flow in NAND, but with the higher throughput in the fab there are over 20 process control steps. The device layout is a little simpler, so there are only two or three process control parameters per step, but with this pitch scaling comes the fact that the aspect ratio increases every year.

If we look at a traditional NAND gate, you have to have so many electrons to tell if you have a 0 or a 1. If the pitch goes down, the only way to have enough electrons is for the aspect ratio, the height of that (inaudible), to go up. So increasingly again, OCD is driven not just by the CD but also by the height of structures as well.

Recently, we’ve deployed OCD across numerous logic and foundry nodes for front-end-of-line transistor architecture, and here the real key drivers in the logic and ASIC space are really transistor control. Bill will walk through a detailed example of how we deploy OCD by step, and what we see here is that there are critical dimensions again that are width and height. This is a technology which is employed increasingly as we go node over node in logic, but it’s the strain types of materials, the (inaudible) type materials and most recently the FinFET and gate-last materials, with process control steps where OCD is the unique capability.

And then, as Tim alluded to, in the 3D memory space there is a drive in NAND scaling to continue the pace of memory cell density, and vertical is really the next way to scale that. Here we see an example of an eight-layer [VNAND] or BiCS cell. Many of the development customers have roadmaps to go to 8 to 16 to 24 to 32 layers, ending up with high-aspect-ratio, very complex structures. And again, most of the parameters for process control that you want to measure are buried.

They’re the sidewall angle, they’re at the bottom of the structure, and there is really no other inline way to measure and control your process other than an OCD technology. So, in summary by technology now, we see OCD employed across all device types and, importantly, across every key process step in the fab.

If we step back a little bit to that chart that Tim showed for OCD technology adoption, it’s also by node, and more importantly, we’ll talk for a minute about how the technology adoption of OCD is increasing by node as we go forward. This is really how many steps and what types of process control our customers are doing. The chart Tim had earlier goes back to 65-nanometer, but I’ll start at 45-nanometer and lead you through the 1x node.

So at 45, Nanometrics had adoption across DRAM and NAND customers, really driven by etch, as I showed in the earlier graphic, as well as CMP and films control. And films is an important takeaway here, because it’s not traditional films metrology, it’s film thickness on device; let me show some examples of that.

At 32-nanometer, we expanded our application space for our DRAM and NAND customers to include lithography control, so we’re growing our application space through the process flows, and this is really substitutional now for CD-SEM; we’re starting to take over that share.

In 2x, we expanded not only our process steps by adding advanced materials control for our classic DRAM and NAND customers, but we also added logic and foundry customers for litho and etch and important process control steps in logic for (inaudible) strain control. So these are critical transistor performance steps where the only way for these leading-edge customers to measure inline is to deploy an optical critical dimension technique.

And going forward into the 1x node, we see new applications increasing. As we mentioned with the alternative NAND technologies, we now see 3D memory, and also in the foundry space we see continued adoption of advanced technologies. We’re not just doing simple process control now; we have customers pushing us to do on-device metrology.

So we have now pushed our analysis and algorithm capability to do specific on-device measurements, i.e. for SRAM and other memory cell control, and we see that as the next key driver. So, in summary in this case, we see that OCD is expanding in applications as we go forward in technology nodes. We continue to see that trend increase, and we see that our customers increasingly depend on OCD as they do advanced R&D and make technology architecture decisions and definitions.

So just before I hand this over to Bill McGahan to talk about how OCD works, I want to give everybody a brief aside. We talk about all these structures, and how can you relate to this in any way other than these transistor architectures? So I had (inaudible) do a little bit of homework for me. Right now, we currently have six analysts that cover us, and what we did is we took our OCD engine; those graphics and cartoons that you see are the graphics and cartoons that come out of our graphical user interface.

One of the strengths of the engine that Bill will talk about is how we can put in what customers want to measure, fit that data back and give you the right process control answer. So here we have an example of the University of Washington logo. This is an example of a structure rendered at a very small, sub-micron pitch, and the materials, I’d say, are silicon nitride on silicon. We render the spectral signature, we can go fit that data and then tell you what that profile looks like. And Bill will walk through what that looks like for an advanced transistor architecture.

But going forward, we have the ability to render very fine detail, and while we can’t show you customer structure details, we want to show you a way we can take this detail to the next level. You can see things like (inaudible) University of Tennessee, where we can get this microstructure detail inline at these production [sites].

Lastly, if we look at things like a spacer or a conformal coating or materials engineering, we can see, like in the [ALY] here, this might be a spacer surrounding a given structure. So we have the ability to stack and add process steps, and Bill is going to walk you through a few examples of that.

So with that, I’m going to hand it over to my colleague Bill McGahan, who is going to walk you through OCD deployment and the technology. Thank you.

William McGahan

Thank you, Kevin. My name is Bill McGahan. I’ve been with Nanometrics for 17 years now. It’s kind of humorous actually, because when Kevin was back at Intel, I was the guy going into Intel and he was the guy, the customer, doing the metrology system evaluation. So I’ll never forget when Kevin came to work for Nanometrics, and I went to the CEO and I begged and begged and begged, please let him report to me for just one week. He toyed with me (inaudible) my car needs cleaning (inaudible). So, I’m the Vice President of Global Applications and Software Development, and as I said, I’ve been with Nano for 17 years, so I’ve been through the whole OCD cycle.

And I’d just like to talk a little bit about the OCD technology itself. I have to say this is a very different audience than what I’m used to, so this is kind of a first for me, in front of investors, so bear with me a little bit on this one.

So what is OCD? OCD stands for optical critical dimension. Let’s look at a very simple case of what we would be looking at. Here is a cross-sectional electron microscope picture of an STI island, so this would be shallow trench isolation. You can see you’ve got your silicon pillars there, there is some nitride on top, and you can see the distance scale. It was discovered very early on that there are certain dimensions in these structures that are really critical to the performance of the device. So you fabricate your actual transistors, your logic cells, things like that, you get done and do an electrical test, and you find that things like the gate widths are absolutely critical to the performance of the device. That’s where the terminology critical dimension comes from.

So as Kevin mentioned, you can actually break the wafer and take a look, or, as in this other screenshot from our software, we can build an actual structural model for this and fit it to optical data. So here are some of the things that we might want to measure. We could be looking at the dimension of the top region of the STI island. We could be looking at the height. We could be looking at the depth of the etch into the silicon. We could be looking at sidewall angles of the different regions, any structural parameter in the model. The problem with the cross-sectional approach, as Kevin said, is that you have to cut the wafer, you have to break the wafer, you have to do the wafer preparation, it destroys the wafer, it takes a long time and it’s very expensive to do.
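To make that concrete, here is a minimal sketch of what such a structural model amounts to in software: a named set of geometric parameters, some fixed from process knowledge and some floated during fitting. The field names and nominal values are illustrative assumptions, not NanoDiffract’s actual schema.

```python
# A hypothetical STI structural model: pitch is fixed from the layout, while
# the critical dimensions are floated when fitting measured spectra.
from dataclasses import dataclass

@dataclass
class STIModel:
    pitch_nm: float              # fixed: known from the reticle layout
    nitride_thickness_nm: float  # floated: nitride cap remaining on the island
    top_cd_nm: float             # floated: CD at the top of the island
    etch_depth_nm: float         # floated: depth of the etch into the silicon
    sidewall_angle_deg: float    # floated: profile of the trench wall

nominal = STIModel(pitch_nm=130.0, nitride_thickness_nm=80.0,
                   top_cd_nm=65.0, etch_depth_nm=300.0, sidewall_angle_deg=87.0)
print(nominal)
```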

The upside of course of an OCD-type model is that you don’t have to break the wafer; you can do this in-line on product wafers as they’re going through the process flow, you do not destroy the wafer, and it also enables very complex structures to be measured, as we’ll demonstrate a little further on.

So how does OCD actually work? There are basically four steps we’re going to show here. The first step is building the model. Typically a customer will come to us with a process that they want to control and characterize. They know what the critical parameters in the structure are, or at least they think they do, and they will tell us: this is the type of structure we’re looking at. They’ll say what the pitch is, what the general dimensions or process-nominal dimensions are for the regions in this model, (inaudible) what the materials are. We will go into the software and build this type of model. This is an example of a FinFET structure here.

So basically, this all comes from the customer, and what we find is that the better the customer’s information is, the more rapidly we can actually get this done. From this model, we can then calculate what the spectra, the data that we actually measure, would be.

The next step is to actually get one of these wafers, or several of them, go to the tool in the fab and acquire actual data from the wafer. This can be done with our integrated systems, the IMPULSE for example, or with the full-blown standalone spectroscopic ellipsometry systems, and I’ll talk a little bit more about the acquisition further on.

So now I have a model and I have a set of data, or multiple sets of data. What I then do is calculate from that model what I expect, and I compare that to what I actually measured from the wafer. If you are really lucky, it matches up extremely well, like this.

Often it doesn’t match very well, and then you have a process of adjusting parameters in the model to get it to match what you actually measured from the wafer as closely as possible. Of course, this is not all done manually. We have automated algorithms that float the different parameters in the model in order to achieve the best match to the data, to get the match to be as close as possible. So you iterate until you get to the point where you have a very good match to the spectra; these are ellipsometric spectra being shown here.

Now we take a look at the results. I get a list of parameters out: heights, profiles, sidewall angles, curvature of etch regions and things like that. I take a look at that, and I’ll often compare those results to reference data from a TEM-type measurement or a cross-sectional SEM and evaluate the quality of the model. Does it correlate to the conditions the customer expected, or to the reference data they have from those wafers? If the performance is not good enough, we go back around the loop, adjust the model and try to optimize the recipe, and we have automated tools for that.

So this is the basic process flow of how an OCD recipe is made and deployed into production. These steps dictate your time to result, the amount of time it takes from when the customer brings you the structure until you have a functional recipe that works on the tool.
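As a rough illustration of steps three and four, the fit itself is essentially a model-based regression: float the structural parameters until the calculated spectra match the measured spectra. The sketch below is a minimal stand-in, with a toy forward model in place of the rigorous diffraction engine described later; every function and parameter here is an illustrative assumption, not Nanometrics’ actual code.

```python
# Minimal model-based fitting sketch: adjust (cd, height, sidewall angle)
# until calculated "spectra" match measured "spectra".
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(250e-9, 800e-9, 200)   # measurement wavelengths (m)

def forward_model(params, wl):
    """Toy spectra for a grating described by (cd, height, sidewall angle).
    Stands in for the rigorous electromagnetic solver of a real OCD engine."""
    cd, height, swa = params
    phase = 4 * np.pi * height / wl
    return (0.4 + 0.3 * np.cos(phase) * np.exp(-cd / wl)
            + 0.05 * np.sin(np.radians(swa)) * np.cos(2 * phase))

# "Measured" data: synthesized from known parameters plus noise, standing in
# for spectra acquired on the tool.
true_params = np.array([45e-9, 120e-9, 88.0])
measured = forward_model(true_params, wavelengths)
measured += np.random.default_rng(1).normal(0, 0.002, measured.shape)

def residuals(params):
    # Difference between calculated and measured spectra; the optimizer
    # floats the parameters to drive this toward zero.
    return forward_model(params, wavelengths) - measured

fit = least_squares(residuals,
                    x0=[50e-9, 100e-9, 85.0],   # process-nominal guesses
                    bounds=([20e-9, 50e-9, 70.0], [80e-9, 200e-9, 90.0]))

print("fitted cd, height, sidewall angle:", fit.x)
```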

So let’s look at the OCD data collection, and I’m going to give a little bit of history here. Here is the flow of what we’ve gone through at Nanometrics in the OCD process. The first phase began back in about 2000, the year I moved to Texas by the way, and it started with spectroscopic reflectometry. At the time, we had an integrated system that was an unpolarized spectroscopic reflectometer that we were selling to Applied Materials for integration into CMP. (Inaudible) contacted us from their etch group and said, can you make a machine for poly gate etch that can actually measure the shape of the poly lines for the gate etch?

So we had a meeting and we thought about it, and we said, well, we want to make this integrated because that’s what they are asking for. So we put a polarizer in our regular integrated spectroscopic reflectometer, so that we could align the polarization state of the light across the grating or parallel to the grating. This worked incredibly well for measuring the gate etch. And as it turned out, we started to find other applications; we could do resist lines, things like that, with this system.

The next logical step was to extend this to spectroscopic ellipsometer capability. We had been selling spectroscopic ellipsometer systems as well since about ’95 or so, but always for thin film applications. So we extended our OCD engines to cover the spectroscopic ellipsometer case. Around this time one of our customers suggested to us that they really liked the Timber software, and they thought that the combination of our ellipsometer with Timber’s analysis software would make a very nice tool. We partnered with Timber and we actually sold systems with their analysis software for quite a while.

During this phase, we also went to a combined SR and SE. We had had that capability for thin films for quite a while, and so we went to this sort of system where we had standard ellipsometry and then polarized reflectometry at normal incidence.

Historically, it’s just kind of interesting here, because around about 2006 we lost the ability to use the Timber software in our system. All through this phase we still had our own internal OCD and thin film analysis software, but we were not doing a lot of development on it. Once we lost the ability to use the Timber software, we kicked our own internal analysis development into full gear, which I’m going to talk about in the next slide. So that happened at about this phase. The current state of the art is what we would call a multi-sensor, multi-mode type of system.

So we still have the normal-incidence spectroscopic reflectometry capability. We also have the oblique, 65-degree angle-of-incidence spectroscopic ellipsometer, and that ellipsometer is capable of measuring at any orientation angle to the grating structure on the wafer. The key message from that is we can pile up an immense amount of optical data from a single location on a wafer very, very quickly in these production tools.

We’re also showing basically the scope of the applications as we evolved this hardware to be more and more powerful, from simple 2D, to simple 3D, to the fin type, and now, with this full-blown system, we’re able to measure very complex 3D structures.

So it is the evolution, increasing information, more complex structures, better precision and accuracy. The key point here is we have a very sophisticated, very powerful hardware platform that is capable of measuring a lot of data, a lot of information from a single spot on a wafer.

So let’s take a look at the software. Our analysis software package, we call it NanoDiffract for obvious reasons: we’re looking at light diffraction off these patterned structures. There are a few key capabilities that differentiate us from the competition with respect to this software package. First, and probably most important, we have an extremely good editor for the creation of these types of models, these complex 3D types of structures. When we sat down in 2006 and implemented this editor, our number one design criterion was that it would be completely general.

Any structure that a customer could make, anything that they could pattern on that wafer, we can build in this editor. We didn’t want to have to go through cycles and cycles of redoing the code and adding new features to it every time a customer came up with a new process or a new structure. We were very, very successful with it, and we won a lot of business for years because we were the only one who had this fully general capability in the editor. Of course, there is a lot of code underneath that actually turns this structure into a form that the engine can handle, but this is what makes the pretty pictures; all the pictures that Kevin showed are screenshots directly out of the software.

Second major differentiator: we have by far the fastest calculation engine for these diffracting structures in the industry, and the performance is very good. So as I mentioned, basically any repeated structure, we can build the model for it and we can calculate it extremely rapidly. We also have a form of this that is ideally suited to high-volume manufacturing. These used to be called library systems; you’ve probably heard that terminology. We are really a generation beyond that in terms of the technology. We call it a [prezi] system, and it’s basically a form of the calculation engine that can run at extremely high speeds: sub-one-second analysis times for running in high-volume production, a very scalable solution, very easy to distribute these types of models and prezi objects across the fleet of tools.
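For context, here is a minimal sketch of the library-style approach the speaker contrasts with real-time regression: spectra are precomputed offline over a grid of structural parameters, so the in-line step is a fast match rather than an iterative fit. This is only a conceptual illustration; the proprietary engine described above is not a simple grid search, and all names and values here are assumptions.

```python
# Library-style OCD matching: precompute spectra offline, match in-line.
import numpy as np
from itertools import product

def forward_model(cd, height, wl):
    # Same kind of toy stand-in for a rigorous diffraction solver as before.
    return 0.4 + 0.3 * np.cos(4 * np.pi * height / wl) * np.exp(-cd / wl)

wavelengths = np.linspace(250e-9, 800e-9, 200)

# Offline: precompute spectra for every (cd, height) combination on a grid.
cds = np.linspace(20e-9, 80e-9, 61)
heights = np.linspace(50e-9, 200e-9, 151)
grid = list(product(cds, heights))
library = np.array([forward_model(cd, h, wavelengths) for cd, h in grid])

def library_match(measured):
    # In-line: one vectorized pass over the precomputed library.
    errors = np.sum((library - measured) ** 2, axis=1)
    return grid[int(np.argmin(errors))]            # best-matching (cd, height)

measured = forward_model(45e-9, 120e-9, wavelengths)
print(library_match(measured))
```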

Finally, we have a very nice suite of tools for our applications people to use to build these models, to optimize them, to figure out which structural parameters they are going to float and which ones are fixed, in order to determine an optimized recipe. Now, a really key point along with it: my message here is that this software is a dominating factor for us in the OCD market. But along with it, we have over 100 applications people deployed worldwide who are experts in the use of this software and experts in the processes our customers are using. That, in combination with this software, is a very powerful force for us competitively.

So let’s look at some examples through the process. We’ll take a look at the fin-type structure here, a gate-last 3D transistor, and some of the different places where the OCD technology would be used in the process flow. The first place we would look would be in CMP, and so for example here, after the oxide deposition and polishing the oxide back, we might be looking at the oxide itself. Then we could go to the etch-back and now look at the actual fin height; that’s a fairly difficult measurement right there, in spite of the fact that the structure doesn’t look that complicated.

Moving forward, we get into gate. We can look at litho, where we’ve got the resist lines now and we’re looking at the resist profile, CD, sidewall angle, all of that. What’s interesting about this is that you cannot just ignore the substructure in an OCD measurement. It’s not like a top-down CD-SEM where you’re only looking at the resist line; you also have to model everything underneath in order to get the proper measurement of the resist line.

That could be a liability, but it can also be a strength, because if a downstream process modifies the substructure, you can catch that with OCD but you cannot catch it with a CD-SEM. We could also look at gate etch, a very common application, where we are looking at the gate profile now that we’ve done the etch and the resist is gone, and spacer etch, where they have done a conformal coating of the spacer material and then anisotropically etched it back, and in this case we are looking at the spacer width.

Another point I would like to make here, just a couple of interesting points. Kevin showed the cross-sectional TEM, and in fact it does have a data richness that one could compare to OCD. But if you look at a 3D structure like this, to fully characterize it you would have to cross-section it in both directions at a very carefully chosen location. An OCD measurement can get this entire structure in one single, very rapid measurement at a single point, whereas with the cross-section you cannot get that full set of data in one measurement on the spot.

Finally, if you look at metal gate replacement, this is toward the end, where the gate has actually been done; we can measure the poly or the metal, the high-k oxide thickness and erosion. So the key there is we are ubiquitous throughout this process; we are used everywhere throughout the process on these very complex structures. And as the customers learn more and more about the type of data and information they can get from these measurements, we are becoming more and more used throughout the process.

The other thing that we found is happening now is that the customers are also using these types of measurements to do their process development, not just at the back end once they’ve gone into high-volume manufacturing to detect process excursions; they’re using this in the development phase as they bring the process up.

Here is an example of some actual performance. This is a 1x FinFET R&D type of structure, and we’ve been looking at this all the way through the process flow. For this particular measurement, we drive to this location and take one set of data, ellipsometric data and reflectance data. We were able to float 11 structural parameters, so I’m measuring 11 different dimensions in this structure simultaneously from one single measurement.

That is what’s fascinating: if I look at the precision, I drive to that point, take 30 measurements in a row and look at static precision. The worst precision on any one of those structural parameters is just under two angstroms. The atomic diameter of silicon is 2.34 angstroms. So even floating 11 parameters simultaneously, we are doing a measurement that exhibits precision on a sub-atomic scale. One can debate the physicality of those kinds of numbers, but for process control that kind of precision is critical, and it is extendable through future nodes; this is an optical technique and we have a broad range of wavelengths. So we do not see this technology as being inextensible; it’s going to hold up for quite a long time.
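As a rough illustration of what a static-precision number like that means, the sketch below repeats a measurement 30 times at one point and reports the spread of a floated parameter. Whether precision is quoted as 1-sigma or 3-sigma is a convention choice, and the synthetic readings here are purely illustrative.

```python
# Static precision sketch: spread of 30 repeat readings of one parameter.
import numpy as np

rng = np.random.default_rng(0)
n_repeats = 30
# Simulated repeat readings of one structural parameter (e.g. a height), in angstroms.
readings = 1200.0 + rng.normal(0.0, 0.6, n_repeats)

sigma = readings.std(ddof=1)
print(f"1-sigma static precision: {sigma:.2f} A")
print(f"3-sigma static precision: {3 * sigma:.2f} A")
print("silicon atomic diameter:  2.34 A")
```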

Okay, this is the final slide here. I’d like to look a little bit into the future, so I’ve described the measurement capability itself in terms of a single point measurement, in terms of the type of machine that we use, in terms of the software that we use, and how we extract information from a single set of data that we would acquire from a point somewhere on a wafer.

Now, what we found is that our customers don’t just deploy these as single-point systems; a typical customer will have an entire fleet of tools, and they may have multiple fabs with fleets of tools in them as well. So what we’re looking at moving forward is using an intelligent system to link these systems together, to link the Nanometrics equipment that exists in these fabs, and leverage the fact that there is more than one of them.

We can do this to improve the measurement capability. We can do this to improve the metrology performance and to enable certain measurements that were not possible from a single-site measurement. It increases productivity, and we can also use this to improve the cost of ownership, basically making the fleet better than just the sum of its components.

So this is how it starts. Initially a customer might have an Atlas here; this is an Atlas standalone system. Often somebody in a cubicle has a workstation: they go to the Atlas, they measure a wafer, they take the data, they sneakernet it back to their workstation, they pound on the data for a while, they make a recipe, they take it back to the metrology system and they run it. There is a lot of manual processing of data involved with that. Not very efficient.

So maybe they get another tool or two and start to build an actual fleet of these Atlas systems; we like that, we love to sell Atlases. Then, behind the Atlases, they will typically add some of our cluster computers. These are the NanoGens, which are used to do OCD spectrum calculations and data mining operations. They are basically large clusters; a fully populated NanoGen has 32 blades, a very powerful computer. Kind of fascinating, it takes a 250-amp, 220-volt line to run one of those boxes. Crazy.

They might also have integrated metrology systems, on CMP, on etch, on track, things like that. So a typical customer, our favorite kind of customer, will have all of these sorts of things inside the actual fab. So why would we want all of this just sitting there operating on its own? Why not leverage the fact that they’ve got all this Nano equipment in the fab, or that they just bought another Nano fleet, even better?

A third component here, a fourth component actually, is that they always have reference data. Most customers, when they evaluate the performance of a recipe, run what they call a DOE, or a set of split wafers. They run a wafer at a certain process and intentionally run it low, then run it at the process nominal, then intentionally run it high; then they break those wafers, take them to the electron microscope and generate reference data. So they say, all right, here are the best numbers we believe for these wafers. Then they take the results from the OCD and do a correlation plot.

So they ask, how well does my OCD data correlate to the reference data? You can do that in a manual way: maybe my recipe doesn’t correlate very well and I’ve got to go back to the drawing board and tweak the recipe. But you can also do that in an automated fashion. So I would like my suite, my fleet of tools here, and I would like visibility into this reference data that also exists in that fab. I’ve got all this stuff, I’ve got this fleet of tools; let’s tie it all together: the Concourse.
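Here is a minimal sketch of that OCD-to-reference correlation check: DOE split wafers measured by OCD are compared against cross-section reference values with a linear fit and an R-squared. The arrays are illustrative placeholders, not customer data, and the acceptance limits are assumptions.

```python
# Correlation of OCD results to reference (e.g. TEM) data on DOE split wafers.
import numpy as np

tem_reference = np.array([38.0, 42.0, 45.0, 48.0, 52.0])   # reference CDs (nm)
ocd_results   = np.array([38.5, 41.7, 45.2, 47.6, 52.4])   # OCD CDs (nm)

slope, intercept = np.polyfit(tem_reference, ocd_results, 1)
predicted = slope * tem_reference + intercept
r_squared = 1 - np.sum((ocd_results - predicted) ** 2) / \
                np.sum((ocd_results - ocd_results.mean()) ** 2)

print(f"slope={slope:.3f}, offset={intercept:.2f} nm, R^2={r_squared:.4f}")

# A recipe that correlates poorly (low R^2, or slope far from 1) goes back
# around the model-optimization loop, manually or in an automated fashion.
if r_squared < 0.95 or abs(slope - 1.0) > 0.1:   # hypothetical acceptance limits
    print("recipe needs another optimization pass")
```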

So we designed a server. Basically it’s a multiply redundant, very powerful computer, handily enough in the same sort of rack that a NanoGen comes in, which maybe saves a bit on the manufacturing there. And it’s got a new piece of software that we wrote that is basically a fleet manager, taking control of, for example, job running or job processing on the NanoGen cluster.

So my Atlases are sitting there in the fab, measuring away, generating data, and they shoot the data over to the Concourse, which runs this analysis, runs this recipe on it, and then the data comes back; and of course the Concourse server remembers everything. It can do the same with the integrated metrology systems, and it has visibility to the reference metrology. So I can get on my NanoStation and see all the data that has come from any of the metrology tools in my fleet and basically process that.

So this opens up a very interesting way to approach this very complex 3D metrology. You saw the structure that Tim showed of the BiCS from Toshiba, and Kevin showed the (inaudible) type NAND structures for memory. Very complex, massive numbers of parameters; it’s unlikely that you are going to drive to a single location on that structure and be able to fit for 50 or 60 parameters.

But if I’m operating a fleet like this, I’m seeing that wafer at multiple points through the process. If I track all that data on the Concourse, I now have the capability to link the analyses and to leverage the data and the knowledge that I have about the wafer as it goes completely through that flow.
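A minimal sketch of that linking idea, sometimes described as feed-forward: a parameter measured at an earlier step is carried through a central database and fixed in a later step’s model, so fewer parameters have to float at the complex step. The names and data structures below are illustrative assumptions, not the Concourse API.

```python
# Feed-forward across a fleet: fix upstream results in downstream recipes.
fleet_db = {}   # stand-in for a central server, keyed by (wafer, step)

def record(wafer_id, step, results):
    """Store measurement results from any tool in the fleet."""
    fleet_db[(wafer_id, step)] = results

def build_gate_etch_recipe(wafer_id):
    """Fix the fin height measured at the earlier etch step, so only the
    new gate parameters are floated at this more complex step."""
    fin = fleet_db[(wafer_id, "fin_etch")]
    return {
        "fixed": {"fin_height_nm": fin["height_nm"]},
        "floating": ["gate_cd_nm", "gate_height_nm", "sidewall_angle_deg"],
    }

record("W01", "fin_etch", {"height_nm": 42.1, "cd_nm": 11.8})
print(build_gate_etch_recipe("W01"))
```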

So the Concourse is a released product. We’ve actually put one of these in the field so far, and we do believe that this is a very good direction for us moving into the future, in terms of enabling very complex metrology by leveraging the presence of the fleet.

So, basically, that is the total message. We are a very powerful player in OCD because we have a dominant, excellent hardware platform. We have by far the best software in the industry, and we believe our vision moving forward, based on the use of the fleet rather than looking at these tools as single islands in the fab, is really going to make us very strong moving forward.

And with that, we’ll bring Kevin up here and move into the question-and-answer section.

Question-and-Answer Session

Unidentified Analyst

Does the type of capability that Concourse represents change the customer's engineering organization, particularly with respect to the metrology engineers? In other words, is there a new or heightened role for the metrology engineer within the customer's organization as a result, a cultural change if you will?

Timothy J. Stultz

Okay. Thanks for the question, Bill. The question is how does this new fleet deployment change our end customer's job scope and job definition. I think that's a fair point, and if we think about who Nanometrics' end customer is, the ultimate customer has the device yield and integration engineering, and it's a question of how our information gets to them.

Now, we do believe going forward that providing more information up through the flow will change that job scope and definition. And as Bill said, we have these 100-plus applications engineers worldwide. They are the worldwide fab metrologists, and our customers are increasingly counting on us and our expertise in that, and the metrology engineer in the fab is starting to become that information aggregator. So I believe you are right.

I think that scope does change, and they do move into more of an integration or yield or device engineering role. It's no longer just getting the one parameter a customer asks for; now, by the way, instead of one parameter they have ten, and here are another nine problems.

All right. It can make us incredibly unpopular in the fab too, because there is a lot more information than any process engineer wants to show to the other modules. So we're going to break down all the silos between all those process groups.

Unidentified Analyst

Can you describe your sales process for selling this entire platform, and what the pros and cons are for the customer of buying the integrated solution versus buying some components from you and other components from your competitors?

Timothy J. Stultz

So, the question is about the sales process for the whole fleet of products versus an individual (inaudible) product. I think the sales process for us really starts very early now, where a customer comes up with a new architecture or device type, and they come to us with a problem definition, something that they traditionally can't do otherwise. They give us this problem, which is mostly worked in an offline evaluation way, where we have the design-of-experiment wafers that Bill talked about. We go back and solve that problem offline, and we now have this killer app, as it were.

We have that solution for them, and then it gets quickly integrated into whatever that process module is, whether it's a critical etch or a critical CMP step, et cetera. That will traditionally drive the sale of the metrology tool to do that application, along with the relevant software and engineering licenses, and traditionally a NanoGen, or a fleet of NanoGens now. That first sale is really for that killer app, and it's not just one tool and one application; it's usually a fleet of tools, and then it crosses to the steps before and after.

Once we are in somewhere, there is scope and application creep, where the data starts to get more applications, and then we see it grow from etch into the adjacent process modules, CMP or films or lithography, and that drives additional tool sales and additional interconnectivity. And I think for many of our customers, getting back to the earlier question about what the metrologist's job is, they want to do their job as simply as they can. Aggregating that fleet of tools into one application space, with consistency across the fleet, adds high value to them.

And so we're starting to see consolidation as our leading customers work out how that fleet is deployed. But we do believe going forward that mixed fleets are always going to be there. So this part that Bill talked to you about here, this reference metrology, could be other third-party data as well; we are always going to make sure we have a tie to that data so we can sell on top of what our customers already have, but our goal is to grow our footprint and scope.

Unidentified Analyst

Two questions, if I may. The first one is on the chart where you show the adoption of OCD across device types. It looks like for the last few years there really hasn't been much of an increase at all in logic, and most of the growth, all the growth, has been driven by memory essentially. Does that change going forward as logic starts to grow the way memory has been growing? And the second question is, perhaps you could talk a little bit about your competitors within OCD?

Kevin Heidrich

So the first question is, what's driving the growth and the rate of adoption by device type, so logic, foundry, DRAM, NAND, et cetera. We certainly saw very rapid and very early adoption in the DRAM case. This was really driven by the fact that, as I said earlier, early on they had process steps with structures that were buried and re-entrant and so forth. They quickly latched on to a technique they could deploy inline for more than just top-down metrology.

In the case of the logic folks, we see they have recently moved to these 3D devices. In that 3D device world, where height certainly becomes a critical control parameter, OCD becomes a real enabling technology. And we have relatively simple structures – I don't know what's going on with the slide, I didn't touch anything – relatively simple structures that now become very complex, even when we try to dumb them down a little bit. If we look at that fin height that Bill showed, that becomes critical.

Those other re-entrant parameters, the source and drain, or the gate-last approach, all of that has been a follow-on as our customers adopt those technologies. Very few people have advanced 3D transistor architectures in production, and we believe at the 2X and 1X nodes, as foundry and logic adopt these advanced architectures, the growth in those recipes and applications will follow. And I'll let Bill handle the second question around OCD competitors.

William McGahan

Yeah, specifically, what about the competitors – I can answer that. You've got to remember I'm the technical guy here, so this is new territory for me. Our primary competitors would be Nova and KLA, obviously. Nova has a small presence, but not so much really; it's mainly KLA and us.

Unidentified Analyst

(Inaudible)

Kevin Heidrich

So the question is, are we gaining share or losing share, and why? As Tim showed in his slide, we have significant OCD market growth year-over-year. So based on the data we have, we believe we're gaining share, and it's really driven by the applications that we just stepped through in that applications chart, and by the advanced technology nodes and the complexity; the more complex things get, the stronger the Nanometrics advantage.

Srini Sundararajan – Oppenheimer

I'm Srini from Oppenheimer. First question is on 3D FinFET type designs: do you have a majority of market share at this point?

Timothy J. Stultz

The question is about market share for 3D FinFET designs. It's a relatively complex question given how much 3D FinFET is actually in production. We have a lot of good solutions in place for 3D FinFET, and we have very good customers across logic and foundry, and that's really as much color as I can give on what we're doing inline. We're trying not to make this an end-customer-specific presentation. So, I'm sorry, I can't give you more detail on that.

Srini Sundararajan – Oppenheimer

Yeah. So, second question is, can you use this technology for roughness measurement, meaning if the structure is not periodic, so for example for line edge roughness or surface roughness?

Timothy J. Stultz

Yeah, so the question is whether or not this technology can be used for non-periodic structures or for line edge roughness. I'll take the line edge roughness first, and the answer is yes, we've done that. There are numerous ways one can approach that from a modeling perspective, and we have successfully demonstrated that capability. The second part is non-periodic structures, and that's a little trickier. Completely non-periodic, like an isolated line out in the middle of nowhere with nothing else around it – no, that's not going to work. Things that show some sort of symmetry, where there is some repetition – maybe it's not perfectly periodic, but it's close to it – we have done successful measurements in that case.

William McGahan

I think just to add a little color to that: we see that, overall, the pitch shrinking is driving very restricted design layouts. So these random structures and isolated lines don't exist anymore; every generation, more of the end customer's devices fall into the OCD space, and we believe that's going to drive adoption as well.

Unidentified Analyst

(Inaudible)

Kevin Heidrich

Yes and no. Sorry, the question is whether, ultimately, that is just a matter of computation speed. Yes, but remember the process flow I showed for OCD: there is one part where we measure data and then we model the data. Given an arbitrarily large computer – and there is actually software that universities have out there that will do finite element simulations of non-periodic structures – it could be done. The problem one then runs into is the positioning of the measurement on that isolated structure; you have to think it's going to be rather sensitive to where the beam is actually located over that structure. But maybe that becomes one of the parameters in your model, what the offset is. So there is some truth to that: given a large enough computer, it could probably be done, but nobody has done it for a production-mode type of measurement yet.

Unidentified Analyst

I assume FinFET is more metrology and inspection intensive, but can you quantify, or is there a way to think about, how intensive it is compared to other architectures? Maybe think about a 5,000 wafer-starts-per-month line: how many tools would it need if it is a FinFET line versus a normal line?

Kevin Heidrich

Okay, so I guess the question, if I hear it right, is how much more metrology is needed for advanced transistor architectures, how much you invest there, versus traditional planar or other technologies.

We certainly see the number of measured parameters going up. In particular, we see this for the gate-last flows as well as these highly strain-engineered processes. We now have four to eight critical steps just around the transistor formation that certainly need metrology, whereas in past planar devices you might have had one or two key steps. So strictly, there are more important steps to drive transistor control, with good feedback to the process tool for every one of those.

Certainly it's not an 8X or 4X increase in the number of tools, because that is offset by increasing the productivity of the tool. Our measurement time goes down generation over generation, we get faster, but our customers demand more metrology. And as one of the early slides Tim showed, we see that increase in requirements both in terms of measured parameters and in terms of the number of die people sample, because the process windows are getting very small. So gains in metrology need are always offset by productivity, which is also where we believe we have growth going forward.
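Purely as a back-of-the-envelope illustration of how that sizing works (every number below is a hypothetical assumption, not company guidance), the tool count falls out of steps, sampling and measurement time roughly like this:

    # Hypothetical capacity math; all inputs are illustrative assumptions.
    wafer_starts_per_month = 5000
    critical_steps = 6          # e.g. several transistor-formation steps vs 1-2 for planar
    sampling_rate = 0.3         # fraction of wafers measured at each step
    minutes_per_wafer = 4.0     # measurement time per wafer; falls each generation
    tool_minutes_per_month = 30 * 24 * 60 * 0.85   # one tool at 85% utilization

    demand = wafer_starts_per_month * critical_steps * sampling_rate * minutes_per_wafer
    print(f"~{demand / tool_minutes_per_month:.1f} tools needed")
    # More steps push the count up; faster measurement and lighter sampling pull it down.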

Unidentified Analyst

Can you just give us an idea of how long it takes to develop a model? I know it is probably a wide range, but what is the range, and is that done offline, I'm assuming, and do you provide that as a service? And the second thing is, what's the time for production data acquisition and doing the calculation – is that instantaneous – and what is your differentiation in that relative to your competition?

Timothy J. Stultz

Okay, so the first question – you're trying to test me with too many questions in sequence. The first question is the time it takes to actually develop a recipe, and that is highly variable. It depends a lot on the complexity of the structure, and it also depends a lot on how well evolved the customer's process is. If they are in development and they don't really have control of the process, it can be extremely difficult to get the model dialed in, because the customer doesn't really know what's there. If they are in good control of it and they know pretty well what's there, it can happen very quickly.

We've seen cases where this type of development takes a few days, even on a complex structure; we've also seen cases where it takes months. So it's highly variable, and a big component of our software roadmap is to develop further tools and better automation in our software that can shorten that time-to-recipe as much as possible, because that is a distinct competitive advantage for sure.

The second question was, do we provide that as a service? Absolutely. That's what our 100 applications experts do: they make these recipes for the customers. Some customers will do them themselves, particularly if they have a top-secret structure that they don't really want anybody to see anywhere, but yes, we do provide that as a service.

Unidentified Analyst

(Inaudible),

Timothy J. Stultz

Yeah, the analysis is essentially instantaneous when using the approach I mentioned before; a typical analysis time would be 0.4 to 0.5 seconds, extremely fast.

Unidentified Analyst

A couple of questions. First one: do you help your customers pick the parameters you're measuring, in the sense of what the effect is on the device itself? Can you help them decide what the threshold voltage is going to be and how well it is controlled, and which parameters to measure? And the second question is, how is it that you sell high-performance servers to, say, Intel?

Kevin Heidrich

So the first question is really a device integration question: in recipe setup and development, how do we tie those measured parameters to device performance? And the second question is how we come to resell servers, and that's a shorter answer. On the first one, we are growing our presence with our metrology and yield customers in terms of what we can measure and how well we can measure it, but ultimately they have their own correlation to their own test parameters and so forth.

Through our experience, we tend to see the obvious parameters, the ones we showed on the screen, being measured, so gate width or depletion height, et cetera. But we also start to get driven by parameters that we can't necessarily show in our graphics, such as a critical undercut of an edge or a very small footing of a device structure. It turns out that transistor performance is driven by a lot of subtle detail that we can't put up in these kinds of forums. So those conversations are increasing year-on-year, but we're not at the point yet where we are a device simulation services provider.

The second question, in terms of reselling servers: if we look at how we deploy our whole solution, fundamentally our end customer is the fab customer. The fab engineer wants a single-point solution, and when we started this business model five years ago, we looked at how we were competing against the competitors Bill alluded to and others, which are essentially software resellers.

At the end of the day, the customer in the fab wants everything from one person. He wants the solution, he wants the point of contact, and he wants the point of escalation. And so we tend to see that all of our customers are buying our cluster computing solutions in addition to our metrology tools and software tools. We expect that to grow going forward, and part of that is that Bill's group has really done an excellent job of matching the hardware that we define to the software that we put on it. If an IT group takes that over, it can't do that definition as well, because an IT group has to buy servers that serve many applications, and we have very purposeful server specifications.

William McGahan

I'd like to expand on that just a little bit, because that's a really key point. I made the comment in the presentation that we have the fastest engine by far, and a big factor in that is that we know exactly what hardware we deploy on, and that allows the code to be optimized for speed on that hardware.

Unidentified Analyst

We are a little short on time, but I think we have two more questions.

Unidentified Analyst

As a follow-up, how penetrated, I guess, do you think these ancillary products are? The Concourse, you said you just shipped your first one, the servers you put in, the workstations. When you think about your leading customers, obviously you've evolved and grown with them, so you probably came in selling just the equipment, but now you sell a lot of other features. How penetrated is that? Is it a completely different sales cycle for new customers or device makers where you really don't have a big presence right now?

Timothy J. Stultz

So I think the question is really – I would sort of restate it as – what is the adoption rate of the new technology, as well as what are the switching costs?

Unidentified Analyst

I'm thinking more about all of those ancillary services you have. Basically, everybody can sell a piece of capital equipment, but now you're selling a bunch of other hardware and suite products. How penetrated are you? Do you think there is room for more penetration in your top customers, or are they all fully penetrated with those suite products? And then, when you think about a new device maker, or someone where you currently don't have a lot of activity, do you go in selling a suite, or is it still an equipment sale first and then you've got to kind of sell them the suite later?

Kevin Heidrich

We can show you the one-page sales slide, which is actually the NanoCD Suite, and we go in with that whole solution space: applications, metrology systems, hardware and workstations, and software. And we see that all of our customers, even customers that buy just one system – they tend to buy a scaled-down NanoGen, they have one Atlas, one metrology engineer, they have a specialty CMOS image sensor application or a specialty thin-film-head or hard-disk application – even most customers with very constrained budgets will buy the whole suite. They may not buy 10 NanoGens and 20 Atlases, but even for that one Atlas or that one IMPULSE they buy, they will buy the whole package. And I think they've all realized now that it is part of their budgeting process, that it's part of the whole sale.

So we sell it as a package, and we do upgrade all of our customers over time. We come back and sell software upgrades year-over-year with new features. We also go back and sell computing power upgrades year-over-year, and again, tying the hardware and software together drives better metrology performance. So we believe we put attractive incentives in place for them to buy all the pieces, and they do; we also believe they will continue to buy those pieces going forward, and that customer penetration will continue to go up.

Unidentified Analyst

Second question is a little different. Could you talk about your customers' OCD buying patterns and how they tend to split market share? Is it that you get it all or you get nothing? Does it tend to be broken along certain distinct lines even at different customers, or is it not that consistent?

Kevin Heidrich

That's a good question: what does OCD share look like by market segment? If we step back to that racetrack slide that really set the stage, we really have three customers in the fab: a transistor customer, an interconnect customer, and a patterning/etch customer. For several of our key customers we have all three of those islands, for some of our customers we may have two, and for some customers that we're just penetrating we may only have one.

And you could probably go back and find market share attribution by those three islands based on the competitors that Bill alluded to: KLA, Nano and Nova. And we believe that once we get that toehold, we are able to expand to those other two or three islands. Yes, one more.

Unidentified Analyst

Just quickly on the opportunity with EUV, especially with some of the delays that process or that technique is having today. What do you see as the opportunity for OCD, especially on the patterning side? I see the potential for how double and quad patterning benefit you; what's the opportunity with EUV once it gains traction?

Kevin Heidrich

So the question is what is the opportunity in EUV, as well as what is the opportunity (inaudible) as EUV continues to get delayed; there are sort of two questions in there. Certainly, in traditional lithography, double patterning means double the lithography steps and/or double the etch steps, which means twice the opportunities for process control, and quad patterning is another factor on top of that.

And one of the key things we see for process control there is, as Bill showed with the very advanced techniques we have, that we can start to look at very subtle impacts of resist processing and hardmask processing that have huge impacts on final device performance, because if you have a subtle variation of a nanometer or so in that first patterning step, it gets replicated many, many times.

So we see significant leverage in double and quad patterning for litho, although it's actually really more of an etch effect that we tend to measure in those cases. On EUV adoption, with where EUV is in timing, it's entirely possible that EUV will actually come in alongside a double patterning step early on, and that is really driven by minimizing line edge roughness, as Bill alluded to earlier. So either we will have customers that have to do a lot of line edge roughness metrology with EUV, or they will have to do some other wafer processing to minimize that.

And so the number of steps in there will still be significant, but we also believe EUV is really going to be a substitution for one or two litho steps early on. As we showed through the architecture flow, a lot of the process control steps right now are really driven by transistor, etch and process materials more so than by lithography, so transistor architecture still tends to drive us more than lithography.

So I think with that, I’m going to hand it over and change gears a little bit to Lars Markwort, who runs our Inspection Division.

Lars Markwort

Great. Well, thank you, Kevin. Good morning everyone. My name is Lars Markwort and I'm one of the three founders and the ex-CEO of NANDA Technologies, the company that was most recently acquired by Nanometrics, in November last year. I'm looking forward to telling you about the product here, but before I start I would like to take one step back and first tell you a little bit about NANDA Tech.

So we started the company at the end of 2006 to build a better macro inspection tool, because we had seen that what the industry offers, even today, can be done a lot better, and so we wanted to do a modern macro, a better macro tool. And because we could start with a clean slate, there was no prior history that burdened us in any way; we could go completely outside the box and do things quite differently than other people. So, as you're going to see, and I want to preempt you a little bit so you are not confused: we can do defect inspection with this, but we can do a lot more. We can do some things that are traditionally really in the realm of metrology.

Okay. The second thing I want to spend a minute on is, Claire had asked me to tell you why I am excited to be part of Nanometrics, because I am genuinely excited to be part of Nanometrics. And really the reason is that they are the best acquirer, from our perspective, that we could wish for, because our products are completely complementary.

So while we were completely focused on defect inspection, that was the one element that was missing in the Nanometrics portfolio. As a combined company, we now address all four segments of process control in the advanced fabs: film thickness metrology, critical dimension metrology, overlay registration, and now defect inspection. And to my knowledge there are really only two companies that offer this full range, basically the full house of process control solutions.

So with that, I would like to go into the technology here. I would say it's an advanced macro inspection tool. And to stick with the familiar racetrack way that Nanometrics presents where its products fit, let's have a look where the SPARK fits: pretty much everywhere. We have applications in the interconnect space. We have applications in the patterning module, that's litho and etch. We also have applications in the advanced packaging domain, namely in the bonding and grinding processes, specifically for the TSV process flow. And we have applications in the transistor area, predominantly for process uniformity or process variance monitoring.

So let's look at the technology. This is really one platform that enables you to do brightfield and darkfield inspection. For those of you not familiar with brightfield and darkfield inspection, let's have a quick look. Brightfield defect inspection is a process whereby you look at normal incidence to the surface and you look at what is coming back reflected.

Whereas with darkfield inspection, you look under an oblique angle and you look not at the primary beam but at scattered light from the surface. And you see totally different defects. So the images already show you that you have two totally different ways of looking at your wafer, totally different defects that you see, and they are complementary. The important takeaway is that we capture both.

For every wafer that goes through our tool, we do brightfield and darkfield. So we have very complementary, very rich information. And we do this full-wafer, in one shot. There is no scanning involved. So, I mentioned how we are different from others: full-wafer is something you've got to keep in mind.

With this we are able to look at patterned and at bare wafers. We are looking at the front side of the wafers and even at the back side. We can look at surface defects, but also at buried defects, and I will show you some examples. And we don't only look for defects, we also look for process variation. So what do we do that's different from what everybody else is doing out there? Everybody else pretty much uses one way or another of a scanning technology. Scanning means you have a small field of view; only a tiny area of the wafer is typically illuminated.

I'm just showing a darkfield example here, but it's the same for brightfield. Usually it's a microscope technology, a small area illuminated. The vast majority of the wafer is not contributing to the signal at any given moment. And so you have an inherent tradeoff: if you scan your surface quickly, you get high throughput but low sensitivity, or you can go for high sensitivity, but that means low throughput.

We break through that by illuminating the wafer completely. So 100% of the wafer surface is illuminated, all elements of the wafer contribute to the signal, and so we now have the ability to have high sensitivity and at the same time high throughput. And so, when you look at the positioning here, these are the familiar fields of defect inspection that you all know.

On the left-hand side you have the micro defect inspection space, where you have incredible sensitivity, all the way down to the most advanced node inspection, so very high sensitivity but typically quite low throughput. Where is this technology predominantly used? It's used for technology development, and then it's used statistically to control your process every now and then: look at a tiny fraction of the surface, look at a fraction of the wafers in the flow.

And because people wanted an ability to look at more wafers in the flow, some 10 years ago or so (inaudible) came up with macro defect inspection products. Macro defect inspection was targeted at high throughput, look at every wafer preferentially, but it was very much accepted that you would have much lower sensitivity. So you are looking at large defects.

The early VIPER tool was at 50 micron defects. Today's macro defect inspection tools at 120 wafers per hour typically do something like 20 micron particle sizes. And that leaves a giant gap here, and we fill that gap. So we are about a factor of 10 higher in sensitivity at the high throughputs than today's macro tools, and we are about a factor of 10 faster than today's micro tools. That's the gap that we fill. Okay.

So, apart from sensitivity and throughput, there are many other advantages that our design grants us. Let's look at these. We use a no-moving-parts technology. The wafer is loaded and, once it's positioned, it's not scanned. There is no scanning stage, there is no scanning optics. The wafer just sits there and we capture images. It's a snapshot: we capture the brightfield image, then a shutter toggles, another shutter opens, and we switch over to darkfield. There are two images – snap, snap – and you are done with it. This gives us high speed. It also gives us very high stability, because we don't have a precision stage in there, and it saves a lot of cost.

The tool is self-aligning. What we mean by that is we don't need an external pre-aligner; we can orient the wafer as we receive it. First of all, we can inspect the wafer any which way we like: the wafer gets loaded, and because we look at the full wafer there is no scan direction, there is no preferential orientation that the wafer needs to be in. And that's very important. It gives us speed, it again gives us stability, and because we eliminate a part, it saves cost. But it also allows us to integrate into other people's process tools. Why? Because in the process tool you have no control over the orientation of the wafer; it just arrives and you've got to live with it, and we can perfectly cope with this.

A third, very important element is that we have a very large depth of focus. So large, in fact, that we can look at single wafers and even at stacks of bonded wafers without having to refocus. That's very important also for stability purposes: it eliminates an autofocus, which is a key component that is failure-prone or error-prone. So it helps with stability, it helps with cost, and again it helps with integration, because it makes us very insensitive to vibration.

So we can go directly into a process tool, and typically if you integrate into a process tool you are being integrated, I'm sorry to say, as an afterthought: you are basically given some space in the tool, and it's not usually the vibration-optimal place. But we are very insensitive to that, and so integration for us is definitely a possibility.

And then finally – or actually not finally, the penultimate point here – is very robust lighting. We have a very stout, very compact system with a minimum number of optics in there, which again helps stability and cost and makes it integratable, because it's in a form factor that can fit. And the final point is this: we do full-wafer imaging, or image capture, as opposed to stitching many small images together. That makes it very stable again, but it also makes it very simple to create recipes for, because we don't have stitching artifacts to take care of. So the image processing afterwards is hugely simplified.

So that gives us the basic parameter set for positioning the tool, but we have also developed multiple applications that we can now play in. The primary one is the advanced macro space, where we look for local defects but also for global defects. Remember, we look at the full wafer, so we can look at wafers differently than other people, and I will show some examples. We see some defects that are not easily seen, if at all, by other small-field review and inspection tools.

We have developed a backside inspection application. This is essentially a black box, recipe-free: you just place a wafer on there. Wafer backsides, if you've noticed, change throughout the process. From the moment you start with a virgin wafer, the moment you start processing that wafer, the backside also changes quite dramatically.

So we've developed a black-box approach where, regardless of what the wafer backside looks like, we can identify particles and we can classify signatures on the wafer backside. And, as I mentioned already, we can do things that are typically in the realm of metrology. We do what we call 3D imaging metrology. This is about speed: we create a lot of data points, about a factor of 1,000 to 10,000 more data points per unit of time than a 3D SEM.

And where we want to use this is, again, to characterize full wafers, basically every wafer as it goes through the process tool. So again, this fits very nicely with the Nanometrics story, which is all about lots of data, high-quality data, and then turning that data into knowledge. We fit perfectly with this. It's very complementary to the Nanometrics OCD technology, and I will show you how.

And then finally, we are developing a lot of applications for the advanced packaging space, where, as I mentioned already, you have not only single wafers but bonded wafer stacks, a totally different process flow, new materials, new processes, completely different defect types. And because we look at the full wafer, we can extract defects there; it turns out our technology is very applicable to that space.

So I've got these color-coded here, and I am going to stick with the colors as I spend a couple more minutes on the individual technologies. Let's look at the advanced macro space first. As I said already, we have very high throughput and at the same time high sensitivity, so we can look at every processed wafer at high sensitivity. Why is this important? Because we can catch fast, high-frequency events.

You can imagine, if you do statistical sampling and you have an infrequent event that occurs every now and then on your process tool, you may just completely miss it, because you are not looking at the wafer or the few wafers it just happened on. We will not miss it; we capture that. And we combine it with automated root cause analysis, because we are generating so much data that people need some form of automated data analysis.

So, I mentioned it already: we are looking at local defects, but because we do full-wafer imaging, we are also looking at global defects, and we have a key advantage in looking at global defects.

On the left-hand side you see one example. It's a nice example where you have tungsten residue after oxide/metal CMP, and it's actually the dark areas that are the good areas; all the rest has residue on the wafer. You can imagine that if you go with a small field-of-view microscope tool into any one of these areas, you are fully inside such an area, and it's practically impossible to say whether this is a defective area or not, whereas if you look full-wafer, it jumps out at you. It's very easy to see these global defects.

So these are, I would almost say, SPARK-unique defects, and there are a bunch of other examples, such as pattern defects from litho. You don't know where they are going to happen, so statistical sampling will not yield success here, but looking at the full wafer you are sure not to miss pattern defects and other lithography-based defects.

Before we go a step further: we have now established that we are looking at the full wafer, and we are potentially looking at every wafer that goes through the process, so we are generating a huge amount of data. Customers immediately said, well, who is going to look at all that data? Who is going to review all this? So it became very clear that we needed to give the customers tools to automate the data analysis, and this is what we did here.

One slogan that we have here is: we see defects that other tools don't see. And if we now take the next step, we combine it with automated spatial signature analysis to alleviate the burden of analysis for the process guys. Automated spatial signature analysis directly on the tool allows us to identify clustering of particles. It turns out that many defects on the wafer are not actually randomly distributed; they are clustered in some form or shape, and the clustering tells us something about the failure mode of the process tool.

Take for example the CVD showerhead example here. It's a beautiful example: if you have a clogged CVD showerhead, it starts to spray particles onto your wafer, but they are not randomly distributed, there is a clear signature there. If I recognize that signature, I can add a lot of intelligence to the data that I provide to you. I can now tell you: look at your process tool; the tool has three chambers; chamber one of three has a clogged showerhead; chamber one now needs to close its load lock, no more wafers through chamber one, but chambers two and three can continue processing.

And this is very valuable information, because before – sorry, I cannot stop doing this – the whole tool would be shut down. Now you just shut down one chamber and the other two can continue processing. More than that, I'm already telling the process engineer or the service engineer what spare part he needs to bring, a new showerhead, and that it's only chamber one he needs to repair. So it's a productivity subject, not only a yield enhancement possibility.
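A rough sketch of the kind of spatial signature detection being described (using a generic clustering routine, not NANDA's actual algorithm; the coordinates are synthetic):

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hypothetical defect coordinates (mm) reported on one wafer: a tight cluster
    # from a tool signature plus some scattered random fallout.
    rng = np.random.default_rng(0)
    signature = rng.normal(loc=(60.0, 40.0), scale=2.0, size=(200, 2))
    random_fallout = rng.uniform(low=-150.0, high=150.0, size=(40, 2))
    defects_xy = np.vstack([signature, random_fallout])

    labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(defects_xy)
    in_signature = labels >= 0
    print(f"{in_signature.sum()} defects in signatures, {(~in_signature).sum()} random")
    # Signature defects can be rolled up into one actionable event (e.g. a clogged
    # showerhead in one chamber) instead of being reported point by point.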

So we extract signatures that are immediately actionable, and we also minimize the need for review, because once I've identified such a signature, there is no need anymore to review: I know what the failure mode is. It's a huge relief for the guys in the fab. And finally – and this is not to be underestimated – the data traffic in the fab is tremendous.

If you have a wafer like this with tens of thousands, sometimes 100,000 defects on it, that's a very large file that needs to be sent to the yield management system. If I've already identified the signature in there, I can say, well, I found that signature, and all those defects I don't report anymore; I just report the remaining ones that you're really interested in. So we minimize the data traffic to the yield management system, and what we say is basically that by looking at the full wafer, we see the forest, we don't only look at the trees.

So this leads us into the next topic. We have now spent time explaining that we do defect inspection, but defect inspection is all about isolating single events – particles, scratches, residues – from the underlying pattern.

What we also do is look at the pattern itself, and this is what we call Critical Dimension Imaging. Critical Dimension Imaging is not to be confused with the OCD product. We've seen in the earlier presentations very nicely the evolution of OCD, starting with very simple two-dimensional structures and today addressing very complex three-dimensional structures, having displaced the CD-SEM as the tool of choice for these complex structures. But that still leaves open a very sizable CD-SEM market for the less complex, two-dimensional structures, and this is where this technology applies.

So it's really the less complex, more two-dimensional structures. And where is that used? That's actually on the litho cluster, for scanner qualification and setup of the litho tool, and people there are always asking for more data, and in fact for more frequent data. This is exactly what we offer with our tool: we do full-wafer imaging in one shot, and I'm going to show you an example where it becomes abundantly clear. We have about a factor of ten more data per wafer, but we're also potentially looking at every wafer in the lot, whereas the CD-SEM only looks at one or two wafers per lot.

So we have a sampling rate that's a minimum of 10X higher than the CD-SEM, and customers use this by saying, well, look, if I can look more frequently and at a higher data point density, I can control my process much better.

I can adjust higher-order correction parameters on my scanner. So this is the prime example of what people do for scanner (inaudible): they take a special reticle that is printed on a wafer with certain variations in focus and exposure. That is called an FEM, or focus exposure matrix, and they use it to derive their process parameters, the process window, basically for a particular structure or a particular process step. And it's not uncommon to have a CD-SEM run several hours to collect only five data points per field, and that's really only good enough to create a field-average map and to analyze the field average.

Whereas in our case, with the SPARK CDI tool, we collect, in one second of integration time, something like 15,000 data points on that wafer. So the data point density is huge – so large that we can of course do the field average, but we can also do within-field uniformity. And if you correlate these two, they correlate beautifully for simple structures. So this is a relatively simple method: it's applying our full-wafer defect inspection tool to a completely different application, which is really a metrology application.
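A minimal sketch of the analysis being described, assuming a flat list of per-site CD values tagged by field index (an illustrative data layout, not the actual SPARK output format):

    import numpy as np

    # Hypothetical dense CD map: ~15,000 sites spread over 100 exposure fields.
    rng = np.random.default_rng(1)
    field_index = rng.integers(0, 100, size=15000)
    cd_nm = 45.0 + 0.02 * field_index + rng.normal(0.0, 0.3, size=15000)

    field_mean  = np.array([cd_nm[field_index == f].mean() for f in range(100)])
    field_range = np.array([np.ptp(cd_nm[field_index == f]) for f in range(100)])

    # field_mean is roughly what sparse CD-SEM sampling would give you;
    # field_range is the within-field uniformity only a dense map can provide.
    print(field_mean[:3], field_range[:3])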

But it's very powerful, and I am convinced it's going to take market share away from the CD-SEM at the bottom end, not at the top end where OCD has already taken market share away. There is one more use case for this that's also very interesting, so I want to share it with you, where we use basically a recipe-free approach. You've heard how complex it is to set up the modeling recipes, how much work has to go into that and how detailed that process is.

Well, we could potentially use this tool – and this is being discussed with customers as we speak – for simple go/no-go process control, where all you do is look at the SPARK intensity. I get an intensity distribution across the wafer and I plot it as a histogram: how many pixels at each intensity. Ideally that distribution is a nice sharp peak, if it's a very uniform product; the peak has a position and a width, and if it shifts or it broadens, I know I'm out of process. So it's a very quick way of identifying outliers. And this can be done on product; it doesn't have to be on special monitor wafers. So again, this is something that will potentially eat into the CD-SEM market.
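Here is a minimal sketch of that go/no-go histogram check (my own illustration with synthetic intensities and made-up control limits, not the SPARK recipe itself):

    import numpy as np

    def intensity_fingerprint(pixels):
        """Summarize a full-wafer intensity distribution by peak position and spread."""
        hist, edges = np.histogram(pixels, bins=256)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers[np.argmax(hist)], pixels.std()

    rng = np.random.default_rng(2)
    baseline = intensity_fingerprint(rng.normal(100.0, 3.0, 1_000_000))   # golden wafer
    current  = intensity_fingerprint(rng.normal(104.0, 6.0, 1_000_000))   # suspect wafer

    # Flag the wafer if the peak shifts or the distribution broadens too much.
    peak_ok  = abs(current[0] - baseline[0]) < 2.0
    width_ok = current[1] < 1.5 * baseline[1]
    print("PASS" if (peak_ok and width_ok) else "OUT OF CONTROL")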

So let's go to backside inspection. Backside inspection, why is it important? Well, I will tell you: if you have defects on the wafer backside – picture a big particle on the wafer backside – it will create a deformation of the wafer on the front side. And the scanner is very good at correcting tip, tilt and focus variations on a global scale within the field of view, but it's very poor at correcting highly localized events.

So if I have a very small particle sitting on the backside, it creates a localized front-side deformation, a focus variation that the scanner cannot correct; it will basically be out of focus there and it will create a front-side focus hotspot.

Now, if you wanted to prevent this, you would really have to look at every wafer on the backside, because it doesn't make sense to look statistically. If one wafer brings in such a particle, then okay, you might think it only happens on that one wafer; but what about the situation where that particle transfers to the chuck, as it sometimes does?

Now every consecutive wafer that goes through there will have a focus hotspot at that location. So really, if you want to prevent that, you need to look at every wafer that goes into your tool, and you need to look at the full wafer, because just looking at small areas of the wafer statistically won't catch it.

And we have the best tool for this, because it's got the highest throughput and it's got the right sensitivity range, whereas the industry-standard (inaudible) darkfield inspection tool is too slow, and the macro inspection tools are too insensitive and too slow for this application. So you might say, well, the industry has lived perfectly well without looking at the wafer backside, why is it suddenly becoming an issue?

I'll tell you why. These litho focus hotspots: as the industry migrates to more advanced lithography, from DUV 193 dry to immersion and now to EUV, the number of wafers affected by litho focus hotspots is increasing. And it's easy to understand why: when you go from 193 to 13.5 nanometers, a roughly 14X reduction in wavelength, the sensitivity to focus variation goes up and the focus window decreases dramatically, so you're more prone to see hotspots. At the same time, it takes a lot longer for the scanner to be cleaned and to stabilize afterwards. If you have a dry tool, it takes half an hour to two hours to clean the tool, let it stabilize, and be back in process. On an immersion tool, it already takes anywhere between four and eight hours to do that today.
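For reference, the arithmetic behind that wavelength comparison, together with the usual depth-of-focus scaling (a textbook relation, not a number specific to any one scanner), is:

    \[
    \frac{\lambda_{\mathrm{DUV}}}{\lambda_{\mathrm{EUV}}} = \frac{193\,\mathrm{nm}}{13.5\,\mathrm{nm}} \approx 14.3,
    \qquad
    \mathrm{DOF} \approx k_2\,\frac{\lambda}{\mathrm{NA}^2},
    \]

so, all else being equal, the much shorter wavelength shrinks the usable focus window and makes a localized backside-induced focus error proportionally harder to tolerate.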

On an EUV tool you're easily talking 24 to 48 hours of downtime, because this is a vacuum tool: you need to release the vacuum, you need to clean the chuck, you need to pump the vacuum back down, and you need to wait for the whole thing to stabilize before you can resume operation. So, 24 to 48 hours, and at the same time it is such an expensive tool, six times more expensive than the dry tool. So you have a perfect storm here, where we feel there is a real opportunity now, because not only litho cluster yield but also uptime is limited by wafer backside particles. People will deploy 100% inspection on the wafer backside, and we have the perfect tool for it.

So, the final market that I'm going to talk about is the advanced packaging domain, where you have new processes, new materials, and a TSV process flow. The SPARK applies specifically to the wafer bonding, temporary bonding, wafer thinning and TSV reveal steps, and also to the debonding step. And as I've said already, this is where we apply. What we have designed here is a tool for this specific application. It has two modules: on the one end, the full-wafer inspection capability that sees every wafer, whether these are single wafers, single wafers with thick glue on them, bonded wafer stacks, or bonded wafers after thinning. Full-wafer inspection, look at every wafer.

If you find problem areas, you have a second module on there that you can use. We call this an MRM, a Metrology and Review Module. So you can review the defect, but you can also perform metrology there: you can do film thickness, and this is thick-film thickness metrology using IR interferometry, to measure the silicon thickness, for example after it's been ground down.

And these are some examples where we have unique capabilities that are not really commonly available and won't easily work on other (inaudible) inspection tools. So we see these very shallow defects, for example: they are very large, centimeters across, but only one or two microns deep. If you look with a microscope there, you're fully inside that defect with your field of view and you will not see it. It's only because we look at the full wafer that we see it.

We can also play with the wavelength. If we look in the blue, there is very little penetration, so you look at the surface only, whereas if we switch to the red, we look into the silicon. We see the copper nails that are still buried underneath a very thin layer of silicon after grinding. Why is this important? Because that is a very important thickness to measure, the residual silicon thickness over the copper nail after grinding, because you can compensate for it in the ensuing process steps.

And we can take it another step further by switching over to the near-infrared, where we look right through the wafer into the bond layer, and we see bond-layer defects such as voids and delamination. So it's a very unusual, very powerful tool that is optimally positioned for this particular process.

And we're part of a very large infrastructure here at IMEC; IMEC has the most advanced pilot line. We are basically involved in the development of new process steps, new process tooling and new materials, and that ties us directly into the equipment companies playing in that space, and it also ties us directly into the end customer space.

So my last slide here is our market summary. In the advanced macro market, that is an established market today; we feel that because we have high throughput and high sensitivity, we will penetrate that market. In the backside inspection market, we feel we have the optimal throughput and sensitivity for this application, and so as the industry migrates to the 2X nodes and rolls out more advanced lithography, we will benefit from that. In the CD metrology space, we have a capability very similar to the CD-SEM, but much, much faster.

So we will take market share away there, in particular because our tool can be integrated, so it could potentially sit directly on the litho cluster, where the CD-SEM will never go. It's all about looking at every wafer, looking at 100% of the surface, and doing it directly in the process. And then finally, with advanced packaging, we have the right mix of capabilities, and this is a nascent market that is now starting to pick up, and we will grow with it as it grows.

Unidentified Analyst

Okay.

Unidentified Company Representative

I think I will hand it back to Kevin here and then we will have a Q&A after this.

Unidentified Analyst

That’s okay.

Unidentified Company Representative

All right, thank you all, and you get to hear me one more time here. So we're going to switch gears a little bit and talk about the advanced wafer-level packaging opportunities. The last application of the SPARK is the lead-in for what we see as a significant market for us going forward. There is a lot of background to this, so I thought I'd take a minute to explain how we see the advanced wafer-level packaging market falling out. First of all, we showed the familiar racetrack in OCD, and in the fab-wide case we have this transistor, interconnect, patterning space. In the wafer-scale packaging space, we have a similar but unique fab and fab type of flow, and on top of that (inaudible) go the unique process steps in this case. So we now have full-scale wafer grinding, bonding and other processing steps that are new and unique to this wafer-scale packaging approach.

If you look at how that is deployed by end customer type, we see various applications: through-silicon via flows, which we'll talk about a little bit; microbump and copper pillar technologies; and emerging metallization technologies. And I'll show a few examples of both what Tim showed earlier, which is the (Inaudible) for complex die-on-die packaging, as well as 3D wafer-scale packaging, where we're looking at wafer-to-wafer bonding applications.

So if you look at this 3D process flow that we put up earlier, Lars highlighted where the SPARK fits into this flow, and we believe that the UniFire, which is the product line we acquired two years ago from Zygo, really complements the other applications here; we now have a complete end-to-end solution in wafer-scale packaging for process control, metrology and inspection. So I wanted to talk about the inspection and metrology opportunities. There are really a few key areas where, depending on the customer, different applications apply. There is through-silicon via and microbump metrology; additionally, there is the extra metallization that's happening in advanced interconnect packages for high-performance logic parts; and there are traditional metal distribution layers, or redistribution layers, which are needed for either the copper pillar or the TSV flows.

In the microbump case, we see direct die-to-die attach, and beyond that we see wafer-level stacking, which is for high I/O counts, and I'll show you an example of that in a minute. So there are really applications in DRAM for TSVs, copper pillar and redistribution layers for logic, and then microbumps across all applications, and all of that needs metrology.

So we looked back at where that fits classically versus where we see the market going in terms of geometry. The back end is now moving into full-wafer, front-end-style processing, and it's moving into geometries that are out of reach of the classical back-end tools, and the UniFire really has the technology legs to carry it through all these technologies. So the key drivers in this case: the copper pillar pitch is shrinking down to 40 microns, a geometry out of the range of the classical tools offered by our competitors.

Bump heights are scaling down to sub-25 microns, and again there are other factors we can talk about in a minute that become key drivers for advanced packaging technology. And then lastly, for through-silicon vias, or TSVs, to be deployed in mass, we see our customers driving to initially 10 micron, now 5, and soon 3 micron TSVs in production. So if we talk about the end results here, this is an example of a through-silicon via post-etch, but in every process module that we showed in that flowchart there are unique applications for all the new process tools; prior to this one there are lithography steps for TSV formation, there are etch and clean steps, and there are inspection steps.

So here we see an example of an output from the UniFire, the 3D metrology tool, where we get the profile of the TSV, the depth, and other key measurements that come out of this: whether the TSVs are chipped, because then they can't be filled properly or bonded properly, bottom-of-TSV roughness, and other things our customers are driving forward, because those drive quality and reliability failures. And at this point in the process you have a very high value-added wafer, so the requirements for the inline steps tend to go up in terms of process control.

If we look at how our customers will deploy a TSV tool, they will do some statistical sampling, but fundamentally, for these very high value-added wafers, they do a lot of die-by-die and full-field metrology to get the best possible metrology across the field. This is a complex graphic, but it highlights the traditional bump metrology that many people think of in terms of the packaging flow; we also see that there are microbumps doing die-to-package connections, as well as copper pillar technology doing die-to-die and die-to-TSV packaging.

You can see the relative scale going down as we go through this, and in between these two we see these 3 micron TSVs. There are a few key parameters in here that become increasingly important for metrology going forward. One of them, which is a little easier to see here but obviously becomes very important, is what we call coplanarity. What really matters at the end of the day, when you're bonding wafer-to-wafer or wafers to packages, is that all these bumps – and in the case of a high-performance logic part there might be 50,000 to 80,000 interconnects going onto that package for a given die, or millions of them on a wafer – all end up at exactly the same height, so that when you bond those two die together, or those two wafers together, you get a perfect connection between all those interconnects.

The last part is that, in that bonding process, you can have significant reliability failures with all these advanced packaging technologies, from dielectric cracking or other misalignments. So the height, profile, and metrology of each of these features becomes critically important.

Beyond the height, we also see inline that the lithography and patterning requirements in these advanced packaging fabs are getting very compressed; it's starting to look more like a front-end-of-line fab, in that we have CD metrology, lithography metrology, and other things that are in fact front-end-of-line steps now in the wafer-level packaging space. If you look at how this is packaged in the final microbump or copper pillar step, our customers have now added all the value to the wafer, and the next step is off to the dicing and packaging line. Many of these customers do 100% die inspection; they want to know every bump. So we end up with very intensive datasets here, and then we can go back and do the statistics for every good die, every good wafer, and so forth.

We look at how we distill all of this individual bump data, and Tim showed a very high-resolution graphic earlier, into a form the IDM can use when it hands this wafer off to an OSAT: they are going to hand over not only the finished wafer, they are going to hand over a dataset along with the wafer, with the die data driving the packaging decisions.

So, in summary, if we look at how the advanced packaging flow exists across these fab models, we see that with the UniFire and SPARK we have the tool sets that can cover all the process steps that are unique to advanced wafer-scale packaging. We believe that for through-silicon via, for copper pillar, for microbump, and for wafer stacking, if that technology gets adopted, we are going to be well positioned with the tools we have in the field. So with that, Lars and I will take some questions on SPARK and wafer-scale packaging.

Unidentified Analyst

The competitive advantages of SPARK that you presented seem pretty substantial. So, two questions: is there any pushback from customers or potential customers? And secondly, what do you expect the competitive response to be, because if you're that much more efficient, competitors have to respond?

Unidentified Company Representative

Okay. So the question was about customer pushback on our technology, and the second one was how the competition reacts. I'm not sure about customer pushback; there is actually pull rather than pushback, at least on the three applications that I highlighted here.

We started out in the advanced macro inspection space, and in that space people were saying, well, look, even if your tool is a lot better, we are about full there and it's not the first place I will roll out your tool. But you have some capabilities that are really exciting to me; can you help me look at the wafer backside? We didn't actually conceive of this tool at the onset as a tool for wafer backside inspection, but it has developed that way. So we are now installed in the most advanced EUV pilot line in the EUV infrastructure at IMEC, and that's because people have looked at our technology and decided that it is the best tool to be in there.

The same thing happened with advanced packaging: IMEC and the partners of IMEC picked our tool over all the other tools that were out there, as early as January 2010, to help them run their TSV pilot line, and it has just been upgraded because the customer is very happy with it. We've had first successes there, and we are rolling it out to some of the IDM partners at IMEC that are now building out their own pilot lines. So on the pushback on the macro side, I wouldn't call it direct pushback, more some reservations; but on the other side, customers have basically taken the information they got and said, well, this is so unusual, I want to roll this out elsewhere, I have immediate needs elsewhere, can you help me there? But I believe the advanced macro space is still open to us; we will circle back to it. And then on the competition, I don't know what the competition is going to do, honestly. Up to now we haven't been a threat to them; going forward, I don't know, I can't predict.

Unidentified Analyst

Great. I was given to understand that you had a choice in terms of which company you wanted to be associated with?

Unidentified Company Representative

Yes.

Unidentified Analyst

So what made you choose Nanometrics?

Unidentified Company Representative

That's an emotive question. There are many things to say to this. First of all, what you may or may not know is that I used to be with Nanometrics ten years ago; I was director of engineering then, so a lot of the products you see at Nanometrics today come from that time. They look different, they are far more advanced than what we had at the time, but they nevertheless come from that time. So I had a natural affinity to Nanometrics. But really, don't underestimate what I mentioned earlier: I'm excited that we're now building a company that is one of only one or two companies in this space offering the full gamut of process control solutions. Not Applied Materials, not Nova, not Rudolph is really offering that broad a range, addressing all the different process control requirements that are part of this today.

Unidentified Analyst

This is a question for both of you. Are you engaged with any image sensor related customers, because that is kind of the closest thing to TSV there is today?

Unidentified Company Representative

So the question, for those who didn't hear it, was: what is Nanometrics' engagement with image sensor technology? Nanometrics is in the image sensor process control and inspection business for several applications. The one you cited in your question was where we are with TSV, and we have some TSV metrology in that space, but the TSV metrology demands in image sensors are really the low-tech end of TSV: laser vias, et cetera.

Where we see wafer-scale packaging picking up for that is in backside-illuminated sensors, where you have this fusion bonding, and we have a lot of engagements with customers on that. Additionally, we have engagements on our OCD technology for sensor and microlens structures. So we already have image sensor customers on the OCD side, and that is now moving into the wafer-scale packaging, fusion bonding side. We see backside illumination as a bigger driver for us in that space than the TSV.

Unidentified Analyst

I was just wondering, are you currently in production anywhere for the EUV or TSV or backside inspection?

Unidentified Analyst

I guess the clarifying question on the SPARK, I think so…

Unidentified Company Representative

So for EUV, nobody is really in production yet. For TSVs, yes, we are in a production environment; it's not high volume yet, but it is with the most advanced customer in that space, pushing hard to ramp TSV into actually two products.

Unidentified Analyst

And they are shipping production volumes today?

Unidentified Company Representative

And it's ramping right now.

Unidentified Analyst

And I would assume that’s in logic…

Unidentified Company Representative

If I say much more than that, you will know who it is, so I'm not confirming or denying. All right, anybody else?

Unidentified Analyst

Can you hear? Thank you.

Unidentified Company Representative

Yeah.

Unidentified Analyst

How many tools have you produced, and how many are out there, either on customer production lines or on demo at the industry consortium?

Unidentified Company Representative

So if you look at both the SPARK and the UniFire in the adoption phase: we have UniFire customers really across the board now for through-silicon via, microbump, and wafer-scale packaging, as well as some front-end-of-line applications. We acquired that product line in 2009; we had one tool-of-record customer that had deployed it into high-volume production in multiple fabs, and since then we've been winning roughly a customer every quarter with the UniFire (inaudible) wafer-scale packaging. It's really a question of where the adoption is in terms of volumes. We see the microbump and copper pillar technology as the earlier adopter, and then through-silicon via and wafer stacking to follow that. And then for SPARK, we have three end customers; SPARK is a little behind on that adoption curve, but we expect it to be complementary alongside the UniFire systems.

Unidentified Analyst

Okay. And then for SPARK, it seems like the benefits are really significant, so on the economic benefits for the customers, things like yield, reliability, et cetera: what is the value that you present that they don't have today, and what is the payback, if you can estimate it, in terms of fewer defects, fewer discarded wafers? And how do you measure the value of your tools, and what is the price, and therefore kind of the economic break-even? Thanks.

Unidentified Company Representative

So I heard that with some difficulty, but I will try to rephrase it here. What I heard was, "How do you position the tool as a value proposition to the customer?" That's an interesting question, because even though I am a firm believer that the advanced macro defect inspection space is a space where we will eventually benefit, it's really in the other spaces where customers are coming to us and saying, look, I have seen something at IMEC, in the infrastructure there.

We've used your tool, I have seen it now for two years, and you can see some defects that I cannot see with other tools. All right, so that's one. The second one is in the EUV infrastructure, for example for the backside inspection. Like I said earlier, we never really conceived of that as a real space for us, because who cared about the wafer backside, if it hadn't been for the IMEC EUV consortium partners coming to us and saying, look, we want this technology, can you make it even faster, and you have just the right sensitivity range. You are the only one that basically gives us this one-shot, very quick, full-wafer inspection, and that's very appealing. And by the way, we're in that infrastructure twice; I didn't show it here, but the same problem persists on the EUV scanner for the mask, for the EUV reticle.

Right, so this is no longer a transmission reticle; it's full-contact clamping on the backside, it's a reflective reticle, and if you have particles on the reticle backside, they will cause tension in the substrate, I wouldn't say deformation, but tension on the substrate that causes a phase shift, and then you have a seven-meter beam path, so by the time you are at the sample you get not only focus hotspots but overlay hotspots.

And these are very difficult to find because they are not necessarily at the locations of the overlay registration marks that you use to measure overlay; they can be anywhere in your array. Suddenly you are a couple of nanometers away from the line where you want to be, and so it's very dramatic. So this business is huge, and in fact it's driving our first integration into a reticle cleaner now. It is our first integration of the SPARK technology, just to show you that integrating it is entirely possible and the industry is recognizing it.

Unidentified Analyst

I have a question on your full-wafer versus scanning approach. What was the reason that KLA or Rudolph or others did not use full wafer? It seems like that is your primary advantage, so why didn't they use it, and did you have to innovate something to get there? And the second question is, you cited your sensitivity to (inaudible); is that an upper limit on what you can detect?

Timothy J. Stultz

Okay. So two questions here. First of all, why doesn't the competition do what we do, and why haven't they traditionally or historically done this? And the second one was, what is the sensitivity limit of the technology on this tool?

So to the first question: don't underestimate the power of tradition. Something that you've always done a certain way will be continued, will be propagated. Think back: microscope-based systems were used initially, purely manually, but back then the substrates were an inch.

As the substrates grew, the tools were automated more and more. Now they are basically at the top of the S-curve, so you can expect very incremental, very tiny improvements, but really there are no surprises there anymore; it is the end of the S-curve. At the same time, everybody builds platforms based on what the prior platform was, so it is really an evolutionary approach. The revolutionary approach you can pretty much only take in a startup company, where you have no history and are not burdened by anything.

So to your second question on the throughput, or sorry, the sensitivity: currently we use visible light to inspect, visible and (inaudible). If we really wanted to push this, we would probably have to go to shorter wavelengths as well, but that means you're changing over to much more expensive optics. You're also looking at possibly reflective optics rather than refractive optics, so it's a different system. We are held back by engineering, let's say; we're not held back by (inaudible).

So we could push this considerably, but I don't think at this stage we need to, because we have ample markets out there for the tool as it is, and before we go into a very advanced UV space, we will probably live with what we have for quite a while, and live well with it.

We have time for two more questions before lunch. So, a question on the SPARK: you're currently offering something that obviously hasn't been out there, and as you said in your prior answer, tradition means a lot in the semi industry. So have you been able to demonstrate to your customers that you're going to enable a higher yield, because obviously they will chase that every day of the week? Have you not been able to prove that yet, or is it the case that, on certain technology nodes, they are getting what they need with their existing tools, and they need to migrate down the nodes before your tool set enables that better performance, which is effectively yield in the end?

Unidentified Company Representative

Okay, so the question really is how much we have been able to demonstrate on the customer side. The answer is that as a startup company we had limited resources, financially and people-wise, to engage with many customers. The big advantage that Nanometrics brings to the table is that now we're no longer limited; we're basically going out full steam, and this is a pivotal year for us where we will engage with customers on all four of these applications right now.

The first customer that is really reporting back to us, where we see that we have a direct impact on their technology development and also, eventually, on their yield, is in the TSV space, where we have had a tool installed since last year, and that's basically the first one where we can actually say yes to the question that you asked.

Unidentified Company Representative

So on the advanced macro side, when will you get either equipment in fab or data back that really starts showing the differentiation of your equipment, or doesn't?

Unidentified Analyst

A 6-month or 12-month horizon there?

Unidentified Company Representative

Six years.

Unidentified Analyst

The EUV tool, the IMEC EUV tool that you showed, what light source is it using, and does that affect the way your customers use and qualify your tool in their development, with a different light source?

Unidentified Company Representative

So the question was what kind of light source the IMEC tool uses there, and whether it in any way impacts the way our tool is used to characterize the wafers or wafer backside there. They have the most advanced tool, the NXE:3100; it's no longer the advanced development tool, it's really the first real tool, let's say. I am not sure what light source they use, but honestly I don't think it has an impact on us, because what we are looking at is the wafer backside and the reticle backside, and those are fundamentally focus-related and overlay-related features that are independent of the light source.

Unidentified Analyst

Okay.

Timothy J. Stultz

All right, so with that we have an opportunity to get some lunch in the back, and then at 12:30 we're going to start with Ron walking us through some of the Nanometrics financials.

Ronald Kisling

I'm the CFO, and I'm actually coming up on my one-year anniversary, so it's been an exciting year with really exciting technology at Nano. We're going to change gears here a little bit: I'll talk a little about our financial performance, recap some of the guidance that we provided last quarter, and talk a little bit about the model going forward.

If you look at our summary performance over the last three years, revenues were in the $230 million range, and on a year-over-year basis, as Kevin mentioned earlier, that was about 22% growth, which compares to WFE growth last year of about 10%, and we were up about 200% coming back from 2009.

On gross margin, we came in at 53.5% for the year, and when you look at our non-GAAP operating income, which backs out amortization and the legal settlements, it was 23% of revenues; on a GAAP basis, our pretax income grew year-over-year to $45 million.

On a net income basis, that actually came down a little bit, largely because our tax rates were up in 2011; we finished utilizing all of our NOLs in 2010. I will talk a little bit more about the tax rate later on.

We have continued to strengthen our balance sheet significantly over the past three years: we grew cash to just under $100 million at the end of 2011, we've doubled our working capital, and we've increased our tangible net book value to about $184 million, or $7.92 a share. What that has enabled us to do is invest in new and enhanced products, and specifically we had a major launch at the end of 2011 of our flagship Atlas product. It has allowed us to acquire companies that expand our core markets, and to do much of those acquisitions with cash. It has also allowed us to repurchase shares, specifically to offset the impact of employee stock plans, which is really the focus of what we're trying to do there.

When you look specifically at our cash flow performance, our cash flow from operations, on a trailing 12-month basis, was between 16% and 18% of revenue in the first three quarters of 2011; for the year as a whole it bumped up at the end of the year to about 23%, largely due to some very early cash collections, so I would expect that to come back down to what we've seen historically. In addition, you can see that it tracks free cash flow very closely, as capital expenditures are not a significant piece of cash usage for Nanometrics.

When you look at the revenue segmentation by product, what you can see is that the big driver is our automated products. Over the 2009 to 2011 period, the significant growth we've seen from our automated products and the over-200% growth we've seen from the integrated products have been the real growth drivers.

We've also seen growth in our materials characterization business as well as our service business. Specifically, when you look year-over-year, you can see that the automated tools we talked about earlier and our integrated products are driving the growth. The slowdown in materials characterization really has to do with the macro equipment demand in the solar, high-brightness LED, and silicon substrate markets, which slowed down significantly in 2011.

If you look at our 10% customers, you can see that they comprised 58% of our 2011 revenues and 44% of worldwide CapEx in 2011, and as we look at 2012, they are expected to comprise 55% of worldwide CapEx.

So we clearly have a strong presence with the companies that are leading the expenditures in the semiconductor space. When you turn to geography, what you can see, based on end use or ship-to destination, is that it really mirrors where our significant customers are, with Korea being a significant piece of our business, then the U.S., and then Japan, where we have a number of very important customers that collectively make it an important region.

So it really mirrors what you see in the semiconductor space, which is what you'd expect from someone that is successfully deployed at semiconductor companies around the world. Now, turning to the quarterly numbers, and specifically Q4: as we indicated in both our Q3 and Q4 calls, our revenues of $45.3 million reflected a couple of push-outs and a pause in spending that we saw toward the end of the year.

Our gross margins of 46.6%, which were below the range we had talked about at the time, reflected product mix, particularly the launch of the Atlas II, which saw much more rapid adoption by some of our customers than we had anticipated. Those early shipments had an adverse impact on our gross margin, along with revenue levels that were below what we saw in the first three quarters of 2011. Operating expenses on a GAAP basis were $21.6 million, and included in that were the settlement of the KLA litigation, which was $2.5 million, as well as the purchase costs, the legal and professional fees associated with the acquisition of NANDA, and some higher stock compensation and amortization of intangibles.

If you back those out, the spend rate on an ongoing basis, before amortization, really ended the year at about $17.3 million. Typically we see a seasonally low spend in Q4: there are a lot of holidays, and in addition payroll taxes tend to max out as you get into Q4, so we typically see that drop and then a bump back up in Q1, largely due to seasonality.

That translated into an operating margin of just under break-even on a GAAP basis and about 8% on a non-GAAP basis. Earnings per share were $0.02 on a GAAP basis and $0.10 on a non-GAAP basis.

To remind you of the guidance, this is the guidance we provided back on February 8th, when we did our earnings call. At that point we saw revenues up 15% to 21%, at $52 million to $55 million. We expected to see gross margin in the 47% to 50% range. On operating expenses, if you look at the midpoint, that translates into about $21.9 million on a GAAP basis, which includes amortization, both Nano's historical amortization, which runs about $300,000 a quarter, and the NANDA amortization, which is about $500,000. If you back that out, the implied non-GAAP R&D, sales, and G&A expenses are about $21.1 million, and I will take you through some of the historical breakdown on that. So that implies an operating margin improvement, getting to 5% to 10% of revenue in Q1. Earnings per share, again on a GAAP basis, of $0.06 to $0.13, and on a non-GAAP basis, which only excludes amortization of acquired intangibles, of $0.09 to $0.16 in Q1.

When you look at the revenue and gross margins by quarter, you can see that historically we had very good gross margins, above 50% going back to the beginning of 2010, until we got to Q4. Q4 was impacted by the lower revenue as well as the impact of the new product releases. The first half of 2011 on a revenue basis was the strongest on record for Nano; the industry slowdown certainly impacted the second half and Q4. We expect to return to growth based on what we're seeing in the market and what we guided for Q1 2012. And as I said earlier, the product gross margin was also primarily impacted by mix and the new product ramp at the beginning of the year.

Our service gross margins, as you can see in this chart, continue to remain very strong, based on the leverage (inaudible) in the business.

Turning now to operating expenses: you can see the trend in quarterly operating expenses going back to Q1 2009, plus the guidance for Q1 2012. In the Q1 2012 expense, NANDA contributes about $2.5 million to that spend, so the increase quarter-to-quarter is largely driven by NANDA; the rest is due to ongoing investments in R&D. In addition, our applications group, which builds the recipes that Bill was talking about earlier, is reflected in selling expenses and customer support expense, and we're continuing to invest in that group, so that will show up in selling.

We expect G&A to be relatively flat, with just nominal increases through 2012. The Q1 outlook that I talked about is about a $3 million increase driven by those items. As we look throughout 2012, we are not forecasting significant changes from what we are seeing in Q1.

On operating profitability, we're expecting to see improvements going into 2012, largely driven by increased revenues as well as improved gross margin, and I'll talk a little bit about some of the initiatives we have in place and some of the outlook that drives those gross margin improvements.

As I talked about earlier, we continue to strengthen our balance sheet, growing cash consistently up through Q3 2011 and really maintaining flat cash in Q4 despite the purchase of NANDA for $22 million in cash. Our DSOs have been very strong, and over this period of time we've reduced our debt as well as funded most of the acquisitions with cash. Our DSO remains very healthy; typically it runs between 60 and 70 days, and we came in a little below 60 days at the end of Q4, again due to some of those early receipts, but we expect to continue in that 60- to 70-day range.

Inventory turns were down in Q4, and again that was driven largely by the lower revenues, but we have also put in place a number of initiatives we're going to be focusing on in 2012 to improve our inventory turns and increase the number of turns.

Looking at the quarterly revenue segmentation, what you can see is that the automated tools are the most significant piece in terms of driving the level of revenue that we see across the business. The other point is that the most significant product line is really our Atlas products; the Atlas XP and the Atlas II that we launched make up the majority of the automated tools. We've also seen significant growth in the integrated space, and, as you can see, the materials characterization business is the growth area affected by the slowdown in LED, solar, and silicon substrate.

As you look at the end markets, which is where the products go, on an annual basis through 2011, you can see that memory typically makes up about half of the end market that we sell into, with the rest going into logic, foundry, and LED and silicon solar.

As you look at Q1 2012, you can start to see the shift we're seeing in memory spending from DRAM to flash, an increasing contribution from foundry as an end market, and also the slowdown that started in Q4 2011 in the LED, solar, and silicon space.

Now I'll recap some of the M&A deals and provide a little more background on some of the financial aspects of the NANDA acquisition. In 2006, the company acquired Accent Optical Technologies, which brought the overlay and materials characterization businesses to Nano; that was a stock purchase of about 5 million shares.

In 2008, the company acquired Tevet, which provided our integrated technology as well as our solar products, in a cash purchase of $3.5 million. In 2009, we acquired the UniFire business, the semiconductor solutions business from Zygo, which is really our advanced 3D packaging business, in a cash purchase of about $3.5 million. And then this year we acquired NANDA, in the macro inspection space, in a cash purchase of $23 million.

In the chart, you can see that over this period of time we've grown revenues by 140%, which is about a 19% annual growth rate on a cumulative basis. At the same time, the increase in diluted shares was limited to about 56%, which is about 9% annually. Another way of looking at it: in 2006 the revenue per diluted share was a little over $6, and it was just under $10 at the end of 2011.
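
As a rough consistency check on those figures, the sketch below back-solves the implied 2006 revenue and share count from the stated ratios; the $230 million 2011 revenue and the $6-per-share starting point are the approximate numbers quoted in this presentation, not disclosed financials.

```python
# Back-of-the-envelope check of the growth figures quoted above (illustrative, approximate inputs).
rev_2011 = 230.0        # $M, roughly as stated earlier in the talk
rev_growth = 1.40       # +140% cumulative revenue growth, 2006 -> 2011
share_growth = 0.56     # +56% cumulative growth in diluted shares

rev_2006 = rev_2011 / (1 + rev_growth)            # ~ $96M implied 2006 revenue
cagr_rev = (1 + rev_growth) ** (1 / 5) - 1        # ~ 19% per year
cagr_shares = (1 + share_growth) ** (1 / 5) - 1   # ~ 9% per year

shares_2006 = rev_2006 / 6.0                      # ~$6 of revenue per diluted share in 2006
shares_2011 = shares_2006 * (1 + share_growth)
print(f"revenue CAGR ~{cagr_rev:.0%}, diluted share CAGR ~{cagr_shares:.0%}")
print(f"2011 revenue per diluted share ~${rev_2011 / shares_2011:.2f}")   # just under $10
```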

A couple of things on NANDA: we completed the NANDA acquisition just about in the middle of Q4, on November 21; it was a $23 million cash purchase, as I mentioned earlier. They do have, as Lars spoke about earlier, multiple SPARK placements with multiple customers in place. The quarterly operating expenses that we expect NANDA to contribute are about $2 million per quarter, primarily in the R&D function. There is an additional $530,000 per quarter related to intangibles as part of the purchase accounting, and that will be reflected in COGS and in operating expenses.

There are approximately 27 employees, located primarily in Munich, Germany, which is their headquarters. As we indicated in the past, we expect to begin to see meaningful revenues, in terms of meaningful contribution to our topline, really in the second half of 2013, which is also when we expect the operation to become accretive to Nano's operating results. And this is important as we start to look at our near-term and our long-term financial models.

Tax changes, which is one of the more interesting parts: unfortunately, we see the tax rate increasing from 2011 by about 5 points. The first driver is the (inaudible) of the R&D tax credit, and that contributes about a 2-point increase in the rate. It's widely expected that the credit will be approved sometime this year; when it does get approved, we'll do a catch-up adjustment, and our rate going forward should reflect the benefit of the R&D tax credit.

So at some point, when Congress gets around to approving that, we would expect our rate to drop back down by 2 points. Then NANDA has a negative impact of around 3 points on the tax rate. A lot of that is because, as an operating entity, they have losses that we can't fully recognize, which creates a valuation allowance against them, and that bumps up our rate; the biggest piece of the NANDA driver is really the amortization of intangibles, and that's about 3 points. Once they get to the point where they're profitable, we will be able to recognize the value of their NOLs, and we would have a small favorable benefit.

After that point, this 3 points will go away and we will be back down at our 35% rate. When we talk about the model, I'll say a little more about the current environment and how we see our tax rates going forward. On a non-GAAP basis, we back out to what we believe is essentially the statutory rate that we would have in the U.S., based on most of our income coming back to the U.S., by excluding the amortization of NANDA intangibles and assuming the R&D tax credit is reinstated.
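
To pull those drivers together, here is a simple illustrative bridge using the approximate figures quoted above; the 35% baseline and the 2-point and 3-point increments are taken from the remarks, and treating them as additive is an assumption for illustration.

```python
# Illustrative only: compose the approximate tax-rate drivers described in the remarks.
baseline_rate = 0.35      # the long-run rate the speaker says they return to
rd_credit_lapse = 0.02    # added while the R&D tax credit is not yet reinstated
nanda_allowance = 0.03    # added while NANDA losses carry a valuation allowance

current_rate = baseline_rate + rd_credit_lapse + nanda_allowance
print(f"composed GAAP rate ~{current_rate:.0%}, i.e. roughly 5 points above the baseline")
print(f"after the R&D credit is reinstated ~{baseline_rate + nanda_allowance:.0%}")
print(f"after NANDA turns profitable ~{baseline_rate:.0%}")
```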

Now I'll turn to the financial model, and I'll talk first about our near-term model, which reflects how we see the business in a revenue range of $55 million to $65 million per quarter. The goal there is to move our gross margins back up above 50%, and as we talked about in Q4, there are initiatives around the new product launch where we'll see continuous improvement over time as we bring that product into volume production at improved gross margins.

Operating expenses, and again this is excluding amortization, so really R&D and SG&A, are in the 35% to 40% range, which results in a non-GAAP operating margin of 10% to 15%. The difference between GAAP and non-GAAP is simply the $800,000 of amortization. And again, the near-term model assumes a tax rate on a GAAP basis below 38%, getting to non-GAAP net income in the 6% to 10% range.
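
A minimal sketch of how the near-term model ranges compose, using the ranges stated above; the specific midpoint choices for revenue and operating expenses are assumptions for illustration only.

```python
# Illustrative composition of the near-term model; midpoints are arbitrary picks within the stated ranges.
revenue = 60.0          # $M per quarter, midpoint of the $55M-$65M near-term range
gross_margin = 0.50     # target: back above 50%
opex_pct = 0.375        # R&D + SG&A at 35%-40% of revenue, excluding amortization
amortization = 0.8      # $M per quarter, as stated

gross_profit = revenue * gross_margin
opex = revenue * opex_pct
non_gaap_op_margin = (gross_profit - opex) / revenue                   # ~12.5%, inside the 10%-15% range
gaap_op_margin = (gross_profit - opex - amortization) / revenue        # lower by the amortization only
print(f"non-GAAP operating margin ~{non_gaap_op_margin:.1%}, GAAP ~{gaap_op_margin:.1%}")
```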

There are a number of initiatives that bridge us from the near-term model to the long-term model that I'll talk about, which probably starts to apply at around $80 million a quarter. One of them is NANDA providing a material contribution to our revenue, which, as I indicated, comes in the second half of 2013; then continued growth in market share, some of the market share gains we talked about, as well as growth in the segments we participate in, such as the OCD portion of the process control space.

Once we're there, the focus is on a lot of the risk management activities around volume supply agreements, increasing manufacturing efficiencies, and leveraging outsourcing and contract manufacturing, which will drive a gross margin that we see getting back up to the mid-50s and better. These things also drive a very variable manufacturing operating structure, which continues to improve where we are today in terms of the ability for our manufacturing cost to fluctuate with volume.

On operating expense, we continue to see focused increases in R&D and applications, really through 2012, but we see those costs moving to below 30% of revenue in the long-term model. What that does is get you to a non-GAAP operating margin in this long-term model of 25% or better, and again the key drivers there are NANDA's contribution to revenue, NANDA's contribution to accretive earnings, gross margin improvement, and the operating leverage that we get as we scale revenues.

Returning to the tax rate: we've done a lot of research around the tax rate and what can be done there, and by restructuring our supply chain it is possible to bring that rate down over time, because it's a gradual process, one or two points per year over a period of time, which ultimately would bring us in line with some of our competitors who did these types of restructurings years ago. The other dynamic related to when we engage in this is that there is a lot of recent discussion in the U.S. about reducing the corporate tax rate, which would certainly make some of these very expensive processes less beneficial to the company, as well as some proposals that would actually create an environment where these structures don't give you the benefit that companies are getting today, because that benefit would be taken away.

What we've decided, because of the dynamics and the complexity involved here, is to really wait and see what happens over the course of this year with respect to the political landscape and tax structuring before kicking this off. Once we move forward, it's a fairly straightforward process to start to bring the rate down, but it is complex and expensive, and if the U.S. is going to reduce tax rates or create a structure where these arrangements provide no benefit, it would be a lot of expense and complexity that may not benefit the company.

So it's something we're going to continue to watch closely. We know how to move it in the current environment, but it's certainly a challenge trying to figure out what that environment is going to look like in a number of months. What that all comes down to is a non-GAAP net income number of 17% or better. That concludes my remarks.

So, the rest of the panel can come up and join me up here, and we can answer your questions.

Unidentified Company Representative

Okay, we've got the team back together. We will start taking some questions, and we can fill in some of the gaps that we may have created with our presentations.

Unidentified Analyst

Thank you. Nice presentation. Along the capital structure line, you've got a very capital-efficient business model, and Ron, we talked about free cash flow generation. You have been making some good incremental acquisitions that haven't been that capital intensive, so I am just wondering, longer term, have you thought about buying back stock? Is there a point where you have so much cash flow generation that you might be buying the stock, or something like that, at the current level? Thanks.

Ronald Kisling

Thanks, [Graham]. I think the fundamental question is, what is our outlook and thinking on stock buybacks. Although we've had nice growth in our cash position and the strength of our balance sheet as a company, if you look at our cash compared to our revenues, our cash relative to our market position is probably one of the lowest in the industry.

So while we are getting a better night's sleep with close to $100 million in the bank, before you consider that excess, we are sensitive to the fact that we don't want to unnecessarily dilute our shares. So we have a stock repurchase plan that is meant to offset some of what people sometimes call [creep] in the total shares as a result of awards to employees. That, right now, is the only area of focus for us.

Unidentified Analyst

Thanks a lot. In terms of the revenue growth from your current model to your long-term model, among the three drivers that you talked about, market share growth, process control growth, and NANDA, can you give a little more granularity, maybe a percentage breakdown, of what's going to drive that $80 million a quarter?

Timothy J. Stultz

Okay. I'll let Kevin field that one.

Kevin Heidrich

So we see market share growth driven by a couple of different pieces. The NANDA case is largely (inaudible) to it; it's early in the adoption phase, and as Ron said, we see that being accretive toward the end of next year, so that is about two years out from a real growth point of view. The near-term gains are really driven by OCD and increasing OCD wins across all the different device types, including emerging devices at advanced nodes, and then we also see UniFire and other product adoption as the early penetrations from those tools in the 2009 era start to fan out. So I think the automated systems, as Ron talked about, are the near-term drivers, driven principally by the Atlas and the OCD suite, with UniFire following on along with the placements we have.

Unidentified Analyst

Okay, great. And in terms of improving on the gross margin front with some of the outsourcing and leveraging of high-volume manufacturing that you talked about in your presentation, can you give a little bit of color, maybe a little about NANDA, given you are in the initial phase here: are you seeing a commonality of parts, or can you leverage some of those aspects into that model?

Timothy J. Stultz

So I'm going to use this question as an opportunity to talk a little bit about something we've excluded, which is the Lynx. We didn't spend any time talking about the Lynx platform. Many of you are familiar with our strategy of having a common front end and putting different technology modules behind that common front end. We introduced the Lynx a few years ago, and we have been migrating our entire product platform onto it. One of the things that is very attractive about the SPARK platform is that it is built with a standard mechanical interface, call it one module wide, which also makes it complementary to the Lynx, and the program of integrating it onto the Lynx is already underway.

The value proposition there is not just about timing. When you think about the solutions we are bringing to the customers, you need to think not about the SPARK or the OCD or the UniFire individually, but, as Kevin talked about, in terms of transistors or interconnects or wafer-scale packaging, and what combinations of technologies on a common front end create better value propositions for our customers. Our discussions with our customers are about putting a SPARK with a UniFire on a common front end, or a SPARK with an OCD, or adding integrated metrology; there are a lot of combinations, and the leverage of this common front end is quite powerful and is the only one in the market offered to our customers, so that's an important part.

In terms of common platforms, we are going to keep as much of the front end the same as we can: we use the same robotics, the same automation, the same factory interface. We also look across supply chains, and the NANDA product, as Lars talked about, has a really nice materials cost and should be able to command very competitive margins once we start to launch it into production.

Unidentified Analyst

Tim, you had a pretty nice market share gain, a pretty significant one, in OCD, and you are strongly leveraged to OCD. Looking forward, what's the technological disruption that could happen, or is likely to happen, that could swing the market share among the three players, and how are you positioning so that you come out as the winner?

Timothy J. Stultz

Okay, so there are a couple of questions in there about our market share, what could swing it, and what the drivers are. I consider the markets that we serve still quite fertile, although I'm very pleased with the progress we've made in market share gains. When you look at that race track that's been presented in a number of the presentations, there is a lot of space in there where we can still penetrate, and on any given account, none of us owns the entire account. Whether it's an Intel, a Samsung, or an IMEC, there are opportunities: we may be strong in OCD but have opportunities in films; we may be strong in films but have opportunities in litho. So there is a lot of fertile ground for further market share gains and penetration.

When I start to put these other tools together and look at where we are in terms of strength, beyond the market share gains in OCD, we're actually extremely well positioned in wafer-scale packaging. It is just a small market so far, but we have tools in every major account, and we believe that as that technology starts to ramp we're going to be a primary beneficiary, both with the UniFire and with the SPARK platform. And then the inspection area is a whole new area for us that we're just beginning to explore.

Unidentified Analyst

One question about the overlay market: it seems to be pretty strong and will probably get stronger with multiple patterning, and I see that KLA pretty much owns that market. Do you think there is a disruption coming in that market for image-based overlay, is that something you would want to target, and what can we look for in that market?

Timothy J. Stultz

So the question has to do with the whole overlay business. We have nearly zero position in image-based overlay, a very small market position, and shame on us, because we actually have a very good platform, but we were not consistent and we didn't have a good strategy for staying in that marketplace with that product line.

In terms of disruption, however, there is a migration and growing interest in a different form of overlay called diffraction-based overlay. Diffraction-based overlay, rather than image-based overlay, takes advantage of our core competencies, since it uses an approach similar to the one used in our OCD and scatterometry platforms. We see that as a potential way to get back into the overlay arena.

Unidentified Analyst

Regarding acquisitions, I've got three quick questions: do you expect any in the next 12 months, what determines whether you use cash or stock, and do you have an upper limit on the size?

Timothy J. Stultz

Okay. So the question is about our acquisition strategy. I will remind you of the bullets we laid out in terms of our criteria for acquisitions. A lot of companies have different criteria, such as wanting to make sure acquisitions are immediately accretive. For us, our eyes are on how you get above $0.5 billion in revenue and how you get to $1 billion; how do you become a viable competitor to a $3 billion company? So our strategy is about expanding our served markets, increasing our footprint, and leveraging our core competencies. There are opportunities to continue to do that in the market, and one of Kevin's primary jobs is to turn over rocks, identify wonderful opportunities, and find out how they are synergistic with our current product offerings.

I am not going to tell you whether we're going to make any acquisitions in the next 6 to 12 months, but I will tell you that we like acquisitions to the extent that they complement our business model. We don't go after acquisitions that simply add revenues to the top line but don't change our model, and all else being equal, we always lean toward cash over equity.

Unidentified Analyst

(Inaudible)

Timothy J. Stultz

Probably can’t be a lot bigger than I am.

Unidentified Analyst

When you look at new product releases like the Atlas II, what is the typical margin hit you take initially, and how many quarters does it typically take to get back to normal margins?

Timothy J. Stultz

So the question, obviously, of what happened with the Atlas II and how quickly we recover is, I think, a valid one. Most of our products go through the following: we build the tool and we have a BOM; there is an engineering build of the product and the hardware; we put some tools out at beta sites, and the beta sites last six months to maybe a year; we get feedback from the customers and iterate on the product; and then we launch it and do our best to convince the customers to convert.

During that timeframe we go from the engineering build to the full manufacturing launch; we put in place long-term volume purchase agreements; we do the training; and we ramp up the manufacturing to improve manufacturing efficiencies. The Atlas II broke all the rules. We put the tool out on what was honestly a six-month free evaluation, and within three months that particular customer not only said they wanted that tool, to the exclusion of the other tools that were in the backlog.

They converted their orders to that model, not only within the current quarter, but they told us that in subsequent quarters it was the only tool they were going to accept. We scrambled; this was good news and bad news. We had to scramble the engineering teams, buy more parts on engineering procurement without the opportunity to negotiate the long-term agreements, and actually have engineers help build the tools, because we hadn't even trained the full manufacturing resource. So we are a little behind the power curve in terms of timing. This tool will be brought back in line with the margins of our other tools, which we've demonstrated we know how to do, in the third and fourth quarters of the year, making incremental improvements; we hope by the end of the year to have this back on track.

Unidentified Analyst

Thanks. And then, you talked about some pretty impressive growth in capital spending by your big three customers. Would you expect your business with those guys to grow a little faster than they are growing, because of the move down the adoption curve at the nodes they are looking at?

Timothy J. Stultz

So the question is whether we will gain further market share with the big spenders, and that's the assignment of everybody in the company. We measure ourselves by position in the market and market share gains, not by tracking the overall capital spending. I will point out that one of the things we try to do is separate CapEx from WFE, which is always an interesting challenge in itself; for instance, Intel has now indicated they are going to spend more money on CapEx, but at this point we believe, and most people believe, that WFE is actually going to be down year-on-year, not substantially, but down a little bit. But all of these things are not about this quarter or next quarter; it's 2013, 2014. I actually think there are some nice things occurring that will support continued healthy investment for the next couple of years, not just the next couple of quarters.

Unidentified Analyst

A couple of questions on the SPARK and UniFire, on where we are on the rollout to the customer base. Has every major semi device maker seen both of these products, have you shown data sets, can you give us some sense of scale of where we are? And then a follow-on: when we think about supporting these systems in the field, is it simply going to be leveraging the current employee base, or do you need to bring in a new employee base across sites? How should we think about that?

Timothy J. Stultz

Okay, so I'll take the question, which is how we see the adoption curve of both the UniFire and the SPARK, how we see the market awareness of the UniFire and SPARK, and then how we roll that out in the field. In the first case, from an awareness point of view, we have been selling to all these customers for a very long time. It's a very long sales cycle; in the case of the UniFire, we have sold to all the key customers, and it's really a matter of waiting for their volume adoption curves for copper pillar, TSVs, and microbumps. We have one customer in high-volume production, which has tools in three different facilities worldwide; we expect more from that customer, and then, particularly driven by TSV for the UniFire, we expect to see the early placements of those tools followed on very quickly.

From a market awareness point of view for the SPARK, all the customers that are UniFire customers are targeted to be SPARK customers. Given that we are in there and have sold to them, we know who the right person is, we know exactly who the right audience is at those sites. So getting the attention of that person and getting the right value proposition to them is very straightforward.

So we expect that traction, as well as the data that we've generated out of the IMEC engagement, to really help pre-sell the SPARK to all those customers. We don't have to sell Nano, we don't have to sell our value proposition; in general we just need to sell the incremental SPARK value. And then, as we ramp that, we really do see how we can scale across the world with our installed service and applications base. We see minimal requirements for additional service or applications personnel; only in a region where we may not have shipped one of those tools before might we need an incremental applications or service engineer. We expect to take advantage of our worldwide channel, which is just under 200 service engineers and over 100 applications engineers worldwide, so they can absorb a few more tools at the key customers.

Unidentified Analyst

Just a quick follow-up on the SPARK. You didn't really give me an answer, to be honest; I'm trying to get a reference for where we are. Has every customer seen it, or are we still at a point where you need to go show data and that kind of thing? We all know that KLA is very dominant, and getting in the door at those customers can be a little tough sometimes. I'm just trying to get a feel, because you're projecting revenue to show up in about 18 months, for where you are on the sales side of it, because if you miss a window for a node you may have to wait for the next node.

Timothy J. Stultz

So just keep in mind, it has really only been, what, 13 or 14 weeks since Nanometrics took over here, and now we have access to marketing muscle that we didn't have as a small company. As a startup company, I was it, I was the sales guy, period, and I did as much as I could, and I focused not on just any customer but on a very small handful, really five very select customers. Intel, for example, I never bothered to even go to, because I knew it was pointless as a small company. So now, being part of Nanometrics, one of the first things I did, we closed on November 21, that was Thanksgiving week, and immediately after Thanksgiving I was on a plane to relay to all my customers, whose pavement I had been pounding personally for a couple of years, that we are now part of a larger entity, and it was greeted with great relief.

It was very interesting to see, because I never knew; very few of these customers had actually told me, look, I like your technology, I like you, I like your team, but I can't buy from you because you are just too small, and you need to tell me who you are going to do this with. But some did, and this was one of the reasons why we decided to sell the company when we decided to sell it.

So now we are at the point where we are very much going out to all these customers; some of them are seeing this for the first time, like Intel, for example, and at Intel there are many places you can probe. So I'm personally spending a lot of time training the sales force, the Nanometrics sales force, because I can't possibly do it all by myself.

So the first three months have really been spent doing essentially that: engaging a handful of joint strategic customers, customers that are good UniFire customers for example, and spending time with the sales force so that we can rapidly roll this out and actually multiply the coverage. So I see 2012 not so much as a great revenue-creation year, but as the year where we place tools with strategic partners and strategic customers.

Timothy J. Stultz

I'll just add a little bit to that. Lars is working pretty diligently with the entire worldwide sales force across all the key accounts where we're positioned. But the specific answer is: there are tools being placed, and there will be more tool placements. There will be revenues this year, but we won't be breaking them out because they won't be material yet. So this is not a case of let's hope we get revenues in 2013; by 2013 we see sufficient revenues to become accretive to the business model. Do you have any more questions?

Unidentified Analyst

And just since the last guidance a couple of months ago, how are you feeling about the year? Are you getting incrementally more positive, are things getting more robust or more cautious, anything that you can share with us, not a question on guidance?

Timothy J. Stultz

Yes, I would say you know better than to ask that question at this time of the quarter, when we're certainly not going to renew or confirm our guidance. But in general, the difference between when we talked on our last call and what's happened since then, if you look back at the November timeframe and what was going on, is that we have now seen some major companies step up their capital spending plans, and fortunately they are the ones we are well positioned with.

So that gives us some confidence that 2012 can be a good year. We believe we can continue to outgrow the spending, through a combination of the market share gains and what we are doing in some of the other product areas. There are a lot of models out there that suggest it's going to be first-half loaded with a dip in the second half, and I'll leave that to the modelers. We have pretty good visibility into plans, we have decent visibility into new fab activities, we are engaged in the 450mm activities, and we will be putting our first 450mm tools out. So I'm just excited about what we're building as a platform, but I'm not going to renew my guidance.

Unidentified Analyst

Okay, two questions. When do you first expect EUV to be implemented (inaudible)?

Timothy J. Stultz

I'll let Kevin take that, because I don't think anybody knows the answer anyway.

Kevin Heidrich

The EUV implementation, I think, is something everybody is interested in and waiting for, and we see all of our customers working very diligently to put it in place; the ultimate message is to make sure that they keep pace with their patterning roadmaps. We already see, in the NAND case, the fast move to the vertical BiCS and 3D NAND structures, specifically to deal with the fact that scaling is getting very expensive in NAND. So we see architectural changes driving things, as well as double and quad patterning, and I think most of our customers are waiting for the EUV infrastructure to be ready. Nanometrics' role there is to make sure we're tied into that infrastructure when it does get turned on, whether it's SPARK for backside inspection, reticle inspection and so forth, or the OCD opportunities that we've seen. But as to when that happens, I don't think we can (inaudible); there was a good EUV symposium a couple of months ago, and I think most of you attended that.

Unidentified Analyst

The next question is, as the environment moves from PCs to tablets, how does the competitive environment shift for you?

Timothy J. Stultz

As long as they are making chips, I'm happy, and I don't really care who makes them. I'm really happy that one of our largest customers makes a lot of big microprocessors for PCs, and they're also looking at mobile, as we all understand. The foundry model, I think, continues to gain strength, and we're fortunate to be well placed with at least one major foundry spender and working hard on the balance. But the way I look at it is just how much silicon is being processed, how difficult it is to make yielding devices, and whether our products play a role in reducing the manufacturing cost and speeding development. We're pretty agnostic as to which way it goes. Any other questions?

I just really want to thank all of you for taking the time today to join us here. Someone asked me earlier whether this is going to be an annual event; it depends on how we feel at the end of the day. I'm pretty excited to have had the opportunity to bring some of the members of my team to speak with many of you whom I've spoken with individually, and I hope it helps you see the depth and breadth of the things we're doing. I would appreciate any feedback, primarily through Clair, on whether or not this was time well spent, what we can do better, and where the gaps were that we should fill in, and based on that we can look forward to perhaps doing this again in another year. So thank you very much.

Copyright policy: All transcripts on this site are the copyright of Seeking Alpha. However, we view them as an important resource for bloggers and journalists, and are excited to contribute to the democratization of financial information on the Internet. (Until now investors have had to pay thousands of dollars in subscription fees for transcripts.) So our reproduction policy is as follows: You may quote up to 400 words of any transcript on the condition that you attribute the transcript to Seeking Alpha and either link to the original transcript or to www.SeekingAlpha.com. All other use is prohibited.

THE INFORMATION CONTAINED HERE IS A TEXTUAL REPRESENTATION OF THE APPLICABLE COMPANY'S CONFERENCE CALL, CONFERENCE PRESENTATION OR OTHER AUDIO PRESENTATION, AND WHILE EFFORTS ARE MADE TO PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS, OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE AUDIO PRESENTATIONS. IN NO WAY DOES SEEKING ALPHA ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN ANY TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S AUDIO PRESENTATION ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE MAKING ANY INVESTMENT OR OTHER DECISIONS.

