Seeking Alpha

Technology Assessment Group

 
  • Another $800 Million Down The Drain For Bank Of America Shareholders [View article]
    I bought during the darkest hours at around $4.00 or a little less, knowing full well the very bad things CountryWide and other elements of the bank had done. I expect BofA to take hits, and to continue to take hits, but I am in it for at least a decade. I am also of the opinion that once these hits are taken, the bank will return to the strong position it has held for decades in retail banking. Sometimes I am not happy with the operational errors I see, but I see these as the routine risks of running a bank. I spent a few years on assignment to the bank, visiting their branches, cash vaults, data centers, and HQ, and studying their business. Once they get past the bad from the meltdown era, their network and entrenched business relationships will be a money maker.
    Apr 4 06:12 PM | 1 Like
  • Nimble Storage Validates Hybrid Storage Array Concept - What Are The Upside Limitations? [View article]
    4-3-14
    Nimble is $33.78 as of a few minutes ago, off from a high of $58.00.
    I certainly did not forecast that Nimble would fall this low, but I was correct in advising caution, since there are upside limitations.
    One point of my article is: don't get caught up in the flash memory euphoria. I have observed too many professional and non-professional investors get caught up in the pure technology perspective (e.g. flash memory) and completely miss that it takes more than just fast chips to be successful. This is a systems business, and it takes a systems person to understand where a product will win or lose, and what the limitations are. In addition, the requirements of low-end, mid-range, and Tier 1 Enterprise customers are completely different, and will act as constraints on where a product will or will not be successful. It has been my experience that not even people with storage experience understand the difference between these requirements, especially with respect to Tier 1 Enterprise requirements. This is one of the reasons why Violin is struggling.
    Apr 3 05:50 PM
  • Flash Memory Arrays Coming Out Of The IPO Woodwork [View article]
    Today is lockup expiration day for employees.
    A few quick comments about the new strategy:
    It is a smarter strategy than the one directed by the prior CEO.
    It is a reasonable strategy, but full of risk.
    1. Getting rid of PCIe SSDs - good move.
    a) The chance of success in PCIe SSDs is low.
    b) It increased expenses for little revenue.
    c) The war is over for this, and will be fought by the vendors with economies of scale, not boutiques.
    d) The counter-intuitive part is that hyperscale customers need less functionality and prefer products that are simpler and cheaper.
    2. Focus on a new data management stack.
    a) This is smart.
    b) It addresses the core issue of breaking out into higher-end market segments.
    c) There is big risk as to whether or not they can pull this off.
    d) It is hard.
    e) They have been trying for a long time now, and I do not see any new data which suggests there is a new approach that will make them successful where they have failed in the past.
    Mar 26 12:24 PM
  • Former Bank Of America Banker Sentenced For Stealing [View article]
    I spent a number of years assigned to the Bank of America when I was younger, and was a supplier of some of their strategic IT systems. I was fortunate to be able to interact with some BofA executives. I have been following Bank of America for 32 years.
    1. Personal Bankers at the branch level are not executives.
    a) There will be no impact on stock price due to this incident.
    2. It is the operational strategy to keep minimal skill sets in the branches; anything complex or requiring judgement is directed from central or regional offices. Maintaining skill sets and training across a highly distributed branch system is very hard and expensive. So this is the BofA strategy, as well as the strategy of most large banks.
    3. These Personal Bankers are junior personnel operating within tight prescribed
    limits. With that said, even those with AVP and VP titles in banks are more
    operationally oriented, executing prescribed routines and procedures. They get
    AVP and VP titles to make their retail customers feel important. As we
    have been saying for decades "Banking AVPs are a dime a dozen".
    4. There are standard procedures to thwart these inside jobs. They are not always successful. That is why you have balancing procedures, dual custody, and audit trails. But all big banks have the same problem, and there will always be some schmuck trying to beat the system.
    5. I agree with BudH, Dom527, and Dirty Capitalist.
    6. It was pretty funny a few weeks ago when my BofA personal banker screwed up my high value wire transfer that was arranged well in advance. I have met a lot of very young "personal bankers" who were clearly recent college graduates, with limited knowledge, and the branch manager was pretty junior too. So I was quite harsh with them for failing to follow standard operational procedure. But that is par for the course at the branch level. When I was assigned to the Bank, the real executives such as Sam Armacost were at 555 California Street, not in the branches.
    Mar 24 09:03 PM | 1 Like
  • Flash Memory Arrays Coming Out Of The IPO Woodwork [View article]



    Just a quick comment about a blog post in Barron's.

    Violin expands push into channel.

    blogs.barrons.com/tech...

    A few key points:

    Violin has already had a channel presence for quite some time, so this is just an expansion of an existing strategy.

    One key question is whether this will lead to significantly more revenue.

    I suggest that this does little to expand Violin outside where it is currently successful. VARs (Value Added Resellers) are generally used to reach smaller customers, and direct sales forces are used to sell directly to the Fortune 500 or Fortune 1000. Yes, there are a few exceptions (and I've done business with them) but this is generally a true statement.

    The paradox is that while Violin products have very high performance and low latency, they need improvements in their data management software stack to meet the needs of the Fortune 1000, and at the same time they are a bit pricey for the smaller customers addressed by VARs. These smaller customers' performance and latency requirements are more modest. So it is my opinion that Violin will not get a major revenue bump from this effort. Mid-range vendors such as Nimble, Compellent/Dell, Tegile, Pure, Tintri, and Nutanix will continue to do fine in the mid-range space with little negative revenue impact from Violin.

    Violin cutting back on their direct sales force will reduce the SG&A (Sales, General and Administrative costs) spent in their forlorn effort to crack the Fortune 1000, but it is unlikely that the additional focus on VARs will gain them substantial revenue. Violin is caught between a rock and a hard place. This is a very tough position to be in.
    Feb 19 05:08 PM
  • Flash Memory Arrays Coming Out Of The IPO Woodwork [View article]
    Hi Mr_Z:

    I understand where both you and Nimble are coming from on this subject.
    I would like to provide a broader historical view for perspective and context.

    First, a level set. The original discussion took place between Violin and EMC at the Flash Memory Summit in August 2013, where Violin took the position that all-flash arrays are just as cost effective as traditional spinning disk. EMC disagreed; the EMC position was that there continues to be a role for traditional spinning disk, and that hybrid storage arrays with both hard disk and flash are viable, cost effective products. My position supports the EMC position. What struck me as not making sense was that the Violin products do not support deduplication. If one is to claim that flash is just as cost effective as hard disk drives, I would think you would bring all the features to bear on solving the data reduction problem, and those three features are deduplication, compression, and thin provisioning. To make the claim without one of these major features just seemed like a not-very-strong way to make the case.
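
    To make the data reduction point concrete, here is a back-of-the-envelope sketch. All prices and reduction ratios below are round numbers I made up for illustration; they are not vendor figures.

```python
# Hypothetical illustration: effective cost per usable GB once data
# reduction is applied. All numbers are made-up round figures.

def effective_cost_per_gb(raw_cost_per_gb, dedupe_ratio, compression_ratio):
    """Effective $/GB = raw $/GB divided by the combined reduction ratio."""
    combined_reduction = dedupe_ratio * compression_ratio
    return raw_cost_per_gb / combined_reduction

# Flash with dedupe + compression (assumed 4:1 dedupe, 2:1 compression)
flash_with_reduction = effective_cost_per_gb(5.00, 4.0, 2.0)    # -> $0.625/GB

# Flash with compression only (no dedupe, as in the Violin case above)
flash_compression_only = effective_cost_per_gb(5.00, 1.0, 2.0)  # -> $2.50/GB

# Spinning disk, no reduction (hypothetical $0.50/GB raw)
hdd_raw = effective_cost_per_gb(0.50, 1.0, 1.0)                  # -> $0.50/GB

print(f"Flash w/ dedupe+compression: ${flash_with_reduction:.2f}/GB")
print(f"Flash w/ compression only:   ${flash_compression_only:.2f}/GB")
print(f"HDD raw:                     ${hdd_raw:.2f}/GB")
```

    The arithmetic simply shows that without deduplication, a flash vendor's claim of cost parity with disk rests on a much weaker combined reduction ratio.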

    With respect to the Nimble position that their use of snapshots means that you don't need deduplication, I think a broader perspective should be taken.

    First, snapshots have been around for several decades, and the use of pointers to source data to avoid making full duplicate copies has been around for well over a decade. The Nimble use of snapshots which point back to existing data in a prior backup is a use of existing technology that has been around a long time. What is new is positioning this as a replacement for deduplication, and Nimble is the only company positioning this tried-and-true snapshot technology this way. Many vendors have this "pointer" technology, and there is nothing new here. But the pointers could be pointing back to multiple copies of the same data, since Nimble does not have deduplication. This occupies more space.
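
    To make the space argument concrete, here is a toy sketch of a block store where snapshots are just lists of pointers. This is my own simplification for illustration, not Nimble's or any other vendor's actual implementation; the point is that pointer-based snapshots cost almost nothing, but without deduplication identical data written twice still consumes two blocks.

```python
# Toy block store: snapshots are just lists of pointers to existing blocks.
# A simplification for illustration, not any vendor's actual design.

class ToyBlockStore:
    def __init__(self):
        self.blocks = {}        # block_id -> data
        self.next_id = 0

    def write(self, data):
        """Store a block; with no deduplication, identical data gets a new block."""
        block_id = self.next_id
        self.blocks[block_id] = data
        self.next_id += 1
        return block_id

    def snapshot(self, block_ids):
        """A 'thin' snapshot is only a list of pointers -- no data is copied."""
        return list(block_ids)

store = ToyBlockStore()
a = store.write(b"customer table page 1")
b = store.write(b"customer table page 1")   # same content, second copy kept

snap = store.snapshot([a, b])
print("blocks consumed:", len(store.blocks))    # 2 -- duplicates are not collapsed
print("snapshot size:", len(snap), "pointers")  # pointers only, negligible space
```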

    I was surprised to see this positioned as backup. The reason I was surprised is that backup usually places the backup on a separate device, in case the primary device becomes non-operational. With that said, I do see people talk about snapshots as backup, but these tend to be very small workgroup configurations or hobbyists exploring products for very small scale use, where the consequences of the loss of data (e.g. the primary device with backup data on it becomes non-operational) are small. For example, losing 50 GB of pet photos has far less impact than losing 50 GB of banking transactions.

    In most commercial use cases, backup is made to a separate device, and combined with remote asynchronous replication. In addition to being off the primary device, a backup copy is removed to an off-site location periodically. One advantage of a local backup device is that a local restore is usually much faster than restoring across an asynchronous communications link to a disaster recovery site 500 to 1000 miles away. The key point is that the backup copy is not kept on the same device where initial data capture occurs. As Tier 1 Enterprise Storage people, we would never allow our customers to keep a copy of the backup on the primary device; but that is on the high end, and I understand how compromises are made in the midrange and low end markets because of lack of time, staff, money, and technology.

    Here is a longer explanation of how data center operations work, which puts my shorter response above in context.

    First a little historical background:
    1. Snapshots have been around for decades and are provided by many vendors. In fact, there is an entire ecosystem around snapshots called data protection, which includes full snapshots (read-only, full duplicates), copy-on-write snapshots (which copy only the changes), other snapshots which use pointers to the original data so that very little space is consumed, clones (which are transformed snapshots you can read from and write to), and consistency groups (which let you make a set of copies at the same time). Where a lot of vendors are concentrating their resources nowadays is snapshot application awareness: being able to work with applications such as VMware, leverage MS VSS (Volume Shadow Copy Service), or integrate with Oracle management utilities (so you can launch snapshots from Oracle).

    2. Snapshots that occupy very little space have been around for over a decade. "Thin Snapshots", "No-Copy Snapshots", and "Fast Snaps" are but some of the multitude of labels given to them, and they have various underlying methods.
    One underlying methodology used by many vendors is pointers that simply point back to the original data. A snapshot that just points back to the original data doesn't occupy much space. Many vendors use this technique, including Nimble.
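
    For readers who have not worked with these, here is a minimal copy-on-write thin snapshot sketch. It is a generic illustration of the pointer/metadata idea described above, not any particular vendor's design: the snapshot costs only metadata when it is taken, and old block contents are preserved only when they are later overwritten.

```python
# Toy copy-on-write thin snapshot: at creation time only metadata is stored;
# old block contents are preserved only when they are overwritten.
# Generic illustration, not any particular vendor's implementation.

class ToyVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block_number -> data (live volume)
        self.snapshots = []             # list of {block_number: preserved_data}

    def take_snapshot(self):
        self.snapshots.append({})       # empty: costs only metadata at creation
        return len(self.snapshots) - 1

    def write(self, block_number, data):
        # Preserve the old contents for any snapshot that has not yet seen a
        # change to this block (copy-on-write).
        for snap in self.snapshots:
            if block_number not in snap:
                snap[block_number] = self.blocks.get(block_number)
        self.blocks[block_number] = data

    def read_snapshot(self, snap_id, block_number):
        preserved = self.snapshots[snap_id]
        if block_number in preserved:
            return preserved[block_number]
        return self.blocks[block_number]   # unchanged blocks read from live data

vol = ToyVolume({0: b"AAA", 1: b"BBB"})
s = vol.take_snapshot()                    # costs ~nothing until writes occur
vol.write(0, b"AAA-modified")
print(vol.read_snapshot(s, 0))             # b"AAA"  (old data preserved)
print(vol.read_snapshot(s, 1))             # b"BBB"  (never copied)
```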

    3. The initial use of deduplication was for backup, and the pioneer was Data Domain, which was subsequently purchased by EMC in July 2009. Data Domain deduplication has historically been a post-process operation. This means that after the data is first captured, capacity is set aside, and dedupe operations are performed on this data on a batch basis. The benefit is that it does not impact system performance, but the downside is that it takes more capacity and additional time to run the post-process operation. From my perspective, and some of the large industry analyst firms will agree with me, deduplication is becoming standard, with most (but not all) vendors offering it. With that said, I have been with two companies where dedupe efforts failed. One vendor recovered, the other never recovered. Please note that the challenge has moved from post-process dedupe (pioneered by Data Domain) to inline dedupe, which is more ideal but much harder to design, hence the higher vendor failure rate for inline dedupe attempts.
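
    A minimal sketch of the post-process idea, for illustration only (this is the generic technique, not Data Domain's actual design): data lands on media first, then a batch job hashes each block and collapses duplicates into references.

```python
# Minimal sketch of post-process deduplication: data lands on disk first,
# then a batch job hashes each block and collapses duplicates into references.
# Until the batch runs, the duplicates still consume capacity.
import hashlib

def post_process_dedupe(blocks):
    """blocks: dict of block_id -> bytes. Returns (unique_store, reference_map)."""
    unique_store = {}      # content_hash -> data (kept once)
    reference_map = {}     # block_id -> content_hash
    for block_id, data in blocks.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in unique_store:
            unique_store[digest] = data
        reference_map[block_id] = digest
    return unique_store, reference_map

# Hypothetical backup stream with heavy duplication (e.g. repeated full backups)
backup_blocks = {0: b"payroll", 1: b"payroll", 2: b"ledger", 3: b"payroll"}
unique, refs = post_process_dedupe(backup_blocks)
print("blocks before:", len(backup_blocks))   # 4
print("blocks after: ", len(unique))          # 2
```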

    4. From my perspective, thinking about dedupe and backup together is a historical artifact. Note that the Nimble article is dated 2010. In addition, there are pros and cons that need to be considered with respect to RTO (Recovery Time Objectives) and RPO (Recovery Point Objectives). I remember when EMC bought Data Domain. Dedupe was still viewed as a new technology, and their post-process methodology was state of the art.

    More on that later.

    5. Discussions about dedupe nowadays (at least from my vantage point), center on “Inline Dedupe” where incoming data is deduplicated at initial data capture. The benefit is that duplicates are eliminated before they take up excessive space, and there is no post processing required. The downside is that this technology strategy can impact performance. I have been with two vendors whose initial attempts to do inline deduplication failed. On the other hand, there are other vendors who have indeed succeeded with inline deduplication, and inline is an up-to-date way to think about deduplication.
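
    For contrast with the post-process sketch above, here is a minimal inline dedupe sketch (again a generic illustration, not any vendor's implementation). The hash and index lookup sit in the write path, which is exactly where the performance risk comes from.

```python
# Minimal sketch of inline deduplication: each incoming write is hashed and
# checked against an index *before* it is stored, so duplicates never land
# on media. The hash/index lookup is in the write path.
import hashlib

class InlineDedupeStore:
    def __init__(self):
        self.index = {}     # content_hash -> block_id
        self.blocks = {}    # block_id -> data
        self.next_id = 0

    def write(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.index:            # duplicate: return existing block
            return self.index[digest]
        block_id = self.next_id
        self.next_id += 1
        self.blocks[block_id] = data        # new content: store once
        self.index[digest] = block_id
        return block_id

store = InlineDedupeStore()
ids = [store.write(b"payroll"), store.write(b"ledger"), store.write(b"payroll")]
print(ids)                      # [0, 1, 0] -- third write reuses block 0
print(len(store.blocks))        # 2 physical blocks stored
```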

    6. With respect to Nimble, thinking of thin snapshots as a way to do backup is not new. Re-packaging the thinking around thin snapshots as a way to avoid deduplication is unique to Nimble, and they are entitled to present their thinking any way they wish. With that said, I have maintained a very positive view of Nimble in two of my articles. But this is just a bit unusual, and I see it more as a re-packaging of technology that has been around for a long time. And there are upsides and downsides to this type of operation that the data center manager will have to think through.

    Below I go through standard Tier 1 Enterprise Data Center operations to illustrate the pros and cons of the approach. You may want to go get a cup of coffee, or take a nap first, since this is very boring. But necessary to make the point.

    Standard Tier 1 Enterprise Data Center operations (think of the biggest companies in the world running 24x7, corporate-wide, global operations processing millions of dollars per hour) are set up as follows:
    I. A monetary transaction comes in, and that first write is subjected to many forms of protection.
    1. The first write is written to two (sometimes more) identical volumes on separate disks (or flash memory) for fast recovery. This is called mirroring.
    2. The transaction is also written to a hot site, about 30 or so miles away for fast recovery in case the primary system becomes non-operational. This is synchronous replication, which has very low latency but has limited distance.
    3. A copy of the first write is *also* written out to another location perhaps 500-1000 miles away in a fortified location on a different tectonic plate. This is using asynchronous replication. It is slow.
    4. A snapshot of the database is taken at regular intervals so the database can be recovered. Nowadays, this is usually a "Thin Snapshot". The advantage of thin snapshots is that they are fast and occupy relatively little space (just metadata), so you can take them more frequently, increasing your protection without slowing down your system.

    5. At a less frequent interval, a complete backup is made *onto a separate backup device*. The reason it is on a separate device is so you can still access your data if the primary device becomes non-operational. Of course the question arises: since you have asynchronous replication to a location 500-1000 miles away, why do you need local backup? The reason is that it takes a relatively long time to restore the data over an asynchronous communications link, which means it could take a long time to restore your operation and get back in business. And sometimes external communication links (e.g. leased lines) are down. A local backup on a separate device can be restored much more quickly. An old joke in the business about *bulk* data restoration: "sometimes you can't beat an old 1972 station wagon filled with disk or tape careening down the highway with rock and roll blaring on its 8-track". (The intention of this paragraph is to be funny.)

    6. A time-out on all this tediousness: this is really a belts-and-suspenders approach to deal with escalating levels of disaster, from limited database corruption, to failure of a single system, to a fire destroying the data center, to a flood covering an entire city, to a hurricane or earthquake impacting a metropolitan area or an entire multi-state region.

    7. So here is the key point:
    There are pros and cons to not having a local backup on a separate device from the primary systems at the main data center. It is a valid technique as long as the data center manager understands the implications for RTO (Recovery Time Objectives) and RPO (Recovery Point Objectives). Not doing a local backup is a reasonable strategy, especially for smaller, non-mission-critical operations, which have limited staff, money, or time, and which can sustain being out of service for a longer period of time during the restoration process over an asynchronous communications link. Or, if the amount of data is of moderate size, the restoration over an asynchronous link might occur in a time frame that is acceptable. But it needs to be analyzed and planned for; the sketch below walks through the basic restore-time arithmetic. It is acceptable for some customers and unacceptable for other customers, depending on their business requirements.
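
    Here is the restore-time arithmetic referred to above, as a back-of-the-envelope sketch. The data size and link speeds are hypothetical round numbers; the point is only that the RTO difference between a local restore and a restore over an asynchronous WAN link can be enormous.

```python
# Back-of-the-envelope RTO comparison: restoring locally vs. over an
# asynchronous WAN link to a remote site. All figures are hypothetical.

def restore_hours(data_tb, effective_throughput_mb_per_s):
    data_mb = data_tb * 1_000_000          # TB -> MB (decimal, for simplicity)
    return data_mb / effective_throughput_mb_per_s / 3600

data_tb = 20                               # assumed database size

local_backup_device = restore_hours(data_tb, 1000)  # ~1 GB/s local restore
wan_async_link      = restore_hours(data_tb, 50)    # ~50 MB/s effective WAN

print(f"Local restore:  {local_backup_device:5.1f} hours")   # ~  5.6 hours
print(f"Remote restore: {wan_async_link:5.1f} hours")        # ~111 hours (~4.6 days)
```

    Whether roughly five hours versus several days of downtime is acceptable is exactly the RTO/RPO question each shop has to answer against its own business requirements.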

    II. Additional Context
    1. The backup specialist vendors such as Symantec NetBackup (the old Veritas product) have been doing deduplicated backup for a long time, and have been backing up only changes for a long time as well.

    The notion of a full duplicate backup went away a long time ago as a response to the overwhelming amount of data in the context of fixed backup windows. This is another reason I view the discussions around full backups and deduping backups as historical artifacts.
    Feb 3 03:07 AM
  • Nimble Storage Validates Hybrid Storage Array Concept - What Are The Upside Limitations? [View article]
    VSD,
    Your assessment is correct.
    In my consulting business I provide background technical analysis to the financial analyst community (hedge funds, venture capital, and investment banks), and also to large technology designers and manufacturers.

    The financial analyst community then takes my input and builds financial models on them.

    You may wish to note that in the Intro section, I stated the purpose of this article was to allow others to build their financial models upon this data.

    The key take-away that I would recommend is that there needs to be a technology/market-based underpinning to financial models.

    Here is an example: I was sent to help a team of engineers who were a few years into designing a new high end product. I evaluated about 20 success factors, and about 12 of them were failures, so I ramped down the unit, revenue, and margin forecasts because the product would be uncompetitive in the market. My reasons were sound and withstood executive review. The lead engineer was terminated and the product was terminated, but the firm avoided spending additional millions on prototypes, manufacturing inventory, testing, training, and worldwide deployment. In this example, the ability to deploy the technology into the correct features was the driver behind the financial model.
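
    For readers who build models on this kind of input, here is a toy sketch of how a technology scorecard can feed a forecast haircut. The factors, weights, and numbers are invented for illustration and are not from the actual engagement described above.

```python
# Toy illustration of a technology scorecard feeding a revenue forecast.
# Factors, weights, and numbers are invented for illustration only.

success_factors = {            # factor -> (weight, passed?)
    "latency under load":         (0.15, False),
    "dedupe/compression":         (0.15, False),
    "data protection features":   (0.10, False),
    "channel readiness":          (0.10, True),
    "manufacturing cost":         (0.15, True),
    "competitive feature parity": (0.20, False),
    "services/support scale":     (0.15, True),
}

baseline_unit_forecast = 10_000          # units, hypothetical

score = sum(w for w, passed in success_factors.values() if passed)
adjusted_forecast = baseline_unit_forecast * score

print(f"weighted score:    {score:.2f}")              # fraction of weight passed
print(f"adjusted forecast: {adjusted_forecast:,.0f} units")
```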
    Jan 27 12:46 PM
  • Nimble Storage Validates Hybrid Storage Array Concept - What Are The Upside Limitations? [View article]
    Scottsherman
    I will update as per your guidance tonight.
    Thanks for your input.
    Jan 23 04:06 PM
  • Violin Memory's Collapse Continues, But How Does This End? [View article]
    User 15242982:

    I will agree with you on "does not have enough system level software to compete with EMC and others." From my experience, this is the key issue which will obstruct Violin from gaining revenues from the more important Fortune 500 applications, and it is the key reason for the revenue miss. JP Morgan has it right also (I think the analyst is Moscowiz or something like that) and is calling for Violin to build its own software stack (Violin uses the Symantec Storage Foundation Suite, based on the old Veritas software, which was built for disk and is not tuned for flash-level latencies, i.e. the response time for internal housekeeping). But if Violin attempts to renew its internal system software efforts, and if they have 5 quarters of cash left, then this is going to be really hard.
    Of course, I do expect others on this site to beat me up as a co-conspirator with AE and Zen Mind and JP Morgan, ganging up on Violin. The only thing I have to say is.....I also shot JFK in my spare time!

    I lay out my arguments here:
    http://seekingalpha.co...
    Dec 18 11:35 PM
  • Violin Memory's Collapse Continues, But How Does This End? [View article]
    Iluminati: Many of the big vendors have already made their flash acquisitions to supplement their product portfolios; there are not many buyers left with both the need and deep pockets. If there is a buy, it will be at an extraordinarily low bid. If a company wants to do something with patents, it typically wants to buy the people, products, and sales pipeline along with them; patents alone don't generate a lot of money.

    Here is my product oriented review from Oct. 4.

    http://seekingalpha.co...

    At this part of the downward cycle, when key players like software CTO John Goldick leave, potential customers start losing faith that a company will be around to service their product. This is especially true for the Fortune 500 with respect to corporate-wide systems. That is why they typically buy from big vendors that have a long track record of being in business and have bench strength if people leave. So revenue generation will face additional headwinds.

    I also agree with doggiecool's perspective above.
    Dec 12 01:44 PM
  • Oracle's Growth Drivers: Cloud Adaptation And Database Management Systems [View article]
    Alpine: Good insight. I actually wrote a paper on this a few years ago. Microsoft SQL Server is showing up in more important workloads, and it surprised me; hence the paper. I used to dismiss Microsoft SQL Server as a very low-end database, but they have moved up the food chain.

    With that said, I would like to make a few suggestions. 1) Microsoft SQL Server will not displace high end Oracle databases or IBM DB2. But on the low end and mid-range of the classic SQL database market, Microsoft will be a strong competitor to Oracle. MS SQL is "good enough" at the low end and mid-range, and this will limit Oracle growth in this market segment. The second suggestion is that MS Windows 8 is not the reason for the improvements in MS SQL. MS has a separate data center group (which is different from the OS group) that focuses on features specific to running a high end data center. What jumped out at me is that the MS people actually did a pretty good job improving the high availability and manageability of MS SQL.

    The third point is that I think it very unlikely that MS SQL will surpass Oracle DB or IBM DB2 on high end databases. There are literally thousands of pages of documentation for the Oracle and IBM DB2 databases (I don't recommend reading them all unless you want to be driven to madness) covering things that those outside the specialty of databases seldom think about. The only place MS SQL will make inroads against Oracle is the low end to mid-range. The NoSQL and Object Oriented Databases (OODB) have scalability advantages over Oracle, IBM DB2, and MS SQL for unstructured data.

    I don't see MS making any knock out blows in 2014. It will be incremental. When you get into more specialized skill sets specific to HA and RAS and ACID, and horizontal applications such as CRM, HCM, the pool of people who know what they are doing declines dramatically. It seems the world is now populated with people who know how to run Linux/X86 for web platforms but tend not to know too much about Tier 1 OLTP and mission critical applications or horizontal applications. It is pretty hard to achieve a break out in these areas.
    Dec 1 01:04 PM
  • Oracle's Growth Drivers: Cloud Adaptation And Database Management Systems [View article]
    Banmate6: Agree with both your comments.
    In fact, as far as I can tell, you are the only person I've seen on this site who understands SaaS, both its strengths and weaknesses. Most of the comments I see regarding cloud and its various permutations are a bit on the extreme side, without understanding the underlying technologies being used and their pros and cons.
    Alpine: Analytics is already being applied to unstructured data. I am currently advising a stealth-mode start-up which is seeking to do this 600% faster.
    Just to make sure we are on the same page, there are good use cases for NoSQL databases. They were just designed for a different mission than a traditional SQL database. The problem a NoSQL database is designed to solve is the scale of unstructured data - we are storing so much unstructured data that it boggles the mind. Once I was on site at a national intelligence site of one of the western allies. I was astonished that their requirements already exceeded the theoretical capacity of my new machine, which was one of the largest in the world at that time. We now talk about Petabytes, Exabytes, and Zettabytes on a routine basis. To put this in context, I watched Bill Maher rant about these new terms that he said were just invented, but people in the industry have been planning for this for years, and continue to plan and come up with new ideas to address this really, really hard problem.
    Nov 29 02:33 PM | 1 Like
  • Oracle's Growth Drivers: Cloud Adaptation And Database Management Systems [View article]
    Alpine:
    Structured data is not a rotting or dying foundation. Oracle and IBM DB2 are the leaders here. Much of this is Tier 1 OLTP (Online Transaction Processing) for big retail banks and big consumer companies, to name just two of many examples. This environment is very rigorous, expensive, and complex to build. Not many vendors have the understanding, or build products suitable, for this. The CAGR of structured data is about 20%.

    On the other hand, the CAGR of unstructured data is about 60%. But handling unstructured data is less rigorous, and there are many more competitors here. The key point is that the NoSQL DB vendors are targeting scalability for huge amounts of unstructured (file) data. A NoSQL database is not a replacement for Oracle or DB2.
    The NoSQL guys are in "oops" mode and trying to retrofit their databases with ACID capability to be better at OLTP. But I am not holding my breath. A lot of the capability you see in Oracle and DB2 has been built up over the last 30+ years, and cannot be re-built overnight.
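
    For readers outside the database world, here is a minimal illustration of the ACID/OLTP behavior being discussed: an atomic multi-row update that either fully commits or fully rolls back. SQLite is used purely to illustrate the concept; real Tier 1 OLTP runs on Oracle, DB2, and the like.

```python
# Minimal illustration of ACID / OLTP behavior: a transfer between two
# accounts either fully commits or fully rolls back. SQLite is used here
# purely to illustrate the concept.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # one atomic transaction: both updates happen, or neither
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # transaction rolled back automatically; balances unchanged

transfer(conn, 1, 2, 200)   # succeeds: balances become 300 / 300
transfer(conn, 1, 2, 9999)  # fails and rolls back: still 300 / 300
print(conn.execute("SELECT id, balance FROM accounts").fetchall())
```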

    But here is the key point: the growth is in unstructured data. Oracle is strong in structured data, but not as strong in unstructured data. As Oracle engages in this space, it will find many more competitors who are less expensive and good enough. These new vendors are not strong enough to destroy Oracle or IBM, but they will certainly constrain Oracle's growth (e.g. make it harder for Oracle to win new software licenses in this use case).
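
    To put the growth-rate difference in perspective, here is a quick compounding sketch using the rough CAGR figures above. The starting sizes are arbitrary; only the relative trajectories matter.

```python
# Quick compounding illustration of the ~20% vs ~60% CAGR figures above.
# Starting sizes are arbitrary; only the relative trajectories matter.

structured = 100.0     # arbitrary starting units
unstructured = 100.0

for year in range(1, 6):
    structured *= 1.20     # ~20% CAGR for structured data
    unstructured *= 1.60   # ~60% CAGR for unstructured data
    print(f"year {year}: structured {structured:7.1f}   unstructured {unstructured:7.1f}")

# After 5 years: structured ~249, unstructured ~1049 -- roughly 4x larger,
# which is why the competitive pressure concentrates in unstructured data.
```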

    I hear two extremes:
    1. Oracle is doomed and will be destroyed
    2. Oracle movement into the cloud will destroy all comers.

    I disagree with both.
    The middle ground is that Oracle will hold its ground in its traditional use cases (structured data) but will encounter more competition in unstructured data (where Oracle is not as strong), which will constrain revenue growth.

    It seems many people want to see a "knock out" blow, positive or negative. This is not Gulf War I, which was over in 100 days.
    This will be more like Afghanistan for Oracle, with both sides slugging it out over contested ground in unstructured data.
    Nov 29 12:37 PM | 2 Likes
  • Violin Memory Winning Over The Intelligence Community - Is The Fortune 100 Next? [View article]
    The basic premise of the article is not credible. Some success in national intelligence does not mean a vendor will be successful in high end commercial storage. National intelligence and high end commercial have completely different use cases. Financial results unsurprising.

    Violin both lacks features, and its implementation of the key features it does have is sub-standard, making the Violin product unsuitable for high end commercial databases, especially Tier 1 OLTP. Financial results unsurprising.

    In September 2011, Don Basile publicly stated in one of the trade rags that Violin was targeting EMC Symmetrix, which demonstrates a lack of understanding of the market opportunity, since it took over 20 years for that product to achieve its level of maturity in terms of RAS, ACID, data protection, disaster recovery, hot site capability, security, NDU, etc. Clearly, Violin cannot get close to fulfilling the requirements of that market segment. With respect to this article, Violin has been engaged in both national intelligence and high end commercial accounts (not just national intelligence) and has been failing at high end commercial for a long time. If you read their public use cases, they are really just deploying at the departmental, divisional, or small company level. They are not deploying as primary storage for the Fortune 500. There is no data that says they have done anything to their product, processes, or people which will allow them to learn, adapt, change, and become successful (non-credible marketing claims notwithstanding). The definition of insanity is doing the same thing over and over and expecting different results.
    That is what Violin is doing. Financial results unsurprising.

    An example of non-credible Violin marketing occurred in mid-August 2013 at the Flash Memory Summit, where I observed the Violin VP of product marketing sparring with EMC. Net/net: Violin said that all-flash memory arrays are the solution, and that EMC's flash/HDD perspective was wrong. Well, Violin does not even have deduplication! A combination of deduplication and compression is needed to help make flash memory arrays cost effective for storing data. All the other vendors have deduplication. For some reason, Violin has been unable to design, build, or deploy deduplication. There are a lot of implications behind this failure. So I found the statements made by the Violin VP of product marketing non-credible. Financial results unsurprising.

    I laid out my arguments here on Oct 4th: http://seekingalpha.co...
    Nov 22 01:11 PM | 2 Likes
  • Violin Memory Winning Over The Intelligence Community - Is The Fortune 100 Next? [View article]
    One more thing: it has been my experience in these types of precarious situations that one should look for "channel stuffing". This is the oldest trick in the book: sell like mad into distributors and book the revenue now, but the actual product does not show up at customer sites until much later, and sometimes it never ends up at customer sites at all.
    Nov 8 03:03 PM | 1 Like
COMMENTS STATS
57 Comments
37 Likes