Seeking Alpha


FormerStorageGuy's Comments
  • "A retailer (Amazon), an entertainment company (Netflix), an advertiser (Google), a résumé site (LinkedIn), and an address book (Facebook) have ended up shaping the future of [tech] infrastructure," writes a bemused Ashlee Vance. Unsatisfied with the solutions offered by the likes of IBM, HPQ, EMC, NTAP, and ORCL, Web giants have used home-grown hardware and open-source software projects to fulfill their needs. Their ideas and technology are already upending the server industry ... and their impact stands to grow as more companies buy in.
    This is 10-year-old "news".

    The key is that this handful of companies builds data centers on such a scale, and uses compute/storage/network resources in such a consistent way, that they could absorb the work (the value-add reflected in pricing) done by the software in a storage system (NTAP, EMC), the database (ORCL, MSFT), etc., into their application software and the customized Linux on which their applications run.

    The nature of those applications is also parallel and replicated, without the need to commit the bank deposit you just made once, and only once, to an account balance that needs absolute integrity. Much of the infrastructure that lets enterprise applications achieve that level of robustness is simply irrelevant in Google-type applications.

    This then allows them to buy bare hardware from the Southeast Asian ODMs, the same supply chain typically used by the nominally hardware companies named. There is very much a disintermediation of the traditional hardware companies... but it only applies to a handful of named customers. The market in general can't duplicate this disintermediation, because most customers do not have a single application with enough scale, much less one whose software is written in house.

    Google's first computer rack was on display at the Computer History Museum in Mountain View the last time I was there (probably 5 years ago). Google took this track from the very beginning.
    Mar 16, 2013, 03:43 PM | 3 Likes
  • Mellanox: Between Perfect Storms
    As to speed: FusionIO's latest board is, I think, 1.5 GB/s, and with enough slots, power, and cooling I would expect you could put at least 5 in a server box. So for a single server the FusionIO approach remains an order of magnitude faster, until we start seeing 40G or 100G networking to the individual server.
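
    A quick back-of-the-envelope sketch of that comparison. The figures below are the rough estimates from this comment, not vendor specifications, and the conversion ignores protocol overhead:

```python
# Rough aggregate-bandwidth comparison: local flash cards vs. network links.
# All figures are ballpark estimates from the comment above, not vendor specs.

flash_card_gbps = 1.5          # GB/s per FusionIO-class PCIe card (estimate)
cards_per_server = 5           # plausible slot/power/cooling budget per server

local_flash_total = flash_card_gbps * cards_per_server   # GB/s in one server

# Network links, converted from Gbit/s to GB/s (divide by 8; ignores overhead)
ten_gbe = 10 / 8               # 1.25 GB/s
forty_gbe = 40 / 8             # 5.0 GB/s
hundred_gbe = 100 / 8          # 12.5 GB/s

print(f"local flash: {local_flash_total:.1f} GB/s")
print(f"10GbE link:  {ten_gbe:.2f} GB/s -> flash is {local_flash_total / ten_gbe:.0f}x faster")
print(f"40GbE link:  {forty_gbe:.1f} GB/s; 100GbE link: {hundred_gbe:.1f} GB/s")
```

    At 10GbE the local flash wins by roughly 6x; only at 40G/100G does the network begin to close the gap, which is the point made above.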

    More importantly, if you use the native FusionIO interface, the latency for an I/O is measured in tens of microseconds, whereas the latency through a SCSI stack will likely be over a millisecond.

    Given that the performance of a *traditional* application in an enterprise data center is proportional not to maximum bandwidth, but rather to 1/(storage latency), this new FusionIO model could radically increase the performance of legacy applications across the board. If you look at the internals of an Oracle database under heavy load (I don't work there either), the traditional limiters are 1/(write latency) for the redo log, and an exponential rise in lock contention for a given rate of transaction completion: as each individual transaction takes longer, the number of unfinished transactions in process rises proportionally. It's the storage latency that matters, not the bandwidth. And this is for everyone, not just the high-frequency traders, who I do agree are outliers. Hence my conclusion that even though it will take decades to play out, the SCSI host software stack has reached maturity and will not see further design-ins.
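
    The relationship between commit latency and unfinished transactions can be sketched with Little's law (N = X · R). The latency and throughput numbers here are purely illustrative assumptions, not measurements of Oracle or any real system:

```python
# Little's law: transactions in flight N = throughput X * latency R.
# Illustrative numbers only -- not measurements of any real database.

target_tps = 10_000             # desired committed transactions per second

scsi_stack_latency_s = 1e-3     # ~1 ms commit through a traditional SCSI/SAN path
native_flash_latency_s = 50e-6  # ~50 us commit via a native flash interface

# To sustain the same throughput, the slower path must keep 20x more
# transactions unfinished and in flight -- and lock contention among those
# in-flight transactions is what actually throttles the database.
in_flight_scsi = target_tps * scsi_stack_latency_s
in_flight_flash = target_tps * native_flash_latency_s

print(f"SCSI path:  {in_flight_scsi:.1f} transactions in flight")
print(f"flash path: {in_flight_flash:.1f} transactions in flight")
```

    Cutting commit latency 20x cuts the in-flight transaction count 20x at the same throughput, which is why latency rather than bandwidth governs legacy application performance.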

    Witness the OpenStack "Swift" object storage application I mentioned above, running on a white-box server with commodity disks. It doesn't use the SCSI stack -- it just presents storage over Ethernet as if web pages were being read. Vastly simpler and lighter weight. I believe most Amazon Web Services customers choose object storage; I suspect, but don't know, that Amazon wrote its own object storage software and that Swift was developed later.
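
    A minimal sketch of why that model is so much lighter: a read is just an HTTP GET on a simple path, with no SCSI or block layer underneath. The account, container, and object names below are hypothetical, and the path layout assumes Swift's usual /v1/account/container/object convention:

```python
# Reading an object from a Swift-style object store is just an HTTP GET on a
# simple path -- no SCSI stack, no block layer. Names below are hypothetical.
from urllib.parse import quote

def swift_object_path(account: str, container: str, obj: str) -> str:
    """Build a Swift-style object path: /v1/<account>/<container>/<object>."""
    return "/v1/" + "/".join(quote(p, safe="") for p in (account, container, obj))

path = swift_object_path("AUTH_demo", "photos", "cat.jpg")
print(path)  # /v1/AUTH_demo/photos/cat.jpg

# An actual read would then be an ordinary authenticated HTTP request, e.g.:
#   GET https://storage.example.com/v1/AUTH_demo/photos/cat.jpg
#   X-Auth-Token: <token>
```

    The entire storage protocol is "fetch this URL", which is exactly the "as if web pages were being read" point above.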

    Google is even simpler: it just puts a few disk drives with each server. You can see the basic idea in the original Google rack in the Computer History Museum in Mountain View, but it's had many iterations since, and the current details are of course Google trade secrets.
    Mar 13, 2013, 05:43 PM
  • Mellanox: Between Perfect Storms
    I took the position, very strongly, over 15 years ago that Fibre Channel would rapidly converge into Ethernet. This helped influence the investments that led to iSCSI.

    I was wrong about the time scale.

    I also took the position, very strongly, that InfiniBand should converge into Ethernet. That was one of many voices which led to RDMA over Ethernet, a now 10-year-old set of standards (recently refreshed by RoCE, an InfiniBand standard that amounts to achieving the functionality of InfiniBand over Ethernet).

    I was wrong about the time scale on that one too, but I do think that when computers in a data center running parts of the same application collaborate, they will increasingly use RDMA, to match the time scale of flash storage instead of moving disk heads. We just saw Microsoft release "SMB Direct", which supports file storage (NAS) over RDMA in the data center. RDMA is the enabler for file storage at the same speed as Fibre Channel block storage.

    I also didn't fully appreciate the technical difficulty of mixing bursty block storage traffic on the same wires as TCP/IP traffic. (Hours of arcane technical discussion omitted.) Mellanox, in my opinion, has very good technology addressing this space (though nobody realizes it), while the FCoE community has not publicly addressed these fundamental issues.

    If you just randomly put FCoE traffic end to end across a 40Gb Ethernet network backbone, without thinking it through, then under a simultaneous heavy load of FCoE traffic and TCP/IP traffic, the performance of one of the two will very likely collapse. The engineering it takes to prevent this isn't rocket science; the simplest fix is never letting the two classes of traffic share a switch-to-switch connection, except at the edge of the network by the server or storage device. But that's not real convergence, it's cosmetic convergence. (My opinions here; there's lots of room for technical debate, but also lots of room for pulling wool over people's eyes.)

    Bottom line: the FCoE PR machine has been successful; storage interconnect evolves glacially; and if you want to see where storage interconnect will be in a decade or two, go look at Google or Amazon Web Services. Mellanox is extremely well positioned, technically, for that future.
    Feb 8, 2013, 02:36 PM
  • Mellanox: Between Perfect Storms
    This is a very difficult problem that I have been struggling with professionally for over 15 years. Given that it takes a decade for a winning storage protocol to get from the first mover putting the first $100M on the poker table to widespread acceptance, at least a decade of maturity for those that succeed, and a decade or two of decline, storage interconnect moves almost as slowly as telecom equipment used to. (I'm not talking about 16 Gbit/s Fibre Channel replacing 8 Gbit/s, replacing 4 Gbit/s; those are all the same protocol.)

    On the commodity side, an open-source alternative to both block storage (i.e., Fibre Channel) and file storage (i.e., a Network Appliance box), called "object storage", has emerged over the last decade. A good example is the OpenStack "Swift" open-source effort. A web data center buys commodity x86 servers with lots of disk drive slots, loads commodity disk drives, and runs this free software. Cash registers do not ring in the traditional "storage" world.

    More generally, and speaking in terms of evolution over decades and not what's going to affect a particular player's top line or bottom line next year: The computer industry as a whole has taken four swings at converging storage networks with TCP/IP networking over the past two decades. I've personally touched all four developments, but not been at the center of any of them. All four chose to preserve the legacy SCSI storage stack -- from the days when a disk drive was the size of a dishwasher and sat on the floor next to the computer -- and place the network boundary below the SCSI stack. The four are Fibre Channel, InfiniBand, iSCSI, and FCoE. It's my opinion that all four made significant contributions to the industry and made a positive contribution to meeting customer needs, and that none of the four succeeded at reconvergence overall. It's also my opinion that it is time to let go of the SCSI legacy.

    Look to how the Google search engine works, or most Hadoop implementations today, or the really advanced Oracle clustering (NFI / I don't work there), or how Microsoft Exchange (NFI / don't work at Microsoft either) has handled storage in its last few releases. The pendulum has swung back, we now have disk drives in the server again, and the server is running (part of) the application right next to the disk drives, minimizing the amount of data which has to flow over the network.

    Separately, look at what FusionIO (NFI / don't work there either) and similar companies have been able to do by putting solid-state storage in a graphics-card slot in the computer, instead of at the end of an order-of-magnitude-slower storage cable, under a 30-year-old and two-orders-of-magnitude-slower software storage access path. Solid-state storage is now fast enough that it can't run "at speed" when constrained by the legacy SCSI software path, or by hardware that looks like Fibre Channel.

    So in cases where a big application is running, and performance or concurrent users matters, I think we're looking at running (parts of) the application in the server where the storage happens to be, through much faster (non SCSI) interfaces.

    And in cases (think Amazon's cloud, NFI, don't work there either) where people are renting computer time and storage space, we'll probably see "object storage" as the low cost leader.
    Feb 8, 2013, 01:56 PM
  • Mellanox: Between Perfect Storms
    "Low latency Ethernet" (by whatever trade name a given vendor sells it) can already displace InfiniBand in most customer situations.

    InfiniBand is about getting a simple network packet from here to there across a data center in minimum time. Ethernet (really TCP/IP over Ethernet) is about moving a network packet, preformatted to go around the world, through a network path that needs shock absorbers to handle peak loads efficiently, and it has sacrificed that "minimum time" (latency) to provide those shock absorbers (buffer depth).
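
    The latency cost of those shock absorbers is easy to sketch: a packet that arrives behind a full queue waits roughly buffer_bytes / link_rate before it moves. The buffer sizes below are illustrative assumptions, not any vendor's specifications:

```python
# Worst-case queueing delay = bytes buffered ahead of a packet / link rate.
# Buffer sizes are illustrative assumptions, not any vendor's specs.

link_rate_bps = 10e9                 # 10 Gbit/s link
deep_buffer_bytes = 10 * 2**20       # 10 MiB port buffer, WAN-friendly design
shallow_buffer_bytes = 128 * 2**10   # 128 KiB on-chip buffer, InfiniBand-style design

def drain_time_us(buffer_bytes: float, rate_bps: float) -> float:
    """Microseconds to drain a full buffer at the given link rate."""
    return buffer_bytes * 8 / rate_bps * 1e6

print(f"deep buffer:    {drain_time_us(deep_buffer_bytes, link_rate_bps):.0f} us worst-case wait")
print(f"shallow buffer: {drain_time_us(shallow_buffer_bytes, link_rate_bps):.1f} us worst-case wait")
```

    A deep buffer absorbs bursts but can add milliseconds of queueing delay; a shallow on-chip buffer bounds the wait at around a hundred microseconds, which is the latency-versus-buffer-depth trade described above.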

    An Ethernet switch can be designed like InfiniBand: on-chip buffers, minimal provisions for packet processing (analyzing headers, rewriting headers, complex or big table lookups to figure out where the packet is going and whether it's allowed to go there), and optimistic forwarding rather than coordinating with the destination to make sure there's space before sending. That's the first part of "low latency Ethernet" (along with similarly designed network interface cards). It's almost as fast as InfiniBand, and faster if you would otherwise have to translate the packets into Ethernet as they go out of the data center.

    Which gets to the most recent wrinkle: special-purpose switches for the financial sector's "high frequency trading" crowd, which *only* take a market feed from outside and broadcast it down all their downlinks, or accept trade packets directly from computers doing high-frequency trading and multiplex them onto an uplink to the stock exchange. Particularly when such a switch does just enough (NAT) to avoid needing a more complex switch or router between it and the outside world, in this narrow application such switches are far faster (lower latency) than InfiniBand. Needless to say, it's very high prestige to be there, but I'm not so sure there's enough worldwide sales volume to make a profitable business for a big company...

    Separately, on FCoE: my opinion only, not that of my colleagues or employer, is that FCoE was a brilliant market development move on Cisco's part. From a technology perspective, FCoE dusted off a technology alternative that was discarded out of hand by the smart people on the standards committee who developed iSCSI 6 years earlier. From a sales perspective, Cisco has a dominant share in data center networking, including account control through its direct sales team and worldwide resellers. Cisco has maybe a 1/3 share in Fibre Channel, virtually all indirect through storage vendors like EMC, so it does not have account control there. Smaller competitors are chipping away at its network revenue by cutting price, particularly for the simpler devices next to the servers (where the majority of revenue is). FCoE creates a story (get rid of Fibre Channel) which plays really well to executives and can be conveyed in just a few sentences on an elevator. Superficially it's simplification. What it actually does is cause your data center network to take on the single-vendor-ness of Fibre Channel. Not only that, it requires every switch which will touch an FCoE packet to be replaced, because they need these new features (memories of all the money Cisco made refreshing campus networks so people could have VOIP phones on their desks). More importantly, the FCoE network is sold by Cisco direct and through its resellers: account control, customer lock-in, who could ask for more?

    In the six years since Cisco/Nuova announced FCoE, in every sales situation in both the storage and network markets, Cisco has been able to raise FCoE as an issue in a way that put its competitors on the defensive. In the R&D labs, all the discretionary resources of the Fibre Channel industry (and then some), and much of the discretionary resources of the network industry, have been diverted from other customer needs to responding to FCoE. And in the end, when people started catching up 2 years ago, Cisco simply changed the game from the original technical plan (any DCB-compliant Ethernet network can carry FCoE traffic any number of hops) to more of an FC-BB-6 approach (FCoE traffic is fundamentally forwarded by Fibre Channel-aware switches from the same vendor). Brilliant business development move. Brilliant sales move.
    Jan 14, 2013, 01:51 PM
  • Mellanox: Between Perfect Storms
    As a technologist more than an investor, who works for a big technology company, two observations:

    1. Mellanox has an excellent low-level technical implementation of Fibre Channel in its switch chips. The big obstacle is that Fibre Channel (including Cisco's Fibre Channel over Ethernet) networks essentially require all of their switches to come from the same vendor. Despite claims of standards adherence to the contrary, in the real world the customer has to buy their entire Fibre Channel network from the same technology vendor (Brocade or Cisco, typically as resold by their storage vendor such as EMC). Because neither Brocade nor Cisco uses Mellanox's switch chips in their Fibre Channel switches -- and since both Brocade and Cisco design their own switch chips, this is unlikely in the future -- Mellanox is no more able to penetrate the Fibre Channel space than QLogic was when it tried to extend its top-2 position in Fibre Channel HBAs (computer interface cards) into the Fibre Channel switch business as well. The existing vendors are just too entrenched for a new entrant. Cisco's FCoE is about becoming more entrenched. The next-generation Fibre Channel standard FC-BB-5 is likewise going to continue to entrench the incumbents (although it does allow other vendors to supply edge switches).

    2. This doesn't matter in the long run (decades). The new generation of data centers, such as those run by Google, Amazon, Facebook, Yahoo, etc., crams orders of magnitude more servers into a data center than traditional enterprise users do. They are also extremely sensitive to both cost and overall power consumption. None of them has chosen to use Fibre Channel, and there is relatively little use of the expensive or even mid-priced storage systems historically attached to Fibre Channel. Instead, individual disk drives are typically put in shelf-level computers, and part of the application (like Google search) does useful work in the computer next to the disks, reducing storage traffic on the network.

    Mellanox's network is extremely well suited to being the "fabric" (the network) inside one of these massive data centers. Historically, Ethernet protocol has been used in these data centers, and Mellanox products are InfiniBand protocol. Mellanox has started putting out products that are dual use (can be either Ethernet or InfiniBand). Over time (measured in decades) as enterprises migrate to the more efficient data center model, they will phase out Fibre Channel and adopt cost effective data center fabrics (purpose built networks for the data center). Mellanox's core technology is faster and cheaper than Cisco's, not burdened with the legacies which will in the end be uncompetitive in streamlined data centers. Whether Mellanox has the critical mass and sales ability to actually win share over time is of course an open question.

    (I am not employed by, nor do I have or expect to take stock positions in, any company named in this article.)
    Jan 13, 2013, 09:55 PM | 2 Likes
  • Activision Blizzard Share Drop Offers Golden Opportunity
    An interesting factoid: it seems the new MMO "RIFT" has somehow acquired a million pre-launch subscribers.

    Given that Blizzard has just made major changes in World of Warcraft with which a number of customers are unhappy, and has responded to those unhappy customers with more arrogance than understanding, it is possible that many of RIFT's new subscribers represent defections from WoW. And they're $10-$15/month defectors in the U.S. and Europe, not pennies-an-hour defectors in Asia.

    The sky isn't falling, and past history says many people will try a new game and then return to WoW, but at the same time we have the intersection of a credible competitor and a disgruntled customer base, really for the first time since WoW's release.

    Just ups the volatility on that revenue / cash flow, at least for the next 6 months.
    Feb 25, 2011, 02:01 AM | 1 Like
  • Notes From the 2010 Brocade Shareholder Meeting
    Ummm... let's see: IBM is really more about services than owning core technology these days. Brocade has been very successful selling Fibre Channel switches etc. through big server companies like IBM and HP and Sun, but a buyout by one of them would likely, over time, cost it the business of all the others. Cisco spent a billion dollars to enter the Fibre Channel switch market, only to learn it had to follow Brocade's model and so did not have the account control it's used to; 3 years ago it launched "Fibre Channel over Ethernet" to cause a technology shift that would consolidate those sales back into its own network sales force (opinion). This inspired Brocade to acquire Foundry so it would no longer be a pure play in Fibre Channel (out of business if FCoE won). But in turn, Brocade learned it couldn't sell very many Foundry switches against Cisco using the model that had made it so successful in Fibre Channel (opinion/guess), so it has now pulled in John McHugh to drive channel sales, hopefully in a way that doesn't alienate its Fibre Channel OEM relationships. I don't think your buy of BRCD was a bet on an acquisition; I think it was a bet that Cisco has sufficiently alienated its channel that BRCD can take significant share there at the same time HP/3Com is coming together. BRCD is run by smart people. Time will tell.
    Apr 15, 2010, 01:40 AM | 1 Like