Since Voltaire (VOLT), Mellanox (MLNX), and QLogic (QLGC) announced QDR InfiniBand products last year, many industry observers have been wondering if 40G InfiniBand will leap ahead of 40 Gigabit Ethernet, especially with the IEEE still figuring out what its standard should look like. Yet OC-768 Packet-over-SONET ports have been available on routers for nearly seven years now, and they are hardly a dominant technology in spite of having been first to market above 10G.
What's been impressive about QDR InfiniBand is not the fact that it's here, but its cost: under $500 a port, less than a 10GBASE-SR transceiver module. OC-768 data ports still cost about the same as they did seven years ago, roughly $600,000, and will need a lot more than the 100+ ports AT&T (T) has purchased for its MPLS core in order to come down in price. Moreover, OC-192 Packet-over-SONET ports still go for well over $100,000, even with intermediate-reach 1310nm optics.
Why Is InfiniBand So Inexpensive?
One of the reasons InfiniBand is so inexpensive is that the switches carrying the protocol are designed to haul traffic around data centers and supercomputing clusters. These platforms don't come with a variety of PoS, ATM, or T1/E1 configuration options, nor do they feature overcooked operating systems that can forward any type of packet to anyplace on the planet. And Ethernet vendors are starting to notice.
While there is plenty of attention being paid to the forthcoming 40 and 100 Gigabit Ethernet standards, one of the major advancements coming to Ethernet is not just a new line rate, but a new topology. Startups like Woven Systems and Fulcrum Micro are developing products that allow 10GigE to ride over Clos architectures instead of relying on Rapid Spanning Tree to forward frames. Moreover, like InfiniBand vendors, their design focus is on low latency, not line card flexibility.
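The appeal of a Clos fabric over a spanning tree comes down to path count: spanning tree leaves one active path between any two switches, while a three-stage Clos (fat-tree) built from identical switches offers many equal-cost paths through the core. A minimal sketch of the standard fat-tree arithmetic, where the switch radix `k` is an assumed parameter rather than a figure from any vendor's product:

```python
# Sketch: scale of a nonblocking three-stage Clos (fat-tree) built from
# identical k-port switches, versus spanning tree's single active path.
# Formulas follow the textbook fat-tree construction; k is illustrative.

def fat_tree_hosts(k: int) -> int:
    """Hosts supported at full bisection by a three-stage fat-tree."""
    return k ** 3 // 4

def fat_tree_core_paths(k: int) -> int:
    """Equal-cost core paths between hosts in different pods
    (spanning tree would use exactly 1)."""
    return (k // 2) ** 2

for k in (24, 48):
    print(f"k={k}: {fat_tree_hosts(k)} hosts, "
          f"{fat_tree_core_paths(k)} core paths")
```

With 48-port switches, that is 27,648 hosts and 576 equal-cost paths through the core, which is why multipath forwarding rather than a faster spanning tree is the structural change these startups are betting on.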
Microeconomics and Microelectronics
While Ethernet will still need to be tweaked to match InfiniBand's microsecond latencies, the distinctions between the protocols are not as pronounced as the differences among Ethernet, ATM, Token Ring, and FDDI, out of which only one has really survived. Yet while Ethernet is likely to remain dominant, it is also unlikely to knock out InfiniBand the way it crushed competing link layer technologies in the late '90s.
Having shipped in the billions, Ethernet chips have reached a cost per bit that no alternative will ever be able to match through high-volume production. And one of the chief differences between Ethernet and its past competitors was how it framed data. Its long, variable-length frames offered reasonable quality of service not through cell-level guarantees, but by simply outrunning fixed-length competitors in speed while selling at lower prices. Many of the LAN protocol's detractors were stunned when Fast Ethernet came to market offering six times the speed at half the price of Token Ring. And there wasn't much hope for 25 Mbps ATM-to-the-Desktop NICs that sold for $300 and had to compete against Fast Ethernet NICs that were free. Even in the switch market, things didn't look good for 100 Mbps FDDI when Gigabit Ethernet arrived with 10x the bandwidth at less than 1x the cost.
But above a gigabit, it will be harder for Ethernet to deliver a knockout blow. With less than a nanosecond between bits, multi-gigabit networks rely on common clocking technologies, not individual framing technologies, to maintain quality of service. Because these technologies share so many components, including LVDS signaling, 8B/10B encoding, SerDes transceivers, integrated clock and data recovery, and now Clos switching, the economics of deploying a particular link layer protocol are increasingly a function of connection distance, configuration, and transceiver reach, not the name of the framing technology.
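The shared 8B/10B coding layer also explains a number that often confuses newcomers: QDR InfiniBand's "40G" is the signaling rate across four 10 Gb/s lanes, while the usable data rate is lower because 8B/10B maps every 8 data bits into a 10-bit symbol. A quick sketch of that overhead arithmetic (lane counts and rates here are the well-known QDR figures, not new claims):

```python
# Sketch: usable data rate under 8B/10B line coding, which maps each
# 8-bit byte to a 10-bit symbol, leaving 80% of the signaling rate
# for payload.

def payload_gbps(signal_gbps: float) -> float:
    """Usable data rate (Gb/s) after 8B/10B coding overhead."""
    return signal_gbps * 8 / 10

# QDR InfiniBand: 4 lanes x 10 Gb/s signaling = 40 Gb/s on the wire
qdr_signal = 4 * 10.0
print(payload_gbps(qdr_signal))  # 32.0 Gb/s of data
```

The same 20% tax applies to any interface built on 8B/10B SerDes lanes, which is part of why the physical-layer economics converge regardless of the framing protocol on top.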
Time to market might still be an important objective for individual vendors, but it really doesn't mean much for particular link layer protocols, especially when they're starting to look alike.