One of the maddening aspects of the cloud computing arena is the lack of adequate metrics with which to compare offerings and providers. Given that lack, anyone can say just about anything.
But that is about to change, and the change will matter enormously, especially for public clouds. Once customers know who has game and who merely has claims, market shares could shift quickly, and that will have a dramatic impact on the underlying stocks.
Past efforts in this area have either been limited – Amazon's (NASDAQ:AMZN) CloudStatus mainly tells you if the cloud is up – or focused on data serving, like Yahoo's (NASDAQ:YHOO) Cloud Serving Benchmark.
But now we're about to get apples-to-apples comparisons between working clouds using standard workloads.
Duke professor Xiaowei Yang has just completed a study on cloud metrics with two Microsoft (NASDAQ:MSFT) researchers and a graduate student named Ang Li. They call their metrics for comparing clouds CloudCmp.
They focused on the relative speed of three basic functions:
Table – A measure of database handling. How long does it take to get, insert, or look up a row in a database?
Blob – A measure of file transfer speed. How long does it take to upload a picture or other object to blob storage, or download it?
Queue – A measure of messaging speed. How long does it take to send or receive a message from a queue?
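The three metrics above all reduce to the same idea: time a basic operation and record the latency. A minimal sketch of that kind of measurement, using Python standard-library stand-ins (an in-memory SQLite database for the table service, a byte buffer for blob storage, a local queue for the message queue — all assumptions for illustration, not CloudCmp's actual harness):

```python
import io
import queue
import sqlite3
import time

def timed_ms(fn):
    """Run fn once and return elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000

# Table: insert and look up a row (in-memory SQLite stands in for a cloud table service).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
insert_ms = timed_ms(lambda: db.execute("INSERT INTO items VALUES (1, 'hello')"))
lookup_ms = timed_ms(lambda: db.execute("SELECT payload FROM items WHERE id = 1").fetchone())

# Blob: upload and download a binary object (a memory buffer stands in for blob storage).
blob = b"\x00" * 1_000_000  # a 1 MB "picture"
buf = io.BytesIO()
upload_ms = timed_ms(lambda: buf.write(blob))
download_ms = timed_ms(lambda: buf.getvalue())

# Queue: send and receive a message.
q = queue.Queue()
send_ms = timed_ms(lambda: q.put("message"))
recv_ms = timed_ms(lambda: q.get())

for name, ms in [("table insert", insert_ms), ("table lookup", lookup_ms),
                 ("blob upload", upload_ms), ("blob download", download_ms),
                 ("queue send", send_ms), ("queue receive", recv_ms)]:
    print(f"{name}: {ms:.3f} ms")
```

Against a real cloud, each lambda would instead wrap a network call to the provider's table, blob, or queue API, which is where the interesting differences show up.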
The researchers then took measurements, using these metrics, across four public clouds – Amazon EC2, Google (NASDAQ:GOOG) AppEngine, Microsoft Azure and Rackspace (NYSE:RAX) CloudServers. The authors did not identify which cloud was which in discussing their results.
What's important is that they found wide variation among the clouds studied. There was also wide variety in pricing and pricing models. One provider prices per CPU used; others base prices on instances of four or even eight cores, per instance or per running program.
Costs per data transaction ranged from a low of under a tenth of a cent to a full cent. That doesn't sound like much, but these systems are designed to handle many transactions per second. Scaling latencies also varied widely, though in general Windows latency was longer.
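A tenth of a cent versus a full cent is a 10x spread, and at scale it compounds fast. A back-of-the-envelope calculation (the 1,000 transactions per second is an assumed illustrative load, not a figure from the study):

```python
# Per-transaction costs from the reported range, in dollars.
low_per_tx = 0.001   # under a tenth of a cent
high_per_tx = 0.01   # a full cent

tx_per_second = 1_000            # assumed sustained load, for illustration only
seconds_per_day = 24 * 60 * 60

low_daily = low_per_tx * tx_per_second * seconds_per_day
high_daily = high_per_tx * tx_per_second * seconds_per_day

print(f"low:  ${low_daily:,.0f}/day")
print(f"high: ${high_daily:,.0f}/day")
```

At that load the cheap end works out to roughly $86,000 a day and the expensive end to roughly $860,000 – the kind of gap that moves purchasing decisions once buyers can see it.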
The work is not complete. There are still variables to measure: network performance, both internal and external to the cloud, and behavior under different application types – an e-commerce application, a game-type application requiring low latency, a compute-intensive scientific application.
The authors also know that there is a trade-off between breadth and depth in making these measurements, that a "snapshot" look at speed may differ from continuous measurements over time, and that (as usual) your mileage will vary. Measuring your own applications and comparing those numbers with CloudCmp's gives a better result.
This is an early study, in other words, but it's pretty clear that serious speed comparisons among clouds from reliable third parties could be months, not years, away.
Be aware of that as you invest.
Disclosure: I am long GOOG.
Additional disclosure: Yeah those 20 shares are still sitting out there.