By Carl Howe
Yesterday's New York Times nominates Google (NASDAQ:GOOG) as the Zen Master of the Anywhere Internet era because it is using network effects like Microsoft (NASDAQ:MSFT) did during the PC revolution. Personally, I like Google's chief economist's reason better: the company focuses on learning from experience:
Google, it seems, is the emerging dominant company in the Internet era, much as Microsoft was in the PC era. The study of networked businesses, market competition and antitrust law is being reconsidered in a new context, shaped by Google. Google’s explanation for its large share of the Internet search market — more than 60 percent — is simply that it is a finely honed learning machine. Its scientists constantly improve the relevance of search results for users and the efficiency of its advertising system for advertisers and publishers. “The source of Google’s competitive advantage is learning by doing,” said Hal R. Varian, Google’s chief economist.
But this isn't your father's learning by trial and error. Google learns from what is rapidly becoming a new and powerful trend: organizing and learning from the petabytes of data it collects. Last week, Wired Magazine ran a fascinating set of articles titled The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. The argument put forward is simple and provocative: large data sets provide new insights impossible to achieve in any other way. The poster child of Chris Anderson's article is Google, whose data-driven ways I have written about before:
For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn't pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.
Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required.
That's why Google can translate languages without actually "knowing" them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.

Speaking at the O'Reilly Emerging Technology Conference this past March, Peter Norvig, Google's research director, offered an update to George Box's maxim: "All models are wrong, and increasingly you can succeed without them."
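The "statistics of incoming links" idea Anderson describes is essentially PageRank: a page's importance is computed purely from the link graph, with no semantic analysis of content. Here is a minimal sketch of that style of ranking, using power iteration over a toy, hypothetical link graph (the graph, function names, and parameters are my own illustration, not Google's actual implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank pages purely from link statistics.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank score (scores sum to ~1.0).
    Note: this toy version assumes every page has at least one outlink.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform ranks
    for _ in range(iterations):
        # every page gets a small baseline share, regardless of links
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            # a page passes its rank, evenly split, to the pages it links to
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: "c" has the most incoming links,
# so it should come out on top -- no content analysis needed.
web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(web)
best = max(ranks, key=ranks.get)
```

Running this, `best` is `"c"`, the page with the most (and best-ranked) inbound links, which is the whole point of the "no causal analysis required" argument: the graph's statistics alone decide the ordering.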
Now I don't buy the concept that we're going to use databases to replace the scientific method. But I do think that we're entering an era where companies will fall into two categories: those that build massive proprietary databases and those that rent them. The former will be the new landlords of Anywhere markets; the latter will be Anywhere sharecroppers.
An example of this trend is electronic mapping companies. Fifteen years ago, a bunch of little US companies grabbed the TIGER database from the US Census Bureau, cleaned up the data, and published their maps for license. Today, that market has consolidated down to two dominant companies that manage multi-terabyte map databases: Tele Atlas (OTC:TLATF) and Navteq (NVT). Even Google recognizes that these geo-mapping companies currently have the advantage over it: Google just signed a five-year deal with Tele Atlas for Google Maps and Google Earth data in more than 200 countries outside the US, and it has a similar licensing deal with Navteq for US data.
So even the mighty Google can't compete with everyone when big data is involved. And Google's recognition of that fact bodes well for a robust ecosystem of Anywhere services now and in the future.