As expected, Facebook (FB) filed its much-awaited S-1 registration statement this week for its proposed IPO. As a result, the Securities and Exchange Commission's EDGAR website, where investors can access regulatory documents, crashed and became almost unavailable, as reported by AllThingsD:
The SEC got back to us and in response to the question of whether this [crash] was related to a Facebook surge, spokesman John Nester said, "Greatly increased traffic that began shortly before 5 pm slowed the public website. We are bringing on additional capacity to handle the load."
Is Facebook better than the SEC at handling spikes in user volume? At the very least, it is fully aware that an outage could harm its business (from the company's S-1 filing):
Our systems may not be adequately designed with the necessary reliability and redundancy to avoid performance delays or outages that could be harmful to our business. If Facebook is unavailable when users attempt to access it, or if it does not load as quickly as they expect, users may not return to our website as often in the future, or at all. As our user base and the amount and types of information shared on Facebook continue to grow, we will need an increasing amount of technical infrastructure, including network capacity, and computing power, to continue to satisfy the needs of our users.
More seriously, since its inception FB has managed to grow its online business (and cope with increasing user volumes) mainly through third-party data centers, and it has only recently moved to a different strategy:
In 2011, we began serving our products from data centers owned by Facebook using servers specifically designed for us. We plan to continue to significantly expand the size of our infrastructure, primarily through data centers that we design and own.
Several reasons led Facebook to this decision. First of all, the economics are very attractive, as the company explained on its "Building Efficient Data Centers with the Open Compute Project" FB page:
The result is that our Prineville data center uses 38 percent less energy to do the same work as Facebook's existing facilities, while costing 24 percent less.
The Prineville data center started operations in April 2011. Facebook developed custom server (hardware) and software designs optimized for use in its new company-owned facilities. The significant gains in energy efficiency achieved through this approach will help reduce server operating costs.
Facebook also intends to extend this approach to its leased facilities, as explained in this recent article taken from Data Center Knowledge:
Facebook has been working with landlord DuPont Fabros Technology (DFT) to implement its Open Compute designs in a data center in Ashburn, Virginia, according to Frank Frankovsky, Director of Hardware Design and Supply Chain at Facebook. The ability to run Facebook's new hardware in leased facilities could be good news for the data center service providers, providing more flexibility as Facebook's infrastructure makes the gradual transition to company-built facilities.
The company is now planning additional facilities:
We are investing in additional Facebook-owned data centers in the United States and Europe and we aim to deliver Facebook products rapidly and reliably to all users around the world.
Facebook obviously foresees incurring additional CapEx for these expansions:
Construction in progress includes costs primarily related to the construction and network equipment of data centers in Oregon and North Carolina in the United States and in Sweden, and our new corporate headquarters in Menlo Park, California.
However, lease expenses, which include data center facilities, are expected to decrease from $219 million in 2011 to $180 million in 2012, the first decline in recent years (2009: $69 million; 2010: $178 million).
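To put those figures in perspective, here is a quick back-of-envelope sketch using only the lease-expense numbers quoted above (the 2012 figure is the company's own expectation, not a reported result). It confirms that 2012 would indeed be the first year-over-year decline:

```python
# Facebook's reported lease expenses in $ millions (2012 is FB's own estimate).
lease_expense_m = {2009: 69, 2010: 178, 2011: 219, 2012: 180}

# Year-over-year percentage change for each pair of consecutive years.
years = sorted(lease_expense_m)
for prev, curr in zip(years, years[1:]):
    change = (lease_expense_m[curr] - lease_expense_m[prev]) / lease_expense_m[prev]
    print(f"{prev} -> {curr}: {change:+.0%}")
```

The growth rate drops from roughly +158% (2009 to 2010) to about +23% (2010 to 2011), before turning negative (around -18%) in the expected 2012 figure.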
While it is not easy to break down Facebook's exact spending on data center leases, the company works with all the main US-listed colocation wholesalers, as summarized by Data Center Knowledge in this September 2010 article:
Here's what we know about Facebook's spending on its major data center commitments:
• Facebook is paying $18.13 million a year for 135,000 square feet of data center space it leases from Digital Realty Trust (DLR) in Silicon Valley and Virginia, according to data from the landlord's June 30 quarterly report to investors.
• The social network is also leasing data center space in Ashburn, Virginia from DuPont Fabros Technology. Although the landlord has not published the details of Facebook's leases, data on the company's largest tenants reveals that Facebook represents about 15 percent of DFT's annualized base rent, which works out to about $21.8 million per year.
• Facebook has reportedly leased 5 megawatts of critical load - about 25,000 square feet of raised-floor space - at a Fortune Data Centers facility in San Jose.
• In March, Facebook agreed to lease an entire 50,000 square foot data center that was recently completed by CoreSite Realty in Santa Clara.
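The per-landlord figures above can be cross-checked with simple arithmetic. In the sketch below, the dollar amounts and the 15 percent share come from the Data Center Knowledge excerpt; the implied DFT total base rent is my own derived estimate, not a reported number, and the Fortune and CoreSite leases are excluded because no rent was published for them:

```python
# Figures quoted above from Data Center Knowledge ($ millions per year).
digital_realty_rent = 18.13   # Digital Realty Trust leases (Silicon Valley + Virginia)
dft_rent = 21.8               # estimated DuPont Fabros rent paid by Facebook
dft_share = 0.15              # Facebook's share of DFT's annualized base rent

# Implied total annualized base rent for DuPont Fabros (derived, not reported):
implied_dft_base_rent = dft_rent / dft_share
print(f"Implied DFT annualized base rent: ~${implied_dft_base_rent:.0f}M")

# Total disclosed annual spend on the two leases with published dollar figures:
disclosed_total = digital_realty_rent + dft_rent
print(f"Disclosed annual lease spend: ~${disclosed_total:.2f}M")
```

By this rough math, the two disclosed leases alone account for roughly $40 million a year, so Facebook's total data center lease bill was already well above that in 2010.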
The choice of Sweden as the location for FB's first company-owned European data center seems mainly due to the country's climate:
Temperatures hover at 20 below freezing in Luleå, Sweden at this time of year, but that hasn't stopped construction on Facebook's new data centre.
A YouTube video [link] posted last week shows front loader trucks rolling through the snow in front of the skeleton of the data centre, with cranes suspended above.
The frigid weather in the city, dubbed "the Node Pole" since Facebook's arrival, is perfect for keeping servers cool and will save on expensive air-conditioning costs.
It will be Facebook's first data centre outside the U.S. and will manage traffic from European users, Facebook's Tom Furlong said in a news conference in October.
While investors can expect future savings from Facebook's new approach to managing its leased and owned facilities, FB users will also be interested to know that, although the company plans to consolidate into a few large data centers for efficiency, it also keeps an eye on remaining very well interconnected with its users, wherever they are located.
A quick look at PeeringDB reveals that the company already has a presence in most key peering points all over the world.
The list includes the leading European Internet Exchange Points (LINX, DE-CIX and AMS-IX), key Asian facilities such as MEGA iAdvantage and HKIX, and the most important carrier-neutral data center players: TeleCityGroup in Europe (with 3 facilities) and TelX, CoreSite (COR) and Terremark (VZ) in the USA, plus a strong presence in most Equinix (EQIX) locations in the USA and Asia. At some of these peering points Facebook also interconnects with its most important partners, such as Zynga (ZNGA):
(slide from Equinix's analyst meeting in 2010)
In summary, Facebook seems very well equipped to take advantage of its scale and lower its data center costs as a percentage of revenue going forward, improving performance metrics and energy consumption while still delivering a great experience to its users thanks to its extensive peering around the world.
As data center costs make up most of FB's cost of revenue, it is already a good sign that cost of revenue declined from 29% of revenue in 2009 to 25% in 2010 and 23% in 2011. The company expects this positive trend to continue, forecasting a further percentage decline driven by efficiencies and scale.
Disclosure: I am long EQIX.