eBay says its newest data center in Salt Lake City, Utah, is a great deal for the company, both financially (although it will not disclose actual build costs) and because it offers a glimpse into the future of designing, building and operating data centers. Contrary to convention, it does not rely on the electrical grid for primary power or on mechanical chillers for cooling. There is no raised floor, and there are no uninterruptible power supply (UPS) systems or generators. The facility runs on electricity generated on-site, and the grid is the backup plan.

The facility will be part of the infrastructure that supports all eBay businesses. These include its core online marketplace, as well as PayPal, StubHub, eBay Enterprise (a business that builds and runs online shopping websites for clients) and a few other smaller ventures.

While there are a number of design elements that break with convention, the thing that makes Project Quicksilver stand out the most is its primary power source: fuel cells that convert natural gas into electricity. Not only is this the biggest on-site data center deployment of Bloom Energy fuel cells, it is also the only data center in the world powered entirely by them.

Dean Nelson, VP of global foundation services at eBay, says the facility’s first phase provides 4.8MW of critical load. The site has the capacity to eventually expand up to 18MW. Phase I provides about 16,000 sq ft of data center floor, and eBay has a way to expand that square footage very fast, using data center modules by Dell or HP. In addition to regular data center space, there is one of each vendor’s modules (often referred to as containers or containerized data centers) at the site. This variety enables Nelson’s team to have different power densities for different types of load at a single site – another rare capability. “We can mix and match whatever densities we want,” he says.
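
To make the mix-and-match idea concrete, here is a minimal sketch of splitting a fixed critical-load budget across areas of different rack density. The 4.8MW Phase I figure and the 50kW-per-cabinet Dell density come from the article; the other densities and the allocation itself are purely hypothetical.

```python
# Hypothetical split of the Phase I critical load across zones of different
# rack density (MW allocated, kW per rack). Only the 4.8MW total and the 50kW
# Dell-module density come from the article.
phase_one_mw = 4.8
mix = {
    "standard hall (8 kW/rack)": (3.0, 8),
    "HP module (25 kW/rack)":    (1.0, 25),
    "Dell module (50 kW/rack)":  (0.8, 50),
}

for zone, (mw, kw_per_rack) in mix.items():
    racks = mw * 1000 / kw_per_rack          # how many racks that budget supports
    print(f"{zone}: {mw} MW -> ~{racks:.0f} racks")

total = sum(mw for mw, _ in mix.values())
print(f"Total: {total:.1f} MW of the {phase_one_mw} MW Phase I critical load")
```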

Nelson will be speaking about Project Quicksilver at the DatacenterDynamics Converged conference in London on 20 November.

This is the second site where eBay has deployed both Dell and HP modules. The first one is Project Mercury in Phoenix (launched last year), where the two modules live on the building’s roof. The modules in Utah are different, however, from the ones in Phoenix, or anywhere else for that matter. The vendors customized the modules for this particular deployment and now offer the customized models to other clients.

Why two types of containers? Nelson says the Dell one was designed for the highest power density (50kW per cabinet), and the HP one was designed to be the largest in size.

The “regular” non-container data center floor at Quicksilver is not exactly regular either. There is no raised floor in the facility. Instead of pushing cold air from underneath a raised floor through perforated tiles, IT equipment at the site is cooled directly by plate heat exchangers bolted onto the back of the racks. Another rare feature of the facility’s cooling system is the absence of power-hungry mechanical chillers. Cooling capacity is produced by cooling towers, cooled naturally by outside air and supplemented by an evaporative cooling system. In addition to savings that come from not having to power chillers, this approach removes an entire layer of cooling infrastructure the team would otherwise have to buy and install.
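
A rough sketch of why a dry climate lets cooling towers stand in for chillers: an evaporative tower can cool water to within a few degrees (the “approach”) of the outside air’s wet-bulb temperature. The wet-bulb values and the approach below are illustrative assumptions, not data from the Utah site.

```python
def tower_supply_temp_f(wet_bulb_f: float, approach_f: float = 7.0) -> float:
    """Approximate cold-water temperature leaving an evaporative cooling tower:
    the outside air's wet-bulb temperature plus the tower's approach."""
    return wet_bulb_f + approach_f

# Illustrative wet-bulb temperatures for a dry inland climate (assumptions).
for season, wet_bulb_f in [("winter day", 35.0), ("summer design day", 66.0)]:
    supply_f = tower_supply_temp_f(wet_bulb_f)
    print(f"{season}: ~{supply_f:.0f}F water to the heat exchangers on the racks")
# As long as that water is cold enough for the rack-mounted heat exchangers,
# no mechanical chiller is needed.
```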

Taking a page from the Mercury design, the data center also has an option to bring warm water directly to server processors for cooling, should future technology upgrades make that necessary. Nelson says he expects this data center’s lifespan to be from nine to 15 years – enough time to go through three-to-five technology refresh cycles.

Gas – cheaper and more reliable

As the data center’s IT capacity grows over time, the Bloom fuel-cell installation will be expanded simply by adding more 1MW energy servers. The decision to use fuel cells is, to a large extent, what makes the economics of Project Quicksilver work.

It is simply cheaper to buy and convert natural gas than it is to buy electricity from the grid where this facility is located, Nelson says, and Bloom’s Peter Gross confirms this. Gross is in charge of Bloom’s mission-critical business. “Cost-effectiveness is primarily a function of a ratio between the cost of electricity and cost of natural gas,” Gross says. The wider the gap between the two – and today, this gap is famously wide in favor of gas – the more cost-effective fuel cells are as an energy source.
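
To make the ratio Gross describes concrete, here is a minimal sketch comparing the fuel cost of a kWh generated on-site from gas with a kWh bought from the grid. The gas price, grid tariff and fuel-cell electrical efficiency are hypothetical assumptions, not figures from eBay or Bloom Energy.

```python
MMBTU_TO_KWH_THERMAL = 293.07   # 1 MMBtu of natural gas ~ 293 kWh of thermal energy

def onsite_fuel_cost_per_kwh(gas_price_per_mmbtu: float, electrical_efficiency: float) -> float:
    """Fuel cost of one kWh of electricity generated on-site from natural gas."""
    return gas_price_per_mmbtu / (MMBTU_TO_KWH_THERMAL * electrical_efficiency)

gas_price = 4.00      # $/MMBtu, hypothetical delivered gas price
efficiency = 0.55     # assumed fuel-cell electrical efficiency
grid_price = 0.07     # $/kWh, hypothetical industrial grid tariff

onsite = onsite_fuel_cost_per_kwh(gas_price, efficiency)
print(f"On-site fuel cost: ${onsite:.3f}/kWh vs grid at ${grid_price:.3f}/kWh")
# The wider the gap between the gas price and the electricity price,
# the more cost-effective the fuel cells become.
```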

But “there is a correlation between the two,” Gross says. Because natural gas is currently so cheap and coal-fired power plants are becoming increasingly expensive to operate, there is a trend in the US and elsewhere around the world toward a gradual switch from coal-fueled plants to plants fueled by gas. The price of gas may, as a result, go up, but the price gap between gas itself and gas-fueled grid energy will remain, since electrical transmission infrastructure costs a lot more to operate and maintain than gas pipelines do.

The natural gas grid in the US is also a lot more reliable than the country’s electrical grid. Unlike overhead electrical cables, the gas network runs underground. The many underground gas storage facilities ensure continuous delivery of gas even during a supply disruption, Gross says. A single failure in the electrical grid, by contrast, can cause a widespread outage.

The inherent reliability that comes with generating electricity from gas on-site is another thing that made Project Quicksilver so much cheaper to build. There are no UPSs or generators. The utility grid is the backup to the fuel-cell installation. Omitting the traditional backup electrical infrastructure reduced the size of the building by almost half, according to Nelson. “Everything just got simpler and smaller and still got more available,” he says.

Traditional electrical infrastructure for data centers is expensive and, as a rule, underutilized. Actual load on the system is always significantly lower than its capacity, Gross says, especially in 2N architectures, where each system is loaded to no more than half of its capacity, and often to only about 30%, to leave room for growth. “This is a very costly situation, because you spend all this money and utilize a very low capacity of the available system,” Gross says.
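
A quick back-of-the-envelope sketch of that stranded capacity, using made-up load figures; only the 2N structure and the roughly 30% utilization point come from the article.

```python
def path_utilization(actual_load_mw: float, design_load_mw: float, paths: int = 2) -> float:
    """Share of each path's capacity in use when every path is sized to carry
    the full design load alone (2N when paths=2)."""
    return actual_load_mw / (design_load_mw * paths)

design_load_mw = 4.0   # MW the electrical plant is designed to carry (hypothetical)
actual_load_mw = 2.4   # MW drawn today, leaving headroom for growth (hypothetical)

print(f"2N: each path runs at {path_utilization(actual_load_mw, design_load_mw):.0%} of capacity")
print(f"Installed: {design_load_mw * 2:.1f} MW of equipment serving {actual_load_mw:.1f} MW of load")
```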

This inefficiency is caused by an inability to reconcile four of the most important needs for a data center: reliability, cost, energy efficiency and environmental friendliness. “These four are probably what people care most about,” Gross says. Legacy architecture makes them divergent requirements: higher reliability is achieved through redundancy, which leads to inefficiency and poor sustainability characteristics.

“With Bloom… you don’t have this divergence anymore,” Gross says. Reliability is constant, regardless of capacity, and the cost is linear – no more than a function of size. Importantly, energy efficiency (at least in the eBay data center’s case) is not affected by the level of redundancy, and redundancy does not mean the cost is double. The two power sources (the fuel cells and the utility grid) are almost completely independent.
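
Assuming, as Gross suggests, that the two sources really are close to independent, the textbook availability math looks like this. The individual availability figures are placeholders, not published numbers for Bloom’s energy servers or the local utility.

```python
def combined_availability(a_primary: float, a_backup: float) -> float:
    """Probability that at least one of two independent power sources is up."""
    return 1 - (1 - a_primary) * (1 - a_backup)

fuel_cells = 0.999   # assumed availability of the on-site fuel-cell plant
grid = 0.999         # assumed availability of the utility feed

print(f"Either source alone: 99.9%, combined: {combined_availability(fuel_cells, grid):.4%}")
# Two merely good sources, if truly independent, yield a far better combined
# figure than either alone, without a UPS or generator in between.
```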

Modular designers put to work

Since Mercury, the story of eBay’s data center infrastructure has been one of an infrastructure team that likes to leave itself as many options as possible, avoiding elements that lock it into particular technologies. Hence the fuel cells and the power grid, the plate heat exchangers and the direct-to-chip warm-water cooling, the regular data hall and the containers – containers by Dell and containers by HP.

As was the case with the Phoenix project, both Dell and HP had to do some design work to win the Salt Lake City deal. Jon Mormile, HP’s portfolio marketing manager, says HP’s EcoPod module had to go up to 1.44MW of IT capacity, increase redundancy, have a redesigned cooling system and accommodate “rack and roll”. That is where racks are rolled into the module and installed in place instead of coming pre-installed with the module as usual.

The problem with pre-installed racks, according to Nelson, is that they forced the customer to replace the entire module whenever it wanted to refresh IT hardware. Now eBay can go through four refreshes within the same module.

Drew Schulke, director of marketing for Dell’s data center solutions group, says Dell also had to change its module’s design to accommodate rack-and-roll. Both vendors changed cooling systems from a mix of free cooling and direct expansion to a mix of free and evaporative cooling. “All of these elements we customized and tuned around the climate that eBay will see there throughout the year,” Schulke says.

Wade Vinson, power and cooling strategist at HP, says that each of the 12 coolers in the EcoPod deployed at eBay has two power feeds (one from the fuel-cell installation and the other from the electrical grid) and an automatic transfer switch. The N+1 scenario ensures 100% of the cooling capacity is delivered at all times, he says.
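
A minimal sketch of that N+1 sizing: with one spare cooler, any single unit can fail and the module still has 100% of the required cooling capacity. The per-cooler capacity below is derived from the article’s 12-cooler and 1.44MW figures purely for illustration; HP’s actual sizing may differ.

```python
import math

units = 12               # coolers in the eBay EcoPod, per the article
required_kw = 1440.0     # IT load the module must reject (1.44MW, per the article)
unit_kw = math.ceil(required_kw / (units - 1))   # size units so N=11 carry the full load

def surviving_capacity_kw(failed_units: int) -> float:
    """Cooling capacity left after some number of coolers fail."""
    return (units - failed_units) * unit_kw

assert surviving_capacity_kw(failed_units=1) >= required_kw
print(f"Each cooler ~{unit_kw} kW; losing any one still leaves "
      f"{surviving_capacity_kw(1):.0f} kW for {required_kw:.0f} kW of load")
```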

Contrary to the general assumption that data center modules are for users who need high-density data center space that comes already packed with IT gear, the EcoPod HP delivered to eBay was empty. The company is using it as ‘white space’, where it can roll racks of servers when it needs to. The Dell module was delivered to the site with IT pre-installed.

eBay is one of the rare customers that buy modules such as HP’s EcoPod and Dell’s EPIC and are willing to talk about it publicly. Ever since containerized data centers became a product, vendors have generally stayed tight-lipped about who buys them and why, and that remains the case.

Mormile refuses to talk in any detail about HP’s EcoPod business but maintains that the product has been successful around the world. “China would be the only region where we haven’t deployed EcoPods,” he says. The customers are generally service providers and users that deploy the modules for high performance computing (HPC) infrastructure. A few enterprises have also recently bought EcoPods, and HP is currently looking at a 10MW EcoPod deployment for a financial services company in Russia.

For Dell, the only other customer that has been willing to go on the record about its use of the vendor’s data center modules is Microsoft, which has deployed the solution in Colorado to host infrastructure for Bing Maps. All in all, “we’ve deployed over 15MW of critical IT load to date in modular solutions supporting a little over… 250,000 servers,” Schulke says.

Heat – another fuel

The data center’s two power sources are going to be joined by a third in about 18 months. A company called Ormat is building a 5MW plant nearby that converts heat into electricity, and eBay has contracted to buy all the energy the plant generates for the next 20 years. The plant will use heat generated by a compressor station that pumps gas through the same pipeline that supplies the data center site.

Bob Sullivan, VP of business development at Ormat, says eBay is the first data center customer for the company. Ormat’s business in the US consists primarily of operating its geothermal and heat-recovery plants and selling power to electrical utilities, since utilities in the country do not generally own the bulk of their generation capacity. The majority of its revenue outside of the US comes from building and selling the plants to other companies.

The deal with eBay in Utah is a unique one, since the user is buying electricity directly from the producer, without a utility in between. Nelson and his colleagues lobbied the state to pass legislation allowing this – something utility regulations in the rest of the country prohibit.

Ormat has been around for about 50 years and has built out more than 1,600MW of generation capacity around the world, Sullivan says. While the majority of this capacity is in the US, it has done business in Latin America, Asia-Pacific and Africa.

It is a favored supplier in New Zealand, has sold a facility in the Philippines, is working on a large development in Kenya and supplies 30% of all the electricity consumed on Hawaii’s Big Island.

Ormat’s plants in the US get their heat almost exclusively from compressor stations on gas pipelines, Sullivan says. The company has bolted its plants onto other heat sources, such as glass plants and cement kilns, but gas pipeline compressor stations remain the best heat source for this purpose. The technology requires a lot of heat (preferably 800°F to 900°F), and these stations do the job perfectly, he explains.
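
A short sketch of the thermodynamic reason hotter exhaust is better: the theoretical (Carnot) ceiling on how much of the heat can become electricity rises with source temperature. These are upper bounds only, with an assumed ambient temperature, not Ormat’s actual conversion efficiencies.

```python
def carnot_limit(source_f: float, ambient_f: float = 70.0) -> float:
    """Carnot efficiency limit for a heat engine between a heat source and ambient air."""
    def to_kelvin(f: float) -> float:
        return (f - 32) * 5 / 9 + 273.15
    return 1 - to_kelvin(ambient_f) / to_kelvin(source_f)

for source_f in (400, 800, 900):
    print(f"{source_f}F source: theoretical ceiling ~{carnot_limit(source_f):.0%}")
```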

Is this the future?

Sullivan says he sees the natural gas boom in the US as potentially a big growth opportunity. It is possible that the country will become a big exporter of natural gas. To prepare gas for shipping, it needs to be liquefied in gigantic compressor stations. If the US were to become an exporter, a lot more of these stations would have to be built, which would create a lot of heat sources for plants like Ormat’s. (Read about TeraCool, another innovator hoping to make use of this boom.)

Like Bloom, Ormat stands to benefit in a big way from natural gas replacing coal as the workhorse fuel. And eBay, with its latest data center, is also placing a big bet on gas. That bet seems to be a safe one and, according to eBay, the cost equation beats any traditional data center build on the planet.

Considering this project and the pressure on the data center industry to get off coal, are on-site gas-fueled generation plants the future of data centers?

This article first appeared in the 32nd edition of DatacenterDynamics FOCUS magazine.