The Quincy, Washington campus is a microcosm of the history and the future of data centers
Quincy, located in Washington state’s agricultural region, about three hours east of Seattle, seems like a typical plains farm town. Driving through the area you see miles of irrigated farmland, stacks of hay bales, and produce processing plants. But nestled in the heart of this farm community are five major data center operators (Intuit, Microsoft, Sabey Data Centers, Vantage Data Centers, and Yahoo). Microsoft, in particular, has an extremely large campus containing multiple data centers.
Prior to this week’s Ignite conference we were offered the rare opportunity to get a glimpse inside the Quincy facilities, getting not only a feel for where Microsoft is going with its data centers and cloud plans, but also a look at the living history of Microsoft’s data center designs, which reflect the 21st century evolution of the data center. It’s safe to say things have changed significantly since Microsoft’s original data center launch in 1989.
Our guide on this expedition was Rick Bakken, senior director of datacenter evangelism for Microsoft’s cloud infrastructure and operations team. Rick was as open and forthcoming as he could be when asked about current and future operations of the data center, and visitors were simply asked to refrain from publishing until the Ignite conference.
Security was designed into the data center structures, as customers had expressed concerns over the physical security of their stored information. In fact, security personnel make up the bulk of the approximately 100 permanent jobs currently found at the Quincy site (roughly 30 IT-related jobs are in place at the facility).
The large campus contains facilities built to Microsoft’s generation two, three, and four designs, and is being expanded to 270 acres as the generation five data center is built. The facility draws relatively inexpensive hydroelectric power over its own set of power lines from the nearby Grand Coulee hydroelectric generating station, and Microsoft is achieving a measured PUE that can be below 1.4.
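As a refresher on that metric, PUE (Power Usage Effectiveness) is simply total facility power divided by the power consumed by the IT equipment alone; the closer to 1.0, the less energy goes to overhead like cooling. A minimal sketch, using hypothetical wattages rather than Quincy's actual figures, shows how a 1.4 number arises:

```python
# Illustrative only: the kW figures below are hypothetical, not Microsoft's.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_load_kw

# A facility drawing 3,500 kW overall to run a 2,500 kW IT load:
print(pue(3500, 2500))  # 1.4
```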
If you’ve been following Microsoft data center operations you may recall that there were some issues with the testing of backup power generation at the Quincy site due to community complaints. Those issues have all been addressed at this point and the banks of diesel backup generators are in place and ready to roll.
Power availability made the first stop on our tour an interesting one, given how much IT load equipment has changed since these first data halls were commissioned. The halls were built with 18 rows of IT equipment drawing 2.5MW of power. Today the same space is more than half empty, yet it draws the same energy load with far more efficient and powerful servers, delivering greater resource availability in much less floor space. Traditional cooling techniques, such as hot aisle and cold aisle containment, are used throughout this earliest facility and carry over into the next generation of data halls, which add further changes such as OCP compliant racks and hardware, painted white to reduce temperatures within the data center.
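To put that density shift in concrete terms, here is a minimal sketch of the arithmetic. The 18-row count and 2.5MW load come from the tour; the remaining row count is an assumption standing in for "more than half empty":

```python
# Hypothetical illustration of rising per-row power density.
TOTAL_LOAD_KW = 2500   # hall IT load from the tour description (2.5 MW)
ORIGINAL_ROWS = 18     # rows when the hall was commissioned
REMAINING_ROWS = 8     # assumed: "more than half empty" today

kw_per_row_then = TOTAL_LOAD_KW / ORIGINAL_ROWS
kw_per_row_now = TOTAL_LOAD_KW / REMAINING_ROWS
print(f"{kw_per_row_then:.1f} kW/row then vs {kw_per_row_now:.1f} kW/row now")
```

Under these assumed numbers, per-row density more than doubles while the hall's total draw stays flat, which is exactly the pattern the tour described.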
Taking the fourth
Moving on to the fourth generation, we now find ourselves in a separate facility that contains multiple containerized data center racks. The equipment is designed by two different vendors to the same set of specifications, to provide slightly different solutions. The containers, some of which are in a building and others simply on the cement pads outdoors, use adiabatic cooling, which is the current plan for all future designs.
These data centers, along with more than 100 other Microsoft facilities of similar vintage (2007-2014), currently support more than 200 online services, handling more than 30 trillion stored data objects and over 1.5 million network requests per second, running on more than a million servers.
Ready to scale
This density will also allow the Microsoft data center network to scale; Bakken expects it to need to grow exponentially over the next four years. His goal is for the new fifth generation designs to be able to support any future technology, be it solar power on the roof or significant changes to networking infrastructure (currently about 20 percent of operational costs). SDN will play a big part in providing that flexibility and managing costs, and is a clear part of the Windows Server 2016 design and future.
And once the costs of the infrastructure are sunk, the continually increasing density means that the overall operational costs will continue to shrink. Given the level of scaling necessary to remain competitive, optimizing costs while scaling is critical to a successful operation.