Cambridge University opened a new data center in 2014, replacing many disparate facilities with a single combined site that cost £20 million and will substantially reduce the University’s carbon footprint while extending its technological leadership.
The facility uses a highly efficient ‘chilled water’ hybrid cooling system, unique amongst multi-user data centres in the university sector, and is expected to reduce power consumption while delivering a 10 percent reduction in carbon emissions compared with 2013 levels. This is an important step towards the University’s stated goal of reducing energy-related emissions by 34 percent by 2020, against a 2005/06 baseline.
Consolidation beats refurbishment
The project began with a difficult decision. The University has many departments, which run their own servers and also use the central University Information Services (UIS). There is a High Performance Computing Service (HPCS) for research, which became part of UIS in 2014. The University is also home to Cambridge Assessment, an international assessment organisation which manages three examination boards, as well as academic publisher Cambridge University Press.
All these entities had diverse computing resources, some of which were aging. Rather than attempting to refurbish all the different IT resources separately, the bodies decided to form a partnership and build a bespoke data center that could ultimately support all the Cambridge bodies’ computing needs for teaching, learning and research for years into the future.
The partnership appointed Arup as lead consultant, providing MEP (mechanical, electrical and plumbing), civil, structural and geotechnical engineering, as well as ICT consultancy. Arup collaborated with Davis Langdon and TTSP Architects.
The site was completed and handed over in July 2014, and loads have been moved there during 2014 and 2015. To start with, the West Cambridge Data Centre will serve the current and future needs of UIS and the institutions for which it manages IT infrastructure, the HPCS supporting the University’s research activities, and the administrative needs of Cambridge Assessment. There are plans to move Cambridge University Press into the Cambridge Assessment Data Hall in future.
Diverse cooling needs
The three user groups have diverse requirements, with IT loads ranging from low densities of 3.5kW per cabinet to a high density of 30kW per cabinet for intensive research-based data processing. Traditional design approaches would have used three separate facilities for the three major types of IT load, but the partnership decided that emerging technologies would enable it to adopt a single system.
“The early design decision to supply air at the elevated ASHRAE A2 temperature range for all three user types unlocked the potential for creating our highly efficient design,” says UIS director Martin Bellamy in a UIS article about the site. Low-density areas use hot-aisle containment and conventional computer-room air handling (CRAH) units, while the high-density HPC areas use back-of-rack cooling.
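For context, the ASHRAE class A2 envelope referred to above allows server intake air anywhere from 10°C to 35°C, against a recommended band of 18-27°C. The short sketch below is purely illustrative: the bands come from ASHRAE’s published thermal guidelines, while the helper function is hypothetical and not part of any system at the site.

    # Illustrative only: the ASHRAE class A2 air-intake envelope the design relies on.
    # Band limits are from ASHRAE's published thermal guidelines; classify() is a
    # hypothetical convenience, not drawn from any West Cambridge system.

    ASHRAE_RECOMMENDED_C = (18.0, 27.0)    # recommended band for all A classes
    ASHRAE_A2_ALLOWABLE_C = (10.0, 35.0)   # allowable band for class A2 hardware

    def classify(supply_air_c: float) -> str:
        """Say where a supply-air temperature sits within the A2 envelope."""
        if ASHRAE_RECOMMENDED_C[0] <= supply_air_c <= ASHRAE_RECOMMENDED_C[1]:
            return "recommended"
        if ASHRAE_A2_ALLOWABLE_C[0] <= supply_air_c <= ASHRAE_A2_ALLOWABLE_C[1]:
            return "allowable (elevated)"
        return "outside A2"

    print(classify(24.0))   # recommended
    print(classify(30.0))   # allowable (elevated) - the warmer operating region
    print(classify(37.0))   # outside A2

Running supply air towards the warmer end of that allowable band is what makes year-round free cooling practical.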
The facility uses a single cooling system, with all of the cooling coming from the outside air (a ‘100% free’ cooling system). The UIS points out that some data center operators would see this as a brave decision, but it calculated that the benefits of the system would far outweigh the risks. “Many approaches to cooling were explored, including all-air indirect evaporative systems,” says the UIS. “In order to meet the University’s aspirations, however, it became obvious that the right solution for our data centre would need to go beyond the capabilities of all-air evaporative cooling.”
The system uses chilled water cooled without compressors, giving all the benefits of evaporative cooling without the use of chillers. To meet the HPC users’ demands, water is routed to a back-of-rack cooling solution in that hall. This system came from local supplier Coldlogik, and was selected after testing.
The cooling system allows the chilled water temperature to float, with the control system keeping it at the most efficient set point while holding supply air temperatures within the ASHRAE allowable range. Temperatures can float above the recommended range for a couple of hours per year, in accordance with the latest ASHRAE guidance; this also allowed the University to stick with a compressorless, water-based system even for the HPC loads. Traditional wisdom would have required a DX (direct expansion) compressor for days when the outside air is too hot.
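As a rough sketch of that floating set-point idea (the site’s actual control logic has not been published; the supply-air target and the approach temperatures below are assumptions chosen purely for illustration):

    # A minimal sketch of a floating chilled-water set point, not the site's
    # actual control code. Supply-air target, coil approach and tower approach
    # are assumptions for illustration.

    ASHRAE_A2_SUPPLY_MAX_C = 35.0   # top of the A2 allowable intake range
    SUPPLY_AIR_TARGET_C = 27.0      # assumed design target (top of recommended band)
    COIL_APPROACH_C = 4.0           # assumed: supply air leaves ~4C above water temp
    TOWER_APPROACH_C = 3.0          # assumed: towers cool water to ~3C above wet bulb

    def chilled_water_setpoint_c(outdoor_wet_bulb_c: float) -> float:
        """Float the set point as warm as conditions allow, while keeping
        supply air within the ASHRAE A2 allowable range."""
        achievable_c = outdoor_wet_bulb_c + TOWER_APPROACH_C     # best the towers can do
        ideal_c = SUPPLY_AIR_TARGET_C - COIL_APPROACH_C          # meets the design target
        a2_cap_c = ASHRAE_A2_SUPPLY_MAX_C - COIL_APPROACH_C      # never exceed A2 allowable
        # Don't run colder than needed; let the set point drift up on warm days,
        # but cap it where supply air would leave the A2 envelope.
        return min(max(ideal_c, achievable_c), a2_cap_c)

    for wet_bulb in (8.0, 16.0, 22.0):
        print(wet_bulb, "->", chilled_water_setpoint_c(wet_bulb))

On cool days the set point sits wherever the design target needs it; on warm days it drifts upwards with the weather, and the cap marks the point where a traditional design would have switched on a DX compressor.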
The system will use more power during hot periods, as the fans will work harder, but this has been factored into the design.
Power and cooling levels are carefully managed using Emerson’s Trellis data center infrastructure management (DCIM) solution.
Two PoPs and dual power
The centre has two independently routed point of presence (PoP) rooms for communications. It also has dual 11kV electrical feeds from UK Power Networks (UKPN) via separate substations, and a single 3,150kVA transformer. The centre has an initial supply capacity of 2,200kVA from its provider, which can be increased to 3,000kVA.
Backup power comes from three 1,100kVA generator sets. This is an N+1 configuration, so only two of them are needed to supply sufficient backup power, and the gensets carry enough fuel to run for 72 hours. The cooling system is also N+1: there are three hybrid cooling towers, only two of which are needed for normal operation.
There are three 1,000kVA modular UPSs (again configured as N+1), each of which holds five 200kVA modules with intelligent controls. This arrangement delivers 98 percent power efficiency. There are separate A and B feeds to each cabinet, delivered via an overhead track busbar system to intelligent metered cabinet power strips.
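The N+1 arithmetic behind those figures is simple enough to spell out. In the sketch below the unit ratings come from the description above, while treating the 2,200kVA initial supply capacity as the design load is an assumption made purely for illustration.

    # Quick check of the N+1 provisioning described above. Unit ratings are from
    # the article; using the 2,200kVA initial supply as the design load is an
    # illustrative assumption.

    def usable_n_plus_one_kva(unit_rating_kva: float, units_installed: int) -> float:
        """Capacity still available with one unit out of service."""
        return (units_installed - 1) * unit_rating_kva

    # Three 1,100kVA gensets: any two must be able to carry the site.
    genset_kva = usable_n_plus_one_kva(1100, 3)     # 2,200kVA - matches the initial supply
    print(genset_kva >= 2200)                       # True

    # Each modular UPS holds five 200kVA modules; with one module held as the
    # spare, a frame can deliver 800kVA, and three frames together 2,400kVA.
    ups_frame_kva = usable_n_plus_one_kva(200, 5)   # 800kVA per frame
    print(ups_frame_kva, 3 * ups_frame_kva)         # 800 2400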
Four halls
The data center occupies a new two-story building housing four data halls. Hall 1 accommodates the HPC systems, with a high-density IT load of up to 900kW across up to 96 cabinets. Hall 2 provides 201kW for Cambridge Assessment’s needs, and Hall 3 has 240kW for UIS servers. Hall 4 is empty, ready for 40-50 racks of future growth.
The building also has a “build room” for engineering work, an operations room, a security office and meeting room space. There is 24x7 security on the premises, with extensive internal and external CCTV coverage. Access to the data halls is controlled by a “Star Trek-like” anti-tailgating doorway, featuring built-in weighing scales in its floor. Once in the data halls, intelligent key cabinets monitor and control access to the individual cabinets’ keys.
Overall, the site expects to show a power usage effectiveness (PUE) of 1.2, putting it close to web-scale providers such as Google and Facebook. This improvement over the previous disparate sites should cut the University’s carbon emissions by ten percent.
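PUE is simply total facility power divided by the power delivered to the IT equipment, so the quoted 1.2 implies roughly 20 percent overhead for cooling, UPS losses and the rest. A back-of-the-envelope check using the hall design loads quoted above (and an assumed power factor, which the article does not give) shows how the numbers sit inside the initial 2,200kVA supply:

    # Illustrative arithmetic only. Hall loads and the PUE target come from the
    # article; the power factor is an assumption.

    hall_it_loads_kw = {"Hall 1 (HPC)": 900, "Hall 2 (Assessment)": 201, "Hall 3 (UIS)": 240}
    total_it_kw = sum(hall_it_loads_kw.values())        # 1,341kW of designed IT load

    pue = 1.2                                           # target power usage effectiveness
    facility_kw = total_it_kw * pue                     # ~1,609kW drawn in total
    overhead_kw = facility_kw - total_it_kw             # ~268kW for cooling, UPS losses etc.

    power_factor = 0.9                                  # assumed
    facility_kva = facility_kw / power_factor           # ~1,788kVA, inside the 2,200kVA supply

    print(total_it_kw, round(facility_kw), round(overhead_kw), round(facility_kva))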
User acceptance
To succeed, the data center would have to convince potential users: each of the University’s departments (Physics, Chemistry and so on) was still free to run its own servers, but they have been moving to the shared data center to save money and get an improved service. After nine months the site was loaded to 26 percent of capacity, and was calculated to be saving £1 million per year on energy bills and cutting the associated carbon emissions by 40 percent.
Getting central funding for the project could have been a challenge, but the team used the designed efficiency levels to demonstrate that relocating equipment into the data center would produce a net saving in utility charges equivalent to, or slightly higher than, the running costs of the facility. The net cost to the central University is therefore effectively zero, and the site also has significant expansion potential. The HPC space, for instance, has been designed to allow a 300 percent increase in processing capacity compared to the University’s previous space.
As well as saving on power, individual departments would also be able to free up the space where their data processing is currently housed, providing more room for teaching or for research, which adds up to a significant extra benefit.
Despite this, some potential users were concerned that the increase in operating temperature might raise hardware failure rates, but UIS was able to produce figures demonstrating that any decrease in hardware reliability was negligible. The University believes that by opting for a compressorless system it has led the way, and that future facilities will increasingly have enough confidence in the raised ASHRAE guidelines to follow suit and omit compressors entirely.
Data center manager Ian Tasker says: “The West Cambridge Data Centre project will mark out the University as a clear early-adopter of the latest energy-efficient technology, showing what can be achieved and leading the way for other Higher Education establishments to follow.”