Data centers, like people, have a lifetime, and different things mark each stage of their existence. They have an extended and sometimes painful gestation, they can have a difficult birth, but then look forward to a long and useful life.
Like people, they may pass through multiple relationships during their lifetime, forming lasting partnerships with owners and with clients installed within their space. They may become part of a larger family, or spin off offspring of their own.
During their lifetime, they will have health crises, when their vital signs crash to zero and remedies are needed. At the end of their life, data centers may be more or less decrepit, and ready to retire, eventually to be cleared or demolished.
And beyond their life, data centers are part of an evolutionary cycle contributing to changes in the next generation.
Planning permission is the normal start of the data center’s story. A location is found and the prospective parents go through the actions which they hope will eventually produce a healthy, bouncing baby data center that will delight them and enrich their lives.
We needn’t go into those activities in too much detail. The early stages of a data center pregnancy are normally kept very private. This isn’t coyness: it’s merely that the companies involved don’t want to alert the competition.
“There’s a lot of sensitivity around these exercises,” says Malcolm Howe, critical systems partner at engineering consultancy Cundall. “If word gets out that someone is looking at a particular site, it complicates things no end.”
Engineers will start to have some involvement now, says Howe: “We get some involvement in site selection, but normally the client has its own site selection team. We often get drawn into discussions with utilities on power supply and things like that.”
The site will have been chosen for numerous factors. It must have energy available - and preferably renewable energy. It must be easy to hook up to networks. The local market for data centers should indicate there will be customers. For instance, the site may have to be close to a population center, like London or New York, so prospective clients can visit easily.
Conception may be tricky, and construction can take longer than a human pregnancy
Even after the planning is underway, the actual conception can take a long while. A data center can find it hard to get planning permission, as residents may object to the noise or the looks of their potential neighbor. Even if planning permission is granted, the other elements may prove difficult.
Some data center projects languish for years. For instance, Lockerbie Data Centres sought permission to build a $1 billion facility at Lockerbie in South West Scotland in 2008. Nearly ten years later, the project is still on the drawing board, and has had its planning permission extended three times.
An anchor tenant has to be found, unless the facility is being built for one specific customer, be it a cloud provider or a large enterprise.
Once all that is in place, it’s time to design and build the facility. This can take some time, but once capital is committed, there’s an incentive to get it built quickly.
The building time is normally measured in months. There have been some extremely fast builds - CyrusOne claims a record for building a 30MW facility in only 180 days - but most sites will take about a year, meaning data center gestation takes longer than a human pregnancy.
Celebrating the birth
One thing that speeds up the planning and build process is a pre-existing design, particularly if the data center is being built for a service provider with multiple facilities. “There is no blank sheet of paper,” says Howe. “With a few exceptions, you are working to a reference design.” The design may still need to be localized or adapted, however, for instance by selecting a cooling system suited to the location: “These things are never completely static.”
For instance, Telehouse London North 2 was a one-off, because the price of land and the demand for space mandated a multi-story building. “The fact that it needed to be a tower was preordained, but the shape and size was something unique to that project.”
Another shortcut will be the use of pre-selected equipment. “The client may have pre-existing framework agreements with vendors, so when they order up generators, they can get them quicker,” adds Howe.
As with any birth, the completion of a data center is a time for celebration. There’s a grand opening, people drink champagne, they make speeches, and look forward to the bright prospects of this new arrival.
The data center may be built in phases, and take some time to fill up. When it does, we could see this as the facility reaching adulthood. “It’s rare to build out a site in one hit,” says Howe. “There’s a master plan, with successive phases turned out quite quickly. The designs get polished with each phase. There’s always a bit of evolution going on, say moving from a direct cooling scheme, to an indirect air scheme.”
Standards help the facility to have a safe and predictable life. “Data center owners have internal standards they adhere to, so lessons learnt in one place are shared across the whole team. Also, if you have a facilities team maintaining these sites, they don’t want to be confronted with completely different equipment at each site.”
Finally, building and operating a data center can be streamlined with a good asset management system, which can give builders and operators information on where something should be and how it should be configured.
“This can increase the speed of deployment, as installers and operators get feedback in real time,” says Peter Kazmir, director of product management at RF Code. “If someone takes down the wrong network switch, this could take down hundreds of customers and incur a service level agreement (SLA) penalty.”
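The idea behind that kind of asset-management feedback can be sketched in a few lines. This is a minimal illustration, not RF Code’s actual product or API; the asset IDs, locations, and customer counts are invented for the example. The point is simply that looking up an asset’s dependents before touching it turns a potential SLA incident into a warning message.

```python
# Hedged sketch of an asset-registry check before decommissioning hardware.
# All data here is hypothetical; a real system would pull from a live database.

from dataclasses import dataclass, field


@dataclass
class Asset:
    asset_id: str
    location: str                                    # expected rack position
    dependents: list = field(default_factory=list)   # customers served


registry = {
    "sw-0042": Asset("sw-0042", "row 3 / rack 12 / U40",
                     dependents=[f"customer-{i}" for i in range(250)]),
    "sw-0099": Asset("sw-0099", "row 7 / rack 02 / U40", dependents=[]),
}


def safe_to_decommission(asset_id):
    """Return (ok, message) so field staff get feedback in real time."""
    asset = registry.get(asset_id)
    if asset is None:
        return False, f"{asset_id} not found - check the asset tag"
    if asset.dependents:
        return False, (f"{asset_id} at {asset.location} still serves "
                       f"{len(asset.dependents)} customers - SLA risk")
    return True, f"{asset_id} at {asset.location} is clear to remove"


ok, msg = safe_to_decommission("sw-0042")
print(ok, msg)
```

Here the check on `sw-0042` fails because 250 customers still hang off it, which is exactly the “wrong network switch” scenario Kazmir describes.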
For the data center, big life experiences will follow, among which will be relationships with new partners. These backers are like a spouse, supporting them through the good and bad times, but sometimes abandoning them, forcing them to find another partner.
The tenants in the data center, meanwhile, can be like children, needing nurture and attention, sometimes being demanding, and sometimes causing trouble. Winning a new life partner might necessitate big changes, like a makeover or, in the case of a data center, some major hardware upgrades. This might be part of a planned expansion route, such as an increase in power density that makes use of room left for a new chiller.
It might alternatively be a redesign and a major upgrade. “There may be trapped capacity, where the facility has power, but isn’t configured to use it effectively,” says Howe.
For instance one site changed from conventional chillers and CRAH to an evaporative cooling system. This cut the power usage effectiveness (PUE), with less power used in cooling: “That was the whole objective. Less was used on infrastructure and this released power for the IT load.”
Sometimes spaces designated as switch rooms can be turned into IT space. “Squeezing the last drop of value out of the facility is very important, especially in the colo market,” says Howe. That improved PUE might also lure in new tenants with a better SLA.
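The arithmetic behind “releasing power for the IT load” is worth making concrete. PUE is total facility power divided by IT power, so on a fixed utility feed, cutting cooling overhead directly raises the IT power the site can sell. The figures below are illustrative assumptions, not numbers from the retrofit Howe describes.

```python
# Hedged sketch: how a cooling retrofit "releases" power for IT load.
# Site capacity and PUE figures are assumed for illustration only.


def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_kw


site_capacity_kw = 10_000          # fixed utility feed (assumed)

# Before: conventional chillers and CRAH units (assumed PUE of 1.6)
pue_before = 1.6
it_before = site_capacity_kw / pue_before    # 6,250 kW available for IT

# After: evaporative cooling cuts the overhead (assumed PUE of 1.2)
pue_after = 1.2
it_after = site_capacity_kw / pue_after      # ~8,333 kW available for IT

released_kw = it_after - it_before
print(f"IT load before: {it_before:.0f} kW")
print(f"IT load after:  {it_after:.0f} kW")
print(f"Released for IT: {released_kw:.0f} kW")
```

Under these assumed numbers, the same 10MW feed supports roughly 2MW more IT load after the retrofit, which is the “trapped capacity” being recovered.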
But a need for bigger changes may actually precipitate some tough choices. “If you look at what’s there, and it’s knackered old chillers and CRAHs, the brutal truth may be that the best thing would be to blow it up and build something else. Often the format of the building doesn’t lend itself to doing something new.”
Upgrading a live site is fraught with difficulties. Even if it is Tier III certified and has two concurrently maintainable power and cooling paths, a planned upgrade will involve powering these down and replacing them one at a time.
“Any interruption to power and cooling systems creates risk,” says Howe. “You can work on one path, while the facility hangs off the other. But you need to choreograph it so you don’t drop the load.”
The site may need temporary power and cooling during an upgrade, but even then, the upgrade may be too risky for the tenants, who might decide to migrate elsewhere.
Migration is not done lightly, and is likely to be a one-way journey, Howe says: “If you have to move the IT load, why not just move it into somewhere better, and keep it there?”
Migration has also become big business, says Tom Forbes of specialist firm Technimove. “It used to be down to the client, and left to the last minute, but now it is early in the project plan.”
Migration services have become part of the colocation vendors’ armory: “If the client is coming to the end of the contract, they may want to save money,” says Forbes. A new colo provider may win the business by including the migration service: “That minimizes the risks and makes the costs predictable. If the new provider doesn’t include migration, the customer is averse to taking the risk.”
What happens when all the clients move out, leaving the data center with an empty nest? For a person whose children leave home, that could lead to a new lease of life. But the analogy with a human life finally breaks down here. Data centers normally have a much shorter lifespan than humans. The buildings are typically on 25-year leases or less.
“A hyperscale facility could last 15 to 20 years,” says Howe. “The steel frame and paneling may last 60 years, but the IT will be updated every three to four years, recycling the servers and crushing the drives.”
At this point the buildings revert to the landlord, and someone has to deal with the high-tech equipment, and any remaining tenants. By this stage, the mechanical and electrical hardware may be out of date, and the only sensible thing is to decommission the facility.
“The lifespan of data centers is short compared to other buildings,” says Howe.