"Retrofitting a live data center is not all that different from open heart surgery,” says Frank McCann, principal engineer for project management and implementation at Verizon Wireless. Just as the heart surgeon has to fix a problem without killing the patient, he points out: “A successful data center is a living, breathing entity - and we have to keep the customer equipment up and running while improving the overall environment.”

McCann has plenty of facilities to keep healthy. Even though Verizon sold its colocation business to Equinix and its managed hosting business to IBM, it still has a number of data centers running its core applications and services.

Death is an option

Unlike the surgeon, the data center engineer actually does have the option of killing the patient - and resurrecting them in a new body. In some cases, shifting all the services into a new facility is possible. But McCann says this is rarely simple: “You have to deal with the ancillary things such as locating energy and telecoms, and relocating your employees,” he explains. “In the South, a new build might make sense as the savings make up for ancillary costs.”

In his home territory of New Jersey, he sees plenty of upgrade work, as urban data centers generally don’t have the option to expand or move: “In the New York metro area, a retrofit makes sense because there isn’t space for a new build.”

The biggest problem with an upgrade is the fact that it may have to be done while the facility is live. “If it’s not concurrently maintainable, then you either switch it off, or work on live equipment,” McCann says.

This is not impossible, but it requires care. Engineers can work on different sections of the facility by moving IT loads: power is switched away from racks that are no longer supporting live services, so they can be worked on safely. Then the loads are moved back, and the rest of the facility can be updated.

The most important thing is to test this procedure before using it on live facilities, but operators are rightly reluctant to test reliability in this way: “I don’t want to test the airbag in my car,” McCann observes.

Testing in a data center environment

An upgrade can expose dangerous differences between theory and practice in an old facility, he warns: “The biggest danger of outages when upgrading a live data center is the lack of documentation. Something you think is not connected may be connected, or is connected in a way which is non-obvious.”

For example, a rack may appear to have an A and a B feed, but in reality they both go to the same source - or one is not connected at all: “They may be labelled wrong, or connected incorrectly.”

Like a heart surgeon, a data center modernizer has to be ready for a crisis. When adding a new chiller to a Verizon facility, McCann had difficulty cutting the chilled water pipe. It’s normal practice to freeze the pipe, then cut through it, but this pipe was too big to freeze all the way through.

“We had to do a lot of research to find a tool that could cut a hole in a pipe and insert a valve without it leaking,” he says. Even after finding such a tool, the trouble wasn’t over: “There’s a danger of shavings getting into the water or the cut-out getting washed down the line and breaking the cooling system.”

The situation arose because this was an old facility. The upgrade had never been planned, and the data center was built without a shut-off or a bypass for its chilled water pipe. “New builds are much easier,” he says.

“One thing that gets overlooked in retrofits to older buildings is the people,” he says. When building a new data center, you have the site to yourself, but when working on a live facility, you need to work alongside the facilities staff. “How can people park, and get in and out for lunch?” he asks. “How can they get day-to-day work done with all that noise?”

Changes in IT architectures can paradoxically make his job harder, he says - as virtualization allows workloads to be consolidated, hardware is driven harder and pushed to its limits. “It now requires more cooling, and failover processes need to work better. As things get more software-defined, there is more reliance on hardware.”

Inevitability

Verizon has a long heritage, which can sometimes be a burden but is currently proving very useful. It has hundreds of telco rooms with space and power, but it has taken a while to properly turn them into data centers, with features like cold aisle containment. This upgrade fits into the current evolution of infrastructure: these rooms will become edge facilities.

Built to hold racks of old telecoms switching equipment, they now have plenty of space as the telco footprint consolidates. “We are not running into space issues,” McCann says. “Now it’s power and cooling.”

IT equipment is now rated to run at higher temperatures, which ought to reduce the burden on cooling systems, but the increased density may well negate this benefit. And, in any case, operators are still reluctant to run their equipment hotter, if it is critical to their operations: “Equipment can go hotter and hotter, but if you let it go hotter, it might die sooner,” McCann notes. “It’s a balancing act.”

He agrees that, thanks to modular technologies, future data centers will be easier to upgrade than the first facilities that are now being retrofitted: “We thought this would be all, and we would never outgrow it. We realize that there will be changes in the future.”

This feature appeared in the February issue of DCD Magazine.