Earlier this month, DCD held a lively debate about hardware strategies used in colocation, with participants from Switch and Panduit. The choice of equipment is a topic of paramount importance for both colocation customers and data center operators, having an impact on performance, reliability, and most importantly, cost.
The conversation started with a question: how do you manage infrastructure deployments across multiple regions, taking into account variation in climate conditions and power quality?
Eddie Schutter, CTO at Switch and former head of Critical Infrastructure Engineering at eBay, said the key for large businesses was standardization: “The way that Switch is able to deploy to uniformity is we use our patented methods and capabilities for our facilities. They're configured relatively the same way in all of our deployments.”
Michael Gallagher, senior business development manager for Global Data Center Solutions at Panduit, came up with an example showing the importance of standardization: “We're working with a global colocation provider out of the Asia Pacific. They wanted to shorten the deployment time of a 30-cabinet solution. They were sending this to 17 different sites across 12 countries. The average time from procurement to production, when we were first engaged, was about 18 weeks.
“The challenge they were running into was that there were long deployment times, introducing the risk of losing revenue. The site-to-site inconsistencies were also causing service and support issues.
“We performed a site survey at a pilot site and actually built one full design - that's typically what we'll do, and then replicate that. We proposed our pre-configured converged cabinet solution. We did the port mapping, labeling, cable management, power distribution; everything we can possibly do, we do in the factory, so it arrives at the dock.”
This approach enabled Panduit to cut the time from procurement to production from 18 weeks to 10 weeks. The company doesn’t stop there – it plans to reduce this by another week. Gallagher added that the customer “saved significant cost” by eliminating the data center installation labor.
Learning from hyperscale
Next, the discussion turned to the rack-and-roll concept: “Seven to eight years ago, rack-and-roll was really specifically used by hyperscale companies, particularly those who are providing large volumes of equipment and deploying through a similar or repetitive configuration through SKUs,” Schutter said.
“In the last five years or so, we've seen an increase in the growth of enterprise clients and medium-sized business clients who are beginning to roll out a rack-and-roll strategy as well.
“Of course, the benefit of that strategy is that, instead of shipping the equipment itemized, to be racked into the cabinet by labor that's on site, the equipment comes fully racked and already tested and burned in.
“That speeds up the delivery and it also reduces the amount of error that's generated whenever you have multiple technicians who are touching the equipment during the rack-and-stack cycle.”
Gallagher concurred – just a few years ago, Panduit sold zero rack-and-roll configurations. Today, 20 to 30 percent of its products ship in pre-configured racks and cabinets.
Another topic covered in the debate was the importance of planning for the future, since the lifecycle of data center equipment can last anywhere up to ten years.
“You might not be in a situation today where your density requires containment, but you might be trending that way within the life cycle of that investment,” Gallagher said. “Same with network trends and any emerging technologies.”
“I think we're going to start seeing more modularity around those items that need to be replaced more often than those items inside the equipment that don't,” Schutter added. “If your business is built on a rack-and-roll model, then I think you'll see that those customers typically will be fine when the equipment within the rack fails. Then at some point or another, they replace the entire rack at a time.”
SLAs - a necessary evil?
The final subject covered in the debate was service levels – and the dreaded Service Level Agreements (SLAs). Switch is a particularly interesting company in this regard; Schutter said it has never had to pay a penalty for breaching an SLA, since none of its facilities have ever had a critical outage.
“The reality is that SLAs don't act as an insurance policy at all,” he said. “There's no recoupment for any loss, particularly if your business is heavily dependent on internet productivity or any kind of sales associated with internet connectivity. I think the challenge for a lot of customers, particularly the smaller customers, is to understand that the SLA recoupment, even if you do get a credit from a colocation provider, is most likely not going to recover your losses or the impact on your business revenue.”
“I think SLAs are more of a gauge or a report card at this time,” Gallagher agreed. “When you have an SLA, you have to have a way to monitor or measure or show whether or not that's being met. One of the things that always comes to mind when I think about this, especially from an environmental perspective, is you can't manage what you don't monitor.”
“When it comes to environmental optimization, visibility and insight are just absolutely critical if you're serious about that.”