How Telehouse dealt with Hurricane Sandy

It was European weather modelling that first predicted Hurricane Sandy would track north and west onto mainland America rather than, as big weather systems originating in the Caribbean normally do, tracking north and east before fizzling out over the North Atlantic. The modelling gave firms an opportunity to start putting contingency plans in place; for Telehouse America, this meant hitting the phones and arranging meetings with authorities and landlords.

The company had three sites in the path of the storm: Teleport on Staten Island, a 2 MW facility operating at 80% utilization; 25 Broadway, a half-megawatt site that was 80% utilized but, with its closure announced and operations winding down, is now running at 55% utilization; and 85 10th Avenue, which is about 40% committed of around a quarter of a megawatt of power.
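As a rough back-of-envelope illustration – the figures below are simply the utilization percentages above applied to each site's quoted power, not numbers supplied by Telehouse – the approximate load drawn at each facility works out as follows:

```python
# Back-of-envelope load estimate: utilization percentage applied
# directly to each site's quoted power (figures from the article).
sites = {
    "Teleport (Staten Island)": (2.00, 0.80),  # 2 MW facility, 80% utilized
    "25 Broadway":              (0.50, 0.55),  # 0.5 MW, now 55% utilized
    "85 10th Avenue":           (0.25, 0.40),  # ~0.25 MW, ~40% committed
}

for name, (capacity_mw, utilization) in sites.items():
    load_mw = capacity_mw * utilization
    print(f"{name}: ~{load_mw:.2f} MW drawn of {capacity_mw:.2f} MW available")

# Teleport ~1.60 MW, 25 Broadway ~0.28 MW, 85 10th Avenue ~0.10 MW
```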

Once it knew Sandy was coming, the company began preventative maintenance: checking and testing equipment, and filling fuel tanks to 90% of their capacity. It made sure the chillers and gensets were supplied, and it tested for weak links in the battery strings. It knew it would have a 30-minute warning before the power went down, but no one predicted the scale of the devastation.

When it hit
While the company could prepare by speaking with its suppliers and landlords, it couldn’t trigger disaster recovery plans for its clients, says Fred Cannone, director of marketing and sales at Telehouse America. “The facilities team was managed at a micro level, minute to minute, and we were communicating with clients by email and web updates, drafting notices every two hours – the phones were staffed 24/7. Medium-to-large customers were preparing to engage their own disaster recovery plans, and they needed to communicate those to third parties like ourselves.”

The company admits it learned some lessons; where it found itself busiest was with communications. A lot of the telecoms traffic is driven by where the local PoPs (points of presence) are, so if one of the facilities in the area had problems, everyone ended up having problems. When Verizon’s West Street PoP was flooded, it affected everywhere those circuits led, Cannone says. “One thing we did well was to shift communications load – if they had multiple vendors, customers could fail over to different carriers… we set up connections to multiple ISPs to bring them back to life.”
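The article does not describe the mechanism Telehouse used to shift that load, but as a purely illustrative sketch of carrier failover – prefer a primary upstream, fall back to a secondary when a health check fails – a loop like the following captures the idea. All hostnames and the switch_to_carrier hook are hypothetical:

```python
import socket
import time

# Hypothetical upstream carriers, in order of preference.
CARRIERS = [
    {"name": "carrier-a", "probe_host": "gw-a.example.net"},
    {"name": "carrier-b", "probe_host": "gw-b.example.net"},
]

def reachable(host, port=443, timeout=3):
    """Return True if a TCP connection to the carrier's gateway succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def switch_to_carrier(name):
    """Placeholder: in a real deployment this would adjust routing
    (e.g. BGP preferences), which is outside the scope of this sketch."""
    print(f"shifting traffic to {name}")

active = None
while True:
    for carrier in CARRIERS:
        if reachable(carrier["probe_host"]):
            if carrier["name"] != active:
                switch_to_carrier(carrier["name"])
                active = carrier["name"]
            break
    time.sleep(30)  # re-check every 30 seconds
```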

Staying up
Telehouse America stayed up throughout the hurricane for several reasons. The fuel kept arriving; none of the sites have power back-up equipment below ground; the generators are on the first and fourth floors, and on the roof; and going off the grid is not uncommon for the company. “In summer we’re part of a load-curtailment strategy with the utility. Should there be an extraordinary load placed on the grid, they ask us to go off-grid so they can maintain the power to the population. When there is this high demand we transfer and run our facilities on generators for six to eight hours at a time,” says David Kinney, director of facility planning at Telehouse.
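To put those six-to-eight-hour generator runs in perspective, here is a rough worked estimate of the diesel involved; the burn rate is an assumed typical figure for large gensets near full load, not one quoted by Telehouse:

```python
# Rough fuel estimate for an extended off-grid run.
# Assumption (not from the article): ~0.07 US gallons of diesel per kWh
# generated, a typical figure for large gensets running near full load.
load_mw = 1.6        # e.g. the Staten Island site's approximate draw
hours = 8            # upper end of the quoted six-to-eight-hour runs
gal_per_kwh = 0.07   # assumed burn rate

energy_kwh = load_mw * 1000 * hours
fuel_gal = energy_kwh * gal_per_kwh
print(f"~{energy_kwh:,.0f} kWh generated, ~{fuel_gal:,.0f} gallons of diesel")
# roughly 12,800 kWh and ~900 gallons for an 8-hour run at 1.6 MW
```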

The hurricane has changed how the business now operates. Cannone says there have been many follow-up meetings and outreach to customers. “Customers had issues they had not anticipated and now recognize that they need a layered approach to business continuity.”
