

Details emerge of Global Switch outage in London

222ms power break causes headaches for customers 

A brief power fault at the Global Switch 2 data center in East London on 10 September left some customers struggling. It’s the second problem at the data center in three months, and customers are asking Global Switch for explanations.

The site, which Global Switch describes as “Europe’s largest purpose built data center”, suffered a high-voltage fault lasting 222ms. Customers including Claranet, EX Networks and Tagadab reported difficulties which were resolved within a couple of hours, but the site operated on the mains without backup power for two days.  

Switch off 


Global Switch 2, East London

Source: Global Switch

At around 11am on Saturday 10 September, GS2 suffered a fault in the high-voltage circuit breaker for one of the site’s diesel rotary uninterruptible power supply (DRUPS) devices.

Like most large data centers, the site runs an inline UPS, with the building powered from energy stored in UPS systems topped up from the mains, so it is protected from issues with the mains supply. The building normally has four independent power supply systems to different floors and suites.

The circuit breaker which failed was part of the H1 system, which supplies three floors of the building. When the breaker failed, the system switched to the mains, we gather, but with a 222ms break. This interrupted power to GS2 customers on those three floors, and was enough to trigger their shutdown procedures.

Claranet, one of Global Switch’s biggest customers at the GS2 site, has two feeds from H1, both of which failed. “We are configured to keep power turned off if we see a failure in both of our power feeds,” Claranet technical director Martin Saunders told DatacenterDynamics. “If both fail, you have to expect the worst.”
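The policy Saunders describes amounts to a simple rule: a single failed feed is survivable, but if both feeds fail, equipment stays off until engineers verify the supply. A minimal sketch in Python; the function and feed names are illustrative, not Claranet's actual tooling:

```python
# Hypothetical sketch of the dual-feed policy Saunders describes:
# one failed feed is survivable, but if both the A and B feeds fail,
# equipment stays powered off pending a manual safety check.
# Names are illustrative, not Claranet's actual systems.

def should_stay_off(feed_a_ok: bool, feed_b_ok: bool) -> bool:
    """Return True if equipment must remain powered down."""
    return not (feed_a_ok or feed_b_ok)

# Both H1 feeds failed on 10 September, so gear stayed off:
print(should_stay_off(feed_a_ok=False, feed_b_ok=False))  # True
```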

To protect life and property, Claranet engineers checked and prepared for a manual restart. “We have procedures we run through,” Saunders told us. “We have engineers on site, who run through checks. If it is clear there’s no danger, we work with the data center provider or the power provider, to ensure that when the power is stable it can be switched on safely.” Claranet and Global Switch were confident the power was stable at 13.00, and started to power up, switching on the network, storage and servers in that order. 
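The ordered power-up Saunders describes, network first, then storage, then servers, could be sketched as follows; the stability check and tier names are hypothetical stand-ins for Claranet's real procedures:

```python
# Illustrative restart sequence after a verified-stable supply.
# Dependencies come up first: servers need storage, storage needs the network.
RESTART_ORDER = ["network", "storage", "servers"]

def power_up(supply_is_stable) -> list:
    """Start tiers in order once the supply is confirmed stable."""
    if not supply_is_stable():
        raise RuntimeError("Supply not stable; holding power-up")
    started = []
    for tier in RESTART_ORDER:
        started.append(tier)  # stand-in for the real start procedure
    return started

print(power_up(lambda: True))  # ['network', 'storage', 'servers']
```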

Everything was running smoothly again by 13.30, Saunders said, with almost no hardware failing (only one hard drive broke, and that may have had nothing to do with the power issue). 

Claranet has five other facilities in the UK, and many customers have resources spread amongst them, so many were completely unaware of any problem, Saunders assured us. Even the hard drive that failed did not result in the loss of any data. 

The data center is built to “enhanced Tier III standards”, according to Global Switch, which means it has resiliency features but hasn’t been certified by the Uptime Institute. 

Normal procedure 

Global Switch has been tight-lipped about the incident, issuing a generic statement from John Stevenson, managing director, London: “Incidents of this nature are extremely rare. The Global Switch infrastructure protected supplies, but one of the four power stations experienced a 222 millisecond switching event. Throughout the weekend we sought to keep our customers fully informed at all times.”

Stevenson also said the site is operated “to exceed Uptime Institute Tier III standards,” but its performance “has exceeded Tier IV.”

Despite this, the incident is the second in three months, following an outage on 23 June (the day of the Brexit vote), believed to have been caused by a lightning strike that the data center’s power systems failed to handle. That incident also affected three floors of the data center, according to reports, and affected customers for some hours. 

Just days before this most recent problem it emerged that Global Switch’s owners, the Reuben Brothers, are apparently planning to sell half the company to a consortium of Asian investors, including companies from China. 

Readers' comments (4)

  • Perhaps GS should actually get the site Tier certified instead of making unsupported claims about the resiliency of the site.

  • Consider the OCP approach to power UPS, as this shrinks the failure blast area to just half of a white space rack.

  • Phil - your certification would not protect against this type of failure with this type of UPS, as you have certified the same on other sites. It is a failure scenario that is a consequence of not using an online UPS design.

  • BTW - attending the DCPro Power Professional course covers this subject in detail...

  • .. and here's a link to it

    Peter Judge
