Centralized data centers are a familiar concept today, with most organizations buying into the idea that cloud computing, virtualization and specialist expertise can combine to deliver resilient, scalable and cost-effective IT services from large remote data facilities.

At the same time, a number of factors are forcing companies to consider moving some key IT assets to the edge of the network and closer to the users, be they customers or employees. The upshot is that, in reality, many organizations will deliver specific applications and services from a variety of data centers, ranging from smaller in-house networking facilities to gigantic colocation centers. This has implications for the overall levels of security and resilience that may be expected.

SmartBunker FX – Schneider Electric

Who guards a micro data center?

It has long been taken for granted that large centralized data centers maintain the highest standards for functions such as data backup, failover systems and physical security. Backups are performed regularly and punctiliously; ample storage and server redundancy, enhanced by virtualization, takes up the slack in the case of equipment failure; power and cooling systems are highly redundant; and physical security is strictly enforced to prevent unauthorized access to sensitive areas by those with malicious intent.

Further down the data center chain, some or all of these functions may not be as readily available. A micro data center installed in a spare office, network closet or basement is unlikely to merit its own security guard, or to have the same level of redundancy as a larger facility.

Furthermore, the sort of applications hosted locally may be used by only a minority of staff, but they are also more likely to be proprietary applications, specific to the organization and critical to the business's well-being. Therefore, when calculating the overall availability of IT services it is important to take into account the variance between the different data centers, so that an organization can form a true picture of the strength or vulnerability of its IT assets.

Combining reliability scores

Recent research carried out by Schneider Electric proposes that the overall availability of IT services to an organization should be based on a holistic view of the organization’s data centers, and that a score-card methodology be adopted so that a dashboard can be drawn up depicting system-level availability. This produces metrics showing that the relatively poorer levels of availability from smaller sites can have a disproportionately large effect on overall IT availability.
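
As a rough sketch of how such a score-card might be computed, the minimal Python example below combines per-site availability into a service-level figure for each application, assuming an application fails if any site it depends on fails. The site names, availability figures and service dependencies are invented for illustration, not taken from the research.

HOURS_PER_YEAR = 8760

# Availability of each data center, as a fraction of the year it is up.
# These figures are illustrative placeholders.
site_availability = {
    "central_tier_iii": 0.9998,
    "regional_colo": 0.9995,
    "office_micro_dc": 0.9967,
}

# Sites each application depends on; the application is down
# if any one of them is down (series dependency).
service_dependencies = {
    "erp": ["central_tier_iii"],
    "branch_sales": ["central_tier_iii", "office_micro_dc"],
    "local_analytics": ["regional_colo", "office_micro_dc"],
}

def series_availability(sites):
    """Availability of a chain of sites that must all be up."""
    availability = 1.0
    for site in sites:
        availability *= site_availability[site]
    return availability

for service, sites in service_dependencies.items():
    a = series_availability(sites)
    downtime_hours = (1 - a) * HOURS_PER_YEAR
    print(f"{service}: {a:.4%} available, about {downtime_hours:.1f} hours of downtime a year")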

Simple steps like locked rooms and biometric access are now cheap enough to be deployed throughout a network

For example, a user might depend on applications hosted by two data centers: one a centralized Tier III facility with an availability of 99.98 percent and 1.6 hours of annual downtime, the other a local Tier I site with 99.67 percent availability and 28.8 hours of annual downtime. The total availability of the two systems in series (meaning a failure occurs if either system fails) is the product of the two availabilities, 0.9998 × 0.9967, or 99.65 percent, resulting in a total downtime of 30.7 hours a year.
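
Written out explicitly, using the rounded figures above and 8,760 hours in a year:

Series availability = 0.9998 × 0.9967 ≈ 0.9965 (99.65 percent)
Annual downtime = (1 − 0.9965) × 8,760 hours ≈ 30.7 hours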

That’s the bad news.

The good news is that what gets measured gets managed, and visibility of the situation allows steps to be taken to improve availability throughout an organization.

Best practice

New best-practice measures can be adopted to improve availability at the edge of a network. Simple steps like moving equipment to locked rooms, or installing biometric access controls, which are now cheap enough to be deployed throughout a network, can boost security appreciably.

Remote monitoring software is now flexible enough to take account of IT assets distributed across a wide geographical area as well as those housed centrally. Thus, monitoring can be consolidated at a centralized platform providing similar levels of management and reporting throughout an organization.
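
As an illustration of what that consolidation can look like at the edge, the sketch below shows a minimal agent that pushes environmental readings from a local site to a central monitoring endpoint. The endpoint URL, site identifier and read_sensor() helper are hypothetical placeholders rather than any particular vendor's API.

import json
import time
import urllib.request

CENTRAL_ENDPOINT = "https://monitoring.example.com/api/metrics"  # placeholder URL
SITE_ID = "office-micro-dc-01"                                   # placeholder site name

def read_sensor():
    # Stand-in for reading the rack's temperature/humidity probe.
    return {"temperature_c": 24.5, "humidity_pct": 45.0}

def report(reading):
    payload = json.dumps({
        "site": SITE_ID,
        "timestamp": int(time.time()),
        "reading": reading,
    }).encode("utf-8")
    request = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)

while True:
    try:
        report(read_sensor())
    except OSError:
        # The WAN link to the central platform may be down; a real agent
        # would buffer readings locally and retry rather than drop them.
        pass
    time.sleep(60)  # one reading per minute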

For power and cooling, one should consider monitoring temperature and humidity levels at all sites and introducing redundant power paths to maintain availability. A similar focus on redundancy for network connectivity should also be considered, depending on how critical the locally hosted application is.
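
To see why redundancy pays off, consider two independent power paths, each 99.67 percent available (figures for illustration only): the supply is lost only if both fail at once, so the combined availability is 1 − (1 − 0.9967)² ≈ 99.999 percent, assuming the paths fail independently.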

As information is exchanged between local and central facilities, one must take into account the challenges that may emerge as a consequence, including service disruption, latency and, in some cases, network reliability. Fortunately, the combination of monitoring software, cost-effective security and high-availability edge solutions means these challenges can be met.

Kevin Brown is CTO and SVP of innovation at Schneider Electric's IT Division, and Wendy Torell is a senior research analyst at Schneider Electric's Data Center Science Center