The cloud is catalyzing massive change in the IT landscape. This rapid redefinition of an entire industry offers a unique opportunity to the companies that supply cloud infrastructure – in particular, those that build and operate the data centers that physically house the cloud’s content. What are the current trends in this area and how are they likely to play out over the next decade?

The forecasts vary depending on their source and scope, but all estimates put the cloud market in the hundreds of billions of dollars by 2020. It is clear that a major shift is happening in the IT landscape. Multinational technology companies such as IBM are spending billions acquiring cloud technology companies as well as developing their own products and services.

The data center itself is becoming less important for these global players as they move to provide cloud-based data center services for customers on a global scale. The focus of IBM Global Technology Services, for example, will be on network, mobility, IT systems and the critical aspect of resilience. The business of data centers themselves will decrease in significance to become the concern of special-interest parties, such as banks. IBM will still build data centers, but for its own use, to provide its cloud services.

Software as a service

Among those ramping up their cloud offerings are Amazon, Adobe, Microsoft and Google, as well as prominent new entrants to the market, such as Ericsson, to name but a few. The software as a service (SaaS) that these companies increasingly offer is just one business model driving cloud expansion.

Adobe and Microsoft, for example, have made a major shift to SaaS in recent years with their introduction of subscription-based software services – gone are the days of paying a large sum upfront, installing a program from a DVD and then keeping an eye out for updates. Subscription services let the user download the installation components needed, pay on a monthly basis, stay up to date with the latest version and store all their data in the cloud. Applications can even be run on the cloud server instead of locally. This trend toward thin clients is gathering pace.

IoT, off-site data storage and processing

Another driver – one that is as important as SaaS, or even more so – is off-site data storage and processing.

In practice, this delivers remote monitoring and the possibility of controlling almost any equipment connected to the virtual world. It means most new devices will, by default, have not only their physical form but also a virtual profile – one that is stored in the cloud. For security reasons, the devices will most likely be the ones to initiate the connection, periodically sending data that describes their status and condition. This will generate a huge amount of data that needs to be filtered, stored, analyzed, processed and distributed to the people who need it – via multiple user interfaces and applications, each dedicated to a specific mission.
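As a minimal sketch of that device-initiated pattern – the endpoint URL, field names and reporting interval here are hypothetical, not taken from any real product – a connected device might periodically push its status to a cloud service like this:

```python
# Minimal sketch of a device periodically reporting its status to a cloud
# endpoint. The URL, payload fields and interval are illustrative only.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/api/device-status"  # hypothetical endpoint
REPORT_INTERVAL_S = 60  # send one status report per minute

def read_status() -> dict:
    """Collect the device's current condition (stubbed with fixed values)."""
    return {
        "device_id": "pump-0042",
        "timestamp": time.time(),
        "temperature_c": 41.7,
        "vibration_mm_s": 2.3,
        "state": "running",
    }

def send_status(status: dict) -> None:
    """Push one status report; the device initiates the connection outbound."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(status).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        response.read()  # ignore the body; a 2xx status means the report landed

if __name__ == "__main__":
    while True:
        send_status(read_status())
        time.sleep(REPORT_INTERVAL_S)
```

Because the device only ever makes outbound connections, no inbound ports need to be opened on the device side – the cloud service simply receives and stores whatever the device reports.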

This type of technological development would be unthinkable without a cloud-based solution. Attempting a local set-up that mimics the immense number-crunching and data storage abilities of the cloud would be futile as well as wasteful. One good example is big data analytics, which is being seized upon by companies eager to increase their margins or improve their business models.

The result is huge data quantities – for example, one sensor on a GE gas turbine blade can generate 520 GB of data per day, and there are 20 of them on a plane, according to a report in Computer Weekly. The Internet of Things (IoT) will see a massive increase in cloud storage and processing requirements as data pours in from personal health sensors, RFID tags, building automation systems, industrial machine health monitoring, smart sensors in public infrastructure and transportation, phone apps, smart cars, smart cities – the list is open-ended. The IoT comes into its own when this data is gathered, analyzed and leveraged. The cloud is essential for collection, analysis and actioning on such a large and far-reaching scale.
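To put those figures in perspective – taking the Computer Weekly numbers above at face value, purely as a back-of-the-envelope exercise – the raw volume from that one example looks like this:

```python
# Back-of-the-envelope estimate using the figures quoted above
# (520 GB per sensor per day, 20 sensors), purely for illustration.
GB_PER_SENSOR_PER_DAY = 520
SENSORS = 20

daily_tb = GB_PER_SENSOR_PER_DAY * SENSORS / 1000   # ~10.4 TB per day
yearly_pb = daily_tb * 365 / 1000                   # ~3.8 PB per year

print(f"Daily raw volume:  {daily_tb:.1f} TB")
print(f"Yearly raw volume: {yearly_pb:.1f} PB")
```

Around 10 TB a day, or nearly 4 PB a year, from a single aircraft – before filtering or aggregation – which is exactly the kind of load that only shared cloud infrastructure can absorb economically.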

PaaS (platform as a service) will provide companies with preconfigured tools to store and analyze data, along with the ability to build value-adding software solutions on top for different dedicated purposes.

Off-site but not out-of-sight

Off-site data storage and processing is also welcomed by many companies for whom the cost of maintaining in-house IT resources has become onerous. Cloud facilitators make it easy to hire just the IT muscle that is required – provisioning and deprovisioning can be done effortlessly. The key here is simplicity: leasing server space and processing power in a remote data center is now as easy as hiring a car – without the difficult collision damage waiver decisions. The user sees no difference between in-house and cloud provision of services. It is in the interest of cloud facilitators to make the process as simple as possible, and ABB provides many of the tools to do this – of which more, later.
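How effortless provisioning and deprovisioning have become is easiest to see in code. The sketch below uses AWS and its boto3 SDK purely as one example of a cloud provider API – the image ID is a placeholder and real usage requires valid credentials:

```python
# Sketch of on-demand provisioning and deprovisioning, using AWS's boto3 SDK
# as one example of a cloud provider API. The AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Provision: spin up a single small virtual server on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned instance {instance_id}")

# Deprovision: release the capacity the moment it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
print(f"Deprovisioned instance {instance_id}")
```

A few lines to stand capacity up and a single call to hand it back: that is the "hire a car" simplicity the paragraph above describes.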

Outsourcing in this manner removes a whole host of unknowns from a business operation and makes company capex and opex much more predictable. The allure of the low cost of ultra-efficient data centers will become irresistible. Over the next decade, operating a fully private, in-house data center infrastructure will be an activity for a few niche enterprises – such as those in finance and banking, where the very best security and utter reliability trump the benefits of the cloud – and most companies will look back in wonder at the investment and manpower they used to pour into a facility that they now get for a fraction of the cost.

3 ways data center evolution is driven by the cloud:

1. Big is beautiful - Hyperscale

All this cloud growth directly drives the expansion and evolution of data centers. Several such evolutionary trends are already clear. For example, data centers are getting bigger – “hyperscale” and multi-tenant facilities are emerging. This makes sense, as sharing infrastructure – especially power and cooling – reduces cost for the entire operation.

2. The Great Divide

Segmentation is becoming more common in data centers. Here, service provision depends on the customer’s cost, security and speed concerns – banks and other critical users, for example, are hosted on servers that are not only conceptually separate from the rest of the data center but also physically fenced off. As data center infrastructure management (DCIM) tools become more sophisticated over the next decade, this sort of differentiation will become more refined. In addition, the criticality level of a particular application will define the design of the data center infrastructure in which it is hosted.

3. A perfect data world

A third example of how cloud growth is driving data center evolution is the move toward perfection of data center infrastructure. Here, power protection is the most critical factor of all – without power, data becomes unavailable and many applications, along with irate users and end customers, are left high and dry. Availability is everything, so enterprises turn to the uninterruptible power supply (UPS) to ensure that critical loads have a continuous source of clean power. An appropriate UPS is the most important part of the power protection concept: it can ensure security of power supply, zero downtime, availability and low cost of ownership for a modern data center. However, the UPS must be considered in the context of the application, its criticality and the rest of the data center infrastructure. Simple UPSs have been around for decades, but only recently have they been adapted and developed specifically for data centers. The field is in its infancy, so what is driving UPS development and how will this play out over the coming years?
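To make "availability is everything" concrete, the commonly quoted availability percentages (the "nines") translate directly into permitted downtime per year – a simple calculation, independent of any particular UPS:

```python
# Translate availability percentages ("nines") into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> {downtime_min:7.1f} min downtime/year")
```

Going from 99 percent to "five nines" shrinks the annual downtime budget from roughly 5,256 minutes to about 5 minutes, which is why power protection sits at the heart of data center design.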

Protect and survive - UPS

For the data center operator – and user – availability is key. And the ultimate guarantor of availability is power protection. This puts the UPS, as the chief mainstay of power protection, center stage.

Apart from availability, costs and margins are also of primary importance. For the data center operator, running costs need to be stable and predictable, power usage effectiveness (PUE) has to be kept under control and optimized, and maintenance costs known and scheduled. As far as capex is concerned, flexibility is key: upfront investments should be minimized, but the infrastructure should be scalable so that future expansion can be easily accommodated. Because of the segmentation described above, equipment must be deployable in sections, segments or individual modules. So-called core and pod architectures are already gaining traction: the best gear configuration for a particular customer is contained in a pod and connected to the network core, which distributes data and network traffic to customers.
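As a reminder of what keeping PUE "under control" means in practice: PUE is simply total facility power divided by the power delivered to the IT equipment. The figures below are illustrative, not drawn from any specific facility:

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The numbers below are illustrative only.
it_load_kw = 1000.0      # power drawn by servers, storage and network gear
cooling_kw = 350.0       # cooling plant
overhead_kw = 80.0       # UPS losses, power distribution, lighting

total_facility_kw = it_load_kw + cooling_kw + overhead_kw
pue = total_facility_kw / it_load_kw

print(f"PUE = {pue:.2f}")  # 1.43 in this example; closer to 1.0 is better
```

Every kilowatt shaved off cooling and conversion losses moves the ratio toward the ideal of 1.0, and shows up directly in the operator's running costs.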

Currently, the UPS design that best satisfies not only these opex and capex criteria but also the availability and total cost of ownership challenges is decentralized parallel architecture (DPA). In a future article, ABB will describe how the modular approach of DPA delivers all of these through its unique scalability, online hot-swapping and energy-saving characteristics.
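As a generic illustration of why a modular approach scales well – the module rating and load figures below are hypothetical and are not specifications of DPA or any other product – sizing an N+1 redundant modular UPS is a simple calculation:

```python
# Generic sizing sketch for a modular, N+1 redundant UPS. Module rating and
# load are hypothetical values, not specifications of any particular product.
import math

module_rating_kw = 100.0    # rating of one UPS module
critical_load_kw = 420.0    # load the UPS must support

n = math.ceil(critical_load_kw / module_rating_kw)  # modules needed for the load
modules_installed = n + 1                           # one spare module for N+1 redundancy

print(f"{critical_load_kw:.0f} kW load -> {n} modules for capacity, "
      f"{modules_installed} installed for N+1 redundancy")
# Growing the load later means adding modules rather than replacing the whole UPS.
```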

The weather forecast

Predicting the path of technology is difficult at the best of times – and doubly difficult for an area moving as fast as the cloud, especially as there are many unconventional and oblique drivers. The “cloud” can be defined as any computing power delivered over IP. It really covers everything, and talking about “the cloud” without reference to a specific area of discussion can easily make things foggy.

The concept of the cloud is not particularly new. What is new, however, is the extreme rapidity with which the services that can be provided through the cloud are growing in scale and sophistication. The technological development that allows better, faster and more reliable delivery of software, computing power and all the related services that can run on a cloud platform is revolutionary.

This evolution of individual data centers into a global organic network ties in nicely with other trends – for example, shifting data around the world to take advantage of low off-peak electricity prices on other continents, or moving data based on weather patterns to reduce loads on centers in hot climates. The cloud is having a major impact on the data center business, so future data centers must become more efficient, more agile and capable of achieving higher levels of availability at lower cost.
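A toy sketch of that "follow the cheap, cool electricity" idea – the region names, prices and temperatures are invented for illustration only:

```python
# Toy scheduler for shifting flexible workloads toward cheap, cool regions.
# Region names, prices and temperatures are invented for illustration.
regions = {
    "region-north": {"price_per_kwh": 0.07, "outside_temp_c": 4},
    "region-south": {"price_per_kwh": 0.12, "outside_temp_c": 31},
    "region-east":  {"price_per_kwh": 0.09, "outside_temp_c": 18},
}

def placement_cost(info: dict) -> float:
    """Crude score: electricity price plus a penalty for hot weather,
    since cooling a data center in a hot climate costs more energy."""
    heat_penalty = max(0.0, info["outside_temp_c"] - 20) * 0.002
    return info["price_per_kwh"] + heat_penalty

best_region = min(regions, key=lambda name: placement_cost(regions[name]))
print(f"Schedule the flexible batch workload in {best_region}")
```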

Elina Hermunen is head of marketing and product management for power protection at ABB