In 2018, the Internet passed 3.6 billion users. That is almost half the world’s population. With more people having access to the Internet, there has naturally been an increase in demand on the data center infrastructure that powers it. More people being online, though, is not the only driver of the increased burden on data centers.
Cloud adoption is already increasing the demands on data center teams, and new technologies such as the Internet of Things (IoT) will only add to the pressure that data center businesses are under.
A recent independent report [Ed: Conducted by Sapio Research, but commissioned by Future Facilities] surveying the drivers of data center demand found, unsurprisingly, that you are not alone. In fact, over three quarters (77 percent) of decision-makers are seeing increased demands. For colocation providers the problem is particularly acute: four in ten decision-makers in that sector report 'significant' increases in demand on their infrastructure. This growing demand naturally leads to a need for more cooling and power, with nearly half (49 percent) looking to invest in systems that can handle the rising heat and power loads.
Investing in additional cooling and power management is not a decision anyone takes lightly, given its impact on capital expenditure (capex) and the bottom line. That investment, however, is nothing compared to the £122,000 ($152,000) per year that the average data center loses to downtime alone. These costs, combined with the potential loss of reputation and client relationships, make managing costs and risks a top priority. Nonetheless, while stopping outages is evidently important, throwing more cooling at the problem is an expensive stopgap.
More capacity, less downtime
The real challenge, then, is to increase capacity without increasing the risk of an outage or the cost of additional power and cooling infrastructure. To date, though, businesses have focused on preventing downtime by investing more in power, cooling and networking: 45 percent of all businesses, and 42 percent of colos, are concentrating their investment in these areas. In doing so, they are over-provisioning to the tune of 36 percent, leaving data centers with large amounts of capacity held in reserve: unused infrastructure that has been paid for but is simply not needed.
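To see what 36 percent over-provisioning means in practice, consider a rough back-of-the-envelope calculation. The figures below (site size, cost per kW) are purely illustrative assumptions, not numbers from the report; only the 36 percent over-provisioning rate comes from the survey.

```python
# Illustrative only: the cost of 36% over-provisioning at a hypothetical site.
# Site capacity and cost per kW are invented assumptions for this sketch.

design_capacity_kw = 1000        # total provisioned power/cooling capacity (assumed)
overprovision_pct = 36           # share of capacity held in reserve, per the survey
capex_per_kw = 10_000            # assumed build-out cost per kW of capacity (GBP)

stranded_kw = design_capacity_kw * overprovision_pct / 100
usable_kw = design_capacity_kw - stranded_kw
stranded_capex = stranded_kw * capex_per_kw

print(f"Usable capacity:   {usable_kw:.0f} kW")
print(f"Stranded capacity: {stranded_kw:.0f} kW")
print(f"Capital tied up in unused capacity: £{stranded_capex:,.0f}")
```

On these assumed numbers, over a third of the site's build-out cost sits in capacity that is never used, which is the waste the next section argues can be reclaimed.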
Wasted capacity on this scale should not be an industry standard. With better insight and understanding, data center teams can maximize a facility's existing capacity without running the risk of extra downtime. Getting a data center into this position, its highly optimized state, can realistically only be achieved through physics-based simulation. Visualizing airflows and local hot spots in such a simulation makes it easier for data center managers to test, and ultimately manage, these issues with subtle adjustments to layout before implementation.
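At its simplest, "physics-based simulation" means solving the equations that govern heat and airflow numerically over a model of the room. The toy sketch below, a 2D steady-state heat-diffusion solve, is far cruder than the full CFD a real data center tool runs, but it hints at how a hot spot around a dense rack spreads across a room grid; every dimension, location, and temperature in it is invented for illustration.

```python
# Toy steady-state heat diffusion on a 2D room grid (Jacobi iteration).
# Real data center tools solve full CFD (coupled airflow and heat); this
# sketch only illustrates the idea. All values are invented assumptions.

def solve_temperatures(width=20, depth=20, iters=2000):
    ambient = 22.0                   # supply-air temperature held at the walls
    grid = [[ambient] * width for _ in range(depth)]
    rack_x, rack_y = 10, 10          # hypothetical high-density rack location
    rack_temp = 45.0                 # fixed exhaust temperature at the rack cell

    for _ in range(iters):
        new = [row[:] for row in grid]
        for y in range(1, depth - 1):
            for x in range(1, width - 1):
                # Each interior cell relaxes toward the mean of its neighbours
                new[y][x] = 0.25 * (grid[y - 1][x] + grid[y + 1][x]
                                    + grid[y][x - 1] + grid[y][x + 1])
        new[rack_y][rack_x] = rack_temp   # hold the rack cell at its exhaust temp
        grid = new
    return grid

temps = solve_temperatures()
# Cells next to the rack run hotter than cells near the cooled walls:
print(f"Next to rack: {temps[10][11]:.1f} C, near wall: {temps[1][1]:.1f} C")
```

The point of running such a model before touching the floor is exactly the one made above: you can see where the hot spot lands, move the hypothetical rack in the model, and re-solve, all without risking live equipment.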
The data center digital twin
A physics-based digital twin of the data center is not just a one-time tool. Operators can continuously test changes and adjustments whenever new equipment is deployed, gaining valuable insights and data ahead of an actual installation. With ever-increasing demand being placed on data centers, digital twins can help businesses meet their customers' demands while remaining profitable. This is especially true for colos as they look to maximize every square inch of floor space without incurring penalties or SLA breaches for exceeding agreed temperature levels.
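In a colo context, that pre-deployment check can be as simple as comparing the twin's predicted per-rack inlet temperatures against the contractual ceiling before approving a change. The sketch below is hypothetical throughout: the rack names, predicted temperatures, and the 27 °C threshold are all assumptions, not output from any real tool.

```python
# Hypothetical pre-deployment gate: compare digital-twin-predicted inlet
# temperatures against a contractual SLA ceiling before installing new kit.
# Rack names, temperatures, and the threshold are invented for illustration.

SLA_MAX_INLET_C = 27.0   # assumed contractual inlet-temperature ceiling

predicted_inlets = {     # per-rack inlet temps a twin simulation might predict
    "rack-A01": 24.5,
    "rack-A02": 26.1,
    "rack-B07": 28.3,    # the proposed change would push this rack over the limit
}

breaches = {rack: t for rack, t in predicted_inlets.items() if t > SLA_MAX_INLET_C}

if breaches:
    print("Change rejected; predicted SLA breaches:", breaches)
else:
    print("Change approved: all racks within SLA.")
```

Catching the breach in simulation, rather than in the monitoring system after installation, is where the penalty avoidance described above comes from.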
The reality is that the demands and pressures data centers face are unlikely to ease any time soon. Maximizing capacity without increasing the chance of extra downtime or additional cost is a difficult balancing act for any data center business, but digital twins can help with both management and operation. It is time to move on from over-provisioning and meet your data center digital twin. With data centers that use simulations such as a digital twin suffering three times fewer outages than those that don't, can you really afford to miss out?