Sometimes when organizations are asked to set out the trends and issues that will shape technology over the next year, there’s a tendency to reach too far and start imagining a future that’s probably years if not decades away. Analyst firms are great at this! For 2019, for example, Forrester has suggested that ‘digital goes surgical’ and that ‘purpose regains meaning,’ while Gartner is all about AI, blockchain and turning continuous change into an asset.

All good stuff, but perhaps not all that helpful when you’re actually thinking about how to maximize your data center performance in 2019. So what can DC managers do to make a real difference within the next 12 months?

Try these five things

Availability of DCIM for the rest of us

Effective data center infrastructure management is a key requirement, so why do most traditional DCIM suite solutions seem to make it so hard? 2019 will see an increased focus on more accessible approaches that are simpler to use and that directly address the need to have the right cooling, power and space strategies in place.

So if you’re uncomfortable with over-complex DCIM suites or consultancy-led CFD approaches, you don’t have to go down that route: there are equally effective SaaS-powered solutions that now give you all the control you need to monitor, manage and maximize your data centers.

Greater focus on Edge integration

Maximizing your data center performance isn’t truly achievable until you’ve successfully integrated all your operations – including all your different ‘edge’ micro and modular data center activities. All too often, advanced M&E capacity planning and simulation capabilities have remained the preserve of the largest data center halls and facilities.

There’s no excuse for this to remain the case in 2019, particularly as features such as SaaS access, wireless sensing and mobile network access let you apply the same best practice optimization standards to all your DC operations.

Fully-sensed data centers become a reality

It’s only when data rooms are carefully mapped with all the appropriate data fields that operations teams can start to build a real-time understanding of their data center performance. To do this properly, we estimate that more than 1,000 sensors are required for a typical data center, enabling the measurement of a range of previously unmeasured factors including energy usage, heat output and airflow (both above and below the floor).

Until recently the market cost of sensors made this level of coverage prohibitive; the introduction of low-cost, IoT-enabled wireless devices, however, has changed the cost dynamic, making new levels of sensing achievable.
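To make this concrete, here’s a minimal, self-contained sketch of how readings from a large wireless sensor population might be rolled up into a per-rack, real-time view. The rack names, metric names and sample values are illustrative assumptions rather than any particular vendor’s data model:

    # Minimal sketch: rolling raw sensor readings up into per-rack metrics.
    # The rack IDs, metric names and values below are illustrative assumptions,
    # not any specific DCIM or sensor vendor's data format.
    from collections import defaultdict
    from statistics import mean

    # Assumed raw readings as they arrive from wireless sensors:
    # (rack_id, metric, value)
    readings = [
        ("hall1/rack01", "inlet_temp_c", 22.8),
        ("hall1/rack01", "inlet_temp_c", 24.1),
        ("hall1/rack01", "power_kw", 4.2),
        ("hall1/rack02", "inlet_temp_c", 27.9),
        ("hall1/rack02", "power_kw", 6.8),
        ("hall1/rack02", "underfloor_airflow_m3h", 310.0),
    ]

    # Group values per rack and per metric.
    per_rack = defaultdict(lambda: defaultdict(list))
    for rack, metric, value in readings:
        per_rack[rack][metric].append(value)

    # Reduce to a simple live view: the average of each metric per rack.
    for rack, metrics in sorted(per_rack.items()):
        summary = {metric: round(mean(values), 1) for metric, values in metrics.items()}
        print(rack, summary)

The same roll-up logic scales from a handful of sensors to the thousand-plus devices a fully-sensed data room would generate.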

Beyond subjective data center performance optimization judgments

While data center subject matter experts can build up a mental picture of the dynamic behavior of a cooling system over time, the critical nature of today’s data center operations means that cooling is just too important an issue to leave to the subjective judgment of expensive consultants.

Now, however, access to increasingly granular rack-level data gives operators exactly the sort of data platform needed for true software-enabled, real-time decision-making and scenario planning.
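As an illustration of the kind of rule-based check that rack-level data makes possible, the sketch below flags racks whose inlet temperature drifts outside the ASHRAE-recommended 18–27°C envelope and tests whether a proposed new load would exceed the rack’s power budget. The rack figures and the 8 kW per-rack budget are assumptions for the example, not recommendations:

    # Illustrative sketch of software-driven checks over rack-level data:
    # flag inlet temperatures outside the ASHRAE-recommended 18-27 C envelope
    # and report power headroom before new equipment is committed.
    # The rack figures and the 8 kW per-rack budget are assumptions for the example.
    ASHRAE_RECOMMENDED_C = (18.0, 27.0)
    RACK_POWER_BUDGET_KW = 8.0

    racks = {
        "hall1/rack01": {"inlet_temp_c": 24.1, "power_kw": 4.2},
        "hall1/rack02": {"inlet_temp_c": 27.9, "power_kw": 6.8},
    }

    def check_rack(name, data, new_load_kw=0.0):
        """Return a list of human-readable warnings for one rack."""
        warnings = []
        low, high = ASHRAE_RECOMMENDED_C
        if not low <= data["inlet_temp_c"] <= high:
            warnings.append(f"{name}: inlet {data['inlet_temp_c']} C outside {low}-{high} C")
        headroom = RACK_POWER_BUDGET_KW - data["power_kw"] - new_load_kw
        if headroom < 0:
            warnings.append(f"{name}: proposed load exceeds power budget by {-headroom:.1f} kW")
        return warnings

    # Scenario: what happens if we add a 1.5 kW server to each rack?
    for name, data in racks.items():
        for warning in check_rack(name, data, new_load_kw=1.5):
            print(warning)

The point is not the specific thresholds but that, once rack-level data is available, these judgments become repeatable software rules rather than one-off expert opinions.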

Learning from other sectors to secure new insights into infrastructure management

Some of the challenges we’re facing in the data center and other built environments can be better addressed if we’re smart about using innovations from other sectors. Our 3D data center visualization, for example, drew directly on the latest gaming technologies.

We’re also learning from the geospatial data sector, helping our customers populate their advanced data center models using LiDAR-enabled spatial mapping equipment.