Artificial Intelligence (AI) and Machine Learning (ML) technologies are a proven way for data center operators to maximize uptime, optimize energy usage, quickly detect potential risks and defend against cyber-attacks. So it’s no surprise that 83 percent of organizations have increased their AI/ML budgets year-on-year, according to Algorithmia’s “2021 Enterprise Trends in Machine Learning.”
For example, major hyperscalers have developed in-house AI to support use cases such as cooling. But smaller operators can achieve AI/ML benefits too, by leveraging AI-as-a-Service on cloud platforms.
Data center vendors are also steadily making it easier for their customers to begin using AI/ML by embedding the technology in their products. One example is specialized silicon designed to perform complex mathematical and computational tasks more efficiently. Most AI use cases today are quite narrow, so these AI chips can be trained for a specific task such as pattern recognition, natural language processing, network security, robotics or automation.
AI is maturing, which means its capabilities are growing at the same time it rides down the cost curve. Those two trends will enable data center vendors to embed AI/ML into more of their products. For example, RISC-V and other open-source technologies are lowering the barriers to purpose-built “building blocks” that can focus on efficiency, performance and scalability like never before. That in turn will drive even more adoption and use cases, including among smaller data center operators that currently consider AI/ML too expensive to implement widely or at all.
What’s more, AI/ML can be applied to the data center’s mechanical and electrical equipment to enable actionable insights and automation, saving money for the operator. This requires integrating traditional physics-based modelling approaches with state-of-the-art ML techniques using data from Internet of Things (IoT) sensors. ML and physics-based modelling both have their strengths. Combining them leverages the best of both worlds to solve complex data center issues involving mechanical and electrical equipment.
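The hybrid approach can be sketched in a few lines. Below, a physics-based heat-balance formula predicts server outlet temperature, and a simple data-driven model (one-dimensional least squares) learns the residual error from sensor history. This is a minimal illustration only; the sensor readings, rack figures and function names are hypothetical assumptions, not from any particular product.

```python
# Hybrid physics + ML sketch: physics predicts, ML corrects the residual.
# All readings and constants below are illustrative assumptions.

def physics_outlet_temp(inlet_c, power_kw, airflow_m3s, rho=1.2, cp=1.005):
    """Heat balance: delta-T = P / (rho * cp * airflow). Temps in C, P in kW."""
    return inlet_c + power_kw / (rho * cp * airflow_m3s)

# Hypothetical IoT sensor history: (inlet C, power kW, airflow m3/s, measured outlet C)
history = [
    (22.0, 8.0, 0.9, 30.1),
    (23.0, 9.0, 0.9, 31.9),
    (22.5, 7.5, 0.8, 30.8),
    (24.0, 10.0, 1.0, 33.1),
]

# Fit residual = a * power + b with ordinary least squares (stdlib only).
xs = [p for (_, p, _, _) in history]
ys = [meas - physics_outlet_temp(i, p, f) for (i, p, f, meas) in history]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def hybrid_outlet_temp(inlet_c, power_kw, airflow_m3s):
    """Physics prediction plus the learned data-driven correction."""
    return physics_outlet_temp(inlet_c, power_kw, airflow_m3s) + a * power_kw + b
```

The design point is that the physics model stays interpretable and valid outside the training data, while the ML term absorbs effects the formula does not capture (recirculation, sensor bias and so on).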
With 5G and related Industry 4.0 use cases, there is a steep increase in demand for ‘anywhere, anytime’ access to applications and services such as autonomous vehicles, smart cities, advanced manufacturing and AR/VR gaming. High latency is no longer tolerable. As a result, edge data centers are taking center stage, as are multi-access edge compute (MEC) capabilities. With compact, inexpensive and powerful hardware in edge data centers, it is now possible to run AI/ML workloads close to the user where data is generated, and get real-time insights and experiences, delivered by highly responsive and contextually aware apps.
Use AI/ML for new construction and retrofits
Operators should make AI/ML a key part of their planning and construction process, such as with building information modelling (BIM) and building performance simulation (BPS) tools. This advice also applies to retrofit projects, such as enabling predictive maintenance at an existing facility. To ensure a successful retrofit:
- Develop a retrofit strategy
Identify the business objectives of retrofitting, including which machines in the installed base will be “brought online” and the potential sales of digital services and associated costs.
- Develop a data strategy
For companies working in a legacy manufacturing environment, a significant concern is lack of real-time visibility into operations. Accessing data in legacy systems is challenging. Even if a legacy system can generate data, the reports often arrive days or weeks later — sometimes too late to do anything about a problem.
- Choose the right set of hardware and software solutions
These should facilitate the connection of assets regardless of their type, brand, age, protocol or communication standard.
- Put security at the forefront
Ensure that the retrofit solution uses encryption and development approaches such as security by design and Trusted Execution Environment (TEE) procedures.
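Once legacy equipment is retrofitted with sensors, a simple predictive-maintenance building block is to flag readings that drift outside a rolling baseline. The sketch below uses a rolling z-score; the vibration figures, window size and threshold are illustrative assumptions, not values from the article.

```python
# Predictive-maintenance sketch: flag readings far outside a rolling baseline.
from collections import deque
from statistics import mean, stdev

def anomaly_flags(readings, window=5, z_threshold=3.0):
    """Return indices of readings more than z_threshold standard deviations
    from the mean of the preceding `window` readings."""
    baseline = deque(maxlen=window)
    flags = []
    for i, value in enumerate(readings):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                flags.append(i)
        baseline.append(value)
    return flags

# Hypothetical vibration readings from a retrofitted CRAC fan (mm/s RMS);
# the spike at index 7 is the kind of early warning a team would act on.
vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 6.8, 2.1, 2.0]
```

In practice a retrofit platform would feed this kind of signal into a richer ML model, but even this minimal check turns days-late reports into an immediate alert.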
Leverage digital twins
Digital twins are also worth considering for data center design and management: a 3D virtual replica of the facility can simulate its physical behaviour under any operating scenario. The twin encompasses the entire data center ecosystem, including virtual representations of the facility’s building blocks: the power, cooling and IT system components from all major OEMs.
The digital twin brings all stakeholders together to strategize and take control of the performance and business impact of data center operations. It provides empowering visibility to reduce operational risk, remove process bottlenecks and enable the analysis of “what-ifs” - all in one system.
Rather than using an exclusively data-driven model, data center digital twins are also physics-based, with the ability to simulate the performance of a new configuration. A physics-based digital twin consists of a full 3D representation of the data center space, architecture, mechanical and engineering systems, cooling, power connectivity and the raised floor’s weight-bearing capability. This enables operators to predict, visualize and quantify the impact of any change in the data center prior to implementation, empowering them to make decisions with confidence.
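To make the “what-if” idea concrete, here is a deliberately tiny example of the kind of check a physics-based twin performs before a change is implemented: does the room’s cooling capacity still cover the heat load if a new rack is added? A real digital twin runs full 3D airflow simulation; this is only a heat-balance sketch, and every figure in it is a hypothetical assumption.

```python
# Toy "what-if" check: evaluate a proposed change virtually before deploying.
# All capacities and rack loads are illustrative assumptions.

def cooling_headroom_kw(crac_units, unit_capacity_kw, rack_loads_kw):
    """Total cooling capacity minus total IT heat load, in kW."""
    return crac_units * unit_capacity_kw - sum(rack_loads_kw)

existing_racks = [6.0, 8.5, 7.2, 9.0]   # kW per rack (hypothetical)
headroom = cooling_headroom_kw(crac_units=3, unit_capacity_kw=12.0,
                               rack_loads_kw=existing_racks)

new_rack_kw = 10.0
ok_to_deploy = new_rack_kw <= headroom  # simulate first, implement second
```

Here the proposed 10 kW rack exceeds the remaining headroom, so the operator learns of the problem in the model rather than on the raised floor.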
The digital twin integrated with AI can help technology teams cope with the growing complexity of modern data center environments. Even though data centers are the critical performance hubs behind the digital world, operations still require a lot of manual work and in-depth, specialist know-how to keep things up and running.
The bottom line is that data center operators have a wide variety of options for leveraging AI/ML, with even more on the way as the technology becomes cheaper and more sophisticated. There’s a bright future ahead.