Artificial intelligence (AI) is being used in data centers to drive up efficiency and drive down risk and cost. But it also creates new types of risk.

Some of these risks are not clear-cut. Take, for example, new AI-driven cloud services, such as data center management as a service (DMaaS), that pool anonymized data from hundreds or thousands of customers’ data centers. They apply AI to this vast store of information and then deliver individualized insights to each customer via a wide area network, usually the Internet. But that raises a big question: Who owns the data, the supplier or the customer? The answer is usually both: customers keep their own data, but the supplier typically also retains a copy. This means that, even if the paid service stops, the customer’s data remains an anonymized part of the supplier’s data lake.

Does this lack of clarity over data ownership constitute a risk to data centers? The answer is vigorously debated. Some argue that if hackers accessed the data, it would be of little use: it is anonymized and, for example, does not include specific location details. Others counter that hackers could apply techniques, including their own AI analysis, to piece together sensitive information and build up a fairly complete picture of a facility and its operations.
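The second argument is essentially a linkage (re-identification) attack. The sketch below illustrates the idea; all field names, values, and the operator name are invented for illustration. Anonymized telemetry is matched against public auxiliary records (planning permits, press releases, job postings) on shared quasi-identifiers such as region and scale:

```python
# Hypothetical sketch of a linkage (re-identification) attack on pooled,
# anonymized telemetry. All field names and values are invented.

anonymized_telemetry = [
    # The supplier's data lake: no names or addresses, but
    # quasi-identifiers (region, scale) remain.
    {"site_id": "a9f3", "region": "US-East", "it_load_kw": 4200, "chillers": 6},
    {"site_id": "77b2", "region": "US-East", "it_load_kw": 950, "chillers": 2},
]

public_records = [
    # Auxiliary data an attacker could gather from planning permits,
    # press releases, or job postings.
    {"operator": "ExampleCo", "region": "US-East", "approx_load_kw": 4000},
]

def link(telemetry, aux, load_tolerance=0.1):
    """Match anonymized records to named operators via quasi-identifiers."""
    matches = []
    for t in telemetry:
        for a in aux:
            same_region = t["region"] == a["region"]
            close_load = (abs(t["it_load_kw"] - a["approx_load_kw"])
                          <= load_tolerance * a["approx_load_kw"])
            if same_region and close_load:
                matches.append((t["site_id"], a["operator"]))
    return matches

print(link(anonymized_telemetry, public_records))
# [('a9f3', 'ExampleCo')] -- the "anonymous" site is now attributable.
```

The more operational detail the pooled records carry, the more quasi-identifiers an attacker has to join on, which is the crux of the debate.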

This is just one example of the risks that should at least be considered when deploying AI. Uptime sees four areas of risk with AI offerings:

Commercial risk

  • AI models and data are often stored in the public cloud, outside of the customer’s immediate control (if using a supplier model). Even if they are on-site, the models and data may not be well understood.
  • Commercial machine learning products and services raise the risk of lock-in because processes and systems may be built on top of models using data that cannot be replicated.
  • Pricing may also increase as adoption grows. At present, prices are kept low to attract new data (which builds up the effectiveness of AI models) or to drive equipment services or sales.
  • A high reliance on AI could change skills requirements or “de-skill” staff positions, leaving teams less able to operate without it.

Legal and service level agreement risk

  • Again, AI models and data are stored outside of immediate control (if using a supplier model), or may be on-site but not fully understood. This may be unacceptable for some, such as service providers or organizations operating within strict regulatory environments.
  • In theory, reliance on a supplier’s models could also shift liability back to the AI service supplier, a particular concern for any automated actions taken by the service.

Technical risk

  • While we usually understand what types of data inform human actions and recommendations, it is not always possible to understand why or exactly how a machine reached a decision (a sketch after this list makes this concrete).
  • It may not be possible to easily change or override decisions.
  • As machines guide more decisions, core skills may effectively be outsourced, leaving organizations vulnerable.
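
The explainability point can be made concrete with a small sketch. Both functions below are invented for illustration: the first encodes a rule any operator can audit, while the second stands in for a trained model whose fixed, arbitrary weights produce a reproducible decision but no operational rationale.

```python
import math

# Invented sketch of the explainability gap. A linear rule is auditable;
# a small neural network making the same kind of call is not, even though
# both are "just" arithmetic.

def rule_based(temp_c: float) -> str:
    # A human can read this and explain any decision it makes.
    return "increase cooling" if temp_c > 25.0 else "hold"

# Fixed, arbitrary weights standing in for a trained model.
W1 = [[0.8, -1.2], [-0.5, 0.9]]
W2 = [1.4, -0.7]

def model_based(temp_c: float, humidity: float) -> str:
    hidden = [math.tanh(W1[i][0] * temp_c + W1[i][1] * humidity)
              for i in range(2)]
    score = sum(w * h for w, h in zip(W2, hidden))
    # The decision is reproducible, but the "why" is buried in the weights.
    return "increase cooling" if score > 0 else "hold"

print(rule_based(26.0))         # increase cooling: the reason is the rule itself
print(model_based(26.0, 45.0))  # hold: same conditions, no readable rationale
```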

Interoperability risk and other “unknown unknowns”

The risk of “2001”-style HAL scenarios (i.e., a singularity) is overplayed, but there is an unknown, long-term risk.

One example is that AI is likely to be embedded in most cases (i.e., inside individual pieces of equipment and management systems). This could lead to situations where two, three or five systems each have some ability to take action according to their own models, creating the potential for a runaway situation, or for the systems to work against each other.

For example, a building management system may turn up the cooling, while an IT system moves workload to another location, which turns up cooling elsewhere.
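
A toy simulation makes this failure mode concrete. Everything in it (thresholds, gains, the one-line thermal model) is invented for illustration; the point is only that two uncoordinated controllers reacting to the same signal can jointly overshoot.

```python
# Invented toy model: two independent AI-driven control loops acting on the
# same data hall without coordination.

temp = 27.0          # hall temperature, degrees C (hot after a transient)
cooling_kw = 500.0   # cooling applied by the building management system
it_load_kw = 500.0   # IT load running in this hall

for step in range(5):
    if temp > 25.0:
        cooling_kw += 50.0    # BMS model: too hot, add cooling
    if temp > 25.0:
        it_load_kw -= 150.0   # IT model: too hot, migrate workload away

    # Crude thermal response: temperature follows heat minus cooling.
    temp += 0.01 * (it_load_kw - cooling_kw)
    print(f"step={step} temp={temp:.1f}C "
          f"cooling={cooling_kw:.0f}kW it_load={it_load_kw:.0f}kW")

# Each controller acted "correctly" on its own model, but neither knew the
# other had already responded: the hall ends up overcooled and partly idle,
# while the migrated workload drives up cooling demand at another site.
```

Even this crude version shows why embedded models need shared context, or some form of arbitration, before they are allowed to act.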

Data center operators are already applying AI in their facilities, and cloud providers intend to profit from AI services. These moves should be informed by an awareness of the possible downsides.

The full report Very Smart Data Centers: How artificial intelligence will power operational decisions is available to members of the Uptime Institute Network here.