Many employees approach AI-based systems in the workplace with a level of mistrust. This lack of trust can slow the implementation of new tools and systems, alienate staff, and reduce productivity. Data center managers can avoid this outcome by understanding the factors that drive mistrust in AI and devising a strategy to minimize them.

Perceived interpersonal trust is a key productivity driver for humans but is rarely discussed in a data center context. Researchers at the University of Cambridge in the UK have found that interpersonal trust and organizational trust correlate strongly with staff productivity. Viewed as a resource allocation problem, a lack of trust forces employees to invest time and effort in fail-safes against perceived risks, diverting attention from the task at hand and reducing output.

In the data center industry, trust in AI-based decision-making has declined significantly in the past three years. In Uptime Institute’s 2024 global survey of data center managers, 42 percent of operators said they would not trust an adequately trained AI system to make operational decisions in the data center, an increase of 18 percentage points since 2022 (Figure 1). If this decline in trust continues, introducing AI-based tools will become harder.

Managers who wish to unlock the productivity gains associated with AI may need to create specific conditions to build perceived trust between employees and AI-based tools.

Figure 1 – Uptime Institute

Balancing trust and cognitive loads

The trust-building cycle requires a level of uncertainty. In the Mayer, Davis, and Schoorman trust model, this uncertainty occurs when an individual is presented with the option to transfer decision-making autonomy to another party, which, in the data center, might be an AI-based control system. Individuals evaluate perceived characteristics of the other party against the risk involved to determine whether they can relinquish decision-making control. If relinquishing control leads to desirable outcomes, individuals gain trust and perceive less risk in the future.

Trust toward AI-based systems can be encouraged by using specific deployment techniques. In Uptime Institute’s Artificial Intelligence and Software Survey 2024, almost half of the operators that have deployed AI capabilities report that predictive maintenance is driving their use of AI.

Researchers from Australia’s University of Technology Sydney and the University of Sydney tested human interaction with AI-based predictive maintenance systems: participants had to decide how to manage a burst water pipe under different levels of uncertainty and cognitive load (the amount of working memory resources in use). Across all participants, trust in the automatically generated suggestions was significantly higher under low cognitive load. AI systems that communicated decision risk odds prevented trust from decreasing, even as cognitive load increased.

Without decision risk odds displayed, employees devoted more cognitive resources to deciphering ambiguity, leaving less working memory for problem-solving. Interpretability of an AI-based system’s output drives trust: it allows users to understand the context of specific suggestions, alerts, and predictions. If users cannot understand how a predictive maintenance system reached a conclusion, they lose trust, and productivity stalls as they devote cognitive resources to retracing the system’s steps.
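As an illustration of this principle, the sketch below shows one way a predictive maintenance alert could surface an explicit failure probability alongside its recommendation, rather than issuing a bare instruction. The alert structure, field names, and values are hypothetical, chosen only to show the pattern rather than any specific product's interface.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceAlert:
    """Hypothetical predictive maintenance alert that surfaces risk odds."""
    component: str
    recommendation: str
    failure_probability: float  # model's estimated probability of failure
    horizon_days: int           # prediction window in days

    def render(self) -> str:
        # Stating the odds explicitly lets the operator weigh the suggestion
        # rather than guess how confident the system is.
        return (
            f"{self.component}: {self.recommendation}\n"
            f"  Estimated failure probability: {self.failure_probability:.0%} "
            f"within {self.horizon_days} days"
        )

alert = MaintenanceAlert(
    component="CRAH unit 4 fan bearing",
    recommendation="Schedule replacement at the next maintenance window",
    failure_probability=0.72,
    horizon_days=30,
)
print(alert.render())
```

An operator reading this alert does not have to infer the system's confidence from its tone; the uncertainty is part of the output.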

Team dynamics

In some cases, staff who work with AI systems personify them and treat them as co-workers rather than tools. Mirroring human social group dynamics, in which people show a negative bias toward those outside their own group (“outgroup” bias), staff may then withhold trust from these AI systems.

AI systems can engender anxiety relating to job security and may trigger the fear of being replaced — although this is less of a factor in the data center industry, where staff are in short supply and not at high risk of losing their jobs. Nonetheless, researchers at the Institute of Management Sciences in Pakistan found that the adoption of AI in general is linked with cognitive job insecurity, which threatens workers’ perceived trust in an organization.

The introduction of AI-based tools in a data center may also cause a loss in expert status for some senior employees, who might then view these tools as a threat to their identity.

Practical solutions

Although there are many obstacles to introducing AI-based tools into a human team, the means of mitigating them are often intuitive and psychological rather than technological. Data center team managers can improve trust in AI technology through the following options:

  • Choose AI tools that demonstrate risk transparently: Display a metric for estimated prediction accuracy.
  • Choose AI tools that emphasize interpretability: This could include descriptions of branching logic, statistical data, metrics, or other contexts for AI-based suggestions or decisions (see the sketch after this list).
  • Combat outgroup bias: Arrange for trusted “ingroup” team leads to demonstrate AI tools to the rest of the group (instead of the tool vendors or those unfamiliar to the team).
  • Implement training throughout the AI transition process: Many employees will experience cognitive job insecurity even when told their positions are secure. Investing in training during the transition gives staff a sense of control over the situation, narrows the gap between the skills they have and the skills they need, and helps prevent a perceived loss of expert status.
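As a concrete sketch of the first two options, the example below trains a small decision tree on synthetic sensor readings and prints its branching logic with scikit-learn's export_text. The feature names, data, and labeling rule are invented for illustration; the point is that this class of model can show operators exactly which measured thresholds led to a given maintenance call.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic sensor readings, invented for illustration: vibration (mm/s),
# bearing temperature (°C) and hours since last service.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(0.5, 8.0, 200),     # vibration
    rng.uniform(30.0, 90.0, 200),   # bearing temperature
    rng.uniform(0, 5000, 200),      # hours since last service
])
# Invented labeling rule: flag units with high vibration and temperature.
y = ((X[:, 0] > 5.0) & (X[:, 1] > 70.0)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the branching logic as human-readable rules, giving
# operators the context behind each "flag" / "healthy" classification.
print(export_text(
    tree,
    feature_names=["vibration_mm_s", "bearing_temp_c", "hours_since_service"],
))
```

A team lead can walk through rules like these with staff, which supports both the transparency and the ingroup demonstration recommendations above.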

Many of the solutions described above rely on social contracts — the transactional and relational agreements between employees and an organization. US psychologist Denise Rousseau (a professor at Carnegie Mellon University, Pittsburgh, PA) describes relational trust as the expectation that a company will repay an employee’s investments through growth, benefits, and job security — all factors that go beyond the rewards of a salary.

When this relational contract is broken, staff will typically shift their behavior and deprioritize long-term company outcomes in favor of short-term personal gains.

How data center team leaders introduce AI technologies can either strengthen or break the relational contracts in their organizations. Those who consider the factors outlined above will be more successful in maintaining an effective team.