The inability to explain why AI and ML models produce the outcomes they do is a major hindrance to enterprise success. Without full model transparency and explainability, businesses are left in the dark about instances of model decay and bias that could render results incomplete, ineffective, and dangerous. This not only makes delivering smarter, AI-powered strategies a challenge, but will also create significant issues as new regulations focused on demystifying algorithms and protecting individuals take hold in the U.S. and EU.

While there’s still time to prepare for these regulations, data science, MLOps, and legal, risk, and compliance teams are likely to feel the pressure as executives increasingly turn to them to continue developing models that successfully inform business decisions.

In order to ensure compliance and maintain momentum with AI, every organization must understand what these regulations entail, what actions need to be taken, and how to build responsible AI solutions.

What regulations have been established?

Much in the same way its General Data Protection Regulation (GDPR) paved the way for protecting consumer privacy in 2018, the EU is leading the charge on AI regulation with its proposed Digital Services Act (DSA). The regulation covers everything from social networks and content-sharing platforms to app stores and online marketplaces, and it requires independent audits of platform data and of the insights derived from algorithms. While there’s still uncertainty around exactly how the DSA will be enforced, one thing is clear: companies must know how their AI algorithms work and be able to explain them to users and auditors.

Following the EU’s DSA, the US Office of Science and Technology Policy (OSTP) announced its plans to pursue legislation in the form of an AI Bill of Rights. The idea is to protect American citizens and manage the risks associated with ML, recognizing that AI “can embed past prejudice and enable present-day discrimination.” The U.S. government has also initiated requests for information to better understand how AI and ML are used, while the National Institute of Standards and Technology (NIST) is building a framework “to improve the management of risks to individuals, organizations, and society associated with AI.” 

But regulations aren’t just being instituted at the federal level; governing bodies in highly regulated industries like banking and finance are also establishing their own compliance requirements.

Ultimately, under each of these regulations, any organization that uses AI but cannot explain why its algorithms made the decisions they did will be considered non-compliant, facing significant fines and brand damage (particularly if its models lead to discrimination).

What do companies need to do—and by when?

The EU’s DSA legislation could go into effect as early as January 2024, starting with a focus on examining algorithms developed and leveraged by Big Tech companies. Although the DSA will only cover EU citizens, its influence will extend to organizations all over the world, as any company whose AI models touch EU consumers will need to comply. And while the timeline for the US AI Bill of Rights is still uncertain, its eventual passage will come with similar regulatory responsibilities.

Although it is one of the stricter standards globally, the EU’s DSA is expected to become the standard-bearer. As a result, every organization, even those not directly impacted by the DSA, should use its requirements as a guide. Preparing for the DSA now by creating responsible, trustworthy AI models will save organizations time and prevent fines in the future.

How can companies build trust into AI?

In the past, companies have claimed their algorithms are proprietary to keep AI inaccuracies, biases, or malfeasance under wraps, but these regulations will change that. It’s imperative that every organization be able to explain what its models do and how their results are produced, though doing so is not easy.

Most existing enterprise AI solutions are limited in their ability to drive model explainability, making it challenging for companies to extract the causal drivers in their data and ML models and to assess whether model bias exists. While some organizations have attempted to operationalize ML by building in-house monitoring systems, most of these lack the capabilities needed to comply with the DSA.
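To make those limitations concrete, here is a minimal sketch of what a basic bias check and explainability measure can look like, assuming a simple scikit-learn classifier, synthetic data, and a hypothetical protected attribute; the feature names, the 80% disparate-impact heuristic, and all numbers below are illustrative assumptions, not requirements drawn from the DSA or any other regulation.

```python
# Illustrative sketch only: synthetic data and a hypothetical protected
# attribute. Not a compliance tool or any specific vendor's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5_000

# Synthetic features: income, tenure in years, and a binary protected-group flag.
income = rng.normal(50_000, 15_000, n)
tenure = rng.integers(0, 10, n)
group = rng.integers(0, 2, n)
X = np.column_stack([income, tenure, group])

# Synthetic approval label, loosely driven by income and tenure.
y = ((income / 100_000 + tenure / 10 + rng.normal(0, 0.2, n)) > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
preds = model.predict(X)

# Simple demographic-parity check: compare positive-outcome rates across groups.
rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rate group A={rate_a:.2%}, group B={rate_b:.2%}, ratio={ratio:.2f}")
if ratio < 0.8:  # the "80% rule" heuristic, used here purely as an example threshold
    print("Possible disparate impact -- investigate before deployment.")

# Global explainability: which features drive the model's predictions the most?
importances = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in zip(["income", "tenure", "group"], importances.importances_mean):
    print(f"{name:>7}: permutation importance {score:.3f}")
```

Even a rough check like this surfaces the two questions auditors are likely to ask: are outcomes skewed across protected groups, and which inputs actually drive the model’s decisions?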

Rather than continuing to rely on opaque models that could produce inaccurate results and compliance issues, organizations need out-of-the-box AI explainability and model monitoring: continuous visibility into model behavior and predictions, and an understanding of why AI predictions are made. Both are vital for building responsible AI.
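One way to picture that continuous visibility, as a rough sketch rather than any particular product’s behavior, is to compare the distribution of live prediction scores against a reference window captured at deployment and alert when they diverge. The example below computes a population stability index (PSI) with NumPy; the bin count, the synthetic score distributions, and the 0.2 alert threshold are common rules of thumb used here purely for illustration.

```python
# Minimal drift-monitoring sketch: compare recent prediction scores to a
# reference window using the Population Stability Index (PSI). The threshold
# and the synthetic distributions below are illustrative, not standards.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the reference range
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    # Floor the bin frequencies to avoid division by zero and log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(1)
reference_scores = rng.beta(2, 5, 10_000)  # scores captured at deployment time
live_scores = rng.beta(2, 3, 2_000)        # recent traffic, simulated to have shifted

score = psi(reference_scores, live_scores)
print(f"PSI = {score:.3f}")
if score > 0.2:  # a common heuristic: above 0.2 suggests significant drift
    print("Prediction distribution has drifted -- review the model and its inputs.")
```

A check like this doesn’t explain why a model drifted, but it flags when behavior has changed enough that explanations and bias assessments need to be revisited.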

As AI adoption continues to expand across industries, global regulations that ensure responsible algorithms and consumer safety will become commonplace. And while not every organization may have to comply with the EU’s DSA now, it is inevitable that they’ll need to provide explanations for their models to a governing body at some point. Rather than waiting for that time, every company leveraging ML and AI should take the necessary steps to create responsible, transparent models now.
