European Union lawmakers have made some changes to the EU’s draft artificial intelligence (AI) act that will impose tougher rules on the technology.
The revisions include a ban on the use of the technology in biometric surveillance and a requirement for generative AI systems to disclose AI-generated content.
"While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose," said Brando Benifei, co-rapporteur of the bill.
The EU is not the only organization to express concerns. Leading AI scientists and company executives including Elon Musk signed an open letter in March 2023 saying that “human-competitive intelligence can pose profound risks to society and humanity” and calling for a pause on the development of AI systems. Musk then went on to assemble a team and purchase 10,000 GPUs for Twitter’s AI project.
The EU says a ban on biometric surveillance technologies, such as predictive policing tools or social scoring systems like those used in China, is necessary to protect human rights from potential abuses. This decision, however, may clash with some EU countries that are opposed to a total ban on AI in surveillance systems.
The Council of Europe Commissioner for Human Rights touches on this in the Human Rights by Design report, which notes that while many businesses and nations have not considered human rights in their implementation of AI, 21 of 46 Council of Europe member states have done so.
“Some National Action Plans (NAPs), including that of Norway, make reference to the human rights impacts of specific technologies, including military and surveillance technologies that require tighter export licensing regimes to prevent abuse,” says the report.
“Others, including the Lithuanian, Polish, and Italian NAPs, reference the promotion in particular of renewable, environmentally friendly, and ecologically sound technologies. Switzerland’s NAP mentions working with international institutions to establish ‘authoritative guidelines on application of UN Guiding Principles to fundamental issues in connection with the development, use, and governance of digital technologies.’”
The EU AI Act will also require any company using generative AI technology to disclose copyrighted material used to train its systems, and companies working on "high-risk applications" to assess the potential impact on rights and the environment.
Generative AI systems will, additionally, be required to disclose that content was AI-generated. This has led to OpenAI saying that, depending on the final text of the act, the company may be forced to withdraw from Europe.
Response from companies has been mixed. Microsoft and IBM approve of the changes, with Microsoft saying: "We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI." Others, such as Meta, have dismissed claims that AI poses a risk, with the company’s chief AI scientist Yann LeCun calling AI “intrinsically good.”
The AI Act is two years in the making, having first been proposed in 2021. It is currently expected to be adopted by the start of 2024.