California State Senator Scott Wiener has put forward a bill that aims to ensure the safe development of large-scale AI systems through the introduction of what he describes as “clear, predictable, common-sense safety standards.”

The bill was first proposed by the San Francisco Democrat in 2023, who at the time filed what is known as an intent bill – a piece of legislation that needs further development before it can be brought forward.


The bill, also known as SB 1047, would seek to establish safety standards for developers of the largest and most powerful AI systems, with Wiener specifically stating that startups would not fall under the scope of the legislation.

Furthermore, if passed, the bill would prevent price discrimination and anticompetitive behavior; institute know-your-customer requirements; and protect whistleblowers at large AI companies.

Wiener is also seeking to establish what he has called CalCompute, a public AI cloud compute cluster that would be made available to startups, researchers, and community groups participating in the development of large-scale AI systems.

“Large-scale artificial intelligence has the potential to produce an incredible range of benefits for Californians and our economy—from advances in medicine and climate science to improved wildfire forecasting and clean power development,” said Senator Wiener in a statement. “It also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks.”

Wiener added that by developing responsible, appropriate guardrails around the development of the biggest, most high-impact AI systems, SB 1047 would ensure the technology is used to improve Californians’ lives, without compromising safety or security.

Although US President Joe Biden signed an executive order in October 2023 requiring AI developers to adhere to a number of guidelines, such as undertaking safety assessments and following civil rights guidance, Congress has yet to pass any laws regarding AI safety.

The UK government has also spoken about the need for AI safety in the wake of the AI Safety Summit it hosted in November 2023. However, instead of enacting binding legislation, the government oversaw the signing of the so-called Bletchley Declaration, in which 28 countries, including the US and China, agreed that "AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible."

Meanwhile, the member countries of the European Union reached an agreement on the Commission’s AI Act earlier this month and are now awaiting the formal approval of the European Parliament, which the legislation is expected to receive in April 2024.

The legislation aims to “ensure AI protects fundamental rights, democracy, the rule of law, and environmental sustainability while boosting innovation and making Europe a leader in the field,” the European Parliament previously said in a statement.