The UK’s AI Safety Summit at Bletchley Park was the first of its kind, and despite a seemingly endless flurry of predictions and debate in the weeks beforehand about what it would entail and achieve, that sense of something utterly ‘new’ prevailed for its duration. It is extremely rare to feel that you are witnessing something genuinely unprecedented, but that is how it felt as an attendee.
At the Summit, Elon Musk labeled AI “the most disruptive force in history.” Ironically, on the surface at least, that very disruptiveness generated a strong sense of unity across the political, diplomatic and corporate parties in attendance.
The most fundamental question that everyone, from senior government officials to private-sector tech leaders, had about this ‘new’ technology concerned its regulation. Should it be regulated? Can it be regulated, and what would such regulation look like?
Although the different groups were bound by a collective aim to reach a consensus on AI regulation, their motivations for doing so seemed to vary. At this early stage, some might argue that differing motives do not matter: what matters is agreeing on some form of regulation and implementing it as quickly as possible. Yet this risks laying faulty foundations that expose fundamental cracks later on, such as one stakeholder group blocking particular regulations, or certain concerns being disregarded because a sector was never represented.
Governing the unknown
For political representatives, the predominant motivation for regulation tends to be national security: an understandable priority, given the threats AI poses through advanced cyber-attacks, autonomous weapons, information warfare and surveillance.
Delegates from the 28 attending nations grappled with what AI safety might mean across their different countries and cultures, culminating in the signing of the Bletchley Declaration. Recognized by many as a major diplomatic achievement, the agreement’s aim of tackling the risks of frontier AI models is certainly an admirable one. So are the UK’s efforts to build the first ‘testing facility’, which will independently test models, including their parameters and the data used, for safe usage before their release.
Unlike other technologies, which have historically had much longer lead times for study before reaching the general population, generative AI has been placed in the hands of the public without the usual background knowledge or controls. Such pre-deployment testing is therefore a vital step in safeguarding against the risks that these machine-learning models pose.
In the hands of the giants
On the surface, the tech giants’ desire for regulation seems to broadly align with that of the various governments. For some, the giants’ willingness to participate in discussions about regulation may come as a surprise. Do regulations and legal limitations not risk stifling innovation and growth?
Potentially.
However, the likes of OpenAI and Google seem to have a different primary concern: the open-source question. Usage of open-source machine-learning models cannot be tracked in the same way as that of closed-source ones, as the user does not need to create an account to access the software.
It is therefore much more difficult to trace activity back to a specific individual in the event of misuse. Whilst this is undoubtedly an important factor, open-source models can also turbo-charge the speed and potential of innovation by enabling collaboration and knowledge-sharing. For the private-sector tech giants, this poses a risk to their position as top dog, so it is unsurprising that their desire for regulation seems to be motivated by a need for exclusivity.
The forgotten majority
Those with ‘everyday’ jobs, such as teachers, shop assistants and administrators, were largely left out of the Summit. Ironically, this larger section of society will likely feel the impact of AI on their day-to-day lives the most: advanced machine learning will mean that administrative tasks are increasingly automated, school lessons are supplemented by (and maybe even, one day, replaced by) chatbots, and shelf-stacking in shops can be done by robots. As Musk put it, “AI will put an end to work.” For some, this may be an exciting prospect, but for many, the threat of job losses in an already-fraught labor market, amid an ongoing cost-of-living crisis, is terrifying and depressing.
National security, innovation potential and the debate between open- and closed-source models are all vital topics for discussion. However, the majority of the population, who will be affected by all of these things, cannot be forgotten. Representatives from industries that are not directly involved in building these technological products and solutions must also be invited to the table to discuss AI safety and regulation, so that their concerns can be addressed and their voices heard.
The year ahead
One of the most important outcomes of the Summit is its proposed annual recurrence, along with a smaller interim event in the first half of 2024. These gatherings mustn’t be all talk with no continued action: large organizations are looking to events like these, and to international regulation, to help guide them on their path to an AI-enabled future.
Whilst many of the leaders I speak to are still hesitant about applying these technologies at scale in their businesses, I have seen widespread agreement that now is the time for further, controlled experimentation. By identifying very specific business use cases and taking the first steps, we will be able to make sure that AI’s output is reliable. From there, trust in AI will grow, opening up further opportunities that will see these technologies deliver on the lofty ambitions they promise.