It’s safe to say that AI has been dominating the news lately, particularly over the past 12 months, with the launch of AI applications such as ChatGPT.
As AI systems have become more mainstream, pressure has grown on governments and other regulatory bodies to clarify what safeguards will be implemented to protect against the potential risks of AI (such as job displacement and exploitation by cyber criminals) whilst still allowing its full potential to be realised.
For example, an open letter published in March, which was signed by dozens of tech leaders, including Elon Musk and Stuart Russell, suggested that the ‘race to develop and deploy AI systems was out of control’ and called for a ‘six-month pause on developing AI systems which are more powerful than those already on the market’.
Currently, the UK Government has no intention of introducing specific legislation or an independent regulator; instead, it is drawing upon the experience of existing regulatory bodies to put forward approaches tailored to the way AI is being used in each sector.
We are already seeing contrasting approaches to the regulation of AI by other nations.
For instance, the EU is already in the latter stages of finalising the AI Act, which is set to become the world’s first comprehensive AI law.
With concerns rising, the UK announced that it would host the first global AI Safety Summit at the beginning of November to consider the risks of AI and discuss how they can be mitigated through joint global efforts.
Myerson Solicitors' Technology team have set out some of the key takeaways of the Summit below.
The three key points emerging from the Summit are:
28 countries (including heavyweights such as the USA and China, alongside the EU) have signed the aptly named Bletchley Declaration, a nod to the famous wartime Codebreakers.
The declaration establishes a shared understanding of the opportunities and risks posed by AI development and the need to collectively manage potential risks through a joint global effort to ensure AI is developed and deployed in a safe and responsible manner.
In theory, such a joined-up approach is welcome. On a practical level, however, it will need to be matched by equal investment in training experts in adjudication and enforcement.
It remains to be seen whether this will be a global body or whether countries will be left to implement their own systems; the latter seems at odds with the global nature of AI, which currently has no physical or jurisdictional boundaries.
This raises the question of whether we need gatekeepers for the gatekeepers: a two-tiered approach.
The UK Government announced a major £225 million investment in a new supercomputer called Isambard-AI, to be built at the University of Bristol. It is intended to achieve breakthroughs across healthcare, energy, climate modelling and other fields, and forms part of the UK's aim to lead in AI while partnering with allies such as the US.
The Summit will be the first of many, with France agreeing to host the next in-person AI Safety Summit in a year's time. It has been made clear that the Summit is just the start of the discussions, and the two-day event is unlikely to establish any major new global policies on AI.
Further, the UK has said it is in no rush to regulate the sector but rather wants to start a general, global conversation around AI safety.
With the UK set to publish its anticipated responses to the AI regulation white paper later this year, we will continue to closely monitor the latest developments in relation to AI.
It's understandable that the Government is not rushing into creating legislation, which could quickly become outdated and struggle to remain agile enough to adapt to this rapidly evolving technology.
In the Ministerial Foreword to the Government's recently published policy paper on the AI Safety Institute, the Rt Hon Michelle Donelan MP stated, "we were surprised by rapid and unexpected progress in a technology of our own creation".
Really? Do we need to revisit some of the Hollywood blockbusters that have been warning us for decades?
Films such as:
And the list goes on. Whilst sci-fi films allow for exaggeration and imagination, it's not beyond the realms of possibility that some fiction could become fact!
Do we need to book the film nights, revisit some classics, and start taking notes from these movies?