Overview: The UK AI summit
Ahead of the first global summit on AI safety, which will take place next week south of the border, Holyrood takes a look at the challenges the event has faced and those it aims to tackle.
Hosted at Bletchley Park from 2-3 November, the summit will bring together country leaders and relevant stakeholder representatives to consider what protective measures need to be taken to mitigate the potential threats of AI.
On announcing the event in June, Prime Minister Rishi Sunak acknowledged the potential of AI “to transform our lives” yet warned of the need to develop a global strategy moving forward.
“No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way”, Sunak added.
With AI worth almost £4bn to the UK economy, the prime minister wants the UK to be “not just the intellectual home, but the geographical home of global AI safety regulation”.
What is the current situation with AI in the UK?
In 2021, the Scottish Government announced its own AI strategy, with the Scottish AI Alliance in charge of delivering the framework.
The document outlined actions for Scotland to become “a leader in the development and use of trustworthy, ethical and inclusive AI”.
The framework explained how the technology would enhance sectors such as health and education, and tackle challenges such as climate change.
Later that year, the UK Government announced its national AI strategy, outlining a 10-year plan to make the UK a “global AI superpower”, with focal points including the transition to an AI-enabled economy.
Earlier this week, Sunak also shared plans to develop a UK AI safety institute – which will evaluate the opportunities and risks posed by new AI models.
What will the event focus on?
The summit will spotlight ‘frontier’ AI – defined as highly capable models that could pose a significant danger to society – and how to use the technology for the public good.
Other key ambitions of the two-day event are to create an international framework for AI safety, to establish a collective set of guidelines for organisations to follow to enhance their AI safety, and to inspire collaboration on AI safety research.
Discussions on safety will focus on misuse and loss of control. In other words, delegates will talk about how to tackle AI-powered crimes and what to do if the technology turns against us.
It is also expected that the event will discuss measures on how to prevent AI from spreading misinformation during elections and becoming a weapon of war.
Speaking on emerging technologies earlier this week, Sunak said: “Risks to political systems and societies will increase in likelihood as the technology develops and adoption widens. Proliferation of synthetic media risks eroding democratic engagement and public trust in the institutions of government.”
“Get this wrong and it could make it easier to build chemical or biological weapons.
“Terrorist groups could use AI to spread fear and disruption on an even greater scale,” he added.
Why is the summit believed to be important?
Last May, a letter released by the Centre for AI Safety (CAIS) warned the technology could pose a “risk of extinction”. A total of 350 executives supported the statement, saying the matter should be a “global priority”.
Amongst those who signed were OpenAI chief executive Sam Altman and Google DeepMind chief executive Demis Hassabis, both of whom are expected to appear at the summit next week.
CAIS director Dan Hendrycks said: “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently underemphasised relative to the actual level of threat.”
This letter was preceded by a call in March from other industry figures to halt the development of powerful AI models until a regulatory framework was agreed.
Who is attending?
President of the European Commission Ursula von der Leyen is expected to attend as well as US Vice-President Kamala Harris.
Deputy Prime Minister Oliver Dowden has confirmed a representative – yet to be announced – from the Chinese Government will also be in attendance. It is an invitation that has attracted polarised responses, with former prime minister Liz Truss saying she was “deeply disturbed” by the invitation as she stated China saw AI “as a means of state control and a tool for national security”.
Other potential representatives include Meta president of global affairs Nick Clegg and Google’s senior vice president for research, technology & society James Manyika.
The final official list of attendees is yet to be released.
What are people saying?
Some stakeholders are concerned the focus of the summit is too narrow. They have raised concerns the event will centre too much on the long-term impacts AI could have rather than discussing the challenges it poses right now.
Some say the summit will only duplicate or distract from regulatory efforts already under way, including the EU’s AI Act and the commitment made last September by the G7 – of which the UK is part – to develop an international AI code of conduct.
In an interview with Wired, several British AI executives raised concerns about the lack of British tech executives on the list of those invited.
Dev Duggal from Builder AI, an AI-powered app startup in London, told Wired: “On one hand we’ll say we’re the AI centre of the world, but on the other, we’re saying we don’t want to trust our own CEOs and entrepreneurs or researchers to have a more prevalent voice. It doesn’t make sense.”