No 10 acknowledges ‘existential’ risk of AI for the first time
No 10 has acknowledged the ‘existential’ risk of artificial intelligence for the first time, after the Prime Minister met the heads of the world’s leading AI research groups to discuss safety and regulation.
Rishi Sunak and Chloe Smith, the secretary of state for science, innovation and technology, met on Wednesday evening with the chief executives of Google DeepMind, OpenAI and Anthropic to discuss how best to moderate the development of the technology to reduce the risk of disaster.
“They discussed safety measures, voluntary actions the labs are considering to manage risk, and possible avenues for international collaboration on AI safety and regulation,” the attendees said in a joint statement.
“The lab leaders agreed to work with the UK Government to ensure that our approach responds to the speed of innovation in this technology, both in the UK and around the world.
“The Prime Minister and CEOs discussed the risks of the technology, ranging from disinformation and national security to existential threats… The Prime Minister explained how the approach to AI regulation will need to keep pace with the rapid advances in this technology.”
It is the first time the Prime Minister has acknowledged the potential ‘existential’ threat of developing ‘superintelligent’ AI without proper safeguards, a risk that sits in contrast with the UK Government’s generally positive approach to AI development.
The growing awareness of this risk comes a day after OpenAI chief executive Sam Altman called on world leaders to create an international agency similar to the International Atomicic Energy Agency, which oversees nuclear technology, in order to limit the speed at which the most powerful AI systems are developed.
Altman, who has toured Europe meeting ChatGPT users and developers as well as policymakers, told an event in London that while he does not want short-term rules to be too restrictive, “if somebody cracks the code and builds a superintelligence… I would like to make sure that we treat this at least as seriously as we treat, say, nuclear materials”.
The UK’s approach to AI regulation has been criticized by some as light-touch. Speaking at a Guardian Live event earlier this week, Stuart Russell, professor of computer science at the University of California, Berkeley, criticized the UK for relying on a hodgepodge of existing regulators rather than sitting down to work out how best to regulate the field and minimize everything from labor market harms to existential risk.