
Britain expands its AI security institute to San Francisco, home of OpenAI

An aerial view of the city of San Francisco and the Golden Gate Bridge in California, October 28, 2021.

Carlos Barría | Reuters

LONDON — The British government is expanding its testing facilities for “frontier” artificial intelligence models to the United States, aiming to burnish its image as a leading global player in tackling the risks of the technology and to deepen cooperation with the U.S. as governments around the world jostle for leadership in AI.

The government announced Monday that it will open a U.S. counterpart to its AI Safety Institute, a state-backed body focused on testing advanced AI systems to ensure their safety, in San Francisco this summer.

The American arm of the AI Safety Institute will aim to recruit a team of technical staff led by a research director. In London, the institute currently has a team of 30 people. It is chaired by Ian Hogarth, a prominent British technology entrepreneur who founded the music concert discovery site Songkick.

In a statement, UK Technology Minister Michelle Donelan said the U.S. rollout of the AI Safety Institute “represents UK AI leadership in action.”

“This is a pivotal moment in the UK’s ability to explore both the risks and potential of AI from a global perspective, strengthening our partnership with the US and paving the way for other countries to leverage our expertise as we continue to lead the world in AI safety.”

The expansion “will enable the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs, headquartered in London and San Francisco, and strengthen relationships with the United States to advance AI safety for the public interest,” the government said.

San Francisco is the headquarters of OpenAI, the Microsoft-backed company behind viral AI chatbot ChatGPT.

The AI Safety Institute was established in November 2023 at the AI Safety Summit, a global event held at Bletchley Park in England, home of the World War II codebreakers, that aimed to strengthen cross-border cooperation on AI safety.

The AI Safety Institute’s expansion into the US comes on the eve of the Seoul AI Summit in South Korea, first proposed at the UK summit at Bletchley Park last year. The Seoul summit will take place on Tuesday and Wednesday.

The government said that since the establishment of the AI Safety Institute in November, progress has been made in evaluating cutting-edge AI models from some of the industry’s leading players.

It said Monday that several AI models completed cybersecurity challenges but struggled with more advanced ones, while several models demonstrated doctoral-level knowledge of chemistry and biology.

Meanwhile, all of the models the institute tested remained highly vulnerable to “jailbreaks,” in which users trick them into producing responses their content guidelines prohibit, while some produced harmful outputs even without any attempt to circumvent those safeguards.

The models tested were also unable to complete more complex, time-consuming tasks without human supervision, according to the government.

The government did not name the AI models tested. It had previously secured agreements from OpenAI, DeepMind and Anthropic to open up their coveted AI models to the government to help inform research into the risks associated with their systems.

The development comes as Britain has faced criticism for not introducing formal regulations for AI, while other jurisdictions, such as the European Union, race ahead with AI-specific laws.

The EU’s landmark AI Act, the first major AI legislation of its kind, is expected to become a blueprint for global AI regulation once it is approved by all EU member states and enters into force.
