Tech

UK opens San Francisco office to tackle AI risks

Ahead of the AI ​​Security Summit kicking off in Seoul, South Korea, later this week, co-host the United Kingdom is expanding its own efforts in this area. The AI ​​Safety Institute – a British organization created in November 2023 with the ambitious aim of assessing and managing the risks associated with AI platforms – announced that it would open a second site… in San Francisco.

The idea is to move closer to what is currently the epicenter of AI development, with the Bay Area home to OpenAI, Anthropic, Google and Meta, among others, developing foundational AI technology.

Fundamental models are the building blocks of generative AI services and other applications, and it is interesting to note that although the UK has signed a memorandum of understanding with the US for the two countries to collaborate On AI security initiatives, the UK still chooses to invest in creating a direct presence in the US to address this issue.

“By having people on the ground in San Francisco, it will give them access to the headquarters of many of these AI companies,” said Michelle Donelan, Britain’s secretary of state for science, innovation and innovation. technology, in an interview with TechCrunch. “A number of them have bases here in the UK, but we think it would be very useful to have a base there as well, to have access to an additional talent pool and to be able to work even more in collaboration and hand in hand. with the United States. »

This is partly because, for the UK, getting closer to this epicenter is useful not only for understanding what is being built, but also because it gives the UK more visibility to of these companies – which is important, given that AI and technology in general are seen by the UK as a huge opportunity for economic growth and investment.

And given OpenAI’s latest drama surrounding its Superalignment team, this seems like a particularly opportune time to establish a presence there.

The AI ​​Safety Institute, launching in November 2023, is currently a relatively modest affair. The organization now has just 32 people working there, a veritable David versus Goliath of AI technology, considering the billions of dollars of investment that rests on companies building AI models. AI, and therefore their own economic motivations to obtain their technologies. out the door and into the hands of paying users.

One of the AI ​​Safety Institute’s most notable developments was the release earlier this month of Inspect, its first set of tools for testing the safety of foundational AI models.

Donelan today called the release “phase one.” Not only has it proven difficult so far to evaluate the models, but so far the commitment is largely a voluntary and inconsistent agreement. As a senior source at a UK regulator pointed out, companies have no legal obligation to have their models checked at this stage; and not all companies are willing to have their models verified before release. This could mean that, in cases where a risk could be identified, the horse could already have run away.

Donelan said the AI ​​Safety Institute is still developing how best to collaborate with AI companies to evaluate them. “Our evaluation process is an emerging science in itself,” she said. “So, with each assessment, we will develop the process and refine it even further. »

Donelan said one of the goals in Seoul would be to introduce Inspect to regulators gathered at the summit, with the aim of getting them to adopt it as well.

“We now have an evaluation system. The second phase must also aim to make AI safe across society,” she said.

Longer term, Donelan believes the UK will develop more AI legislation, although, repeating what Prime Minister Rishi Sunak has said on the subject, it will resist until it better understands it. the extent of risks linked to AI.

“We do not believe in legislating before having a good mastery and complete understanding,” she said, noting that the recent international report on AI security, published by the institute , aimed primarily at trying to get a complete picture of the research to date, “highlighted that there are large gaps missing and that we need to incentivize and encourage more research globally.

“And it also takes about a year to legislate in the UK. And if we had just started legislating instead of holding the AI ​​Security Summit (which was held in November last year), we would still be legislating now, and we would actually have nothing to demonstrate.”

“Since day one of the Institute, we have made clear the importance of taking an international approach to AI security, sharing research and working collaboratively with other countries to test models and anticipate the risks associated with advanced AI,” said Ian Hogarth, President of the Institute. the AI ​​Security Institute. “Today marks a pivotal moment that allows us to further advance this agenda, and we are proud to expand our operations in a region teeming with technology talent, adding to the incredible expertise that our staff in London have brought since very beginning.

techcrunch

Back to top button