
Scarlett Johansson’s AI feud echoes the bad old days of Silicon Valley



By Zoe Kleinman, Technology editor

“Move fast and break things” is a motto that continues to haunt the technology sector, some 20 years after it was coined by a young Mark Zuckerberg.

Those five words have become a symbol of Silicon Valley at its worst – a combination of ruthless ambition and rather breathtaking arrogance: profit-driven innovation without fear of consequences.

I was reminded of the phrase this week when actor Scarlett Johansson clashed with OpenAI. Ms Johansson claimed that both she and her agent had declined invitations for her to be the voice of its new product for ChatGPT – and that when it was unveiled, it sounded just like her anyway. OpenAI denies the voice was an intentional imitation.

It’s a classic illustration of what worries the creative industries so much: being imitated and ultimately replaced by artificial intelligence.

Last week, Sony Music, the world’s largest music publisher, wrote to Google, Microsoft and OpenAI asking whether its artists’ songs had been used to develop AI systems, saying the companies had no permission to do so.

There are echoes in all of this of the macho Silicon Valley giants of old: seeking forgiveness rather than permission as an unofficial business plan.

But the tech companies of 2024 are extremely eager to distance themselves from that reputation.

OpenAI was not made from that mold. It was originally created as a non-profit organization that would reinvest any extra profits back into the business.

In 2019, when it set up a for-profit arm, it said the for-profit side would be governed by the non-profit side, and that a cap would be placed on the returns investors could earn.

Not everyone was happy with this change – it appears this was one of the main reasons behind original co-founder Elon Musk’s decision to step down.

When OpenAI CEO Sam Altman was suddenly fired by his own board late last year, one theory was that he wanted to move further away from the original mission. We never knew for sure.

But even though OpenAI has become more profit-driven, it still has to face up to its responsibilities.

In the policymaking world, almost everyone agrees that clear boundaries are necessary to keep companies like OpenAI in line before disaster strikes.

So far, the AI giants have largely played ball on paper. At the first Global AI Safety Summit six months ago, a group of tech bosses signed a voluntary pledge to create responsible, safe products that would maximize the benefits of AI technology and minimize its risks.

Those risks, as identified by the event’s organizers, were the stuff of nightmares. When I asked at the time about the more mundane threats posed by AI tools that discriminate against people or replace them in their jobs, I was firmly told that this meeting was devoted solely to discussing the absolute worst-case scenarios – Terminator, doomsday, AI-goes-rogue-and-destroys-humanity territory.

Six months on, when the summit reconvened, the word “safety” had been removed from the conference title entirely.

Last week, a draft British government report by a group of 30 independent experts concluded that there was “no evidence” that AI could generate a biological weapon or carry out a sophisticated cyberattack. The plausibility of humans losing control of AI is “highly controversial,” according to the report.

Some people in the field have long argued that the most immediate threat from AI tools is that they will replace people’s jobs or fail to recognize skin tones. These are “the real issues”, says AI ethics expert Dr Rumman Chowdhury.

The AI Safety Institute declined to say whether it had safety-tested any of the new AI products launched in recent days, notably OpenAI’s GPT-4o and Google’s Project Astra, both of which are among the most powerful and advanced publicly available generative AI systems I have seen to date. Meanwhile, Microsoft has unveiled a new laptop containing AI hardware – the start of AI tools being physically integrated into our devices.

The independent report also notes that there is currently no reliable way of understanding exactly why AI tools generate the outputs they do – even among their developers – and that the established safety-testing practice of red teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no best-practice guidelines.

At this week’s follow-up summit, hosted jointly by the UK and South Korea in Seoul, the companies pledged to suspend a product if it failed to meet certain safety thresholds – but these thresholds will not be set until the next gathering in 2025.

Some fear that all these commitments and promises do not go far enough.

“Voluntary agreements are essentially just a way for companies to grade their own homework,” says Andrew Strait, associate director of the Ada Lovelace Institute, an independent research organization. “They are essentially no replacement for the legally binding and enforceable rules that are needed to incentivize the responsible development of these technologies.”

OpenAI has just published its own 10-point safety process that it says it is committed to – but one of its senior safety-focused engineers recently resigned, writing on X that his department had been “sailing against the wind” internally.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” wrote Jan Leike.

There are of course other teams at OpenAI that continue to focus on safety and security.

However, at present there is no official, independent monitoring of what each of them actually does.

“We have no guarantee that these companies will honor their commitments,” says Professor Dame Wendy Hall, one of the UK’s leading computer scientists.

“How can we hold them accountable for what they say, as we do with pharmaceutical companies or in other high-risk industries?”

We might also find that these powerful technology leaders become less receptive once the pressures are on and voluntary agreements become a little more enforceable.

When the UK government said it wanted the power to suspend the rollout of security features by big tech companies if there was a risk they could compromise national security, Apple threatened to pull services from the UK, describing it as “unprecedented overreach” by lawmakers.

The legislation was passed and so far Apple is still here.

The European Union’s AI Act has just been enacted and is both the first and the strictest legislation of its kind currently in force, with severe penalties for companies that fail to comply. But it creates more work for AI users than for the AI giants themselves, says Nader Henein, vice-president analyst at Gartner.

“I would say the majority (of AI developers) overestimate the impact the law will have on them,” he says.

All companies using AI tools will have to categorize them and assign them a risk rating – and the AI firms that supplied them will have to provide enough information for them to be able to do so, he explains.

But that doesn’t mean they’re out of the woods.

“We need to move towards legal regulation over time, but we cannot rush it,” says Professor Hall. “It is very difficult to establish principles of global governance that everyone adheres to.”

“We also need to make sure that we are truly protecting the whole world and not just the Western world and China.”

Those who attended the AI Summit in Seoul said they found it helpful. It was “less flashy” than Bletchley but with more discussion, one attendee said. Interestingly, the event’s final declaration was signed by 27 countries but not by China, even though it had representatives there in person.

The overarching problem, as always, is that regulation and policy evolve much more slowly than innovation.

Professor Hall believes that “the stars are aligning” at government level. The question is whether the tech giants can be persuaded to wait.

