
Do deepfakes of the dead rewrite the past?

OpenAI’s new text-to-video app, Sora, was meant to be a social playground for AI, allowing users to create imaginative videos of themselves, friends, and celebrities while building on one another’s ideas.

The app’s social structure, which allows users to adjust the availability of their image in others’ videos, seemed to answer the most pressing questions of consent around AI-generated video when it launched last week.

But while Sora sits atop the iOS App Store with more than a million downloads, experts worry about its potential to flood the Internet with historical misinformation and fakes of deceased historical figures who cannot consent or opt out of Sora’s AI models.

In less than a minute, the app can generate short videos of deceased celebrities in situations they’ve never been in: Aretha Franklin making soy candles, Carrie Fisher trying to balance on a slackline, Nat King Cole ice skating in Havana, and Marilyn Monroe teaching Vietnamese to schoolchildren, for example.

It’s a nightmare for people like Adam Streisand, a lawyer who has represented several celebrity estates, including Monroe’s at one point.

“The AI challenge is not the law,” Streisand said in an email, pointing out that California courts have long protected celebrities “from AI-like reproductions of their images or voices.”

“The question is whether a judicial process without AI, one that depends on humans, will ever be able to keep up with what amounts to a five-dimensional game of whack-a-mole.”

The Sora videos range from absurd to delightful to confusing. Aside from celebrities, many videos on Sora show convincing deepfakes of manipulated historical moments.

For example, NBC News was able to produce realistic videos of President Dwight Eisenhower admitting to accepting millions of dollars in bribes, of British Prime Minister Margaret Thatcher saying that the “so-called D-Day landings” were exaggerated, and of President John F. Kennedy announcing that the moon landing was “not a triumph of science but a fabrication.”

The possibility of generating such deepfakes from non-consenting deceased people has already sparked complaints from family members.

In an Instagram Story posted Monday about Sora’s videos featuring Robin Williams, who died in 2014, Williams’ daughter Zelda wrote, “If you have any decency, stop doing this to him and me, to everyone, period. It’s stupid, it’s a waste of time and energy, and believe me, it’s NOT what he would want.”

Bernice King, the daughter of Martin Luther King Jr., wrote on X: “I agree about my father. Please stop.” King’s famous speech “I Have a Dream” has been continually manipulated and remixed on the app.

George Carlin’s daughter said in a Bluesky post that her family is “doing our best to combat” deepfakes of the late comedian.

Sora-generated videos depicting “horrific violence” involving famed physicist Stephen Hawking have also gained popularity this week, with many examples circulating on X.

An OpenAI spokesperson told NBC News: “While there are strong free speech interests in the depiction of historical figures, we believe that public figures and their families should ultimately have control over how their image is used. For recently deceased public figures, authorized representatives or owners of their estate can request that their image not be used in Sora cameos.”

In a blog post published last Friday, OpenAI CEO Sam Altman wrote that the company would soon “give rights holders more granular control over character generation,” referring to broader content types. “We’re hearing from many rights holders who are very excited about this new type of ‘interactive fan fiction’ and think this new type of engagement will bring them a lot of value, but want the ability to specify how their characters can be used (including at all).”

The rapid evolution of OpenAI’s policies for Sora has led some commentators to argue that the company’s move-fast-and-break-things approach was deliberate, a way of showing users and intellectual property owners the power and scope of the app.

Liam Mayes, a lecturer in Rice University’s media studies program, thinks increasingly realistic deepfakes could have two key societal effects. First, he said, “we will see trusted people falling victim to all kinds of scams, large and powerful corporations exerting coercive pressure, and nefarious actors undermining democratic processes.”

At the same time, the inability to distinguish deepfakes from real videos could reduce trust in authentic media. “We could see trust in all kinds of media establishments and institutions eroding,” Mayes said.

As founder and chairman of CMG Worldwide, Mark Roesler has managed the intellectual property and licensing rights of more than 3,000 deceased figures in entertainment, sports, history and music, including James Dean, Neil Armstrong and Albert Einstein. Roesler said Sora is just the latest technology to raise concerns about protecting those figures’ legacies.

“There is and will be abuse as there always has been with celebrities and their valuable intellectual property,” he wrote in an email. “When we started representing deceased people in 1981, the Internet didn’t even exist.”

“New technologies and innovation help keep alive the legacies of many historic and iconic figures, who have shaped and influenced our history,” Roesler added, saying CMG will continue to represent its clients’ interests within AI applications like Sora.

To help users and digital platforms distinguish real videos from those generated by Sora, OpenAI has implemented several provenance tools.

Each video includes invisible signals, a visible watermark, and metadata – behind-the-scenes technical information that describes the content as AI-generated.
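This kind of provenance metadata follows the C2PA “Content Credentials” pattern, in which a manifest is embedded in the media file itself. As a rough illustration (this is not OpenAI’s actual format or a real C2PA parser), a naive check might scan a file’s raw bytes for the “c2pa” label that C2PA manifest boxes carry; the same sketch shows why re-encoding a video silently strips the evidence:

```python
# Illustrative sketch only: C2PA "Content Credentials" manifests live in
# embedded boxes whose labels contain the string "c2pa". This naive byte
# scan detects only the presence of such a label; it is not a real parser,
# and re-encoding a video writes fresh bytes that omit the metadata
# entirely, which is why metadata alone cannot stop determined actors.

def has_c2pa_label(data: bytes) -> bool:
    """Return True if the raw bytes appear to carry a C2PA manifest label."""
    return b"c2pa" in data

# A hypothetical original carries the label; a re-encoded copy does not.
original = b"\x00\x00ftypmp42 ... c2pa.manifest ..."
reencoded = b"\x00\x00ftypmp42 ... plain stream ..."
```

Running `has_c2pa_label` on the two example byte strings returns `True` for the original and `False` for the re-encoded copy, mirroring the removability problem experts describe below.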

Yet many of these layers are easily removable, said Sid Srinivasan, a computer scientist at Harvard University. “Visible watermarks and metadata add some friction and will deter casual misuse, but they are fairly easy to remove and won’t stop more determined actors.”

Srinivasan said an invisible watermark and associated detection tool would likely be the most reliable approach. “Ultimately, video hosting platforms will likely need access to detection tools like this, and there is no clear timeline for broader access to these internal tools.”
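The invisible-watermark approach Srinivasan favors pairs an embedder with a matching detector. A toy sketch of the idea (not OpenAI’s method, which is not public) hides one payload bit in the least-significant bit of each 8-bit pixel value; real provenance watermarks spread the signal redundantly so it survives compression, which this toy does not:

```python
# Toy illustration of an invisible watermark: overwrite each pixel's
# least-significant bit with a payload bit, then recover the payload with
# a matching detector. The change is imperceptible to a viewer, but only
# someone running the detector can find it, which is why broader access
# to detection tools matters for video-hosting platforms.

def embed_bits(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite each pixel's least-significant bit with a payload bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def detect_bits(pixels: list[int]) -> list[int]:
    """Read the hidden payload back out of the least-significant bits."""
    return [p & 1 for p in pixels]
```

Unlike a visible watermark, nothing in the output reveals the mark without the detector, so platforms without access to the detection tool cannot verify provenance at all.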

Wenting Zheng, assistant professor of computer science at Carnegie Mellon University, echoed this view, saying: “To automatically detect AI-generated materials on social media posts, it would be beneficial for OpenAI to share its image, audio and video tracing tool with platforms to help people identify AI-generated content.”

When asked if OpenAI had shared these detection tools with other platforms like Meta or X, an OpenAI spokesperson referred NBC News to a general technical report. The report does not provide such detailed information.

To better identify authentic images, some companies are using AI to detect AI results, according to Ben Colman, CEO and co-founder of Reality Defender, a deepfake detection startup.

“Human beings, even those trained on the problem, as some of our competitors are, are fallible and miss what is invisible or inaudible,” Colman said.

At Reality Defender, “AI is used to detect AI,” Colman told NBC News. “AI-generated videos can become more realistic for you and me, but AI can see and hear things we can’t.”

Similarly, McAfee’s Scam Detector software “listens to the audio of a video for AI fingerprints and analyzes it to determine whether the content is authentic or AI-generated,” according to Steve Grobman, chief technology officer at McAfee.

However, Grobman added: “New tools make fake videos and audios look more real all the time, and 1 in 5 people told us they or someone they know has already been the victim of a deepfake scam.”

The quality of deepfakes also differs across languages: current AI tools perform much better in widely used languages like English, Spanish, or Mandarin than in less common ones.

“We are regularly evolving the technology as new AI tools emerge, and we are expanding beyond English to cover more languages and contexts,” Grobman said.

Concerns about deepfakes have made headlines before. Less than a year ago, many observers predicted that the 2024 elections would be overrun by deepfakes. Those predictions largely failed to materialize.

However, until this year, AI-generated media, such as images, audio and video, were largely distinguishable from real content. Many commentators found the models released in 2025 particularly realistic, threatening the public’s ability to distinguish real human-created information from AI-generated content.

Google’s Veo 3 video generation model, released in May, was called “terribly accurate” and “dangerously realistic” at the time, prompting one critic to ask, “Are we doomed?”

James Walker – Technology Correspondent. Writes about AI, Apple, Google, and emerging innovations.
