In a first, OpenAI removes influence operations linked to Russia, China and Israel: NPR

OpenAI, the company behind generative artificial intelligence tools such as ChatGPT, announced Thursday that it has ended influence operations linked to Russia, China and Iran.

Stefani Reynolds/AFP via Getty Images

Online influence operations based in Russia, China, Iran and Israel are using artificial intelligence in their efforts to manipulate the public, according to a new report from OpenAI.

Malicious actors have used OpenAI’s tools, including ChatGPT, to generate social media comments in multiple languages, invent names and bios for fake accounts, create cartoons and other images, and debug code.

OpenAI’s report is the first of its kind from the company, which has quickly become one of the leading players in AI. ChatGPT has gained over 100 million users since its public launch in November 2022.

But even though AI tools have helped the people behind influence operations produce more content, make fewer mistakes, and create the appearance of engagement with their posts, OpenAI says the operations it found did not gain traction with real people or reach a wide audience. In some cases, what little genuine engagement their posts received came from users calling them fake.

“These operations may be using new technologies, but they still grapple with the old problem of how to convince people to fall for them,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.

This echoes the quarterly threat report that Facebook owner Meta released on Wednesday. Meta’s report says several of the covert operations it recently took down used AI to generate images, videos and text, but that the use of the cutting-edge technology has not affected the company’s ability to disrupt efforts to manipulate people.

The rise of generative artificial intelligence, capable of quickly and easily producing realistic audio, video, images and text, opens new avenues for fraud, scams and manipulation. In particular, the possibility of AI fakes disrupting elections is fueling fears as billions of people around the world head to the polls this year, including in the United States, India and the European Union.

Over the past three months, OpenAI has banned accounts linked to five covert influence operations, which it defines as “attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

These include two operations well known to social media companies and researchers: Russia’s Doppelganger and a sprawling Chinese network dubbed Spamouflage.

Doppelganger, which has been linked to the Kremlin by the US Treasury Department, is known for spoofing legitimate news sites to undermine support for Ukraine. Spamouflage operates across a wide range of social media platforms and internet forums, spreading pro-China messages and attacking critics of Beijing. Last year, Facebook owner Meta said Spamouflage was the largest covert influence operation it had ever disrupted and linked it to Chinese law enforcement.

Doppelganger and Spamouflage used OpenAI tools to generate comments in multiple languages that were posted on social media sites. The Russian network also used AI to translate articles from Russian into English and French and to turn website articles into Facebook posts.

The Spamouflage accounts used AI to debug the code of a website targeting Chinese dissidents, to analyze social media posts, and to research news and current events. Some posts from fake Spamouflage accounts received responses only from other fake accounts in the same network.

Another previously unreported Russian network banned by OpenAI focused its efforts on spamming the messaging app Telegram. It used OpenAI tools to debug code for a program that posted automatically to Telegram, and used AI to generate the comments its accounts posted on the app. Like Doppelganger, the operation’s efforts were broadly aimed at undermining support for Ukraine, via posts that weighed in on politics in the United States and Moldova.

Another campaign that OpenAI and Meta said they disrupted in recent months traces back to a political marketing firm in Tel Aviv called Stoic. Its fake accounts posed as Jewish students, African Americans and concerned citizens. They posted about the war in Gaza, praised the Israeli military, and criticized campus antisemitism and the United Nations relief agency for Palestinian refugees in the Gaza Strip, according to Meta. The posts targeted audiences in the United States, Canada and Israel. Meta banned Stoic from its platforms and sent the company a cease-and-desist letter.

OpenAI said the Israeli operation used AI to generate and edit articles and comments posted on Instagram, Facebook and X, as well as to create fictitious personas and biographies for fake accounts. The report also found the network had activity targeting elections in India.

None of the operations disrupted by OpenAI used only AI-generated content. “It wasn’t about abandoning human generation and moving to AI, but about mixing the two,” Nimmo said.

He said that while AI offers threat actors some advantages, including increasing the volume of what they can produce and improving translations across languages, it does not help them overcome the main challenge of distribution.

“You can generate content, but if you don’t have the distribution systems to present it to people in a way that seems credible, then you’re going to have a hard time getting it out there,” Nimmo said. “And really, what we see here is this dynamic playing out.”

But companies like OpenAI must remain vigilant, he added. “Now is not the time for complacency. History shows that influence operations that have spent years failing can suddenly explode if no one is looking for them.”

Source: www.npr.org
