OpenAI says Russian and Israeli groups used its tools to spread disinformation

OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are used for covert influence operations, revealing that the company disrupted disinformation campaigns from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and publish propaganda content on social media platforms and to translate their content into different languages. None of the campaigns gained traction or reached a wide audience, according to the report.

As generative AI has become a booming industry, researchers and lawmakers are widely concerned about its potential to increase the quantity and quality of misinformation online. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried, with mixed results, to allay these concerns and place guardrails on their technology.

OpenAI’s 39-page report is one of the most detailed accounts by an artificial intelligence company of the use of its software for propaganda purposes. OpenAI claimed its researchers discovered and banned accounts associated with five covert influence operations over the past three months, emanating from a mix of state and private actors.

In Russia, two companies created and distributed content critical of the United States, Ukraine and several Baltic countries. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full-length articles attacking the United States and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts that created a range of content, including posts accusing American student protests against Israel’s war in Gaza of being anti-Semitic.

Several of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. The US Treasury in March sanctioned two Russian men allegedly behind one of the campaigns detected by OpenAI, while Meta also banned Stoic from its platform this year for violating its policies.

The report also highlights how generative AI is being incorporated into disinformation campaigns to improve certain aspects of content production, such as crafting more convincing posts in foreign languages, but that AI is not propagandists’ only tool.

“All of these operations used AI to some extent, but none used it exclusively,” the report said. “Instead, the AI-generated material was just one of many types of content they published, alongside more traditional formats, such as manually written text or memes copied from the Internet.”

While none of the campaigns had a notable impact, their use of the technology shows how bad actors are finding that generative AI allows them to scale up their propaganda output. Writing, translating and publishing content can all be done more efficiently with AI tools, lowering the barrier to entry for creating disinformation campaigns.

Over the past year, malicious actors have used generative AI in countries around the world to attempt to influence politics and public opinion. Deepfake audio, AI-generated images, and text campaigns have all been used to disrupt election campaigns, leading to increased pressure on companies like OpenAI to restrict the use of their tools.

OpenAI said it plans to periodically release similar reports on covert influence operations, as well as remove accounts that violate its policies.

News source: www.theguardian.com