Recent reports indicate that online influence operations from Russia, China, Iran, and Israel are leveraging artificial intelligence (AI) to manipulate public opinion. These operations, as outlined in a new report from OpenAI, have utilized AI tools, including ChatGPT, to generate social media comments in various languages, create fake account names and bios, produce images and cartoons, and debug code.
OpenAI’s report marks a significant step for the company, now a major player in the AI field. Since its public launch in November 2022, ChatGPT has amassed over 100 million users. Despite the proliferation of content made possible by AI tools, OpenAI found that these influence operations have not gained substantial traction with real audiences. In many instances, the limited engagement came from users who identified the posts as fake.
Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, highlighted the persistent challenge for these operations: convincing real people to engage with their content. This sentiment is echoed by the quarterly threat report from Facebook’s parent company, which also noted the use of AI in recent covert operations. According to that report, however, the advanced technology has not hindered the company’s ability to detect and counteract these efforts.
The rise of generative AI, capable of quickly producing realistic audio, video, images, and text, is creating new opportunities for fraud, scams, and manipulation. The potential for AI-generated fakes to disrupt elections is a growing concern as numerous countries, including the U.S., India, and members of the European Union, prepare for upcoming elections.
In the last three months, OpenAI has banned accounts linked to five covert influence operations. These operations, defined as attempts to manipulate public opinion or political outcomes without disclosing the true identity or intentions of the actors, include well-known campaigns such as Russia’s Doppelganger and China’s Spamouflage. Doppelganger, associated with the Kremlin, is notorious for spoofing legitimate news websites to undermine support for Ukraine. Spamouflage, a vast Chinese network, operates across various social media platforms and internet forums, promoting pro-China messages and attacking Beijing’s critics.
Both Doppelganger and Spamouflage utilized OpenAI tools to generate multilingual comments for social media. The Russian network also used AI to translate articles from Russian into English and French and to convert website content into Facebook posts. Spamouflage leveraged AI to debug code for a website targeting Chinese dissidents, analyze social media posts, and research current events. Many posts from fake Spamouflage accounts received interactions only from other fake accounts within the same network.
Another previously unreported Russian network, banned by OpenAI, concentrated on spamming the messaging app Telegram. It employed OpenAI tools to debug code for an automated posting program and to generate comments for its accounts. Similar to Doppelganger, this operation aimed to undermine support for Ukraine through posts about U.S. and Moldovan politics.
Additionally, OpenAI and Facebook’s parent company recently disrupted a campaign linked to a political marketing firm in Tel Aviv called Stoic. Fake accounts posed as Jewish students, African-Americans, and concerned citizens, posting about the Gaza conflict, praising Israel’s military, and criticizing antisemitism and the U.N. relief agency for Palestinian refugees. These posts targeted audiences in the U.S., Canada, and Israel. Stoic has since been banned from the social media platforms and sent a cease-and-desist letter.
The Israeli operation used AI to generate and edit articles and comments across various platforms and to create fictitious personas and bios. Some activities from this network also targeted elections in India. Notably, none of the disrupted operations relied solely on AI-generated content. Nimmo emphasized that while AI enhances the volume and quality of produced content, it does not solve the fundamental challenge of distribution.
The key takeaway from these findings is that AI-generated content alone cannot ensure successful influence operations. Effective distribution and credibility are crucial. Companies like OpenAI must remain vigilant, as influence operations that initially struggle can eventually break through if left unchecked. This vigilance is essential to prevent these operations from gaining ground and impacting public opinion on a larger scale.
Major Points
- Russia, China, Iran, and Israel are using AI tools like ChatGPT to manipulate public opinion through social media comments, fake account bios, and content creation.
- Despite the advanced technology, these influence operations have not significantly engaged real audiences; much of the limited engagement came from users calling the posts out as fake or from other fake accounts in the same networks.
- Operations like Russia’s Doppelganger and China’s Spamouflage have used AI to generate multilingual content and attack critics, but have struggled with credible distribution.
- OpenAI and Facebook’s parent company have recently disrupted several AI-driven campaigns, including one linked to a political marketing firm in Tel Aviv that targeted audiences with fake personas and slanted content.
- The report underscores the necessity for continuous monitoring to prevent influence operations from gaining traction, despite their initial failures.
James Kravitz – Reprinted with permission of Whatfinger News