Back in February, OpenAI shut down accounts that were busy developing Chinese surveillance tools aimed at the West. These tools were designed to snoop on social media, look for anti-China sentiment and protests, and report back to Chinese authorities. Now, OpenAI has announced it has disrupted even more shady operations, and not just those tied to China.
In a report released Thursday, the company detailed how it recently dismantled ten separate operations that were misusing its artificial intelligence tools. One of the China-linked groups, which OpenAI called "Sneer Review," used ChatGPT to churn out short comments for sites like TikTok, X, and Facebook. The topics varied, ranging from U.S. politics to criticism of a Taiwanese video game in which players work against the Chinese Communist Party. The operation even generated posts and then replied to them itself to simulate genuine discussion. Particularly interesting is that the group also used ChatGPT to write internal performance reviews describing how well it was running its influence campaign.
Another operation with ties to China involved individuals posing as journalists and geopolitical analysts. They used ChatGPT to write social media posts and biographies for their fake accounts on X, translate messages from Chinese to English, and analyze data. OpenAI mentioned that this group even analyzed correspondence addressed to a U.S. Senator. On top of that, these actors used OpenAI's models to create marketing materials, basically advertising their services for running fake social media campaigns and recruiting intelligence sources.
OpenAI also disrupted operations that likely originated in Russia and Iran. There was also a spam operation run by a marketing company in the Philippines, a recruitment scam linked to Cambodia, and a deceptive job campaign that looked like something North Korea might orchestrate.
Ben Nimmo, from OpenAI's intelligence team, noted the wide range of tactics and platforms these groups are using. However, he also said these operations were mostly caught early and did not manage to fool large numbers of real people. According to Nimmo, "We didn't generally see these operations getting more engagement because of their use of AI. For these operations, better tools don't necessarily mean better outcomes."