OpenAI takes down covert operations tied to China and other countries

OpenAI CEO Sam Altman speaks during a conference in San Francisco this week. The company said it has recently taken down 10 influence operations that were using its generative artificial intelligence tools. Four of those operations were likely run by the Chinese government. (Justin Sullivan/Getty Images)

Chinese propagandists are using ChatGPT to write posts and comments on social media sites — and also to create performance reviews detailing that work for their bosses, according to OpenAI researchers.

That use of the company's artificial intelligence chatbot to create internal documents, along with a separate Chinese operation's use of it to create marketing materials promoting its work, comes as China ramps up its efforts to influence opinion and conduct surveillance online.

OpenAI, the company behind generative artificial intelligence tools such as ChatGPT, announced Thursday that it had taken down influence operations tied to Russia, China and Iran.

"What we're seeing from China is a growing range of covert operations using a growing range of tactics," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said on a call with reporters about the company's latest threat report.

In the last three months, OpenAI says it disrupted 10 operations using its AI tools in malicious ways, and banned accounts connected to them. Four of the operations likely originated in China, the company said.

The China-linked operations "targeted many different countries and topics, even including a strategy game. Some of them combined elements of influence operations, social engineering, surveillance. And they did work across multiple different platforms and websites," Nimmo said.

One Chinese operation, which OpenAI dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook, and other websites, in English, Chinese, and Urdu. Subjects included the Trump administration's dismantling of the U.S. Agency for International Development — with posts both praising and criticizing the move — as well as criticism of a Taiwanese game in which players work to defeat the Chinese Communist Party.

In many cases, the operation generated a post as well as comments replying to it, behavior OpenAI's report said "appeared designed to create a false impression of organic engagement." The operation used ChatGPT to generate critical comments about the game, and then to write a long-form article claiming the game received widespread backlash.

The actors behind Sneer Review also used OpenAI's tools to do internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation," OpenAI said. "The social media behaviors we observed across the network closely mirrored the procedures described in this review."

Another operation OpenAI tied to China focused on collecting intelligence by posing as journalists and geopolitical analysts. It used ChatGPT to write posts and biographies for accounts on X, to translate emails and messages from Chinese to English, and to analyze data. That included "correspondence addressed to a US Senator regarding the nomination of an Administration official," OpenAI said, but added that it was not able to independently confirm whether the correspondence was sent.


"They also used our models to generate what looked like marketing materials," Nimmo said. In those, the operation claimed it conducted "fake social media campaigns and social engineering designed to recruit intelligence sources," which lined up with its online activity, OpenAI said in its report.

In its previous threat report in February, OpenAI identified a surveillance operation linked to China that claimed to monitor social media "to feed real-time reports about protests in the West to the Chinese security services." The operation used OpenAI's tools to debug code and write descriptions that could be used in sales pitches for the social media monitoring tool.


In its new report published on Wednesday, OpenAI said it had also disrupted covert influence operations likely originating in Russia and Iran, a spam operation attributed to a commercial marketing company in the Philippines, a recruitment scam linked to Cambodia, and a deceptive employment campaign bearing the hallmarks of operations connected to North Korea.

"It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together," Nimmo said. However, he said the operations were largely disrupted in their early stages and didn't reach large audiences of real people.

"We didn't generally see these operations getting more engagement because of their use of AI," Nimmo said. "For these operations, better tools don't necessarily mean better outcomes."

Do you have information about foreign influence operations and AI? Reach out to Shannon Bond through encrypted communications on Signal at shannonbond.01
