San Francisco — Meta on Thursday warned that deceptive online campaigns based in China were taking aim at 2024 elections in the United States and elsewhere.
The tech giant behind Facebook and Instagram said it had taken down five coordinated influence networks originating in China this year.
“Foreign threat actors are attempting to reach people across the Internet ahead of next year’s elections, and we need to remain alert,” Meta global threat intelligence lead Ben Nimmo said during a briefing about its latest security report.
Meta said it removed 4,789 fake Facebook accounts that were part of one campaign posting about US politics and US relations with China.
The accounts in the network praised China, bashed its critics, and copy-pasted real online posts by US politicians with the potential to stoke partisan divisions, according to the threat report.
“As election campaigns ramp up, we should expect foreign influence operations to try and leverage authentic parties and debate rather than creating original content themselves,” Nimmo said.
“We anticipate that if relations with China become an election topic in a particular country, we may see China-based influence operations pivot to target those debates.”
Meta tracked the source of the networks to China, but did not attribute them to the Chinese government.
The most prolific source of such networks continues to be Russia, with operations based in that country focusing primarily on undermining support for its war against Ukraine, according to Meta.
Websites associated with Russia-based campaigns recently shifted to using the war between Hamas and Israel to tarnish the image of the United States, the security report indicated.
Fictitious leaks
Meta’s security team expects efforts to sway coming elections to include bogus “leaks” of supposedly hacked material.
The company has already seen influence campaigns try to “hijack” heated political narratives, according to the security team.
“We hope that people will try to be deliberate when engaging with political content across the internet,” Nimmo said.
“For political groups, it’s important to be aware that heightened partisan tensions can play into the hands of foreign threat actors.”
Online deception campaigns extend beyond Meta to other social networks, blogs, chat forums, and websites, according to the security report.
Generative artificial intelligence (AI), which includes programs such as ChatGPT, is being used to crank out convincing bogus content for deception campaigns, Meta head of security policy Nathaniel Gleicher said during the briefing.
“Threat actors can use generative AI to create larger volumes of convincing content, even if they don’t have the cultural or language skills to speak to their audiences,” Gleicher said.
“Combined with the range of elections worldwide in 2024, this means that we all need to prepare for a larger volume of synthetic content, and our defenses need to continue evolving to meet that challenge.”
Source: AFP