Meta revealed that it has removed nearly 4,800 fake accounts associated with a China-based influence campaign. The campaign’s goal was to spread polarizing political content about United States politics, particularly in the lead-up to the 2024 presidential election.
It was one of two China-based campaigns detected by Meta in the third quarter of 2023, and it specifically targeted US politics and US-China relations.
Meta disclosed that the individuals behind this activity posted in English, addressing various aspects of US politics and criticizing both ends of the political spectrum.
The content shared by these fake accounts included copy-pasted partisan content from individuals on another social media platform, X.
The network of fake accounts, according to Meta, reposted content from both liberal and conservative sources, all while masquerading under fake identities.
The intention behind this approach remains unclear: it may have aimed to amplify partisan tensions, build audiences among specific political supporters, or lend an air of authenticity to the fake accounts by having them share genuine content.
Meta’s report indicates that it has disrupted a total of five influence campaigns based in China throughout the year, underscoring China’s growing role as an actor attempting to manipulate public opinion on social media.
However, Meta did not attribute the network to the Chinese government. The tech giant disclosed that it had also dismantled a network based in Russia during the third quarter.
This network was found to be spreading content related to Russia’s invasion of Ukraine and operated under fictitious media brands. The nature of these networks highlights the global reach of disinformation campaigns.
Meta’s actions come amid growing concerns that tech platforms, including Facebook and X, could be exploited to sow divisions and discord in the run-up to the 2024 presidential elections.
The US Department of Homeland Security had warned about foreign adversaries leveraging new technologies, including artificial intelligence.
The episode highlights the evolving industry of online disinformation, with state-sponsored actors deploying sophisticated tactics. The use of fake accounts to reshare authentic content from diverse sources points to a nuanced strategy aimed at manipulating public perception and exacerbating existing partisan divisions.
Meta’s Chief Investigator, Ben Nimmo, noted that while these networks may still struggle to build large audiences, they serve as a warning that foreign threat actors are attempting to reach people across the internet.
The company’s efforts to safeguard election integrity and democracy, as evidenced by its actions against fake social media networks, have faced criticism.
Critics argue that Meta’s focus on fake accounts deflects attention from its responsibility for the misinformation already present on its platforms.
The platform has been scrutinized for allowing paid advertisements containing false claims about the 2020 US election.
Meta has announced a new artificial intelligence policy that will require political ads containing AI-generated content to bear disclaimers.
However, questions linger about the effectiveness of such policies and the responsibility of social media platforms in curbing the spread of misinformation.
As the 2024 election approaches, the challenges for social media platforms are expected to intensify. Not only will many countries, including the US, hold national elections, but the emergence of AI programs also raises concerns about the potential for creating lifelike audio and video content that could mislead voters.
While Meta’s actions against fake accounts are part of its commitment to protecting election integrity, the issue of regulating social media platforms remains a challenge.
Calls for laws addressing algorithmic recommendations, misinformation, deepfakes, and hate speech have come from both Democrats and Republicans.