Meta has deleted thousands of fake accounts used by a Chinese network to spread disinformation across its social media platforms, in what has been dubbed “the largest known cross-platform covert influence operation in the world”.
The accounts – many of which had operated on Facebook and Instagram for years – mainly targeted users in Australia. Meta said there was ample evidence that the campaign extended across YouTube, Medium, X/Twitter, and various other sites.
The operation has been tracked since 2019, when social media analytics firm Graphika and the Australian Strategic Policy Institute – among the first to spot it – dubbed it “Spamouflage Dragon”, or simply “Spamouflage”.
Meta Deletes Thousands of Fake Accounts
Meta has deleted more than 7,000 Facebook accounts and almost 1,000 Facebook pages, as well as 15 Instagram accounts and 15 Facebook groups, the company said in its Q2 Adversarial Threat Report, released this week.
Over half a million people followed at least one of these pages or groups, while around $3,500 was spent on advertising across the accounts.
The campaign primarily targeted users in Australia, as well as Taiwan, the US, the United Kingdom, Japan, and Chinese speakers living abroad, Meta told the Guardian.
What Was Spamouflage Designed to Achieve?
Spamouflage has been running since around 2018, according to the research teams and analytics firms that have been tracking its activity ever since.
The content posted by the spam network was of strikingly low quality, largely consisting of flattering commentary on China, negative portrayals of US and UK policies, and attacks on journalists and activists who have publicly criticized China.
Graphika’s 2019 report – which includes extensive examples of YouTube accounts used by these sorts of spam networks – shows how propaganda was interspersed with waves of unrelated content, in an effort to “dilute” the posts.
In its own report, Meta notes that releasing the same articles across many different platforms simultaneously decreased the chances that any given article would be flagged for takedown.
It was after this reporting that the operation sought to diversify its channels, moving beyond larger platforms like Facebook to blogging sites such as Medium and audio platforms such as SoundCloud.
Operations were carried out by a number of different actors across a range of locations in China. However, content direction – as well as internet infrastructure – appeared to come from a more centralized source: the report identified at least one shared location from which much of the activity seemed to originate.
Spamouflage Doesn’t Appear to Have Been Successful
Unlike in other disinformation campaigns, the quality of the posts was extremely low. One, for instance, suggested that Queen Elizabeth II’s death had been hastened by former UK Prime Minister Liz Truss’s appointment.
Another attempted to spread the false claim that Covid-19 actually originated in the US, citing a fake research paper riddled with spelling mistakes.
With this in mind, it’s hard to say that the campaign had any quantifiable impact on the social media users it was supposed to be targeting.
What’s more, Meta says that many of the 560,000 accounts following pages maintained by the spam network were themselves bots, likely acquired along with the pages. Much of the engagement on the posts – such as comments – came from accounts linked to the network itself.
Although the operation seems to have failed to achieve its desired effect, it’s a grim reminder of how political actors can leverage social media networks for disinformation operations – and a sign that these sorts of tactics will persist for years to come.