Meta Deletes Thousands of Facebook and Insta ‘Spamouflage’ Accounts

In a new report, the social media giant reveals that it has thwarted a huge covert influence operation linked to China.

Meta has deleted thousands of fake accounts used by a Chinese network to spread disinformation across its social media platforms, in what the company has dubbed “the largest known cross-platform covert influence operation in the world”.

The accounts – many of which had operated on Facebook and Instagram for years – mainly targeted users in Australia. Meta said there was ample evidence that the campaign extended across YouTube, Medium, X/Twitter, and various other sites.

The operation, dubbed “Spamouflage Dragon” or simply “Spamouflage”, has been tracked since 2019 by social media analytics firm Graphika and the Australian Strategic Policy Institute, which were among the first to spot it.

Meta Deletes Thousands of Fake Accounts

Meta has deleted over seven thousand Facebook accounts and almost one thousand Facebook pages, as well as 15 Facebook groups and 15 Instagram accounts, the company said in its Q2 Adversarial Threat Report, released this week.

Over half a million people followed at least one of these pages or groups, while around $3,500 was spent on advertising across the accounts.

The campaign primarily targeted users in Australia, as well as Taiwan, the US, the United Kingdom, Japan, and Chinese speakers living abroad, Meta told the Guardian.


What Was Spamouflage Designed to Achieve?

Spamouflage has been running since around 2018, according to the research teams and analytics firms that have been tracking its operations since then.

The content posted by the spam network was of extremely low quality and largely consisted of flattering commentary about China, negative portrayals of US and UK policies, and attacks on journalists and activists who have publicly criticized China.

Graphika’s 2019 report – which includes extensive examples of YouTube accounts used by these sorts of spam networks – shows how propaganda was interspersed with waves of unrelated content, in an effort to “dilute” the posts.

In its own report, Meta notes that releasing the same articles simultaneously across a myriad of different platforms decreased the chances that any given article would be flagged for takedown.

It was after this reporting that the operation sought to diversify its channels, moving from larger platforms like Facebook to blogging sites like Medium and audio platforms like SoundCloud.

Operations were carried out by a number of different actors across a range of locations in China, but content direction – as well as internet infrastructure – appeared to come from a more centralized source. The report also identified at least one shared location from which a large amount of activity seemed to originate.

Spamouflage Doesn’t Appear to Have Been Successful

Unlike many other disinformation campaigns, the posts themselves were of extremely low quality. One, for instance, suggested that Queen Elizabeth II’s death had been hastened by former UK Prime Minister Liz Truss’s appointment.

Another post attempted to spread the false claim that Covid-19 actually originated in the US, citing a fake research paper riddled with spelling mistakes.

With this in mind, it’s hard to say that the campaign had any quantifiable impact on the social media users it was supposed to be targeting.

What’s more, Meta says that many of the 560,000 accounts following pages maintained by the spam network were themselves bots – likely already attached to those pages when the network acquired them. Much of the engagement on the posts, such as comments, was left by accounts linked to the network itself.

Although the operation appears to have failed to achieve its desired effect, it’s a grim reminder that political actors can leverage social media networks for disinformation operations – and that these sorts of tactics will persist for years to come.


Written by:
Aaron Drapkin is a Lead Writer at Tech.co. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.