Fake news and misleading political influencer campaigns were among the content taken down so far this year by Meta.
The social media firm says it has stopped around 20 covert influence operations around the world this year, with the majority of the malicious activity coming from Russia.
Already this month, Meta has released a report revealing it removed two million scam accounts this year alone. In both cases, AI has played a surprisingly small role.
What Did Meta Remove?
According to Nick Clegg, the president of global affairs at Meta, the company was tackling more than one new operation every three weeks.
The “coordinated inauthentic behavior” incidents recorded in the company’s Adversarial Threat Report included one Russian network that used fake news websites and Facebook accounts to target people in Georgia, Armenia, and Azerbaijan.
The Guardian reports that a different operation used AI to create fake news websites “…using brands such as Fox News and the Telegraph to try to weaken western support for Ukraine, and used Francophone fake news sites to promote Russia’s role in Africa and to criticize that of France.”
AI Plays a Role
Clegg reported that it was “striking” how little AI had been used to try to influence Meta users. He said: “There were all sorts of warnings about the potential risks of things like widespread deepfakes and AI enabled disinformation campaigns. That’s not what we’ve seen from what we’ve monitored across our services. It seems these risks did not materialize in a significant way, and that any such impact was modest and limited in scope.”
AI did play a small role, though. In the month leading up to the US election, the company rejected just over 500,000 requests to generate AI images of public figures using Meta’s own tools, including images of Donald Trump, Kamala Harris, JD Vance, and Joe Biden.
Clegg warned against complacency, saying that Meta fully expects AI to play a larger role in the future and that the present status quo is “very, very likely to change.”
Elections Upcoming
With elections coming up next year in both Australia and Canada, the UK-based Centre for Emerging Technology and Security is warning that AI-enabled threats are already damaging democracies, and, like Clegg, has encouraged alertness.
Meta has been waging a war against Facebook and Instagram ‘Spamouflage’ accounts since around 2017 and has deleted thousands of fake accounts posting propaganda and fabricated content. The company has also changed its advertising rules so that AI-generated content must now be disclosed.
However, Meta is pushing for its monitoring functionality to be backed up by regulatory guidance, especially as AI-generated content becomes harder to spot and more prolific.