Threat actors are using sponsored social media posts and ads offering enhanced versions of ChatGPT, Bard, and other AI tools to lure victims into downloading dangerous malware, a security firm has discovered.
Once downloaded from Facebook, the malware reportedly steals information saved in victims’ browsers, including passwords and crypto wallets.
A combination of sophisticated tactics employed by the threat actors to make their pages look legitimate – as well as unsuspecting users engaging with the sponsored posts – has helped the AI scam content spread rapidly across Facebook.
Dangerous Fake Chatbots Lurking
Cybersecurity research firm Check Point has uncovered a group of cybercriminals impersonating AI chatbots like ChatGPT, Jasper AI, Google’s Bard, and AI image generator MidJourney on Facebook.
Image Credit: Check Point
The threat actors are creating pages on the Meta-owned social media platform, which then reach the news feeds of victims organically through user engagement, or through sponsored advertisements.
Many of the posts made to these Facebook pages reference fake enhancements and upgrades, such as GPT-5, Smart Bard, and Bard Version 2 – none of which exist.
The majority of the Facebook pages, Check Point says, “lead to similar type landing pages which encourage users to download password-protected archive files that are allegedly related to generative AI engines”.
Fake Facebook Pages Are Worryingly Convincing
Check Point also revealed how the malicious actors have gone to great lengths to ensure that their victims don’t suspect they’re about to be preyed upon.
The security firm cites the example of one of the pages – for AI image generator MidJourney – which has amassed 1.2 million likes. For many users, a figure that large operates as a powerful trust signal.
The cybercriminals have also cleverly interspersed legitimate links to MidJourney’s actual websites with links to their own malicious landing pages.
Other tactics deployed to convey legitimacy include using bot accounts to leave positive comments on posts published by the fake pages.
Protecting Yourself: Recognizing The Scams
If you were to click through on one of these scams – as Check Point did during its investigation – infostealers such as Doenerium or libb1.py would be downloaded onto your device and your personal data immediately sent to a Discord server.
Even though this is quite a sophisticated scam, and the perpetrators have taken many steps to ensure that victims don’t suspect a thing, there are still telltale signs that it might not be what it seems – and being able to spot them can save your skin.
The promise of “enhanced” versions of chatbots, for example, should ring alarm bells, regardless of how legitimate a page looks. A quick Google search will tell you that these tools don’t actually exist.
It’s also worth remembering what doesn’t prove something online is legitimate. Links to real companies can be added to any website, for instance, and bot accounts are cheap to acquire, so a flood of positive comments doesn’t really mean anything.
If you want to use an AI chatbot like Bard or ChatGPT, your safest bet is to go to Google and find the real version of the website, rather than attempting to visit it through a link on social media or through a messaging app like WhatsApp, which is also a hotbed for scams.
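For readers who want to automate this kind of check, the idea of “only trust the real domain” can be sketched in a few lines of Python. This is a minimal illustration, not a security tool: the allowlist below is an assumption for the example, and an exact hostname match is deliberately strict so that lookalike hosts (a favourite scammer trick) fail the check.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official domains (an assumption for this sketch --
# always confirm the genuine domain yourself, e.g. via a search engine).
OFFICIAL_DOMAINS = {"chat.openai.com", "bard.google.com", "www.midjourney.com"}

def is_official_link(url: str) -> bool:
    """Return True only when the URL's hostname exactly matches a known
    official domain. A lookalike such as 'chat.openai.com.evil.example'
    does not match, because we compare the whole hostname, not a prefix."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS

print(is_official_link("https://chat.openai.com/"))               # True
print(is_official_link("https://chat.openai.com.evil.example/"))  # False
```

The key design choice is matching the full hostname rather than checking whether a trusted name merely appears somewhere in the URL – substring checks are exactly what typosquatted and subdomain-spoofed links are built to defeat.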
When something as big as AI comes along, scammers are going to jump on it – so treat opportunities and offers that look like they’re jumping on the bandwagon with caution.