11 Convincing ChatGPT and AI Scams to Watch out for in 2024

ChatGPT and AI are proving a goldmine for scammers, offering plenty of new opportunities. We tell you how to spot the scams.

Since ChatGPT launched in November 2022, it’s proved endlessly useful, with workers all around the world finding innovative ways to apply artificial intelligence every day. Whether it's ChatGPT plugins helping you get stuff done or AI resume builders sprucing up your CV, the power and usefulness of AI isn't really in doubt anymore.

Unfortunately, it can also be used for insidious ends, such as writing malware scripts and phishing emails. Along with using artificial intelligence to orchestrate scams, hackers have spent the past six to eight months leveraging the hype around it to extort people out of money and steal their information via fake investment opportunities and scam applications.

AI scams are among the toughest to identify, and many people don't use antivirus software that can warn them about suspicious online activity. With that in mind, we’ve put together this guide to the common tactics that have been observed in the wild recently, as well as some ruses that have newly emerged in 2024.

AI Scams: What Are They, and How Common Are They?

As we alluded to in the introduction, “AI scams” can refer to two different types of scams, examples of which have sprung up regularly since 2023.

In AI-assisted scams, artificial intelligence helps the scammer actually commit the scam, such as by writing the text for a phishing email. In general AI scams, the hacker leverages the popularity and zeitgeisty nature of AI as a topic to intrigue curious targets, as with fake ChatGPT app scams. This works because so many people are trying either to make money out of ChatGPT or to use it to save time in their jobs.

California's Department of Financial Protection and Innovation (DFPI) has charted a rise in AI investment scams, while cybersecurity firms such as McAfee have observed an uptick in AI voice cloning scams in recent months.

ChatGPT’s explosive launch also led to a wave of malicious domains being created, which we discuss later in this article.

Apple co-founder Steve Wozniak – a recent signatory to a letter calling for a pause in AI development – has warned that artificial intelligence will make scams much harder to spot and allow malicious actors to sound increasingly convincing. The era of AI-assisted scams is here.

Much like ransomware-as-a-service lowered the level of technical ability needed to attack a company, AI tools like ChatGPT mean pretty much anyone can sound convincing, so in theory, a much larger pool of bad actors can now orchestrate scams effectively.

1. AI-Assisted Phishing Scams

Phishing scams have been around for years – scammers will send out emails or text messages masquerading as a legitimate company, such as Microsoft, in an attempt to get you to click on a link, which will lead you to a malicious website.

From there, a threat actor can inject malware into your device or steal personal information such as a password. Historically, one of the easiest ways to spot them has been spelling and grammar errors that a company as prestigious as Microsoft would simply not make in an official email to its customers.

Now, however, a simple prompt will get ChatGPT to generate clean, fluid copy that doesn’t contain any spelling mistakes. This makes it a lot harder to distinguish between legitimate correspondence and phishing attacks.

If you explicitly ask ChatGPT to create an email for the purpose of phishing, the chatbot refuses to do so. However, when we asked ChatGPT to produce two different types of emails that could feasibly be used as templates for a phishing scam, it obliged – surprisingly, these sorts of requests don't seem to be blocked under its content rules.

Protecting yourself from AI phishing scams

If you receive an email that seems like it’s from a legitimate company, but it’s trying to inject a sense of urgency into your decision-making (like asking you to pay a fine, or log into your account to avoid it being deleted), treat it with extreme caution. This is a typical phishing tactic.

Remember, even if you think the email is most likely genuine, you can always open a fresh line of communication with the person or company that supposedly sent it.

For example, if you get a suspicious-looking email from your bank saying your account has been accessed by an unauthorized third party, don’t respond to the email – simply contact the bank’s customer service team yourself, using the number or address listed on its official website.
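
If you're comfortable with a little code, here's a minimal, purely illustrative sketch (Python, standard library only) of one extra check you can make before clicking anything: listing where the links in a suspicious email actually point, so you can compare the real destination domains with the company the message claims to come from. The class name and the sample email HTML below are invented for demonstration purposes.

# Illustrative sketch only: list the real destinations of links in an email's HTML.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the destination of every <a> tag in the message
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Invented example of a phishing-style email body
sample_email_html = (
    '<p>Your account will be deleted. '
    '<a href="http://secure-login.example-bank.ru/verify">Log in now</a> '
    'to keep it active.</p>'
)

parser = LinkLister()
parser.feed(sample_email_html)
for link in parser.links:
    print(urlparse(link).hostname)  # prints "secure-login.example-bank.ru" – not your bank's real domain

If the domains printed out don't match the company's genuine website, treat the email as hostile and contact the company through its official channels instead.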

2. Social Media AI Scams

Recently, cybersecurity firm Check Point spotted a new kind of AI scam doing the rounds on Facebook. Its researchers observed threat actors creating fake Facebook pages purporting to advertise “enhanced” versions of AI tools, such as “Smart Bard”. In reality, these products do not exist.

In the scam, threat actors use sponsored posts from these Facebook pages to target unsuspecting users with advertisements that include links to malicious domains. Once users visit the domains, the scammers attempt to coax them into downloading ‘AI software' that's actually info-stealing malware.

Users who fall for the scam and download the software have their passwords and other sensitive information extracted from their devices and sent to a server maintained by the threat actors.

What's concerning about this scam is the great lengths the threat actors have gone to in order to make their Facebook pages look legitimate, including using an army of bots to leave positive comments all over their posts. Some of the pages, Check Point says, had huge numbers of likes. At a glance, it all looks pretty legitimate – which is precisely what makes it so dangerous.

Fake AI chatbot Facebook pages identified by Check Point Research

(Image Credit: Check Point)

Protecting yourself from social media AI scams

Remember, just because a page on social media has lots of likes and plenty of engagement on its posts doesn't necessarily mean it's legitimate – it's surprisingly easy to purchase or artificially manufacture this air of legitimacy on social media.

Social media is a hotbed for scams, so pages you've never seen before that promise enhanced versions of existing software should be treated with suspicion. On top of this, a quick Google search will reveal that programs like “GPT-5” and “Bard V2” don't actually exist (yet, anyway). These sorts of wild promises are telltale signs that something may just be too good to be true – and in this case, it is.

Similarly, the presence of legitimate-looking links – which can easily be inserted into page description boxes and posts – does not negate the possibility that other links might be malicious. If you want to download a chatbot, go directly to the company or organization's website, rather than following a link on social media.

3. AI Voice Cloning Scams

AI voice scams are a type of AI-assisted scam that has been making headlines in recent months. A global McAfee survey recently found that 10% of respondents had already been personally targeted by an AI voice scam, while a further 15% reported that they knew someone who had been targeted.

11% of US victims who lost money during AI voice cloning scams were conned out of $5,000–$15,000.

In AI voice scams, malicious actors scrape audio data from a target’s social media accounts, then run it through a text-to-speech app that can generate new content in the style of the original audio. These sorts of apps can be accessed online for free and have legitimate, non-nefarious uses.

The scammer will create a voicemail or voice note in which their target appears to be in distress and in desperate need of money. This is then sent to the target’s family members, in the hope that they’ll be unable to distinguish between the voice of their loved one and an AI-generated version of it.

Protecting yourself from AI voice scams

The Federal Trade Commission (FTC) advises consumers to stay calm if they receive correspondence purporting to be from a loved one in distress, and to try calling the person who has supposedly contacted them – using a phone number they know is genuinely theirs – to confirm the story is real.

If you can’t get an answer on their normal number, attempt to verify their whereabouts by contacting people close to them – and check apps such as Find My Friends, if you use them, to see whether they’re in a safe location.

If you cannot verify their whereabouts, it's crucial that you contact law enforcement immediately. And if you work out that it is a scam, make sure you report it to the FTC directly.

4. Fraudulent ChatGPT App Scams

Just like any other major tech craze, if people are talking about it – and, more importantly, searching for it – scammers are going to leverage it for nefarious means. ChatGPT is a prime example of this.

A recent report from Sophos found a plethora of ChatGPT-adjacent apps that it has dubbed “fleeceware”. Fleeceware apps provide a free program with limited functionality and then bombard users with in-app adverts until they sign up for an overpriced subscription.

According to the cybersecurity firm, “using a combination of advertising within and outside of the app stores and fake reviews that game the rating systems of the stores, the developers of these misleading apps are able to lure unsuspecting device users into downloading them”.

One fake ChatGPT app called Genie, which offers subscriptions at $7 a week or $70 a year, made $1 million in a single month, according to SensorTower. Others have made tens of thousands of pounds. Another, called “Chat GBT” on the Android store, was specifically named in Sophos's report:

Chat GBT app fleeceware

(Image Credit: Sophos)

According to the cybersecurity firm, the “pro” features that users end up paying a hefty sum for are “essentially the same” as the free version. They also report that, before the app was taken down, the reviews section was littered with “comments from people who downloaded the app and found it didn’t work – either it only showed ads or failed to respond to questions when unlocked.”

Protecting yourself from fake ChatGPT app scams

The simplest way to ensure you don’t incur these sorts of subscription fees – or download unwanted malware – is to simply not download the apps. Both iPhone and Android users can now download the official ChatGPT app instead. It'll be intriguing to see whether its arrival marks the demise of the fake ChatGPT apps currently populating the app stores.

Alternatively, both iOS and Android users can add a ChatGPT web link to their home screen, and if you're an iPhone user, you can create a Siri shortcut that will take you straight to ChatGPT on the web. There's little practical difference between the home screen shortcut and a native application in this context.

5. Fake ChatGPT Websites

Along with fake ChatGPT apps, there are also a bunch of fake ChatGPT websites out there, capitalizing on the huge search volume around the term.

In February 2023, Twitter user Alvosec identified four domains that were all distributing malware under ChatGPT-related names.

Some reports have noted that fake ChatGPT websites have been presenting OpenAI’s chatbot as a downloadable Windows application, rather than an in-browser application, allowing them to load malware onto devices.

Protecting yourself from fake ChatGPT websites

Remember, ChatGPT is an OpenAI product, and the only legitimate ways to access the chatbot are via the official mobile app or through OpenAI's own domain. Lookalike domains with “ChatGPT” in the name are not necessarily anything to do with the real, legitimate ChatGPT, and you shouldn't download ChatGPT from a third-party site as if it were a software client.

Strangely enough, the URL for the legitimate ChatGPT sign-up/login page doesn’t even have the word “ChatGPT” in it: https://chat.openai.com/auth/login.

You can also sign up via OpenAI's blog (https://openai.com/blog/chatgpt), but again, this is part of the OpenAI domain. If someone sends you a link to a ChatGPT site that doesn't lead to one of the above addresses, we’d advise not clicking on it, and navigating to the legitimate site via Google instead.
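
For the more technically inclined, here's a minimal, purely illustrative sketch (Python, standard library only) of the check described above: it extracts the hostname from a link and treats it as genuine only if it belongs to the openai.com domain. The trusted-host list and the function name are our own assumptions based on the addresses cited in this article, not an official allowlist from OpenAI.

# Illustrative sketch only: check whether a link actually points at the openai.com domain.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"openai.com", "chat.openai.com"}  # assumption: based on the addresses cited above

def looks_like_real_openai_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the trusted hosts or genuine subdomains of openai.com,
    # but reject lookalikes such as "openai.com.example.net"
    return host in TRUSTED_HOSTS or host.endswith(".openai.com")

print(looks_like_real_openai_link("https://chat.openai.com/auth/login"))      # True
print(looks_like_real_openai_link("https://chat-gpt-download.example.com"))   # False

A check like this only tells you which domain a link points to – it won't catch every trick – so when in doubt, type the address you know is genuine into your browser yourself.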

6. Celebrity Deepfake Scams

As deepfake technology becomes increasingly accessible, the number of misleading ads featuring AI-generated celebrities has exploded in recent months.

Take the recent fake Taylor Swift endorsement of Le Creuset cookware, for example. The Facebook advertisement – which features an AI-generated Taylor Swift unveiling a cookware giveaway – has been generating a buzz online, and is convincing enough to dupe the shrewdest of Swifties.

But Taylor Swift isn’t alone. Oprah Winfrey, Joe Rogan, and Kevin Hart are just some examples of public figures who have had their identity artificially replicated by deep learning algorithms for bogus endorsements.

While a small minority of these ads are associated with actual companies, the ultimate aim of most deepfake ads is to convince users to enter their personal data – making it important for netizens to be able to spot the tell-tale signs.

Protecting yourself from celebrity deepfake scams

While the increasing sophistication of deepfake technology is making it harder and harder to call out dupes, there are still a number of cues to look out for.

In videos, the movement of facial features tends to look unnatural. This can range from a lack of blinking and bad lip-syncing to more exaggerated signs like facial morphing. Other visual cues include unnatural coloring – like unusual skin tones and misplaced shadows – and artificial-looking hair.

Deepfaked audio can be slightly harder to spot, but warning signs include a robotic-sounding voice, unnatural pausing, and a lack of background noise. As ever, if it seems too weird to be true, it probably is.

Trying to decipher the legitimacy of stills? Read our guide to detecting AI images.

7. AI Investment Scams

Much as they did with cryptocurrency, scammers are leveraging the hype around AI – as well as the technology itself – to create fake investment opportunities that seem genuine.

“TeslaCoin” and “TruthGPT Coin” have both been used in scams, piggybacking off the media buzz around Elon Musk and ChatGPT and portraying themselves as trendy investment opportunities.

The DFPI alleges that a company called Maxpread Technologies created a fake, AI-generated CEO and programmed it with a script encouraging punters to invest (pictured below). The company has been issued a desist and refrain order.

Michael Vens, Alleged AI CEO

(Image Credit: coinstats.app)

Forbes reports that another investment company, Harvest Keeper – which the DFPI says collapsed back in March 2023 – hired an actor to masquerade as its CEO in order to reel in enthused customers. This illustrates the lengths some scammers will go to in order to make their pitch seem realistic.

Protecting yourself from AI investment scams

If someone you don’t know is reaching out to you directly with investment opportunities, treat their tips with extreme caution. Worthwhile investment opportunities do not tend to fall into people's laps in this way.

If it sounds too good to be true, and someone is offering you guaranteed returns, don't believe them. Returns are never guaranteed on investments and your capital is always at risk.

If you're someone who regularly invests in companies, then you'll know the importance of doing your due diligence before parting with your hard-earned cash. We'd recommend applying an even higher level of scrutiny to prospective AI investments, considering the buzz around related products and the prevalence of scams.

8. AI Romance Scams

Artificial intelligence is also being used to trick hopeless romantics out of their money.

While romance scams are nothing new, bad actors have begun using AI-generated photos to lure in punters on dating apps. The technology is also being used to mimic human interaction in text exchanges and video messages – making it near-impossible for targets to call out the bots. 

Romance scammers use various methods to extort money from their victims. Some cases involve bogus investment schemes, while in others the scammers claim to be in financial difficulty before asking targets to cover their expenses.

“I'm asked literally every day by two or three women online, for money,” Jim told ABC News Chicago.

According to a report from ABC Chicago, a man called Jim was duped out of $2,000 to cover a wide range of false expenses, from airfares to babysitters, while another victim transferred $60,000 to a catfish after being convinced her uncle was on the board of the Hong Kong stock exchange.

AI-generated romance scammer

Protecting yourself from AI romance scams

When trying to spot romance scammers on dating apps, watch out for unnatural or formulaic messages. More often than not, it’s pretty easy to notice when a conversation lacks a human touch.

However, if you’re uncertain, try asking the match open-ended questions. If they reply with generic answers that lack nuance, then there’s a good chance you’re being wooed by a scammer. AI-generated profiles will also use pet names like ‘honey' and ‘babe' to make it easier to send messages en masse.

Unsolicited romantic approaches on other platforms, and propositions that seem too good to be true, are other typical red flags of a romance scammer. If your suspicions are raised, we’d recommend blocking the profile and reporting it immediately.

9. AI Recruitment Scams

As if landing a new job weren’t hard enough, candidates have recently been facing another hurdle in their search – fake AI recruiters.

In 2023, the FTC warned that scammers are using AI platforms like ChatGPT to create phantom job opportunities, attempting to lure money and personal information out of prospective applicants. In most cases, targets are promised a position on the condition that they first pay a sum of money to the ‘recruiters’.

While bogus job ads aren’t new, the advancement of AI technology has made it easier for scammers to create a larger number of fake job listings and circulate them on social media. Experts believe the cybercriminals have also taken advantage of the recent wave of layoffs within the tech industry.

Protecting yourself from AI recruitment scams 

If you see a job offer that seems too good to be true, do your homework. Check whether the company exists by looking up its website and corresponding social media profiles. Googling ‘scam’ or ‘review’ alongside the name of the company is another way to see if the ruse has caught people out before.

Also, legitimate employers never ask applicants to pay for a job. So if a recruiter is demanding money before you’ve started your new job, this should ring alarm bells. 

Latest ChatGPT and AI Scams in 2024

While all the scams on this list currently pose very real risks to the public, here are some recently spotted tricks that are likely to gain traction throughout 2024.

10. Bank Voice Authentication Scams

As deepfake technology makes replicating voices easier than ever, a new scam has emerged in which scammers use fake audio clips to bypass voice-based security measures on online banking platforms. This raises concerns both for users looking to safeguard their banking information and for the banking institutions entrusted with keeping accounts secure.

11. Government Account Takeover Scams

The rapid deployment of AI has also led to a new iteration of the government imposter scam. The racket, which involves swindlers reaching out to targets claiming to be government agents, has become much more sophisticated through the technology, with the Social Security Administration warning that chatbots are being used to “impersonate beneficiaries”, contact customer service representatives, and “change their direct deposit information”.

AI Scams: They’re Only Going to Get Worse

In 2022, US consumers lost a huge $8.8 billion to scams – and there’s little reason to think the years since have been any better. Periods of financial instability often correlate with increases in fraud, and globally, many economies are struggling.

Currently, artificial intelligence is a goldmine for scammers. Everyone is talking about it and using it – more than 77% of consumers now use an AI-powered device – yet few are really clued in on what’s what, and companies of all shapes and sizes are rushing AI products to market. Ethical questions relating to AI are only starting to attract the attention they deserve.

Right now, the hype around AI makes it about the most downloadable, investible, and clickable subject on the internet. It provides the perfect cover for scammers.

It’s important to keep up to date with the latest scams doing the rounds – all the more so now that AI is making them much harder to spot. The FTC, FBI, and other federal agencies regularly put out warnings, so following them on social media for the latest updates is strongly advised. On top of this, with a huge range of legitimate AI training courses now available, it's important to keep your wits about you when signing up for classes or services.

Beyond that, we’d also recommend using a VPN with malware detection, such as NordVPN or Surfshark. Both will hide your IP address like a standard VPN, but will also alert you to suspicious websites lurking in Google Search results. Equipping yourself with tech like this is an important part of keeping yourself safe online.

Written by:
Aaron Drapkin is a Lead Writer at Tech.co. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.