A Quick Guide to Four Trending AI Scams to Avoid in 2024

From deepfakes to voice cloning, the new world of AI-powered scams has users worried for their personal and financial data.

The era of AI has ushered in a lot of changes in the online world. From AI-generated content on every social media platform to AI chatbots in every piece of business software, the technology is evolving at breakneck speed.

With that evolution, though, comes some serious risks. In fact, scammers have been quick to adopt the technology in their nefarious deeds, leading to a whole new aspect of internet fraud to keep an eye out for.

In this guide, you’ll learn about some of the newest scams that are powered by AI, as well as a few tips for how to avoid them in the future.

AI Scams in 2024

Scams have been notably on the rise in 2024, which means understanding what kind of threats are out there will be vital to protecting yourself online. Here are some of the AI scams you should be watching out for in the modern era:


Deepfake scams

Deepfake technology has been around for a while. It describes video content that shows a falsified image of another person. These digital manipulations have been used for a wide range of purposes, from scams to misinformation, but until recently, most of them were pretty easy to spot.

Unfortunately, that is very much not the case in the era of AI. Recent advances have drastically improved deepfake technology, leading to some troublingly accurate portrayals of individuals online.

Even worse, these likenesses are being used for a wide range of nefarious reasons. From celebrity lookalikes selling fake products to personal friends asking for money, the ways in which scammers are using deepfake technology are nothing if not expansive.

Examples of a deepfake scam

One of the most prominent deepfake scams to hit individuals this year involved arguably the biggest star in the world right now: Taylor Swift. In a video posted to social media, the pop star appeared to be giving away 3,000 Le Creuset kitchenware products. The problem? It wasn’t actually Taylor Swift, and there were no Le Creuset kitchenware products to speak of.

Instead, the video was a deepfake scam featuring Taylor Swift, designed to steal personal details and financial information by requiring a small charge for shipping. Of course, the kitchenware never shows up, and your personal data is compromised forever.

Voice cloning scams

Much like deepfake videos, voice cloning is a popular and deeply unsettling means of scamming people out of their money. The practice involves replicating the voice of someone you know, typically a loved one whose voice you would immediately recognize. Scammers can then exploit that trust by asking for money, personal information, or pretty much anything you would give that friend or family member.

The biggest risk of this scam is how accurate AI has made the voice replication process. Individuals who fell victim to voice cloning scams noted afterwards that they “never doubted for one second” that they were talking to their loved one on the phone. And considering some surveys have found that very few people can tell the difference, this kind of scam should be on your radar at all times.

Example of a voice cloning scam

Obviously, celebrity soundalike scams wouldn’t play as well over the phone as deepfake videos do on social media, so voice cloning scams typically go a different route.

The most common is the family emergency angle, in which a scammer posing as a family member, typically a son or daughter, calls to say they are in trouble: a car accident, an arrest, or some other crisis requiring a quick financial fix. Once you’ve paid to resolve the non-existent emergency, you’ll have no recourse for getting your money back.

Phishing scams

Phishing scams have been a common problem on the internet for a long time. Hackers and scammers trying to get you to hand over personal or financial information via fake emails impersonating reputable businesses is a practice as old as time at this point.

Now, though, AI is making it a whole lot easier to get the job done. Traditionally, phishing emails and text messages were littered with spelling errors and other telltale signs. Today, however, these scams are getting harder and harder to spot, thanks to AI’s ability to generate more legitimate-seeming content.

Even worse, generative AI platforms are making it easier to churn out phishing emails at scale. Sure, many of these chatbots have safeguards to prevent that kind of thing, but we’ve found that a few simple commands create a loophole that allows users to generate phishing content easily and effectively.

Example of a phishing scam

One of the most common phishing scams in circulation right now is a text message claiming your package from UPS has been delayed and that you need to confirm your details to get access. When you click on the provided link, you’re taken to a strikingly realistic-looking UPS website that asks you to fill out a form to retrieve your package. Of course, your package isn’t lost, but your financial data and personal information are.

AI hasn’t drastically changed phishing scams, but it has made them more prevalent and harder to spot. Content generators make them more accurate and easier to create, so if you see a significant uptick in the coming years, be wary where you input your information.

Listing scams

Like some of the other scams we’ve covered here, listing scams are nothing new, but AI has given scammers the ability to produce them at speed and volume on a scale that we’ve never seen before. Like most scams, it’s a game of numbers, and it only takes one person to respond to make it worthwhile.

Listing scams cover everything from electronic goods to cars to homes. Generating fake listings is easy now, thanks to how convincing AI-written descriptions can be. Some scammers will even go so far as to generate AI images of the products in question, creating a ‘unique image’ that won’t show up in a Google image search as stolen from elsewhere (previously, a pretty good way to identify a scam). Once the scammer has your money, you’ll never see the product, and you could even open yourself up to further scams down the line from the ‘seller’.

It’s not just goods that are open to listing scams. Last year saw a huge rise in the number of job listing scams, with their increased prevalence being blamed on AI. These scams promise jobs that are too good to be true, before demanding that money be sent to secure the position or to cover training. AI helps create realistic company websites and staff headshots, and can even be used for fake interviews. Read our guide to avoiding WFH scams.

Examples of a listing scam

One of the most common AI listing scams is the sale or rental of property. Scammers create realistic ads for properties, usually in desirable areas at competitive prices. Then, they’ll ask for money upfront, such as a deposit or several months’ rent. It’s a lucrative scam, as they’ll often be speaking to several victims at once.

What’s particularly nefarious about this scam is that AI enables the scammer to generate reasonably realistic property documents, which might convince the victim that the transaction is legitimate. They might also ask for personal information as well as money, leaving your data compromised.

How to Avoid AI Scams

You’ve already taken the first step towards avoiding AI scams, and that’s understanding what kind of scams are actually out there. That’s right, just by opening and reading this article, you’re on your way to a more secure online existence. Here are some other tips for avoiding AI scams:

  • Always confirm – Whether it’s a deepfake or a voice cloning scam, find a way to confirm the source of information. A simple Google search or call to a loved one can save you a lot of strife.
  • Be careful what you post – A lot of scammers use data from social media to either target vulnerable individuals or train models on your voice.
  • Report the scam – It may not help you avoid it, but by reporting the scam to the proper authorities, you can help ensure that future attempts fall flat, no matter how much AI they use.

Beyond that, it’s important to be a bit skeptical when it comes to providing financial information or personal data online or on the phone. AI is making these scams infinitely more convincing, so carrying a bit of healthy skepticism when approaching these transactions can go a long way in keeping you safe.


Written by:
Conor is the Lead Writer for Tech.co. For the last six years, he’s covered everything from tech news and product reviews to digital marketing trends and business tech innovations. He's written guest posts for the likes of Forbes, Chase, WeWork, and many others, covering tech trends, business resources, and everything in between. He's also participated in events for SXSW, Tech in Motion, and General Assembly, to name a few. He also cannot pronounce the word "colloquially" correctly. You can email Conor at conor@tech.co.