ChatGPT Makes Spotting Fake News Impossible for Most People

Tweets generated by OpenAI's GPT-3 model are so convincing that people can't spot when they promote misinformation.

A new study has revealed that people find AI-generated tweets more convincing than those written by humans, even when they contain information that isn’t true.

Researchers studied the responses of nearly 700 people to a range of tweets on hot-button issues like COVID-19, evolution, 5G, and vaccines. Some of the 220 total tweets were accurate, while others featured misinformation. The aim of the analysis was to gauge AI's impact on people's ability to spot so-called "fake news."

The survey found that tweets generated by OpenAI's GPT-3 large language model (LLM) were not only easier to recognize as accurate when they presented correct information, but also better at duping people into believing them when they were false.

The “Double-Edged Sword” of GPT-3

Publishing their findings in the journal Science Advances, the researchers dubbed GPT-3 a "double-edge sword" capable of producing "accurate information that is easier to understand, but also more compelling disinformation."

The findings are all the more worrying because they suggest education is not always a sufficient defense against AI misinformation. Not only were the chosen topics commonly associated with fake news, but most of the 697 people taking part in the study held a bachelor's degree in the social sciences, humanities, natural sciences, or medical sciences.

In general, the respondents couldn't tell the difference between AI tweets and those composed by humans, regardless of whether they were true or false. The test group was drawn from a range of English-speaking countries, including the US, Canada, the UK, Ireland, and Australia.

The Bad Side of Generative AI

Generative AI platforms like ChatGPT have been put to a wide range of business uses since the chatbot launched in November last year. From emails to coding, these tools are productivity machines, helping workers across the business landscape complete tasks at record speed.

Still, it’s not all peaches and cream when it comes to generative AI. In fact, a growing number of studies have exposed the dark side of the technology. For starters, one study found that Google Bard could be used to spread misinformation. Even worse, one lawyer submitted a court filing containing citations that ChatGPT had simply made up.

This darker side of generative AI has led some companies, including Apple and Samsung, to ban the tech for employees. Heck, even Congress has limited the use of ChatGPT over fears that it could open up cybersecurity gaps.

All this to say, ChatGPT and other generative AI platforms are nothing more than technology, and how people use them will ultimately decide their impact on the world. But if meaningful regulations aren't handed down soon, the flood of disinformation could outpace our ability to fight back.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.