5 Precautions You Need to Take When Using ChatGPT

Plagiarism, copyright violations, and bad math: You need to check for them, because ChatGPT won't.

ChatGPT is reshaping the world. The AI chatbot can hold entire conversations, write in the style of someone else, and play out nearly any imaginary scenario a user asks for.

Plenty of competitors have launched their own generative AI bots, too, from heavy-hitters like Google to services like Salesforce and its Einstein GPT tool. But if you want to get the most out of a chatbot, you need to know a few dos and don’ts.

Here are the biggest precautions you’ll need to keep in mind when trying to get a good answer out of ChatGPT.

1. Don’t Share Sensitive Data

Everything a ChatGPT user shares with the bot is saved, and ChatGPT's maker reserves the right to use that information in the future, including to improve its models. That means nothing you say is private. Share sensitive data with ChatGPT and you've handed it over to an algorithm you can't control.

Samsung learned this lesson the hard way on at least three separate occasions, The Economist Korea recently reported. Three different employees submitted source code, a recording of an internal meeting, and still more confidential code to the chatbot, exposing all of that data to the service.

Needless to say, confidential data should stay confidential.

Some governments are taking a strong stance against the ways in which ChatGPT collects user data. Germany has said it may block ChatGPT over data security and privacy concerns, in the wake of Italy doing just that.

2. Double-Check Sources

ChatGPT’s output is a bigger concern than its input. Because the tool generates its answers on its own, with no human review, there’s no vetting process for what it decides to tell you.

The same is true for similar generative AI chatbots, like Google’s Bard. For one recent report, researchers submitted 100 different prompts to Bard, each asking it to explain a known type of online misinformation. In 76 of those 100 cases, Bard generated false content on the topic.

Double-check every source the chatbot cites, because sometimes it simply makes them up.

3. Check Math and Formulas

The hard sciences aren’t exempt from generative AI’s habit of fibbing. In fact, a standard calculator is more reliable for arithmetic. That’s because this type of tool is built on a large language model, so it “thinks” and speaks in natural language rather than in mathematical formulas. All too often, ChatGPT will give a natural-language answer that’s clear, confident, and incorrect.

AI might well be able to handle the complexities of math in the future, but for now, don’t rely on it to handle your algebra. At the very least, take every solution it delivers with a grain of salt.
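If you do lean on ChatGPT for help with a calculation, it only takes a moment to verify the answer yourself. Here’s a minimal sketch in plain Python, using a made-up equation and a hypothetical chatbot answer, that plugs the claimed solution back into the original equation to see whether it actually holds:

```python
# Hypothetical example: a chatbot claims that x = 5 solves 3x + 7 = 25.
claimed_solution = 5

left_side = 3 * claimed_solution + 7   # evaluate the left-hand side with the claimed value
right_side = 25

if left_side == right_side:
    print(f"x = {claimed_solution} checks out: {left_side} == {right_side}")
else:
    print(f"x = {claimed_solution} is wrong: {left_side} != {right_side}")
    print(f"Solving directly gives x = {(right_side - 7) / 3}")
```

Even a quick spot-check like this catches the confident-but-wrong answers generative AI is prone to.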

4. Be Wary of Copyrighted Material

Any machine learning program works by drawing on a body of existing material. In some cases, a chatbot might reproduce an entire sentence from a source. If that source was published after 1927, it likely won’t yet be in the public domain in the United States. If your chatbot doesn’t digest its information properly, the final result can easily violate US copyright law.

Run your results through an online plagiarism checker to cut down on the odds of violating copyright.
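If you want a quick spot-check before reaching for a dedicated tool, a rough comparison against a suspected source is easy to script. The sketch below is only illustrative, not a substitute for a real plagiarism checker, and the text snippets are invented for the example; it uses Python’s standard library to score how closely two passages match:

```python
from difflib import SequenceMatcher

# Invented example text: a chatbot's output and a passage from a suspected source.
chatbot_output = "The quick brown fox jumps over the lazy dog near the riverbank."
source_passage = "A quick brown fox jumped over the lazy dog by the riverbank."

# SequenceMatcher gives a rough similarity ratio between 0.0 and 1.0.
similarity = SequenceMatcher(None, chatbot_output.lower(), source_passage.lower()).ratio()

print(f"Similarity: {similarity:.0%}")
if similarity > 0.8:
    print("High overlap: review this passage before publishing it.")
```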

5. Learn Which Categories of Data ChatGPT Can’t Handle

ChatGPT can’t do everything.

Take ASCII art, for example. A human can easily tell what image a series of text characters represents when they’re arranged in the right shape. ChatGPT, however, regularly gets this category of art wrong, whether it’s producing gibberish ASCII art or confidently claiming that a depiction of the cartoon character Shrek is actually the Mona Lisa.

In many cases, ChatGPT can’t handle a complex version of a task, even if it completes the simpler version just fine. Coding is one example: ChatGPT can change the color of a website, but it might not be able to figure out which color scheme will make sense to the human eye.

If you can take a little trial-and-error time to work out what ChatGPT does best, you’ll have a handy tool. Ask too much of it, or fail to fact-check its data, sources, math, and citations for plagiarism, and you’ll be worse off than before.

Written by:
Adam is a writer at Tech.co and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.