Does ChatGPT Save My Data? OpenAI’s Privacy Policy Explained

We take a closer look at what data ChatGPT saves, how it's used, and OpenAI's privacy policy.

Does ChatGPT save data? The short answer is yes – and quite a lot of it. In fact, ChatGPT saves all of the prompts, questions, and queries users enter into it, regardless of the topic or subject being discussed. As a result, hundreds of millions of conversations are likely stored by ChatGPT owner OpenAI at present.

Just as you can review your previous conversations with ChatGPT, so can the chatbot. You can delete specific conversations with ChatGPT, but your data may already have been used by OpenAI to improve the chatbot’s language model and make its responses more accurate.

In this guide, we take a closer look at the kinds of user information that ChatGPT and its creator, OpenAI, store, and whether it’s safe for businesses to use the generative AI tool. We also run through some of the main concerns around the ChatGPT privacy policy and discuss the precautions you should take while using it.

What Data Does ChatGPT Collect About Me?

In a nutshell, ChatGPT will save all of the conversations you have with it and store them as chat history. It then uses this data to improve its language model. There’s every chance that your conversations and queries will be seen by some of the human trainers working behind the scenes at OpenAI.

OpenAI states in its FAQs that “for non-API consumer products like ChatGPT and DALL-E, we may use content such as prompts, responses, uploaded images, and generated images to improve our services.” The company stores your data for as long as your account remains open.

[Image: ChatGPT data controls settings]
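Note the qualifier “non-API”: OpenAI has said that data submitted through its API is not used to train its models by default, whereas conversations in the consumer ChatGPT app may be. As a purely illustrative sketch – assuming the official openai Python package (version 1 or later), an API key stored in an OPENAI_API_KEY environment variable, and an example model name – a business that wants that separation could call the model programmatically rather than through the web app:

```python
# Minimal sketch: querying the model via the OpenAI API rather than the
# ChatGPT consumer app. Assumes the official `openai` package (v1+) is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name; use whichever you have access to
    messages=[
        {"role": "user", "content": "Summarise this meeting agenda in three bullet points."},
    ],
)

print(response.choices[0].message.content)
```

Even then, API traffic is still logged by OpenAI for abuse monitoring, so check the current data-usage policy rather than treating this as a privacy guarantee.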

OpenAI’s Privacy Policy: What User Data Does OpenAI Collect?

Conversations aren’t the only data you’re handing over when you use ChatGPT. When you sign up, for instance, OpenAI collects your email address and phone number. Other information OpenAI gathers includes:

  • Geolocation data
  • Network activity information
  • Commercial information (e.g. transaction history)
  • Identifiers (e.g. contact details)
  • Device and browser cookies
  • Log data (IP address, etc.)

OpenAI says in its privacy policy that the company discloses the types of information listed above to its “affiliates, vendors and service providers, law enforcement, and parties involved in Transactions.”

OpenAI will also collect “device information”, which includes “the name of the device, operating system, and browser you are using”, as well as any conversations you have with the company’s customer support services.

In addition, the company can extract information about you if you interact with its social media pages, based on the aggregated user analytics made available to businesses by the likes of Facebook, Instagram, and Twitter.

Some people suggest using a VPN to enhance your privacy when using ChatGPT, though this has its limits. Granted, using one of the best VPNs can reduce your digital footprint elsewhere on the internet, but you hand so much other data over to OpenAI (like your phone number) that it’ll have very little impact on your overall privacy.

What Does ChatGPT Do With My Data?

ChatGPT uses the data included in prompts and questions to improve its ability to answer future user queries. However, OpenAI says that it takes “steps to reduce the amount of personal information in our training datasets before they are used to improve our models”.

Your data is also used to enrich ChatGPT’s contextual memory and improve future conversations with you specifically.

In theory, this means that if you tell ChatGPT you didn’t enjoy a vacation you took to Mexico and don’t want to go back, and you later ask it to plan a vacation for you, it shouldn’t pick Mexico as the destination.


Humans working for OpenAI could very well analyze your data when they’re training the chatbot. This is one of the reasons it’s important to manage and control the information you enter into it.

ChatGPT Privacy Concerns

There are a number of legitimate privacy concerns surrounding ChatGPT and how it’s used. For one, there’s very little information about precisely what ChatGPT does with your prompts and inputs once you start using the chatbot. We know they’re used for training purposes, but exactly what that entails isn’t laid out by OpenAI.

What’s more, by some estimates around 11% of the data entered into ChatGPT can be considered “sensitive”, even though users may not realize it. There’s also the risk of entering sensitive data about other people, as when a lawyer drafts an adoption agreement containing a client’s details.

Other privacy concerns relating to ChatGPT include:

  • The collection of data used to train ChatGPT: There was no oversight of the data collected to train ChatGPT, and nobody was asked whether their data could be used when the chatbot was being built and taught.
  • Phone number collection: Although phone numbers help verify that users are in fact human, it means every question you ask ChatGPT is linked to a piece of personal information that, unlike a name or an email address, is quite difficult to “make up” during signup if you wanted to.
  • Questionable aspects of OpenAI’s privacy policy: In one part of its privacy policy, OpenAI says it may “provide your Personal Information to third parties without further notice to you”, unless it’s legally required to give that notice.

Is It Safe for My Company to Use ChatGPT?

Yes and no. It’s probably fine for your staff to use ChatGPT to generate Excel queries, paraphrase text for presentations, and complete other mundane day-to-day tasks (unless you consider using ChatGPT to constitute plagiarism). Just ensure they’re protecting their individual accounts with sufficiently strong, unique passwords. Equipping them with a quality password manager is one easy way to encourage this.

Employees shouldn’t be entering sensitive company data, or personally identifiable information, into ChatGPT.

We’d strongly advise that you set some company guidelines about what employees should and shouldn’t be inputting into ChatGPT. Sensitive company information falling into the wrong hands is often catastrophic for businesses, and at times even financially fatal. Data breaches are a regular occurrence in 2023 and sensitive company data is a valuable asset for hackers – the average cost of a data breach in the US is thought to be around $10 million.

Some companies are so worried about the privacy risks posed by ChatGPT that they’ve banned it altogether. These include multinational banks like Goldman Sachs, Bank of America, Citigroup, JP Morgan, Wells Fargo, and Deutsche Bank. Elsewhere, Apple and Samsung are among the many big tech companies to ban ChatGPT.

If your company is planning on using ChatGPT in the near future, stick to these four golden rules, which should be shared with staff:

  • Never enter sensitive company data or information into ChatGPT (a simple pre-paste check is sketched after this list)
  • Provide staff with clear guidelines on what they can use ChatGPT for
  • Demand oversight over the tasks staff are using chatbots for
  • Instruct staff to secure their OpenAI accounts with strong, unique passwords and two-factor authentication
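To make the first rule easier to follow in practice, some teams add a lightweight check before anything gets pasted into ChatGPT. The sketch below is purely illustrative – the patterns and the flag_sensitive helper are hypothetical, not part of any OpenAI tooling – and it only catches obvious identifiers, so treat it as a nudge rather than a substitute for proper data-loss-prevention software.

```python
import re

# Illustrative patterns only - extend these with your own identifiers
# (customer IDs, project codenames, internal hostnames, and so on).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key or token": re.compile(r"\b(?:sk|pk|ghp)_[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return a warning for each pattern that appears in the draft prompt."""
    return [
        f"Possible {label} detected - remove it before pasting into ChatGPT."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    draft = "Rewrite this email to jane.doe@example.com about invoice 4421."
    for warning in flag_sensitive(draft):
        print(warning)
```

A check like this won’t catch everything, which is why it belongs alongside the clear guidelines and oversight described above, not in place of them.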

There are still a lot of uncomfortable questions to be asked regarding the privacy and safety of ChatGPT. Despite its seemingly endless utility, it comes with a number of risks and you should take appropriate precautions when using it.


Written by:
Aaron Drapkin is a Lead Writer at Tech.co. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.