OpenAI has moved to quell rising fears over privacy by removing a feature from ChatGPT, it has emerged.
The feature in question allowed users to make their conversations with the chatbot discoverable in search engines. To do so, they would select a check box which gave the platform permission to “make this chat discoverable.”
The news was first broken by Fast Company, which revealed that nearly 4,500 conversations had been indexed by Google and were showing up in search results. In many of these cases, users are thought to have shared their private conversations by mistake, as the disclaimer accompanying the check box appears only in small text.
With concerns over the use of AI at work growing, business leaders will welcome the news. A recent report from Gusto shed light on employees’ AI habits, revealing that 45% of staff have used AI tools without first consulting their boss. OpenAI’s decision removes one route by which sensitive company information could have ended up in search results.
OpenAI Removes Feature to Make Conversations Discoverable in Search
OpenAI has officially removed a feature from ChatGPT that allowed users to share their conversations with others and have those conversations surface in web searches.
Users could select a check box to “make this chat discoverable,” accompanied by a disclaimer in smaller text that read “allows it to be shown in web searches.”
An investigation by Fast Company revealed that nearly 4,500 ChatGPT conversations had surfaced in Google search results. It’s thought that, in many of these cases, users were actually unaware of what they were enabling, as the disclaimer is written in small font.
Writing on X, an OpenAI representative explained: “This was a short-lived experiment to help people discover useful conversations… Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t want to, so we’re removing the options.”
Personal Information Unwittingly Shared with AI Chatbot
The Fast Company report makes reference to a publicly viewable user chat that “described in detail their sex life and unhappiness living in a foreign country, claiming they were suffering from post-traumatic stress disorder (PTSD) and seeking support.” Reportedly, the chat also includes details of family history, friends, and relationships.
The true number of shared ChatGPT conversations is also likely to be far higher than 4,500, as Google may not have indexed all of them.
This will be a cause for concern among the growing number of people using ChatGPT for emotional support. According to a recent study, three-quarters of respondents have tapped the chatbot to help with anxiety.
As we’ve covered in this guide, that practice raises significant privacy concerns.
Privacy Boon for Workplace Bosses
With AI usage in the workplace growing, senior leaders will be pleased that OpenAI has removed a feature that could have spelled disaster for their business privacy.
As the Gusto report mentioned above shows, a growing number of workers are using tools such as ChatGPT without first seeking their boss’s permission. That makes it difficult for companies to regulate how the tools are used, and what information employees might be sharing.
To stop this becoming a problem further down the line, companies need to educate their staff on how to use AI safely. With cybersecurity one of the biggest issues facing the business world today, practicing good data hygiene is essential to deterring cyberattacks.
As this incident demonstrates, the consequences of getting it wrong can be severe.