While many experts see AI as the key to future productivity gains at work, new research warns that workers' careless use of chatbots may leave their employers exposed to costly data leaks.
It says that nearly one in 10 prompts entered into generative AI tools by workers includes sensitive data, with a large share comprising customer data such as billing information.
With the majority of these prompts entered into the free versions of ChatGPT and other AI chatbots such as Copilot, Gemini, Claude, and Perplexity, businesses are potentially more exposed than they would be on paid tiers.
Sensitive Customer Information
In a report from Harmonic – a cybersecurity startup specializing in artificial intelligence – the company says it analyzed tens of thousands of prompts submitted by business users to the world's biggest AI chatbots during Q4 2024.
It found that, while the vast majority of usage is harmless (e.g. summarizing text, editing blogs, writing documentation for code), 8.5% of all prompts included sensitive information.
Of that sensitive data, more than 45% of instances included customer information such as customer reports and profiles, payment transactions, billing information, and credit card details.
As an example of using AI to save time, Harmonic cites an employee entering the particulars of an insurance claim, which will often contain private, detailed information.
“While efficient, this practice risks exposing sensitive customer information like billing details, authentication credentials, and payment transactions.” – Harmonic spokesperson
Intellectual Property Concerns
The research revealed that around a quarter of the prompts investigated contained employee data (e.g. personally identifiable information, payroll, employment records), which could leave individuals exposed.
The remainder concerned legal and financial data, security details, and sensitive code. That may be an even greater concern for businesses, as leaking such information could hand competitors an advantage, create regulatory compliance issues, and "provide attackers with a blueprint for exploiting vulnerabilities."
“When we think of data leakage, our minds often immediately go to PII, social security numbers, credit cards, and API keys,” says an accompanying blog from Harmonic. “However, for many businesses, the concerns are more often around IP – source code, intellectual property.”
Problem with Free Tiers
Harmonic says the data it discovered is even more concerning given that most of the prompts were entered via the free tiers of chatbots: more than 50% of all the ChatGPT, Gemini, and Claude entries it examined were made on the respective free tier.
That’s because, the company says, using a free product rather than a paid-for enterprise-level plan means that the business doesn’t get the benefit of extra security features. Plus, many free-tier tools explicitly state they train on customer data, meaning sensitive information entered could be used to improve models, further compounding risks.
It's not the first time businesses have been warned about the risks of staff using AI tools. Consumer tech giant Samsung restricted their use after sensitive code was leaked in 2023, while a study last year found that 77% of businesses had faced AI security breaches.