Microsoft-backed startup OpenAI will be probed by the FTC in an investigation into claims that the company misused personal data.
The Federal Trade Commission (FTC), the US’s top consumer protection and competition watchdog, is investigating suspicions that the artificial intelligence company OpenAI broke the law by scraping public data and publishing false and defamatory information.
In the biggest regulatory challenge OpenAI has faced since ChatGPT launched in late 2022, the agency has sent the company a 20-page letter demanding records on how it addresses risks related to its AI models. The FTC is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers.
The FTC is reportedly concerned about the potential for ChatGPT to be used for harmful purposes, such as spreading misinformation or creating deepfakes. There have been multiple reports of the chatbot producing inaccurate and libelous statements about real individuals.
An Effort to Clean Up the Wild West of the Internet?
According to a report by The Guardian, the FTC’s investigation into ChatGPT is part of a larger effort by the agency to regulate new and emerging technologies. The FTC has been particularly concerned about potential misuse of AI; in 2022, the agency issued a report warning about the risks of AI-powered deepfakes.
The investigation is still in its early stages, but it underscores regulators’ growing concern about the harm AI systems can cause.
The FTC has demanded to know whether OpenAI obtained the information through “web scraping” (the automated collection of content from websites) or purchased it from third parties. It also asks for the names of the websites the data was taken from, as well as any steps taken to prevent personal information from being included in the training data.
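For readers unfamiliar with the term, web scraping simply means programmatically downloading web pages and extracting their content. The snippet below is a minimal, hypothetical sketch of the technique using Python’s requests and BeautifulSoup libraries; the URL and the filtering logic are placeholders for illustration only and say nothing about how OpenAI actually gathered its data.

```python
import requests
from bs4 import BeautifulSoup


def scrape_page_text(url: str) -> str:
    """Download a web page and return its visible text content."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Drop script and style tags so only human-readable text remains.
    for tag in soup(["script", "style"]):
        tag.decompose()

    return soup.get_text(separator=" ", strip=True)


if __name__ == "__main__":
    # Placeholder URL purely for illustration.
    text = scrape_page_text("https://example.com")
    print(text[:500])
```

In practice, large-scale scraping pipelines add crawling, deduplication, and filtering steps on top of a fetch-and-extract loop like this one.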
The Washington Post was first to report the investigation. The FTC declined to comment, and OpenAI has also been contacted for comment.
What Does the Investigation Mean for Consumers and Businesses?
The FTC’s investigation into ChatGPT is a reminder that AI is a powerful tool that can be used for both good and bad. Consumers and businesses alike need to be aware of the potential risks of AI and take steps to mitigate those risks.
Here are some tips for using AI safely and responsibly:
- Be aware of the potential for AI to be used for harmful purposes.
- Do your research before using any AI-powered product or service.
- Make sure that you understand the terms of service for any AI-powered product or service that you use.
- Be transparent about your use of AI with your customers and employees.
- Don’t share personal or sensitive data with AI tools (a minimal redaction sketch follows this list).
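As a concrete illustration of the last tip, the sketch below shows one simple way to strip obvious identifiers (email addresses and phone numbers) from text before it is sent to any AI service. The regular expressions and the redact_pii function are illustrative placeholders, not part of any particular product’s API, and real PII detection requires far more sophisticated tooling.

```python
import re

# Rough patterns for two common identifiers; a real deployment would use a
# dedicated PII-detection tool rather than hand-written regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text


if __name__ == "__main__":
    prompt = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
    print(redact_pii(prompt))
    # Prints: Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```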
What’s Next for ChatGPT?
It is too early to say what the future holds for ChatGPT. However, the FTC’s investigation is likely to have a significant impact on the chatbot’s development.
If the FTC finds that ChatGPT poses a significant risk to consumers, OpenAI may be required to make changes. These could include limiting the types of conversations ChatGPT can have, or requiring users to provide more information about themselves before using the service.
The FTC’s investigation is also likely to raise awareness of the potential risks of AI, which could make businesses and consumers more cautious about using AI-powered products and services.