ChatGPT maker OpenAI has warned it might be forced to pull the incredibly popular artificial intelligence (AI) tool out of European Union (EU) countries if proposed regulations go ahead in their current form.
The EU has drafted a far-reaching AI Act that would force generative AI companies to disclose what copyrighted materials were used in the development of their systems and the content they create. If passed, the legislation will likely represent the world’s first law governing AI models and how they are used.
With businesses using ChatGPT and similar tools in new and exciting yet still legally untested ways, the standoff between the EU and OpenAI looks like a key early battleground now that AI has emerged as a hot-button tech issue.
OpenAI Boss Hopeful ChatGPT Has Future in Europe
As first reported by Reuters, OpenAI CEO Sam Altman has warned against the threat posed by the EU ‘over-regulating’ generative AI models like ChatGPT.
However, speaking at a recent event in London, he appeared hopeful that future amendments to the legislation might make it easier for his company and its tools to comply with Europe’s demands.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it,” he told Reuters.
What is the EU AI Act and Why Does It Matter?
As we've said, the EU's AI Act is draft legislation that seeks to govern how AI is used across the body's 27 member countries, which include the likes of France, Italy and Spain. In the main, the AI Act is designed to safeguard European countries against the use of AI in “biometric surveillance, emotion recognition, predictive policing AI systems” and more, according to the EU's own website.
The AI Act will now be debated by the European Parliament, Council and Commission. Followers of European politics will tell you that few laws pass through the EU uncontested and in their original form.
The importance of the debate around AI was underlined by the fact that Mr Altman also met with UK Prime Minister Rishi Sunak as part of his trip to speak at the University College London event. AI was also on the agenda at the recent G7 summit in Hiroshima, Japan, where leaders of some of the world’s largest economies spoke of the importance of ensuring AI remained ‘trustworthy’.
A Balancing Act to Protect Creative Content
At the heart of the EU AI Act's inclusion of ‘tailor-made regimes' to govern generative AI models like ChatGPT and Google Bard is a desire to protect the creative industries and the original content they produce.
In this light, forcing AI companies to reveal the copyrighted works drawn on by their systems is a way to ensure that artists, musicians and writers continue to be recognized for their work – both reputationally and, perhaps more importantly, financially.
On the other hand, the vast amount of digital information that now feeds into the AI knowledge pool makes robust attribution of sources a technically difficult feat to achieve. As ever, it appears man and machine may have to think slightly outside the box as we continue to learn to get along.