OpenAI CEO Sam Altman has headed to Washington, testifying before the US Senate that AI needs regulation, even for his own company’s ChatGPT.
While in front of the Senate Subcommittee on Privacy, Technology, and the Law, Altman called out AI’s impact on election integrity, stating that it is a “significant area of concern.”
His solution? A national or international agency with the authority to license the biggest AI systems, complete with the power to “take that license away and ensure compliance with safety standards.” Critics are calling this a move to limit further innovation within the space.
Sam Altman’s Senate Panel Remarks
Altman drew a distinction between AI and social media to argue that an entirely new regulatory agency is needed, whether in the US or globally, the New York Times reports.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.” ~Sam Altman
Despite calling for a new regulatory agency, Altman remained fairly optimistic about the benefits of AI, downplaying certain concerns such as the potential for job losses resulting from greater AI-powered automation.
While AI will automate some jobs, he claims, it will at the same time create new ones that “we believe will be much better.”
AI and Regulatory Capture: Squashing Innovation?
Is Altman’s perspective the wisdom of an insider who favors a slow, steady process? It may well be. Some critics have a different explanation.
They think that Altman’s call for regulation is a ploy to create an environment that could keep potential rivals from growing powerful enough to knock OpenAI off its perch as one of the biggest AI companies. In effect, these critics say, Altman would be pulling the ladder up behind himself.
An industry leader calling for regulation within that industry is rare, but regulatory capture — the term referring to a company gaining control over the regulatory body designed to keep it in check — is not.
Regulatory capture in action: Facebook, Palantir, OpenAI all say, "regulate us," for it 1) absolves them of their own decision-making and 2) acts as a barrier to entry to new, small competitors. https://t.co/2T3r6w3UeR
— Jeff (Gutenberg Parenthesis) Jarvis (@jeffjarvis) May 16, 2023
Privacy expert Elizabeth M. Renieris tweeted her own concerns, too, arguing that Altman “wants a new agency because it buys him lots of time to pretend they’re unregulated.”
AI Could Be Regulated in a Lot of Ways
One final question remains: How, and in what ways, should AI be regulated? Plenty of concerns about the technology compete for attention: copyright issues, training data vetting, job protections, and even the possibility of a universal basic income have all been raised.
Should the use of AI require an acknowledgement? For the record, I wrote this article without any AI help.
If we don’t have the correct people in charge of regulation, we might wind up missing the forest for the machine-learning trees.