OpenAI has unveiled its plans to ensure ChatGPT doesn’t turn into a disruptive influence in the elections that are taking place across the globe this year, including in the US and UK.
One of the steps the company is taking to ensure US users are presented with legitimate voting information is to direct them to an “authoritative” voting site called CanIVote.org, where users can find information on election procedures, voting locations, and early and absentee voting.
The startup also announced a raft of other clarifications and changes relating to political content and abuse prevention on its platforms, which remain among the most popular AI tools used by businesses.
ChatGPT to Direct Users to Page Containing Voting Information
“In the United States, we are working with the National Association of Secretaries of State (NASS), the nation’s oldest nonpartisan professional organization for public officials,” OpenAI revealed in a blog post this week.
As part of its approach to the 2024 US election, ChatGPT will send users directly to CanIVote.org when asked certain questions about voting procedures.
CanIVote.org is a website where eligible US voters can check their voter registration status, find their nearest polling place, and learn what constitutes valid identification for voting.
There’s also information on absentee and early voting available through the site, via links to state election authorities.
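OpenAI hasn’t explained the mechanism behind the redirect, but the general pattern is easy to picture. As a purely hypothetical sketch (not OpenAI’s production setup), here is how a developer might steer their own chatbot toward CanIVote.org with a system message via the OpenAI Python SDK; the model name and prompt wording below are assumptions:

```python
# Hypothetical sketch only: NOT OpenAI's production mechanism, just one way a
# developer could steer a chatbot toward CanIVote.org for voting queries.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "If the user asks about US voting procedures, such as registration, "
    "polling locations, voter ID, or early/absentee voting, do not answer "
    "directly. Instead, refer them to https://www.canivote.org for "
    "authoritative, state-specific information."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works here
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Where is my polling place in Ohio?"},
    ],
)
print(response.choices[0].message.content)
```

In a real deployment the heavy lifting would presumably sit in model training and server-side policy rather than a single prompt, but the sketch shows the basic shape of a redirect rule.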
If this sort of action is successful in the US, OpenAI says, it will repeat the approach in other regions during their election cycles.
Other Steps ChatGPT is Taking During the Election Year
Along with directing voters to legitimate sources of voting information, OpenAI says it will take several other steps to ensure ChatGPT is not misused during the election year.
For example, users won’t be able to build a GPT (a custom chatbot) that mimics a political figure or institution and pretends to speak on their behalf. “Applications” that try to deter people from participating in democratic processes are also off the menu.
“We work to anticipate and prevent relevant abuse—such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates,” OpenAI says.
“Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm,” the company adds.
In terms of image generation, OpenAI is implementing digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) early this year, which involves attaching a cryptographic marker to all images generated using DALL-E.
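OpenAI hasn’t published implementation details, but C2PA credentials are typically embedded in an image file as a cryptographically signed metadata manifest; in JPEGs, the manifest travels in JUMBF boxes inside APP11 segments. As a rough, simplified illustration (this is not OpenAI’s code, it only handles JPEGs, and the function name is invented for this sketch), the following Python script walks a JPEG’s segment list and reports whether any APP11 payload appears to carry C2PA/JUMBF data:

```python
# Simplified sketch: detect whether a JPEG appears to carry a C2PA manifest.
# Not OpenAI's implementation; a real check would verify the manifest's
# signature chain (e.g. with the open-source c2pa SDKs), not just its presence.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with the segment stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                 # SOS: entropy-coded data follows
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2                         # standalone marker, no length field
            continue
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        # C2PA manifests ride in JUMBF boxes inside APP11 (0xFFEB) segments
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

Presence alone proves little, since metadata can be stripped (or forged without the signing key); that gap is what the provenance classifier described next is presumably meant to cover.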
OpenAI also says it has seen some “promising” results from a new provenance classifier the company is experimenting with. The technology is designed to detect whether an image has been created with DALL-E even if it has been modified in some way.
Will ChatGPT Be a Good or a Bad Influence on Elections?
ChatGPT is widely used by businesses across the United States, and the chatbot will be expected to respond cogently to an avalanche of political and legal queries as the election draws closer.
Although the prospect of ChatGPT – as well as other AI tools – being used to spread disinformation has been discussed at great length, whether ChatGPT specifically will have a particularly negative impact on elections is far from clear-cut.
Of course, there’s the risk that ChatGPT provides incorrect political information unintentionally – although OpenAI seems to be addressing this possibility head-on with its approach.
Indeed, its plans to direct voters to a site with legitimate voting information – and to ensure ChatGPT doesn’t misrepresent democratic processes – suggest, at least on the surface, that OpenAI has imposed tighter rules on itself than many of the other channels we use to source information, where disinformation often festers.
Disinformation – as well as other persuasive political content – already flows through social media sites, search engines, and even some traditional media channels at a rapid pace.
What’s more, OpenAI isn’t managing millions of users producing an endless stream of human-generated content in the way social media sites are, and there are no equivalent First Amendment considerations. ChatGPT’s output isn’t “uncontrolled” in quite the same way, nor is it protected.
In theory, OpenAI could even enforce a “no-politics” rule if it really wanted to. That sort of broad-brush principle is already being applied to GPTs, which can’t be made to mimic political figures or institutions.
The threat AI poses to domestic democratic elections – and it is a very real one – is more likely to come from generative AI tools that aren’t under the same level of public pressure to publicize their “approach” to the elections taking place in 2024, or from someone who has built a malicious AI tool and therefore won’t implement any guardrails.
Of course, nation-state actors are already using AI tools to spread disinformation, and if the last few years of election meddling are anything to go on, this is the most likely way we’ll see AI used to disrupt democratic processes.
It’s also important to remember this is an AI system that continues to surprise us, and continues to prove itself both fallible and open to manipulation. And as soon as OpenAI makes any changes to ChatGPT, there’s an army of people waiting to push it to its logical limits. A spate of elections simply creates an additional, political motivation to do this.
Whatever happens, the way ChatGPT and other AI tools are being used by voters will be a story that dominates the headlines throughout the year.