US Backs First Global Agreement on Safe AI Development

In a world first, the non-binding guidelines address every stage of the AI development lifecycle.

The US, the UK, and 16 other countries have announced new guidelines for the development of artificial intelligence (AI), based on the principle that these systems need to be secure by design.

The international agreement, which is the first of its kind, represents a leap forward in AI safety by prioritizing the well-being of consumers over the rapid development of commercial AI products.

However, with OpenAI already developing an artificial general intelligence (AGI) system that researchers fear may be a “threat to humanity” and other companies playing catch-up with their own AI models, it’s unclear whether the non-binding guidelines go far enough.

Global Push to Make AI Systems ‘Secure By Design’

The US, the UK, and 16 other countries, including Germany, Italy, Australia, and Singapore, have released new guidelines for the development of AI technology.

The 20-page agreement, which was published by the UK’s National Cyber Security Centre (NCSC), emphasizes that AI systems should be “secure by design” and is broken down into four main areas: secure design, secure development, secure deployment, and secure operation and maintenance.


“The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority.” – The Cybersecurity and Infrastructure Security Agency (CISA)

The directive encompasses each stage of the AI development lifecycle and intends to protect the public from misuse by ensuring “the technology is designed, developed, and deployed in a secure manner,” according to CISA.

Aside from advocating for safe AI development, the agreement also recommends that companies report vulnerabilities in their technology through bug bounty programs, so exploits can be identified and stamped out quickly.

Biden Takes Europe’s Lead on AI Safety

This agreement is the first time in history that countries from across the globe have banded together to create recommendations for secure AI development. However, it is also the latest in a string of actions the US government has taken to manage the risks associated with the technology on home soil.

At the end of October, the White House issued its first-ever executive order on AI development, addressing a number of concerns including algorithmic bias, data privacy, and job displacement.

The crackdown represented a harder stance from the government towards AI. But while White House Deputy Chief of Staff Bruce Reed claimed these were the “strongest set of actions” any government has taken to safeguard AI, the US’s pace of change still lags behind its European counterparts.

First proposed in 2021, the EU’s AI Act was passed in June of this year and classifies different AI systems by the potential risk they pose to users. The framework aims to turn Europe into a global hub for trustworthy AI and, unlike the recent agreement, mandates changes through legislation rather than non-binding guidelines.

Do These Guidelines Go Far Enough?

With AI creating lucrative opportunities for Silicon Valley’s top dogs – exemplified by OpenAI’s pending AGI model ‘Project Q*’ – it’s unclear whether these non-binding recommendations will be convincing enough to generate real change.

What’s more, with research revealing that there are “virtually unlimited” ways to evade ChatGPT’s and Google Bard’s security features, the new guardrails fail to address the security concerns of AI models already in the wild.

However, any global effort to ramp up safeguards in the AI wild west undoubtedly marks a step in the right direction.


Written by:
Isobel O'Sullivan (BSc) is a senior writer at Tech.co with over four years of experience covering business and technology news. Since studying Digital Anthropology at University College London (UCL), she’s been a regular contributor to Market Finance’s blog and has also worked as a freelance tech researcher. Isobel’s always up to date with the topics in employment and data security and has a specialist focus on POS and VoIP systems.