Google Goes Back on Promise to Not Use AI in Weapons

Employees alarmed as Google draws a line through principles on AI use for the development of weapons and surveillance tools.

In a decision that has alarmed employees, Google plans to drop its pledge to not use the AI it creates for weapons or surveillance.

In 2018, the company laid out a set of principles governing how it would develop and use its AI technology. This included four statements that promised the company would not “design or deploy” AI when it could “cause overall harm,” in “weapons” or for “surveillance violating internationally accepted norms.”

But a line has now been slashed through these original principles, and updated guidelines have been published that give the company far more scope to work with governments around the world.

Rollback of Principles

The original principles were a little vague – not specifying, for example, which “widely accepted principles of international law and human rights” they were referring to. The new principles echo this vague language while doing away with the references to surveillance and weapons entirely.

Instead, the company says it will implement “appropriate human oversight, due diligence, and feedback mechanisms” but doesn’t detail what this will look like. It also, interestingly, says its focus is going to be on “developing and deploying models and applications where the likely overall benefits substantially outweigh the foreseeable risks.” Note the use of the word “likely”.


Changing Landscape

The changes were accompanied by a blog piece, written by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika. In it, the duo say that the new principles reflect how AI is “becoming as pervasive as mobile phones and the internet itself.” With this widespread uptake has come a deeper “understanding of AI’s potential and risks,” they write. And working with the government is one area of huge potential.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” – Hassabis and Manyika

Alarm in the Ranks

This is not the first time that employees have flagged – and reacted to – what they see as morally questionable behavior from the company.

The 2018 principles were published after workers wrote an open letter decrying Google’s contract with the Pentagon, which allowed the company’s computer vision algorithms to analyze drone footage. The letter stated: “We believe that Google should not be in the business of war.”

Google opted not to renew its contract for what was called Project Maven, The Washington Post reported.

However, this latest decision comes at a time when President Trump has taken the brakes off AI development, scrapped previous safety measures, and backed the $500 billion ‘Project Stargate’ – a new venture that aims to be the “largest AI infrastructure project in history.”

Already, OpenAI has unveiled a super agent for government use and Meta is pushing its Llama AI model with the US Government to “support the prosperity and security of the United States.”

All of the tech bros are falling over themselves to get their AI wares in front of President Trump and it seems that safety – and moral conundrums – have been pushed aside in a bid for bucks.


Written by:
Katie has been a journalist for more than twenty years. At 18 years old, she started her career at the world's oldest photography magazine before joining the launch team at Wired magazine as News Editor. After a spell in Hong Kong writing for Cathay Pacific's inflight magazine about the Asian startup scene, she is now back in the UK. Writing from Sussex, she covers everything from nature restoration to data science for a beautiful array of magazines and websites.