In a decision that has alarmed employees, Google plans to drop its pledge to not use the AI it creates for weapons or surveillance.
In 2018, the company laid out a set of principles governing how it would develop and use its AI technology. These included four commitments that the company would not “design or deploy” AI that could “cause overall harm,” be used in “weapons,” or enable “surveillance violating internationally accepted norms.”
But a line has now been slashed through these original principles, and updated guidelines have been published, giving the company far more scope to work with governments around the world.
Rollback of Principles
The original principles were a little vague – not specifying, for example, which “widely accepted principles of international law and human rights” they were referring to. The new principles retain that vagueness while doing away with the references to surveillance and weapons altogether.
Instead, the company says it will implement “appropriate human oversight, due diligence, and feedback mechanisms” but doesn’t detail what this will look like. It also, interestingly, says its focus is going to be on “developing and deploying models and applications where the likely overall benefits substantially outweigh the foreseeable risks.” Note the use of the word “likely”.
Changing Landscape
The changes were accompanied by a blog piece, written by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika. In it, the duo say that the new principles reflect how AI is “becoming as pervasive as mobile phones and the internet itself.” With this widespread uptake has come a deeper “understanding of AI’s potential and risks,” they write. And working with the government is one area of huge potential.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” – Hassabis and Manyika
Alarm in the Ranks
This is not the first time that employees have flagged – and reacted to – what they see as morally questionable behavior from the company.
The 2018 principles were published after workers wrote an open letter decrying Google’s contract with the Pentagon, which allowed the company’s computer vision algorithms to analyze drone footage. The letter stated: “We believe that Google should not be in the business of war.”
Google opted not to renew its contract for what was called Project Maven, The Washington Post reported.
However, this latest decision comes at a time when President Trump has taken the brakes off AI development, chucked away previous safety measures, and thrown his weight behind ‘Project Stargate’ – a $500 billion venture that aims to be the “largest AI infrastructure project in history.”
Already, OpenAI has unveiled a super agent for government use, and Meta is pushing its Llama AI model with the US Government to “support the prosperity and security of the United States.”
All of the tech bros are falling over themselves to get their AI wares in front of President Trump, and it seems that safety – and moral conundrums – have been pushed aside in a bid for bucks.