Google Promises Not To Use Its AI in Weapons

Google has promised that its future AI tech will never be used in weaponry. The company has set out a new set of principles governing the development and use of its artificial intelligence software, following backlash over its involvement in a Pentagon-led drone program.

Google is hoping that this new set of eleven principles will add some clarity to its burgeoning AI business. In effect, these principles should act as a guide for all future AI development by the California-based company.

However, Google has already attracted criticism, as its recent work with the Pentagon on its drone program appears to contradict one of these principles – that its AI won’t be used for weaponry or other technology designed to harm, or facilitate harm to, people.

So, are these new principles hollow? Or will they have a lasting effect on the way AI is developed?

What Are the New Google AI Principles?

Google’s eleven commandments are split into two parts. The first section states the objectives for all Google AI applications.

[Image: Google's AI Principles]

Without sounding too cynical (these objectives are, without doubt, a legitimately good idea), the principles all feel somewhat predictable. There are commitments to transparency, social benefit, and sensitivity to local cultures and privacy.

What’s novel is that Google has codified the way it will evaluate the uses for its AI tech. In principle seven, it states the evaluation factors:

  • Primary purpose and use – is it being used for the right reasons?
  • Nature and uniqueness – does this tech exist already?
  • Scale – will the use of AI have a significant impact?
  • The nature of Google’s involvement – is it providing general-purpose tools, putting tools into place for customers or developing unique tools for customers?

These evaluation questions should, according to Google, “limit potentially harmful or abusive applications”.

Avoiding AI Bias

In the second principle, Google states that it will look to “avoid creating or reinforcing unfair bias”. This is, of course, a noble aim. But in the age of fake news, it’s unclear how Google is going to go about doing this.

Moreover, how can Google, whose shareholders recently rejected an employee-backed diversity proposal and which is still reeling from James ‘Fired4Truth’ Damore’s diversity memo, seek to be the arbiter of bias and cultural sensitivity?

Accountability and Privacy

Now, we’re not going to conflate Facebook’s recent, erm, issues with accountability and privacy with Google’s aims here.

But, in the fourth and fifth principles, Google provides some pretty non-specific promises: opportunities for feedback and appeal over the implementation of its AI tech, opportunities for consent, and “appropriate” transparency and control over the data the AI gathers or analyses.

This kind of non-specific language isn’t going to help the regular Joe Public work out what their rights are in relation to AI. Nor will it help them work out whether they can truly opt out of any of this stuff.

AI No-Gos

As we mentioned above, there are four principles that prohibit the use of AI in certain contexts.

[Image: Google's AI Principles, part 2]

These principles may seem pretty obvious, but again, they’re laden with non-specific language. This includes Google stating it won’t design or deploy AI in technologies that contravene “widely accepted principles of international law and human rights”.

That phrase “widely accepted” seems open to a risky amount of interpretation. Accepted by whom? The UN? The US? Even those two bodies don’t agree on human rights law. For example, the United States hasn’t ratified the UN Convention on the Rights of the Child. So which principles would Google apply when it comes to potential AI surveillance of children, versus adults?

The caveat in the second point seems slightly concerning as well. Google won’t let its AI be used where its principal purpose is to cause injury to people. So why was it working with the Pentagon to make its drones more effective?

Project Maven

It was recently revealed, via a number of employee resignations, that Google had been working with the Pentagon to analyse drone footage using its AI. The purpose of the AI tech was to classify objects filmed by drones, to speed up analysis and, essentially, work at a larger scale than human-only analysis would allow.

Google has since cancelled its work with the Pentagon, eerily named Project Maven. It has also removed its “Don’t be evil” slogan from its offices. Now, we’re not saying there’s a connection between these two events, but the timing is remarkable.

All of this raises a question. If Google believes in openness and transparency, as its first seven principles state, yet it took employees resigning and selling their stories to Gizmodo for Project Maven to become common knowledge, how can we expect the company to make good on the second set of AI principles?

Should You Be Worried About AI?

AI and its potential do, we must admit, sound scary. Particularly when you throw drones and the Pentagon into the mix.

But, for most of us, AI should help make our lives better. For example, translations will become faster, more accurate and more context-specific. There are also vast implications for AI in the transport sector – from driverless cars to improving the flow of traffic on roads and railways. Agricultural industries are also expected to harvest the benefits of AI, using it to monitor the health of their livestock or the yield of their crops.

However, ensuring the responsible use of AI tech is imperative, especially after the recent revelations of US law enforcement using Amazon’s facial recognition technology. But leaving the policing and prosecution of AI tech to the very companies that produce and market it is wrong – Google cannot be both poacher and gamekeeper.

Having said that, the world needs a charter on AI. Google’s principles might just be a perfect starting point to get there.

Written by:
Tom Fogden is a writer for Tech.co with a range of experience in the world of tech publishing. Tom covers everything from cybersecurity, to social media, website builders, and point of sale software when he's not reviewing the latest phones.