September 23, 2019
In a recent blog post, Google confirmed that it will resume using human reviewers for audio recordings captured from Google Assistant users. It had put the practice on hiatus in July of this year.
The decision to stop using humans in the audio review process was born of a consumer backlash, after several high-profile news stories criticized the company, along with Microsoft, Apple, and Amazon, for listening to audio that users believed was private.
While Google has pledged to continue with its human verification process, it has announced an opt-in program, as well as tighter restrictions on how long it holds users' data and what it can do with it. Here's what's new.
Why Did Google Stop Using Human Reviewers?
Google's decision to stop using staff to listen in on audio recordings came off the back of a turbulent period earlier in the year, when a flood of stories appeared in the press stating that Google, along with many other tech companies, was actively listening in on voice assistant commands. The reason all the accused organizations gave was the same: staff were verifying the queries being asked by users, and using this data to hone and train their systems.
While this might not have been a secret, it's hardly something the companies involved were shouting from the rooftops about either, and the public reception was disastrous. Hence the decision by Google, Apple, and others at the time to stop relying on human verification.
Google had always been clear that audio captured by Assistant was being recorded, and also stated that it masked the data so that anyone listening to the recordings would be unable to pinpoint the user it came from. However, some investigations, including one carried out by the Belgian broadcaster VRT, found that people were easily identifiable if they mentioned private details such as their home addresses or names. It was also discovered that Google Assistant could be activated even when the ‘Hey Google' activation phrase wasn't used: in some cases, a noise that sounded close enough to the phrase would start the system recording, meaning that private conversations, intimate activities, and even violence were all unwittingly captured.
What Has Changed?
In a post on the official Google blog, the company seeks to assure its users that it has listened to the complaints and made some changes. Yes, it has started using human reviewers again, but, it states, only with a number of caveats:
Audio won't be stored by default: Google has promised that it will not keep any audio recordings without permission.
Users will be asked to opt in: Google Assistant users, both new and old, will be asked if they wish to opt into Voice and Audio Activity (VAA), which will be used to train better voice recognition. This will store your data, and it may be listened to by human reviewers.
Interactions with Assistant: Users are able to view any interactions they have had when using Google Assistant, and delete them. This isn't actually a new feature, but it seems Google wants to remind its users that it's there.
Extra privacy features: Google has stated that recordings are anonymous, and always have been, but that it will add more privacy filters. There's no word yet on what these will actually look like.
Adjust sensitivity: Google plans to tackle the accidental activation of Assistant with user-adjustable sensitivity settings. These should give users more control over those times when the ‘Hey Google' phrase isn't used but the assistant springs to life anyway, reducing the chance of this happening.
New data storage policy: Google will implement a new policy in which it promises to delete some data associated with accounts if it is over ‘a few months old'.
We believe in putting you in control of your data, and we always work to keep it safe. We’re committed to being transparent about how our settings work so you can decide what works best for you. – Google blog post
How Are Other Companies Earning Back Trust?
Google isn't alone in being called out and taking action. Users were equally upset when similar news stories started to surface about how Facebook, Amazon, and Microsoft used audio recordings. The good news is that the public's outpouring of negativity does appear to have served as a wake-up call for these companies in the way they handle our data.
Amazon's Alexa now has a command that lets users delete all data, or opt out of data sharing entirely. It also has a dedicated ‘Alexa Privacy Hub' where settings can be changed. Facebook has stopped using human reviewers for its messaging service, and Microsoft has done the same for its Skype and Cortana platforms. Apple also temporarily paused the practice of listening to Siri recordings, and while it has since resumed, the process has been brought in-house and no longer relies on third-party contractors.
Training and developing voice recognition AI has always leaned somewhat on a human element – it's near impossible to expect these systems to improve when left to their own devices. However, with many users feeling hoodwinked into believing that their commands were private, and public trust in voice assistants at an all-time low, Google and its fellow companies are certainly taking the right steps by promising more transparency in the way they handle audio recordings.