Is Artificial Intelligence Doomed To Reflect The Worst In All Of Us?

Amazon’s AI gurus scrapped a new machine-learning recruiting engine earlier this month. Why? It transpired that the AI behind it was sexist. What does this mean as we race to produce ever-better artificial intelligence, and how can we understand the risks of machines learning the worst of our own traits?

Trained on a decade’s worth of data about past job applicants, the Amazon model began to penalize CVs that included the word “women’s.” The incident calls to mind another experiment in AI bias, Microsoft’s “Tay” chatbot, which the company pulled offline after the bot learned racism from the users it chatted with on Twitter and other messaging platforms. In Amazon’s case, however, rogue users weren’t to blame. The AI was learning from the historical hiring data of one of the world’s largest tech companies.

An artificial intelligence that dislikes women or people of color sounds like a concept straight out of a Twilight Zone episode. But sadly, it’s reality.

How did we get to this situation? And is it possible to build an AI that won’t reflect the bone-deep prejudices that are – knowingly or unknowingly – built into our social systems? To answer that second question, it’s crucial to address the first one.

How a Sexist AI Happens

The first point to make is that sexist or racist AI doesn’t emerge from nowhere. Instead, it reflects the prejudices already deeply held within both society at large and the tech industry specifically.

Don’t believe us about sexism in tech? One study from earlier this year found that 57 out of 58 major U.S. cities paid women in tech less than men. Last year, two female tech cofounders demonstrated tech sexism at work by proving they could make better connections once they invented a fictional male cofounder.

And as long as tech companies continue overlooking sexism, they’ll keep perpetuating a system that prioritizes male applicants and promotes male staff.

Sexist AIs Start With a Blinkered Industry…

The tech world loves rapid growth above all else. But this year, it has finally begun to come to terms with the impact its culture can have, and a sense of responsibility is taking root.

Few sum it up better than former Reddit product head Dan McComas, whose recent New York Magazine interview (titled ‘I Fundamentally Believe That My Time at Reddit Made the World a Worse Place’) includes this insight:

“The incentive structure is simply growth at all costs. There was never, in any board meeting that I have ever attended, a conversation about the users, about things that were going on that were bad, about potential dangers, about decisions that might affect potential dangers. There was never a conversation about that stuff.”

…And Machine Learning Perpetuates Them

This attitude is at the core of prejudiced AI, which perpetuates the same system just as surely, if a little more mechanically. As Lin Classon, director of public cloud strategy at Ensono, puts it, the process of machine learning itself is the issue.

“Currently, the most common application of AI is based on feeding the machine lots of data and teaching it to recognize a pattern. Because of this, the results are as good as the data used to train the algorithms,” she tells me.

Ben Dolmar, director of software development at the Nerdery, backs her up.

“Almost all of the significant commercial activity in Artificial Intelligence is happening in the field of Machine Learning,” explains Ben. “It was machine learning that drove AlphaGo, and it’s machine learning that is driving the leaps we’re making in natural language processing, computer vision and a lot of recommendation engines.”

Machine learning begins by providing a model with a core data set. The model trains on this before producing its own outputs. Any historical issues in the core data are then reproduced. Translation? Sexist data turns into sexist outputs.
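To make that concrete, here’s a minimal, hypothetical sketch of the dynamic. The data, features and numbers are invented for illustration (this is not Amazon’s system); the point is that a model trained on biased historical hiring labels absorbs that bias without ever being told to.

```python
# Hypothetical illustration: a model trained on biased historical hiring data.
# All data here is synthetic; nothing below reflects any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two invented features: years of experience, and whether the CV contains
# the word "women's" (e.g. "women's chess club captain").
experience = rng.normal(5, 2, n)
mentions_womens = rng.integers(0, 2, n)

# Historical labels encode past bias: equally qualified candidates whose CVs
# mention "women's" were hired less often.
hired = (experience + rng.normal(0, 1, n) - 1.5 * mentions_womens) > 4.5

X = np.column_stack([experience, mentions_womens])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned weight on the "mentions_womens" flag comes out strongly negative:
# the model has simply reproduced the prejudice baked into its training labels.
print(dict(zip(["experience", "mentions_womens"], model.coef_[0].round(2))))
```

The model never needs to be instructed to discriminate; it just learns the pattern already present in its training data.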

“It’s not unlike painting with watercolors, where the brush has to be clean or it taints the colors,” says Classon. And, in modern society, sexism turns up everywhere, Classon says, whether it’s in “recruiting, loan applications or stock photos.” Or even in the emptiness of women’s restrooms at major tech conferences, as Classon has pointed out to Tech.Co before.

How to Combat AI Prejudice

How do we solve a problem like AI prejudice? Classon boils it down to a key guiding principle: conscientious, collective vigilance. That begins with ensuring the community behind AI development is equipped to spot the issues, which takes us right back to the core problem of putting a diverse developer community in place, one that can find problems sooner and address them faster.

Practically speaking, Classon has further suggestions:

Increased Transparency

Right now, machine learning models function like black boxes: data goes in, decisions come out, and there is little visibility into how those decisions were reached.

“DARPA has recognized [that this leaves users unaware of how the system came to any decision] and is working on Explainable Artificial Intelligence so future AI will be able to explain the rationale behind each decision,” says Classon.

In other words, the AI itself should be able to explain its decisions, opening up a conversation about how it reached them.
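As a toy illustration of the idea, assume a simple linear scoring model whose weights are invented for this example. An “explainable” version of it reports not just a score but each feature’s contribution to that score:

```python
# Toy sketch of a per-decision explanation for a linear scoring model.
# The weights and the candidate are invented; this is not any vendor's API.
weights = {"years_experience": 0.8, "mentions_womens": -1.2}
candidate = {"years_experience": 5.0, "mentions_womens": 1.0}

# Each feature's contribution to the score is its weight times its value.
contributions = {f: weights[f] * candidate[f] for f in weights}
score = sum(contributions.values())

print("decision score:", round(score, 2))
for feature, value in contributions.items():
    print(f"  {feature}: {value:+.2f}")  # the "why" behind the score
```

Even this crude breakdown would immediately surface the negative weight attached to a gendered feature, which is exactly the kind of conversation explainability is meant to start.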

Routine Audits

 “An auditing approach can help mitigate the effect of inherent and unconscious biases. The machine learning and AI communities have an obligation to contribute to the larger data science,” Classon says, pointing me to one open source tool for bias testing that already exists.
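To give a sense of what such an audit checks, here is a hand-rolled sketch using invented model outputs. (Classon’s unnamed open-source tool, and others like it, automate many more metrics than this one.)

```python
# Hand-rolled bias audit on hypothetical model decisions.
import numpy as np

# Invented audit data: each candidate's group and the model's yes/no decision.
group = np.array(["women", "men", "women", "men", "men", "women", "men", "women"])
selected = np.array([0, 1, 0, 1, 1, 1, 0, 0])

# Selection rate per group.
rates = {g: selected[group == g].mean() for g in np.unique(group)}
print("selection rates:", rates)

# Disparate impact: ratio of the lowest selection rate to the highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f} -> {'FLAG' if ratio < 0.8 else 'ok'}")
```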

Nor is this solely an engineering problem: innovative AI companies are responsible for auditing their own work and developing standards that can improve the entire industry. One example is Microsoft’s multidisciplinary FATE group. The acronym stands for “fairness, accountability, transparency and ethics,” and the group is examining how an ethical AI could work.

A Feedback Loop

Instituting audits and transparency is a step, but we’ll need to keep taking more and more steps to reduce a problem that will likely never be fully solved.

“The biggest way to ensure a fair model would be to require a large number of users providing feedback from diverse backgrounds and perspectives,” Pamela Wickersham, senior global partner solutions manager at Docusign + SpringCM, tells me. “It’s unfortunately never going to be perfect, since user input and decision making will always be required. All models need a constant feedback loop, so that if sexism or prejudiced data is found to be prevalent, the model can be re-trained.”
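A stripped-down sketch of that loop, using invented data and a hypothetical `flagged_reviews` feedback channel, might look like this:

```python
# Minimal sketch of a feedback-and-retrain loop. The data and the feedback
# channel are hypothetical; this is one way such a loop could be wired up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Starting point: a model trained on a small, invented data set.
X_train = np.array([[5.0, 0.0], [2.0, 1.0], [6.0, 0.0], [3.0, 1.0]])
y_train = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

# Users flag decisions they believe were prejudiced and supply corrected labels.
flagged_reviews = [([4.0, 1.0], 1), ([5.5, 1.0], 1)]

if flagged_reviews:
    X_new = np.array([features for features, _ in flagged_reviews])
    y_new = np.array([label for _, label in flagged_reviews])
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])
    # Re-train on the corrected data so the flagged pattern doesn't persist.
    model = LogisticRegression().fit(X_train, y_train)
```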

Track Down the Subtleties

So, find the issues and you’ll solve the problem, right? Not so fast.

Let’s go back to our most recent example of AI bias. Amazon was likely aware of the potential sexism problem and still couldn’t correct it.

“It appears like Amazon actively tried to address the bias in the dataset by removing features that would identify women. So, in theory, the output would be gender-neutral and select the best candidates,” Ben says. “However, machine learning models can key in on subtle differences, and what ended up happening is that the model basically found other, subtler ways to identify women and the bias returned in the output.”
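Here is a hedged illustration of that proxy effect, again on invented data: the explicit gender column is withheld from training, but a correlated stand-in feature lets the bias back in.

```python
# Hypothetical demonstration of proxy bias: the gender column is removed,
# yet a correlated feature still lets the model reconstruct the old bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

is_woman = rng.integers(0, 2, n)
experience = rng.normal(5, 2, n)
# Invented proxy: a flag only women carry, e.g. a women's-college keyword.
womens_college = (is_woman & (rng.random(n) < 0.6)).astype(int)

# Historical, biased hiring labels (depend on gender, as in the earlier sketch).
hired = (experience + rng.normal(0, 1, n) - 1.5 * is_woman) > 4.5

# Train WITHOUT the explicit gender column...
X = np.column_stack([experience, womens_college])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# ...and the proxy feature still picks up a clearly negative weight.
print(dict(zip(["experience", "womens_college"], model.coef_[0].round(2))))
```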

Even “highly reliable algorithms,” Ben notes, can turn up the wrong answer. Picture a model that’s 99 percent accurate reviewing one million records. It’ll get 10,000 incorrect results, and it won’t be clear which ones those are.

One method to address bias is to adjust a model’s goals rather than the data. Data scientists and platform vendors, Ben says, are actively working to fight fire with fire. One demo from a major machine learning vendor at the Gartner ITXPO earlier this month debuted a model designed to monitor other machine-learning models, identifying their biases.

“In a bit of irony,” Ben says, “the platform uses its own machine learning to generate the changes for the biased machine-learning models. Tools like this platform are what will ultimately help us address the bias that shows up in data when we accurately represent our current society and history. The problems here are hard, but they are tractable.” 

Where Next For AI?

The bottom line? AI’s reflection of our biases can be addressed, but only through a slow process that emphasizes feedback, audits, fact-checks, and transparency.

It’s not unlike how prejudices must be addressed across the rest of society, too.

Written by:
Adam is a writer at Tech.co and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.