
Amnesty International Study Highlights Misogyny and Racism on Twitter

December 20, 2018, 7:24 am

A groundbreaking study by human rights group Amnesty International has revealed the scale of online abuse directed at women on Twitter. The huge piece of research also found that women of color were disproportionately likely to be targeted for abuse.

The study, which sorted through over 200,000 tweets sent in 2017 to 778 British and American female journalists and politicians, found damning evidence that Twitter has an endemic problem with misogynistic and racist abuse. In particular, the study revealed that black women are 84% more likely to be abused on Twitter than white women.

Amnesty International, working with Element AI, a global AI software company, discovered that over a million abusive or problematic tweets were sent to the women in the study over the course of 2017 — roughly one every 30 seconds.
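The averaging behind "one every 30 seconds" is easy to verify. Taking "over a million" as roughly 1.1 million for illustration, and spreading those tweets evenly over the seconds in 2017:

```python
# Rough sanity check of the "one every 30 seconds" figure,
# assuming ~1.1 million abusive or problematic tweets across 2017.
seconds_in_2017 = 365 * 24 * 60 * 60   # 31,536,000 seconds
tweets = 1_100_000

average_gap = seconds_in_2017 / tweets  # average seconds between tweets
print(round(average_gap, 1))            # prints 28.7
```

At roughly 29 seconds per tweet, the article's "one every 30 seconds" holds up as a fair approximation.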

Amnesty Report on Twitter Abuse: Key Findings

The Amnesty International study analyzed 228,000 tweets for signs of abusive messages (tweets that promote violence based on race, ethnicity, national origin, sexual orientation, gender, religion, age, disability or serious disease) or problematic messages (tweets that are hurtful or hostile, especially if repeated on multiple occasions, though not as intense as abusive tweets), as defined by Twitter's own guidelines.


One of Amnesty International’s examples of an abusive tweet


One of Amnesty International’s examples of a problematic tweet

Some of the key findings in the Amnesty study include:

  • Black women were 84% more likely than white women to be mentioned in abusive or problematic tweets.
  • Women of color were 34% more likely to receive abusive or problematic tweets than white women.
  • Online abuse cut across the political spectrum, with both right- and left-leaning women receiving broadly similar levels of abuse.

Amnesty International survey charts: abusive or problematic mentions broken down by ethnicity and by political affiliation

How Was the Study Conducted?

The study was conducted by over 6,500 volunteers from 150 countries. These volunteers sorted through some 228,000 tweets, marking offensive ones as abusive or problematic. Collectively, these volunteers, known as the ‘Troll Patrol’, dedicated 2,500 hours to analyzing the tweets — the equivalent of one person working full-time for around a year and a half.

Milena Marin, Senior Adviser for Tactical Research at Amnesty International, said:

“With the help of technical experts and thousands of volunteers, we have built the world’s largest crowdsourced dataset about online abuse against women. Troll Patrol means we have the data to back up what women have long been telling us – that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked.

“Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices.”

Amnesty International also published a selection of the tweets used in the study. The idea of receiving this sort of abuse every 30 seconds is sobering, to say the least.

Why Did Amnesty International Run This Study?

Amnesty International has been calling on Twitter to clean up its platform and be more transparent about how it moderates abuse.

The group also wanted to test accusations that Twitter is a partisan platform that censors users of a particular political persuasion. The study’s findings instead confirmed Amnesty International’s belief that most abuse is racist or misogynistic, rather than political.

Politicians included in the sample came from across the US and UK political spectrums. The journalists included were from a diverse range of US and UK publications, including The Daily Mail, The New York Times, The Guardian, The Sun, gal-dem, Pink News and Breitbart.

Element AI, for its part, is using the study’s dataset to help construct a machine-learning model that could automatically detect abusive tweets.
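Element AI has not published its model, but the basic idea is that a large set of human-labeled tweets (like the Troll Patrol dataset) can train a supervised text classifier. Purely as an illustrative sketch, here is a toy Naive Bayes classifier; all tweets and labels below are invented examples, not data from the study:

```python
# Toy sketch of a supervised abuse classifier trained on labeled text.
# Naive Bayes over a bag of words, with add-one smoothing.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        # Per-class word frequencies and per-class example counts
        self.word_counts = {"abusive": Counter(), "ok": Counter()}
        self.class_counts = Counter()

    def train(self, examples):
        for text, label in examples:
            self.class_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        vocab = len(set(self.word_counts["abusive"]) | set(self.word_counts["ok"]))
        total_examples = sum(self.class_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # log prior + smoothed log likelihood of each token
            score = math.log(self.class_counts[label] / total_examples)
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training data, for illustration only
examples = [
    ("you are worthless get off this site", "abusive"),
    ("nobody wants you here shut up", "abusive"),
    ("great reporting thanks for sharing", "ok"),
    ("interesting thread looking forward to more", "ok"),
]
clf = NaiveBayes()
clf.train(examples)
print(clf.predict("shut up nobody wants your reporting"))  # prints "abusive"
```

A production system would use far richer features and a much larger labeled corpus, but the principle is the same: the quality of the human labels, which is exactly what the Troll Patrol volunteers provided, determines what the model can learn.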

However, Amnesty International made it clear that its data-gathering and analysis was not an attempt to police Twitter or force it to remove content:

“We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on”.

In response, Twitter requested clarification on the definition of “problematic”. It argued that this was “in accordance with the need to protect free expression and ensure policies are clearly and narrowly drafted.”

Yet again, Jack Dorsey’s blind deference to ‘free speech’ makes Twitter seem woefully out of touch.

Read more about social media on Tech.co


Tom Fogden is a writer for Tech.co covering everything from website builders to mobile phones. He also loves soccer, probably too much.