During her nine years on Twitter, Imani Gandy has received a “litany of racist and sexist slurs” from “anonymous keyboard warriors whose only purpose on Twitter is to disrupt, harass, and abuse.”
Gandy, senior legal analyst at Rewire.News, has had sexist slurs hurled at her and been called “nigger” by trolls so many times that it “barely registers as an insult anymore,” she wrote in an article titled “#TwitterFail: Twitter’s Refusal to Handle Online Stalkers, Abusers, and Haters.”
Gandy is not alone. A unique crowdsourced Twitter study involving more than 6,500 volunteers across 150 countries reveals a shocking scale of online abuse against women, according to Amnesty Decoders’ Troll Patrol project.
The findings show that an abusive or problematic tweet was sent every 30 seconds. This amounts to 1.1 million tweets mentioning 778 women politicians and journalists in the United Kingdom and the United States in 2017.
The study defines problematic content as content that is hurtful or hostile, especially when it targets an individual repeatedly, but that does not rise to the level of an abusive tweet.
Troll Patrol, the world’s largest crowdsourced data set about online abuse against women, shows what women have said for years—“that Twitter is a place where racism, misogyny, and homophobia are allowed to flourish basically unchecked,” Milena Marin, senior advisor for tactical research at Amnesty International, said in a statement.
The data shows Black women were 84 percent more likely than white women to be mentioned in abusive or problematic tweets. One in ten tweets mentioning Black women was abusive or problematic, compared to one in 15 for white women. Women of color overall were 34 percent more likely than white women to be targeted.
While Black women received more abusive tweets than white women, Latinx women were more likely to receive threats of physical violence; Asian women faced more ethnic, racial, and religious slurs; and mixed-race women faced abuse across all categories, including sexism, racism, and physical and sexual threats, the study found.
Like other women in the media world, Katelyn Burns, Rewire.News’ federal policy reporter, has faced “day-to-day harassment” from Twitter users.
“That usually alternates from misogynistic comments about my period or that I’ll never attract a man with my attitude if they don’t read my profile as trans, to misgendering and comments about my ‘manly’ appearance in pictures if they do clock me as trans,” said Burns, who identifies as trans.
“I also get bombarded with direct messages from men who want to sleep with me but those are usually sprinkled with DMs from men who think I’m a degenerate who deserves to be dead. Sometimes I get both from the same guy,” she said. “I’m sort of desensitized to it all by now. Those comments don’t penetrate my emotional armor anymore.”
Many women have become similarly desensitized and simply ignore these tweets. Some block the perpetrators. Others scale back their presence on the platform or quit it altogether.
“But ‘just block them’ is not a solution for those of us who are relentlessly assailed by Twitter users who have nothing better to do than to slap us in the face with their hate-filled filth, and who, upon account suspension, create new accounts in order to continue their campaign of harassment,” wrote Gandy, who uses apps like Block Together to share a list of blocked users (over 5,000 users for her) with other Twitter users (and vice versa) “so that I can avoid those users that people whom I trust have already deemed unworthy of my time.”
“My advice for others who face this is to find a way to come to terms with the fact that anything you put out on the internet could be weaponized and turned against you. Your selfies, your relationships, your family, your address, anything they can get their hands on. Be very, very careful,” said Burns, who has become much more private about her personal life.
Many, like Gandy, have pointed out Twitter’s lack of responsibility in dealing with this form of online abuse against women. Twitter’s own diversity data shows the company is failing on this front: it continues to be largely run by white or Asian men. “Underrepresented minorities” (nonwhite and non-Asian employees) make up 12.5 percent of the company’s total workforce, up from 11 percent in 2016, Recode reported this year.
“And because, for the most part, straight white guys don’t endure the same level of online harassment that women do, they simply cannot understand what it’s like to be on the receiving end of relentless abuse,” Gandy wrote in her 2014 piece.
The Amnesty study released Tuesday points out that abusive content violates Twitter’s rules, which prohibit tweets that promote violence against or threaten people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
Amnesty International has repeatedly asked Twitter to publish data regarding the scale and nature of abuse on its platform, but the company has failed to do so. The study was shared with Twitter, and the company reportedly asked for clarification on the definition of problematic tweets, “in accordance with the need to protect free expression and ensure policies are clearly and narrowly drafted.”
“Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” Marin said. “Troll Patrol isn’t about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on.”