To truly target hate speech, moderation must go beyond civility

Many Americans decry the decline in civility online, and platforms generally respond by banning profane language. Critics of that approach say a focus on “civility” alone is dangerous and that such thinking helps fuel the white supremacist movement, especially on social media.

They’re right.

Big Tech errs in treating content moderation as mere content matching. A focus on polite speech diverts attention from the substance of what white supremacists are saying and redirects it to tone. When content moderation relies too heavily on profanity detection, it ignores how hate speech targets people who have been historically discriminated against, and it neglects the underlying purpose of hate speech – to punish, humiliate and control marginalized groups.

Prioritizing civility online has not only allowed civil but hateful speech to thrive; it has normalized white supremacy. Most platforms analyze large bodies of speech containing small amounts of hate rather than known samples of extremist speech – a technological limitation. But platforms also fail to recognize that this white supremacist discourse, even when it is not directly used to harass, is hate speech – a political problem.

My team at the University of Michigan used machine learning to identify patterns in white supremacy discourse that can be used to improve platforms’ detection and moderation systems. We set out to teach algorithms to distinguish white supremacist discourse from general discourse on social media.

Our study, published by the Anti-Defamation League (ADL), finds that white supremacists avoid profane language while spreading hatred, weaponizing civility against marginalized groups (especially Jews, immigrants and people of color). Automated moderation systems that conflate hate with vulgar and toxic language therefore miss most white supremacist speech. Instead of relying on profanity cues, we analyzed how extremists differentiate and exclude racial, religious and sexual minorities.

White supremacists, for example, frequently center their whiteness by adding “white” to many terms (white children, white women, the white race). Generic keyword searches and automated detection do not reveal these language patterns. By specifically analyzing known samples of white supremacist speech, we were able to detect such speech – sentiments such as “we should protect white children” or accusations that others, especially Jews, are “anti-white.”
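
As a concrete illustration, a first pass at this kind of phrase-pattern analysis takes only a few lines of Python. This is a minimal sketch, not the study’s code: the posts below are invented placeholders, and a real analysis would compare the counts against a baseline corpus (such as r/all) to find phrases that are distinctive to the extremist sample rather than merely frequent.

```python
import re
from collections import Counter

# Toy posts standing in for a labeled extremist sample (placeholders only).
posts = [
    "we should protect white children",
    "they are anti-white and always have been",
    "the white race must be preserved",
]

white_bigrams = Counter()  # phrases of the form "white <word>"
anti_white_hits = 0        # mentions of the "anti-white" accusation

for post in posts:
    tokens = re.findall(r"[a-z-]+", post.lower())
    anti_white_hits += tokens.count("anti-white")
    for first, second in zip(tokens, tokens[1:]):
        if first == "white":
            white_bigrams[f"white {second}"] += 1

print(white_bigrams.most_common(5))
print("anti-white mentions:", anti_white_hits)
```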

Extremists are active on multiple social media platforms and quickly recreate their networks after being caught and banned. White supremacy, according to sociologist Jessie Daniels, is “algorithmically amplified, accelerated, and disseminated through networks to other white ethnonationalist movements around the world, ignored all the while by a tech industry that ‘doesn’t see race’ in the tools it creates.”

Our team developed computational tools to detect white supremacist speech on three platforms from 2016 to 2020. Despite its outsized harms, hate speech represents only a small proportion of the vast amount of speech online. That scarcity makes it difficult for machine learning systems built on large language models, which are trained on large samples of general online speech, to recognize hate speech. We therefore turned to a known source of explicit white supremacist discourse: the far-right white nationalist website Stormfront. We collected 275,000 posts from Stormfront and compared them to two other samples: tweets from a census of “alt-right” accounts and typical social media discourse from Reddit’s r/all (a collection of discussions across Reddit). We trained algorithms to study the sentence structure of posts, identify specific phrases, and spot broad, recurring themes and topics.
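
The contrastive setup this describes can be sketched with free, off-the-shelf Python tools. The snippet below is a hedged illustration, not the study’s published pipeline: the corpora are placeholder strings, and scikit-learn’s TF-IDF features plus logistic regression stand in for whatever models the team actually trained.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder corpora; in the study these were Stormfront posts,
# "alt-right" tweets, and posts sampled from Reddit's r/all.
extremist_posts = [f"placeholder extremist sample post {i}" for i in range(50)]
general_posts = [f"placeholder general sample post {i}" for i in range(50)]

texts = extremist_posts + general_posts
labels = [1] * len(extremist_posts) + [0] * len(general_posts)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0
)

# Word and bigram features capture short phrases ("white children",
# "anti-white") that single-keyword filters miss.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```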

White supremacists were surprisingly polite across all platforms and settings. In addition to adding “white” to many words, they often referred to racial or ethnic groups with plural nouns (blacks, whites, Jews, gays). They also racialized Jews through their speech patterns, presenting them as racially inferior and as appropriate targets for violence and erasure. Their conversations about Jews overlapped with their conversations about race, but not with their conversations about church and religion.

White supremacists spoke frequently about the decline of white people, conspiracy theories about Jews and Jewish power, and pro-Trump messaging. The specific topics they discussed changed over time, but those broader grievances did not. Automated detection systems should search for these themes rather than for specific terms.
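
One way to operationalize theme-level detection, sketched here under assumptions, is an unsupervised topic model: fit it on a monitored corpus, have human reviewers map topics to the recurring grievance themes, and flag posts dominated by those topics rather than posts matching exact terms. The corpus, topic indices, and flagging rule below are all illustrative placeholders, not outputs of the study.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus; a real run would use the monitored platform's posts.
corpus = [f"placeholder post {i} about demographics power politics media"
          for i in range(40)]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # per-post topic mixtures

# Suppose human review mapped topics 1 and 3 to the recurring grievance
# themes; flag posts whose dominant topic falls in that set.
FLAGGED_TOPICS = {1, 3}
flags = [row.argmax() in FLAGGED_TOPICS for row in doc_topics]
print(f"{sum(flags)} of {len(corpus)} posts flagged by theme")
```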

White supremacist discourse does not always involve explicit attacks on others. The white supremacists in our study were just as likely to use distinctive speech to signal their identity to others, to recruit and radicalize, and to build solidarity within the group. Branding one’s speech as white supremacist, for example, may be necessary for inclusion in these extremist online spaces and communities.

Platforms claim that large-scale content moderation is too difficult and expensive, but our team detected white supremacist discourse with affordable tools available to most researchers – far cheaper than the resources available to the platforms themselves. By “affordable” we mean laptops, central computing resources provided by our university, and open-source Python code available for free.

Once white supremacists enter online spaces – as in offline spaces – they threaten the safety of already marginalized groups and those groups’ ability to participate in public life. Content moderation should focus on proportionality: hateful speech falls hardest on people who are already structurally disadvantaged, compounding the harm. Treating all offensive language equally disregards the inequalities that underpin American society.

Ultimately, research shows that social media platforms would do well to focus less on politeness and more on justice and fairness. To hell with civility.

Libby Hemphill is an associate professor at the University of Michigan School of Information and the Institute for Social Research.
