The GW Hatchet
Serving the GW Community since 1904


Research team sets recommendations for removing online hate groups

Courtesy of Neil Johnson

An interdisciplinary research team released four policy suggestions for how social media companies can counteract the spread of hate groups on their platforms.

Neil Johnson, a professor of physics and the lead author of the study, collaborated with seven researchers from GW and the University of Miami to track online hate groups – or “hate clusters” – and outline steps social media companies can take to mitigate online hate. Researchers and extremism experts said social media companies must enforce stricter standards for content moderation to prevent online hate from manifesting into real-world violence.

Johnson said hate clusters relocate to other social media sites, change geographic locations and translate their posts into other languages to avoid detection by social media content moderators. He said some hate groups, like the Ku Klux Klan, render their names in Cyrillic script – the writing system used in many Slavic languages – to evade detection on platforms like Facebook.

“They will follow Darwin to the letter in the sense that they will adapt, they will mutate and they will do anything they can to just keep driving,” he said.

He said the team primarily examined the social media platforms Facebook and VKontakte – a social media site popular in Eastern Europe – for the study and mapped the spread of online hate clusters. The research was affiliated with the Institute for Data, Democracy, and Politics and was mostly unfunded, though it received a small amount of funding from the U.S. Air Force to support the team's mathematical analysis, Johnson said.

Johnson said social media companies should focus on removing smaller hate clusters from their platforms because the influence of less dominant hate groups will grow over time. He said dampening the influence of larger hate groups – like the KKK – is more difficult because the groups’ leaders are “powerful” people who have the resources to sue social media companies for infringing upon their right to free speech.

He added that social media companies should randomly ban individual members of hate clusters to weaken the overall cluster and to avoid making hate group leaders into “martyrs” for the organizations.
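The banning strategy Johnson describes can be illustrated with a toy network simulation. The sketch below is hypothetical and not code from the study; the cluster size, ban fraction and use of a random graph are illustrative assumptions.

```python
import random
import networkx as nx

# Toy model: a "hate cluster" as a loosely connected social graph.
# All parameters here are illustrative assumptions, not values from the study.
random.seed(42)
cluster = nx.erdos_renyi_graph(n=100, p=0.05, seed=42)

def random_ban(graph, fraction=0.1):
    """Remove a random fraction of members, per the strategy Johnson describes:
    banning scattered individuals weakens the cluster without creating the
    high-profile 'martyr' that banning a leader might."""
    banned = random.sample(list(graph.nodes), int(fraction * graph.number_of_nodes()))
    graph.remove_nodes_from(banned)
    return banned

print("Largest component before:", len(max(nx.connected_components(cluster), key=len)))
random_ban(cluster, fraction=0.1)
print("Largest component after:", len(max(nx.connected_components(cluster), key=len)))
```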

Johnson said the companies can also adjust their content algorithms to ensure members of hate groups come into contact with anti-hate groups in the form of a group suggestion. He said exposure to the content that anti-hate groups post can neutralize the extremist opinions of members of hate groups.
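The algorithmic adjustment Johnson describes – surfacing anti-hate groups as suggestions to members of hate clusters – might be prototyped along these lines. This is a hypothetical sketch: the group labels, data structures and suggestion rule are invented for illustration and do not reflect any platform's actual recommender.

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    is_hate_cluster: bool = False  # flagged by moderators (assumed labeling)
    counters_hate: bool = False    # posts anti-hate content (assumed labeling)

@dataclass
class Member:
    name: str
    groups: list = field(default_factory=list)

def suggest_groups(member, all_groups):
    """If a user belongs to any flagged hate cluster, weight counter-speech
    groups into their suggestions, per the intervention Johnson describes."""
    if any(g.is_hate_cluster for g in member.groups):
        return [g for g in all_groups if g.counters_hate]
    return [g for g in all_groups if g not in member.groups]

# Illustrative data (hypothetical names):
hate = Group("flagged_cluster", is_hate_cluster=True)
counter = Group("community_bridge_builders", counters_hate=True)
user = Member("example_user", groups=[hate])
print([g.name for g in suggest_groups(user, [hate, counter])])
# -> ['community_bridge_builders']
```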

Because most hate groups entertain differing social and political ideologies, engaging them with one another can prompt members to question their worldviews, he said. Johnson said that when hate group members are presented with ideas that differ from their organization's ideology, they are more likely to lose interest in hateful causes.

“Just as Facebook spends a lot of time promoting that they’re bringing people together that have similar likes, what about making suggestions to some of these groups that look like they’re hate groups, make suggestions to one that’s kind of opposite?” he said.

Johnson said the prevalence of online hate is one of the most “pressing” issues facing society because of the mental “suffering” incurred by the targets of online hate and bullying. He said the research is especially relevant in light of recent shootings in Gilroy, Calif.; El Paso, Texas; and Dayton, Ohio as public concern over the relationship between online hate and violence grows.

“Hate destroys lives,” Johnson said.

Extremism experts said social media platforms should enforce stricter standards for removing hateful content from their websites to ensure online hate does not spark real-world violence.

Vegas Tenold, an investigative researcher in the Anti-Defamation League’s Center on Extremism, said social media platforms are the “primary” organizational tool for far-right white supremacist and neo-Nazi groups. Tenold said hate groups thrive online because social media companies’ algorithms bring users with similar interests together to increase engagement within the platforms.

“Social media is designed to push you ever further into whatever interests you have, so they need to look at the way that that propagates and reinforces extremism and racism,” he said.

Tenold added that social media platforms are "extremely hesitant" to remove hate speech because companies want to avoid the perception that they're infringing on users' right to free speech. He said many social media companies currently rely on artificial intelligence and user reports to flag hate speech on their platforms.

Kristine Hoover, the director of the Institute for Hate Studies at Gonzaga University, said constant exposure to hateful online content amplifies users' latent resentment against racial, religious and other groups, which can ultimately lead members of online hate groups to commit acts of violence.

“It provides an avenue for people to begin moving through that pyramid of hate,” she said.

Peter Simi, an associate professor of sociology at Chapman University who studies extremist groups, said social media platforms serve as mediums for hate groups to provoke and promote violence against targeted groups. He said law enforcement officers must also be involved in tracking online hate to ensure that planned acts of violence do not come to fruition.

“It may or may not take on a specific nature, but it’s intended to inform or influence offline activity or behavior,” Simi said.

Ed Prestera contributed reporting.
