
Social Media Platforms’ Anti-Hate Efforts Inch Ahead

Google aspires “to organize the world’s information and make it universally accessible and useful.” Facebook’s mission is to “make the world more open and connected.” Twitter, while occasionally billing itself as “the free speech wing of the free speech party,” prohibits “hateful conduct,” telling users: “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.”

In December, all three companies, together with Microsoft, announced an ambitious plan to curb the online spread of extremism through the creation of a joint industry database of “content that promotes terrorism” — a move that has been widely interpreted to mean that the companies will focus on propaganda created by jihadist terrorist groups like the Islamic State.

Yet when it comes to tackling right-wing extremist content and hate speech, the companies seem mysteriously helpless.

Twitter is probably the worst offender: Despite explicitly banning “accounts whose primary purpose is inciting harm towards others on the basis of these categories,” it has done remarkably little in the way of enforcement. In late November, it won applause from progressives when it strengthened reporting tools and banned numerous “alternative right” accounts that pumped out racist, anti-Semitic, anti-Muslim and misogynistic content, most notably that of prominent white nationalist leader Richard Spencer.

But in December, Spencer’s account was back up, and it emerged that Twitter had only banned him for violating a policy that prohibits individuals from running multiple accounts with overlapping uses. (The social media platform has not restored the account of Milo Yiannopoulos, an Alt-Right “troll” whose vicious online attacks even Spencer acknowledged as “harassment.”)
Neo-Nazi Andrew Anglin has led his “Troll Army” in some of the more vicious online attacks seen in recent years. But he is far from alone among haters who are increasingly using social media to intimidate and harass their enemies.

Facebook and Google, meanwhile, have done little to counter the use of their platforms to spread hateful, false “information,” from conspiracy theories accusing various minority groups of plotting against America to websites promoting Holocaust denial and false “facts” about Islam, LGBT people, women, Mexicans and others. Facebook’s hate speech policy, as leaked to the German newspaper Süddeutsche Zeitung, prohibits attacks on individuals based on their race, national origin, religion, or sexual orientation, but seemingly permits attacks on religions and nationalities more generally. And a recent report by The Guardian found that Google’s autocomplete function, which predicts what people mean to ask when they start typing, filled in “evil” after a user typed “Are Jews” and “Are women.”

Google altered these particular autocomplete answers following The Guardian’s article, and appeared to take steps to ensure that the top results no longer included hate sites. But if the past is any guide, these Whac-A-Mole efforts are unlikely to significantly reduce the spread of hate online.

As Elon University communications professor Jonathan Albright, who studies the spread of online hate, told The Guardian, promoters of hateful false information “have created a web that is bleeding through on to our web. This isn’t a conspiracy. There isn’t one person who’s created this. It’s a vast system of hundreds of different sites that are using all the same tricks that all websites use. They’re sending out thousands of links to other sites and together this has created a vast satellite system of rightwing news and propaganda that has completely surrounded the mainstream media system … like an organism that is growing and getting stronger all the time.”

Indeed. Even as Reddit, a major platform for online hate, announced that it would ban some of its most vicious users, Andrew Anglin, the neo-Nazi editor of the viciously racist Daily Stormer website, was rallying his followers for their next attack. Urging them to create “fake black person accounts,” he wrote: “We wish to create a state of chaos on twitter, among the black twitter population, by sowing distrust and suspicion. … Chaos is the name of the game.”