The crackdown on alleged hate speech is intensifying as social media platforms either expand their policies or step up enforcement of their terms of service.
Reddit banned a slate of communities as part of a crackdown on what it deemed hate speech, including The_Donald as well as the subreddit for the leftist podcast Chapo Trap House. Twitch temporarily suspended President Trump's channel. Facebook banned a "boogaloo" group (part of a loose affiliation of anti-government forces that push for a second civil war), citing its promotion of violence. And YouTube banned a group of far-right content creators, including white nationalists such as David Duke.
The actions seem spurred by a variety of factors, including rising internal pressure from tech employees, Twitter's recent enforcement actions against President Trump and growing advertiser boycotts. The moves ratchet up the volume on a longstanding debate and raise important questions about free speech in the modern internet era, including what constitutes hate speech, whether platforms are obligated to allow hateful content and, most of all, who should get to make decisions about the nature of content.
“I defend the companies’ power and right to make these business decisions, as I defend the right of individuals and organizations to ‘pressure’ them to do so,” said Nadine Strossen, a law professor at New York Law School and the former president of the American Civil Liberties Union (ACLU), in an email.
But she is convinced that any speech restrictions going beyond what is consistent with the U.S. Constitution’s First Amendment and international human rights principles will be at best ineffective and at worst counterproductive.
The application of social media companies’ standards may not mitigate the potential harms of the speech at issue, according to Strossen. The standards describing the targeted speech are overly vague and broad, giving full discretion to those who enforce them, she said. That discretion means enforcers will apply the rules in accordance with their personal views, which may result in speech from minority voices being disproportionately censored, she said.
This has happened before, when platforms such as Instagram removed content they deemed “inappropriate.” Facebook reportedly trained its moderators to take down curses, slurs and calls for violence against “protected categories” such as white males, but not against subsets of those categories, such as black children or female drivers. Facebook’s formulaic approach to what qualified as a protected category is what allowed some vulnerable groups to fall through the cracks.