The Daily Stormer’s unceremonious booting from large swathes of the internet has made plenty of headlines; tech companies, the story goes, are “joining the resistance.” Silicon Valley is conducting a “Nazi purge,” and Charlottesville is “reshaping the fight against online hate.”
But the demise of this hateful website has also raised a new debate about an old problem: Silicon Valley’s control of our online speech.
Companies like Facebook and Twitter have been making hard decisions about hate speech for a long time. These platforms, as well as web-hosting companies and other intermediaries, are not governed by the First Amendment. Instead, they operate under 47 U.S.C. § 230, known colloquially as “CDA 230,” which gives them immunity from liability for most of the content they host and leaves them free to host (or not host) whatever they want.
Those rights are important, but they also come with great responsibility. And I believe these companies are failing to live up to that responsibility.
The truth is companies get these decisions wrong a lot of the time. And because they’re not transparent about how their rules are enforced or about how much content is taken down, we only hear about the bad decisions when they make headlines. That is happening increasingly often these days, as those in media circles take more interest in the issue.
Just this summer, Facebook used its hate speech policies to censor queer artists and activists for using words like “dyke” and “fag”; Twitter booted several leftist activists, apparently for engaging in uncivil counterspeech; and YouTube’s algorithms deleted masses of videos from the Syrian civil war that activists had archived for use in war crimes investigations.
This is nothing new. Over the years, I’ve watched as Silicon Valley companies have made globally important decisions that have stirred less debate than this week’s Daily Stormer episode. Last year, when Twitter boasted that it had deleted 235,000 “terrorism-related” accounts from its service, hardly anyone blinked. But in that case, as in this one, we need to ensure that these companies are accountable to their users, and that people have a path of recourse when they are wronged.
I’m not so worried about companies censoring Nazis, but I am worried about what that censorship means for everyone else. I’m worried about the unelected bros of Silicon Valley being the judge and jury, and thinking that mere censorship solves the problem. I’m worried that, just like Cloudflare CEO Matthew Prince woke up one morning and decided he’d had enough of the Daily Stormer, some other CEO might wake up and do the same for Black Lives Matter or antifa. I’m worried that we’re not thinking about this problem holistically.
In the case of the Daily Stormer, companies were undoubtedly very aware of the site’s presence on their platforms and made not just a moral decision, but a business one as well. But that’s not how content moderation typically works: In most instances, companies rely on their users to report one another. The reports enter a queue that is then moderated either by humans — often low-wage workers abroad whose job requires them to look at horrible images so you don’t have to — or by algorithms. A decision is made and the content is either left up or removed.
Some platforms, like Facebook, mete out punishment to their users, temporarily suspending them for up to 30 days; others may boot users for their first or second infraction. Users are only able to appeal these corporate decisions in certain circumstances.
How comfortable you are with this kind of setup depends on your view of speech and who should police it. There are different kinds of free speech advocates — some believe that a pluralistic, democratic society is nothing without freedom of expression, and that we must protect the rights of all if we want to protect the most vulnerable. The “slippery slope” argument is popular, although it’s not always convincing.
There are others who will fight to the death for your right to be a hateful Nazi, although I’m not one of them.
Rather, my belief in the need for freedom of expression has strengthened over time as I’ve watched governments and corporations restrict the speech rights of vulnerable groups, and questioned their ability — and sometimes their intent — to get it right. I’m also deeply unconvinced that censorship, especially when decoupled from education, is an effective means of change. Finally, even if one platform bans a user, another might be there to welcome them with open arms.
Cloudflare’s CEO, fresh from making what he admits was a dangerously arbitrary decision to cut off a neo-Nazi site, says we need to have a conversation about this, with clear rules and clear frameworks. I agree. What we need is for companies to choose openness, transparency, and due process.
In practice, that means giving experts and the public greater input into speech policies and their implementation — something that, right now, companies only do behind the secrecy of a nondisclosure agreement, if at all.
It means offering users a way to consent to policies and policy changes, and being open about how rules are made and enforced. And it means ensuring that every user has the ability to appeal decisions made against them. Even assholes.
Source: “Silicon Valley’s ‘Nazi Purge’ Shows Who Really Controls Our Online Speech,” BuzzFeed