Last week, Facebook announced that it would ban from its platform any content that denies or distorts the Holocaust. The move takes Facebook one more step down the road to curbing misinformation.
But this precedent is worrisome for two reasons. First, it’s a new kind of censorship—one of ideas rather than concepts—and second, the censor is a corporation, not a government. Both are deeply problematic.
On the first, social media platforms have long censored certain concepts. Terms like ‘White Supremacy’ and racial, ethnic, or religious slurs are verboten. Bots constantly scan for words on a banned list and quickly delete posts containing the linguistic contraband. Professors in a good college classroom would do the same. While students should be free to speak and express ideas, some ways of doing so are inimical to the goals of a classroom. It’s not problematic to ask students to refrain from using racial slurs or hateful language.
But the new Facebook ban isn’t one of concepts. It’s one of ideas. No one is welcome to express content like “the total number of deaths in the Holocaust is lower than widely believed.” Telling people what they can say is much different from telling people how they must say it. Imagine a college professor telling students from the outset that there are some ideas that they will not be allowed to mention or defend in class no matter how careful their wording, evidence, or approach. That’s a very different kind of censorship.
There are two problems with this escalation from concepts to content. First, as John Stuart Mill and others pointed out, humans can’t just tell at a glance whether something is true. As Kathryn Schulz puts it in a recent TED talk, being right feels just like being wrong. Instead, we use reasons, experience, and evidence to build a case that sorts the true from the false. Those cases are rarely airtight. Yet administrators at Facebook make the call for everyone on where the best evidence lies. Those same administrators in 1550 would have thought that the earth was the center of the universe (look around; what could be more obvious?) and that European people were more intelligent, on average, than others. Thankfully for Galileo and the humanists, Facebook wasn’t around back then to adjudicate the truth.
The second problem with the escalation from concepts to content is that it creates a false sense of security. Censoring misinformation doesn’t make it go away. It just pushes it onto another platform. We complain about the micro-targeting aspects of modern political campaigning that allow messages to be tailored to people’s age, race, location, browsing history, etc. Ads filled with misinformation fly below the radar because they are not out in the public space where they can be refuted. And then we turn around and do the very same thing by forcing those with unorthodox views to express them in private enclaves away from the public eye. That only allows the misinformation to fester.
You might disagree with me about the question of censoring content rather than concepts. You might think that some content is just so mistaken or so heinous or so harmful that it can be rightly censored in a public forum. OK. Suppose you're right.
This raises a second concern: who gets to make the call on what counts as mistaken, heinous, or harmful? This isn't a question of WHAT gets censored. This is a question of WHO makes that call. In this case, the call is being made by the owners of the social media landscape: corporations. And that raises two further problems even if you grant that some content can be rightly censored from the public sphere.
First, corporations have incentives to make profits. To put it mildly, those incentives don’t always align with the truth or with the interests of political minorities. Instead, corporate incentives are more likely to align with whatever content makes the most users happy. Sometimes that will be the truth. Sometimes it will not. In either case, corporations have a perverse incentive to regulate content in ways that help the bottom line.
Second, there’s more than one corporation in the social media space. Each has carved out a particular slice of the social media landscape, and each has a somewhat different business model and clientele (think of Twitter vs. Parler). There’s no antecedent reason to think that they will judge what’s true and what’s false, what’s harmful and what’s not, in the same way. Further, there's good empirical reason to think that they will make the call differently when doing so helps them to build a particular customer base (why else do you think Fox News and MSNBC draw the lines on what's true in different ways?). When we put corporations in charge of censorship, we’re purposefully fragmenting our social interaction. Facebook bans content on Holocaust denial. Perhaps Gab bans content on human-induced climate change. Then Parler bans content defending Hamas. And so on. We need to talk to one another, not past one another. Letting corporations make the call on what's true or false ensures that we'll do the latter.
In sum, Americans should be concerned about the rise of censorship by social media bosses. What began as the censorship of concepts and language is becoming the censorship of ideas. No free society will survive that.