Forcing You to Listen: The Supreme Court vs. Content Moderation

It’s about forcing speech on you, and forcing platforms to host speech they otherwise wouldn’t want to.


This week, we’ve invited an expert who literally wrote the book on building online communities. I couldn’t help but think of Patrick O’Keefe as I cringed through the Supreme Court arguments on state laws that would effectively ban common forms of content moderation on social media. Patrick’s wealth of experience advising major brands and media organizations on community management lays bare the stakes for the Internet at large if these laws are ever enforced. – Bassey

As the U.S. Supreme Court heard arguments last month about whether individual states should be able to legislate how online platforms moderate content, a quote from the transcripts hit my radar on social media, courtesy of Leah Litman, a professor of law at the University of Michigan.

Justice Samuel Alito:

There's a lot of new terminology bouncing around in these cases, and just out of curiosity, one of them is "content moderation." Could you define that for me?

Paul Clement, a lawyer for NetChoice who represents the social media sites, replied: 

So, you know, look, content moderation to me is just editorial discretion. It's a way to take the – the – the – all of the content that is potentially posted on the site, exercise editorial discretion in order to make it less offensive to users and advertisers.

Justice Alito: 

Is it – is it anything more than a euphemism for censorship? Let me just ask you this. If somebody in 1917 was prosecuted and thrown in jail for opposing U.S. participation in World War I, was that content moderation? 

While I will decline to explain why banning someone from my tiny internet forum is not the same as them serving time in jail, the part that really caught my attention was describing “content moderation” as “new terminology.” 

When does something cease to be new? I’m not sure, but the Supreme Court is 235 years old. For more than 50 of those years, we have been moderating content in digital spaces.

In 1973, a computer terminal was installed at Leopold’s record store in Berkeley, California, as part of a project called Community Memory. It was a single-terminal digital community, where locals would visit and post messages. One person could use it at a time, and that they did – asking for local recommendations, exchanging bagel recipes, and talking about whatever. Because the idea of using a computer was so foreign to passersby, the organizers needed to station a human next to the machine just to tell people they could use it.

That attendant served not just as promoter, but as a cross between tech support and a bodyguard. They were there to make sure people used the machine appropriately, but also to make sure the machine survived the interaction.

“I thought that we would have to defend the machine,” Community Memory co-founder Lee Felsenstein told me in 2021. “How dare you bring a computer into our record store? I like to say that we opened the door to cyberspace and determined that it was hospitable territory and, of course, it took more to open the door than just a greeting.”

In modern parlance, that person might be called a content moderator. 

That was 51 years ago, when Justice Alito was 23 years old. I have been moderating content for 26 years — beginning with small independent communities, then eventually for major brands like Forbes and CNN — and we had the terminology before then.

Was the “terminology” there 51 years ago? Perhaps not, but the part of Alito’s question about throwing people in jail betrays the unfortunate truth: Alito doesn’t really know how old this work is.

The future of the Internet will pivot on a few decisions by people like Alito who have a lot of power and little context.

I don’t worry about Facebook’s ability to operate if these laws stand. While I sympathize with their excuse of scale, that is a beast of their own making. They don’t do moderation particularly well. Bathed in red tape, they knowingly allowed drug cartels and human traffickers to use their platform. According to whistleblower Sophie Zhang, they permitted a network of fake accounts to operate because it was tied to a member of the Indian Parliament. Former Facebook moderator Chris Gray has gone into great detail about the lack of care the company has shown toward the poorly paid army of hastily trained people hired to sift through a mountain of abuse reports.

Instead, I worry about the impact on a teenager starting an online community and doing the best they can to make it a nice, safe place. Because that was me, just two years after Section 230 passed. That law protected me from bullying by adults and large companies who could have used their money to intimidate me into taking a piece of content down or shutting down altogether. It does so because Section 230 places the legal liability for speech squarely on the author of the speech, except in specific circumstances, such as federal criminal law, copyright infringement, and sex trafficking.

Both sides of the aisle have chosen to make content moderation and Section 230 a target of ire. Donald Trump and Elon Musk teamed up to stoke so much anger toward Yoel Roth, Twitter’s former head of trust and safety, that he had to flee his home. His story is not unique, and Roth’s work is not dissimilar to my own. I would have banned Trump, too. I would have just done it earlier.

Many prominent conservatives say they want “political neutrality” online. Even if they could clearly define that term, you shouldn’t be required to host Trump knitting patterns to qualify for legal immunity. 

Liberals often blame 230 for an assortment of the world’s ills: harms to children, misinformation, hate speech, and the list goes on. This sort of thinking led to the near-unanimous passage of SESTA-FOSTA, a law that eliminates 230 protection when a platform is determined to be engaged in the “promotion of prostitution and reckless disregard of sex trafficking.” Sounds good, right? But it’s rarely even used, and while the National Center for Missing and Exploited Children claims it has been an effective deterrent, sex workers have consistently said the law makes them less safe.

While Democrats often demand more moderation of a specific kind, Republicans often want less. The irony is that most politicians are critical of platforms with less moderation. Even Trump’s Truth Social, where many Republicans post, reportedly has fairly strict content moderation. Section 230 empowers this. It empowers conservatives to start online platforms and moderate as they see fit. Do you find nudity objectionable? How about burning the American flag? Or the Bible? You can ban those things. No problem. But just as you can ban those things, the law also empowers other platforms to allow them.

But that’s not enough for many Republicans, which underscores the point that this debate isn’t really about moderation or free speech. It’s about forcing speech on you, and forcing platforms to host speech they otherwise wouldn’t want to. It’s about ensuring that all platforms look the same, serving the preferences of a relatively small group of Americans. 

While that may be a good way for politicians to make noise and collect a few votes, it is no way to build a diverse internet. When liability for speech moves from the author to the host, the platforms that benefit most are the ones that can financially afford that burden. The rest of us just have to bend to the whims of those currently in power and hope no one sues us. Ultimately, that could mean the next teenager interested in making the internet a little better, as I once was, will simply decide not to bother.

Patrick O’Keefe is an online community, product, and program leader, with more than 20 years of experience building safe, inclusive platforms and services that drive action, grow revenue, and increase retention.
