Trusting Twitter and Facebook to regulate speech is a mistake.

It’s their right to regulate their own platforms as they see fit. Just don’t expect them to do it well.

[Image: President Trump's tweet about mail-in ballots, with Twitter's "Get the facts" fact-check label attached]

It was the fact-check heard ‘round the world. On May 26th, 2020, Twitter slapped a big ol’ “Get the facts about mail-in ballots” on the above tweet from President Trump, and the social media world exploded into a tizzy of pearl-clutching partisanship.

It was an “absolutely batshit crazy” move! No, it was the right thing to do — “finally!” They have to do more! Wait — no! How dare you try to “deplatform the President of the United States!”

Look, you want rank partisan outrage mongering about whether Twitter should have stuck that fact check on Trump’s tweet? Go get it. There’s plenty of it out there; it takes about a minute to digest the hot takes from either side, and even less time and brainpower to write them.

Either way, the American media outrage machine is — as usual — missing the larger point, so allow me to volunteer.

Here it is: trusting billion-dollar technology companies like Facebook and Twitter to regulate speech, protect the pillars of our democracy, and uphold the integrity of the information circulating on their platforms is a misbegotten folly.

The “social media companies need to regulate speech more aggressively” drumbeat started sometime during the 2016 election in light of the revelation that Russian trolls were posting memes and ads and fake news articles in hopes of meddling in the race. But the speed with which the decision-making class and left-of-center establishment embraced the idea continues to baffle me.

Twitter and Facebook will not protect us from bad speech, be it inflammatory, misleading, unsubstantiated, out-of-context, whatever. They are not good at it and have no vested interest in doing it; they are, in fact, specifically designed to promulgate and profit off of those very types of speech.

As I wrote nearly two years ago:

Chamath Palihapitiya — one of Facebook’s earliest hires — now refers to social media websites as “short-term dopamine-driven feedback loops that…are destroying how society works.” Psychologists have argued that Facebook deteriorates mental health, increases jealousy and anxiety, and sows political division.

Facebook also undoubtedly makes us more susceptible to bias confirmation, exposes us to literal fake news and falsehoods and, by constantly presenting us with the most outrageous takes on any given news story, encourages us to stay outraged at each other over political issues. At its absolute worst, a Facebook newsfeed is a zoo of moral outrage and virtue signaling; a place where people talk past each other and brandish their political opinions and personal virtues as weapons. The political “conversations” on Facebook are rarely conversations, and the rank negativity, judgement, hostility and anger Facebook has empowered, more than anything, have convinced me that it’s a bad deal.

As is often said among the social media skeptics, “If you’re not paying for it, you’re the product.” Facebook is funded at least in part by advertising and the sale of its users’ data; for that model to function, Facebook needs users to use the platform. Indeed, Facebook wants you to be on Facebook as much as possible.

To get you to do that, Facebook calibrates their algorithm to bring you the content it thinks you will like, share, or comment on. The entire business model is designed to exploit human psychology; as an author at Quartz so eloquently put it, “Facebook is using an old drug dealer tactic to keep its users hooked.”

Why would a company that’s designed to run like this have any interest in limiting misinformed or inflammatory rhetoric — especially if a given user demonstrates that they like engaging with that content?

It wouldn’t; when it comes to going viral, “nothing is speedier than rage.”

Back in April of 2018, Dianne Feinstein asked Mark Zuckerberg during a Senate hearing, “Mr. Zuckerberg, what is Facebook doing to prevent foreign actors from interfering in U.S. elections?”

Zuckerberg gave a rehearsed, unsurprising, uninteresting, and unconvincing answer — but what he should have said was, “With all due respect, Senator, that’s your job, not mine.” Facebook is a for-profit technology company that was started by a 19-year-old as a website where college kids could rate the looks of their classmates. Yes, it’s evolved in the years since then, but like all successful companies, it’s done so in response to market incentives, not to some great noble obligation to defend Americans’ minds, democracy, or access to reliable information.

Facebook today is no better positioned to serve as a guardian of American democracy or to protect its users’ brains from false, useless, distracting, or exploitative information than it was fifteen years ago. To the contrary: it is specifically designed to fill your newsfeed with that exact kind of information, every day, if that’s the kind of information it thinks you want to engage with.

These companies are not interested in fact checking politicians or doing their part to protect American democracy, and their platforms are not set up for it. They’re set up for making money, and that means doing whatever it takes to keep users coming back for more. From a purely logical standpoint: if that means letting inflammatory rhetoric slip into their newsfeeds, then so be it. If that’s the kind of rhetoric a given user repeatedly demonstrates that they’re interested in — and millions do — then Facebook and Twitter have no incentive to give them anything other than what they want.

Here’s the last point I’ll leave you with: even if these companies did, in fact, make wholehearted, earnest efforts to fact check, censor, or otherwise take down misleading, unsubstantiated, or inflammatory posts, they would still regularly fail.

Why?

Because they are technology companies run by flawed human beings. They have no more capacity for being unbiased, all-knowing, and unfailingly fair than you or I; it is an entitled wish of technocratic authoritarianism to demand that they somehow overcome their fallibility and faults in order to protect us all from exposure to misinformation. They’d be overwhelmed with the deluge, their personal biases would blind them, and their human capacity for error would hobble even their noblest efforts.

The answer to the misinformation crisis isn’t demanding more accountability from these companies. It will never come, at least not in full. Rather, it’s learning to regard all of the content we encounter on their platforms — all of it, not just the stuff we dislike or think should be regulated out of existence — with a significant degree of skepticism.

It’s not Facebook’s job to protect you from bad stuff so you can browse your timeline with complete credulity and trust at your leisure. The point of Facebook is the bad stuff; you’ve made the decision to browse the timeline, so you’ve already invited the bad stuff in. It’s your job to recognize the risk you’re taking in exposing your mind to these open sewers of engaging but often spurious information.

The Cronkite days are done; today’s news landscape is a fractured, frenzied, social media-driven wasteland of partisanship, tribalism, and rage-mongering that silos us off from one another. If you’re not interested in that, you cannot rely on the gods of Silicon Valley to protect you. They don’t care about you. They care about money.

And that is entirely their prerogative; they went into business to get rich, not to purify the American information ecosystem. If you truly want to keep your brain safe from misinformation, get used to doing it yourself — despite social media, not with its help. The tech giants will not save you.

