Long before the pandemic crisis, there was widespread concern over the impact that tech was having on the quality of our discourse, from disinformation campaigns to influence campaigns to polarization.

It's true that the way we talk to each other and about the world has changed, both in form (thanks to the migration of discourse to online platforms) and in kind, whether that's the rise of nonverbal elements in our written discourse (emojis, memes, ASCII art and emoticons) or the kinds of online harassment and brigading campaigns that have grown with the Internet.

A common explanation for the change in our discourse is that the biggest tech platforms use surveillance, data-collection, and machine learning to manipulate us, either to increase "engagement" (and thus pageviews and thus advertising revenues) or to persuade us of things that aren't true, for example, to convince us to buy something we don't want or support a politician we would otherwise oppose.

There's a simple story about that relationship: by gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.

This story originates with Big Tech itself. Marketing claims for programmatic advertising and targeted marketing (including political marketing) promise prospective clients that they can buy audiences for their ideas through Big Tech, which will mix its vast data-repositories with machine learning and overwhelm our cognitive defenses to convert us into customers for products or ideas.

We should always be skeptical of marketing claims. These aren't peer-reviewed journal articles, they're commercial puffery. The fact that the claims convince marketers to give billions of dollars to Big Tech is no guarantee that the claims are true. After all, powerful decision-makers in business have a long history of believing things that turned out to be false.

It's clear that our discourse is changing. Ideas that were on the fringe for years have gained new centrality. Some of these ideas are ones that we like (gender inclusivity, racial justice, anti-monopolistic sentiment) and some are ideas we dislike (xenophobia, conspiracy theories, and denial of the science of climate change and vaccines).

Our world is also dominated by technology, so any change to our world probably involves technology. Untangling the causal relationships between technology and discourse is a thorny problem, but it's an important one.

It's possible that Big Tech has invented a high-tech form of mesmerism, but whether you believe in that or not, there are many less controversial, more obvious ways in which Big Tech is influencing (and distorting) our discourse.

Locating Precise Audiences

Obviously, Big Tech is incredibly good at targeting precise audiences, this being the value proposition of the whole ad-tech industry. Do you need to reach overseas students from the Pacific Rim doing graduate studies in physics or chemistry in the Midwest? No problem. Advertisers value this feature, but so does anyone hoping to influence our discourse.

Locating people goes beyond "buying an audience" for an ad. Activists who want to reach people who care about their issues can use this feature to mobilize them in support of their causes. Queer people who don't know anyone who is out can find online communities to help them understand and develop their own identities. People living with chronic diseases can talk about their illnesses with others who share their problems.

This precision is good for anyone who's got a view that's outside of the mainstream, including people who have views we don't agree with or causes we oppose. Big Tech can help you find people to cooperate with you on racist or sexist harassment campaigns, or to foment hateful political movements.

A discourse requires participants: if you can't find anyone interested in discussing an esoteric subject with you, you can't discuss it. Big Tech has radically altered our discourse by making it easy for people who want to talk about obscure subjects to find discussants, enabling conversations that literally never could have happened otherwise. Sometimes that's good and sometimes it's terrible, but it's absolutely different from any other time.

Secrecy

Some conversations are risky. Talking about your queer sexuality in an intolerant culture can get you ostracized or subject you to harassment and violence. Talking about your affinity for cannabis in a place where it isn't legal to consume can get you fired or even imprisoned.

The fact that many online conversations take place in private spaces means that people can say things they would otherwise keep to themselves for fear of retribution.

Not all of these things are good. Being caught producing deceptive political ads can get you in trouble with an election regulator and also send supporters to your opponents. Advertising that your business discriminates on the basis of race or gender or sexuality can get you boycotted or sued, but if you can find loopholes that allow you to target certain groups that agree with your discriminatory agenda, you can win their business.

Secrecy allows people to say both illegal and socially unacceptable things to people who agree with them, greatly reducing the consequences for such speech. This is why private speech is essential for social progress, and it’s why private speech is beneficial to people fomenting hatred and violence. We believe in private speech and have fought for it for 30 years because we believe in its benefits—but we don't deny its costs.

Combined with targeting, secrecy allows for a very persuasive form of discourse, not just because you can commit immoral acts with impunity, but also because disfavored minorities can whisper ideas that are too dangerous to speak aloud.

Lying and/or Being Wrong

The concentration of the tech industry has produced a monoculture of answers. For many people, Google is an oracle, and its answers—the top search results—are definitive.

There's a good reason for that: Google is almost always right. Type "How long is the Brooklyn Bridge" into the search box and you'll get an answer that accords with both Wikipedia and its underlying source, the 1967 report of the New York City Landmarks Preservation Commission.

Sometimes, though, Google is tricked into lying by people who want to push falsehoods onto the rest of us. By systematically probing Google's search-ranking system (a system shrouded in secrecy and subjected to constant analysis by the Search Engine Optimization industry), bad actors can and do change the top results on Google, tricking the system into returning misinformation. And sometimes the bad result isn't a trick at all, just a stupid mistake.

This can be a very effective means of shifting our discourse. False answers from a reliable source are naturally taken at face value, especially when the false answer is plausible (adding or removing a few yards from the Brooklyn Bridge's length), or where the questioner doesn't really have any idea what the answer should be (adding tens of thousands of miles per second to the speed of light).

Even when Google isn't deliberately tricked into giving wrong answers, it can still give wrong answers. For example, when a quote is widely misattributed and later corrected, Google can take months or even years to stop serving up the misattribution in its top results. Indeed, sometimes Google never gets it right in cases like this, because the people who get the wrong answer from Google repeat it on their own posts, increasing the number of sources where Google finds the wrong answer.

This isn't limited to just Google, either. The narrow search verticals that Google doesn't control—dating sites, professional networking sites, some online marketplaces—generally dominate their fields, and are likewise relied upon by searchers who treat them as infallible, even though they might acknowledge that it's not always wise to do so.

The upshot is that what we talk about, and how we talk about it, is heavily dependent on what Google tells us when we ask it questions. But this doesn't rely on Google changing our existing beliefs: if you know exactly what the speed of light is, or how long the Brooklyn Bridge is, a bad Google search result won't change your mind. Rather, this is about Google filling a void in our knowledge.

There's a secondary, related problem of "distinctive, disjunct idioms." Searching for "climate hoax" yields different results from searching for "climate crisis" and different results still from "climate change." Though all three refer to the same underlying phenomenon, they reflect very different beliefs about it. The term you use to initiate your search will lead you into a different collection of resources.

This is a longstanding problem in discourse, but it is exacerbated by the digital world.

"Sort by Controversial"

Ad-supported websites make their money from pageviews. The more pages they serve to you, the more ads they can show you and the more likely it is that they will show you an ad that you will click on. Ads aren't very effective, even when they're highly targeted, and the more ads you see, the more inured you become to their pitches. It therefore takes a lot of pageviews to generate a sustaining volume of clicks, and the number of pageviews needed to maintain steady revenue tends to rise over time.
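To make that arithmetic concrete, here is a minimal sketch of the relationship. Every number in it is invented for illustration: the revenue target, the per-click payout, the ad load, and the click-through rates are assumptions, not figures from any real platform.

    # Illustrative only: how many pageviews an ad-supported site needs to hit a
    # fixed revenue target as click-through rates decline. All numbers are made up.

    def pageviews_needed(revenue_target, revenue_per_click, ads_per_page, click_rate):
        """Pageviews required so that expected ad clicks cover the revenue target."""
        revenue_per_pageview = ads_per_page * click_rate * revenue_per_click
        return revenue_target / revenue_per_pageview

    TARGET = 10_000.0   # dollars per day the site needs to earn (assumed)
    PER_CLICK = 0.50    # dollars earned per ad click (assumed)
    ADS_PER_PAGE = 4    # ads shown on each page (assumed)

    # Readers grow inured, so the click-through rate halves, then halves again.
    for ctr in (0.002, 0.001, 0.0005):
        views = pageviews_needed(TARGET, PER_CLICK, ADS_PER_PAGE, ctr)
        print(f"click rate {ctr:.2%} -> {views:,.0f} pageviews/day")

Under these made-up numbers, each halving of the click-through rate doubles the pageviews the site must serve just to stand still.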

Increasing the number of pageviews is hard: people have fixed time-budgets. Platforms can increase your "engagement" by giving you suggestions for things that will please you, but this is hard (think of Netflix's recommendation engine).

But platforms can also increase engagement by making you angry, anxious, or outraged, and these emotions are much easier to engender with automated processes. Injecting enraging comments, shocking images, or outlandish claims into your online sessions may turn you off in the long term, but in the short term, these are a reliable source of excess clicks.

This has an obvious impact on our discourse, magnifying the natural human tendency to want to weigh in on controversies about subjects that matter to us. It promotes angry, unproductive discussions. It's not mind control—people can choose to ignore these "recommendations" or step away from controversy—but platforms that deploy this tactic often take on a discordant, angry character.

Deliberate Censorship

Content moderation is very hard. Anyone who's ever attempted to create rules for what can and can't be posted quickly discovers that these rules can never be complete—for example, if you class certain conduct as "harassment," then you may discover that conduct that is just a little less severe than you've specified is also experienced as harassment by people on its receiving end.

As hard as this is, it gets much harder at scale, particularly when services cross cultural and linguistic lines: as hard as it is to decide whether someone crosses the line when that person is from the same culture as you and is speaking your native language, it's much harder to interpret contributions from people of differing backgrounds, and language barriers add yet another layer of complexity.

The rise of monolithic platforms with hundreds of millions (or even billions) of users means that a substantial portion of our public discourse is conducted under the shadow of moderation policies that are not—and cannot be—complete or well administered.

Even if these policies have extremely low error rates—even if only one in a thousand deleted comments or posts is the victim of overzealous enforcement—the numbers add up fast. Systems with billions of users generate hundreds of billions of posts per day; even a modest removal rate means billions of takedowns a day, and one error in a thousand works out to millions of acts of censorship every day.
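As a rough, back-of-the-envelope illustration (the post volume echoes the figure above, while the one-percent removal rate is purely an assumption made for this sketch):

    # Illustrative only: a tiny error rate still yields a huge absolute number at scale.
    # The post volume echoes the text; the removal rate is an assumption for this sketch.

    posts_per_day = 200_000_000_000   # "hundreds of billions of posts per day"
    removal_rate = 0.01               # assume 1% of posts are removed by moderation
    error_rate = 1 / 1000             # one in a thousand removals is a mistake

    removals_per_day = posts_per_day * removal_rate
    wrongful_removals_per_day = removals_per_day * error_rate

    print(f"{removals_per_day:,.0f} removals/day, "
          f"{wrongful_removals_per_day:,.0f} of them wrongful")
    # 2,000,000,000 removals/day, 2,000,000 of them wrongful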

Of course, not all moderation policies are good, and sometimes, moderation policies are worsened by bad legal regimes. For example, SESTA/FOSTA, a law notionally aimed at ending sex trafficking, was overbroad and vague to begin with, and the moderation policies it has spawned have all but ended certain kinds of discussions of human sexuality in public forums, including discussions that actually served SESTA/FOSTA's nominal goal of keeping people safe (for example, forums where sex workers kept lists of dangerous potential clients). These topics were always subject to arbitrary moderation standards, but SESTA/FOSTA made the already difficult job of talking about sexuality virtually impossible.

Likewise, the Children's Internet Protection Act's requirement for blacklists of adult materials on federally subsidized Internet connections (such as those in public schools and libraries) has foreclosed access to a wealth of legitimate materials, including websites that offer information on sexual health and wellbeing, and on dealing with sexual harassment and assault.

Accidental Censorship

In addition to badly considered moderation policies, platforms are also prone to badly executed ones: enforcement errors, in other words. Famously, Tumblr installed an automated filter intended to block all "adult content," and this filter blocked innumerable innocuous images, from suggestive root vegetables to Tumblr's own examples of images that contained nudity but did not constitute adult content and would thus supposedly be ignored by its filters. Both human and automated content moderators make these kinds of errors.

Sometimes, errors are random and weird, but some topics are more likely to give rise to accidental censorship than others: human sexuality, discussions by survivors of abuse and violence (especially sexual violence), and even people whose names or hometowns sound or look like words that filters have banned (Vietnamese people named Phuc were plagued by AOL's chat filters, as were Britons living in Scunthorpe).
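The mechanism behind misfires like these is often nothing more exotic than matching message text against a list of banned strings. Here is a minimal sketch of such a filter; the blocklist and the sample messages are invented for the example.

    # Illustrative only: a naive substring blocklist of the kind behind the classic
    # "Scunthorpe problem". The blocklist and messages are invented for this sketch.

    BLOCKLIST = {"cunt"}   # a crude one-entry profanity list; real lists are far longer

    def is_blocked(message: str) -> bool:
        """Flag a message if any banned string appears anywhere inside it."""
        text = message.lower()
        return any(banned in text for banned in BLOCKLIST)

    for msg in ("I grew up in Scunthorpe", "Nice weather today"):
        print(msg, "->", "BLOCKED" if is_blocked(msg) else "ok")
    # "I grew up in Scunthorpe" is blocked because the banned string sits inside a
    # place name; filters that also match fragments or near-spellings misfire the
    # same way on personal names such as Phuc.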

The systematic nature of this accidental censorship means that whole fields of discourse are hard or even impossible to undertake on digital platforms. These topics are the victims of a kind of machine superstition, a computer gone haywire that has banned them without the approval or intent of its human programmers, whose oversights, frailties and shortsightedness caused them to program in a bad rule, after which they simply disappeared from the scene, leaving the machine behind to repeat their error at scale.

Third-Party Censorship

Since the earliest days of digital networks, world governments have struggled with when and whether online services should be liable for what their users do. Depending on which countries an online provider serves, it may be expected to block, or pre-emptively remove, copyright infringement, nudity, sexually explicit material, material that insults the royal family, libel, hate speech, harassment, incitements to terrorism or sectarian violence, plans to commit crimes, blasphemy, heresy, and a host of other difficult-to-define forms of communication.

These policies are hard for moderation teams to enforce consistently and correctly, but that job is made much, much harder by deliberate attempts by third parties to harass or silence others by making false claims about them.

In the simplest case, would-be censors merely submit false reports to platforms in hopes of slipping past a lazy or tired or confused moderator in order to get someone barred or speech removed.

However, as platforms institute ever-finer-grained rules about what is, and is not, grounds for removal or deletion, trolls gain a new weapon: an encyclopedic knowledge of these rules.

People who want to use platforms for good-faith discussions are at a disadvantage relative to "rules lawyers" who want to disrupt this discourse. The former have interests and jobs about which they want to communicate. The latter's interest and job is disrupting the discourse.

The more complex the rules become, the easier it is for bad-faith actors to find in them a reason to report their opponents, and the harder it is for good-faith actors to avoid crossing one of the ruleset's myriad lines.

Conclusion

The idea that Big Tech can mold discourse by spying on and analyzing us, and then using that surveillance to bypass our critical faculties, is both self-serving (inasmuch as it helps Big Tech sell ads and influence services) and implausible, and it should be viewed with extreme skepticism.

But you don't have to accept extraordinary claims to find ways in which Big Tech is distorting and degrading our public discourse. The scale of Big Tech makes it opaque and error-prone, even as it makes the job of maintaining a civil and productive space for discussion and debate impossible.

Big Tech's monopolies—with their attendant lock-in mechanisms that hold users' data and social relations hostage—remove any accountability that might come from the fear that unhappy users might switch to competitors.

The emphasis on curbing Big Tech's manipulation tactics through regulation has the paradoxical effect of making it more expensive and difficult to enter the market with a Big Tech competitor. A regulation designed to curb Big Tech comes with compliance costs that little tech can't possibly afford, and so it becomes a license for Big Tech to dominate the digital world, disguised as a regulatory punishment for lax standards.

The scale—and dominance—of tech firms has unequivocal, obvious, toxic effects on our public discourse. The good news is that we have tools to deal with this: breakups, merger scrutiny, limits on vertical integration. Perhaps after Big Tech has been cut down to size, we'll still find that there's some kind of machine-learning mesmerism that we'll have to address, but if that's the case, our job will be infinitely easier when Big Tech has been stripped of the monopoly rents it uses to defend itself from attempts to alter its conduct through policy and law.
