EFF Special Advisor Cory Doctorow presents at Royal Holloway's 31st HP/HPE Colloquium on Information Security, on the theme of "Monopolies, Not Mind Control."
Long before the pandemic crisis, there was widespread concern over the impact that tech was having on the quality of our discourse, from disinformation campaigns to influence campaigns to polarization. It’s true that the way we talk to each other and about the world has changed, both in form (thanks to the migration of discourse to online platforms) and in kind, whether that’s the rise of nonverbal elements in our written discourse (emojis, memes, ASCII art and emoticons) or the kinds of online harassment and brigading campaigns that have grown with the Internet. A common explanation for the change in our discourse is that the biggest tech platforms use surveillance, data-collection, and machine learning to manipulate us, either to increase “engagement” (and thus pageviews and thus advertising revenues) or to persuade us of things that aren’t true, for example, to convince us to buy something we don’t want or support a politician we would otherwise oppose.
There’s a simple story about that relationship: by gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism. This story originates with Big Tech itself. Marketing claims for programmatic advertising and targeted marketing (including political marketing) promise prospective clients that they can buy audiences for their ideas through Big Tech, which will mix its vast data-repositories with machine learning and overwhelm our cognitive defenses to convert us into customers for products or ideas.
We should always be skeptical of marketing claims. These aren’t peer-reviewed journal articles; they’re commercial puffery. The fact that these claims convince marketers to give billions of dollars to Big Tech is no guarantee that they are true. After all, powerful decision-makers in business have a long history of believing things that turned out to be false. It’s clear that our discourse is changing. Ideas that were on the fringe for years have gained new centrality. Some of these ideas are ones that we like (gender inclusivity, racial justice, anti-monopolistic sentiment) and some are ideas we dislike (xenophobia, conspiracy theories, and denial of the science of climate change and vaccines).
Our world is also dominated by technology, so any change to our world probably involves technology. Untangling the causal relationships between technology and discourse is a thorny problem, but it’s an important one. It’s possible that Big Tech has invented a high-tech form of mesmerism, but whether or not you believe that, there are many less controversial, more obvious ways in which Big Tech is influencing (and distorting) our discourse.