Imagine if your boss made up hundreds of petty rules and refused to disclose them, but every week, your pay was docked based on how many of those rules you broke. When you’re an online creator and your “boss” is a giant social media platform, that’s exactly how your compensation works.

“Algospeak” is a new English dialect that emerged from the desperate attempts of social media users to “please the algorithm”: that is, to avoid words and phrases that cause social media platforms’ algorithms to suppress or block their communication.

Algospeak is practiced by all types of social media users, from individuals addressing their friends to science communicators and activists hoping to reach a broader public. But the most ardent practitioners of algospeak are social media creators, who rely—directly or indirectly—on social media to earn a living.

For these creators, accidentally blundering into an invisible linguistic fence erected by social media companies can mean the difference between paying their rent or not. When you work on a video for days or weeks—or even years—and then “the algorithm” decides not to show it to anyone (not even the people who explicitly follow you or subscribe to your feed), that has real consequences. 

Social media platforms argue that they’re entitled to establish their own house rules and declare some subjects or conduct to be off-limits. They also say that by automating recommendations, they’re helping their users find the best videos and other posts. 

They’re not wrong. In the U.S., for example, the First Amendment protects the right of platforms to moderate the content they host. Besides, every conversational space has its own norms and rules. These rules define a community. Part of free speech is the right of a community to freely decide how they’ll speak to one another. What’s more, social media—like all human systems—has its share of predators and parasites, scammers and trolls and spammers, which is why users want tools to help them filter out the noise so they can get to the good stuff.

But legal issues aside, the argument is a lot less compelling when the tech giants are making it. Their moderation policies aren’t “community norms”—they’re a single set of policies that attempts to uniformly regulate the speech of billions of people in more than 100 countries, speaking more than 1,000 languages. Not only is this an absurd task, but the big platforms are also pretty bad at it, falling well short of the mark on speech, transparency, due process, and human rights.

Algospeak is the latest in a long line of tactics created by online service users to avoid the wrath of automated moderation tools. In the early days of online chat, AOL users used creative spellings to get around profanity filters, creating an arms race with a lot of collateral damage. For example, Vietnamese AOL users were unable to talk about friends named “Phuc” in the company’s chat-rooms.

But while there have always been creative workarounds to online moderation, algospeak and the moderation algorithms that spawned it represent a new phase in the conflict over automated moderation: one in which moderation lands as an attack on the very creators who help these platforms thrive.

The Online Creators’ Association (OCA) has called on TikTok to explain its moderation policies. As OCA cofounder Cecelia Gray told the Washington Post’s Taylor Lorenz: “People have to dull down their own language to keep from offending these all-seeing, all-knowing TikTok gods.”

For TikTok creators, the judgments of the service’s recommendation algorithm are hugely important. TikTok users’ feeds do not necessarily feature new works by creators they follow. That means that you, as a TikTok user, can’t subscribe to a creator and be sure that their new videos will automatically be brought to your attention. Rather, TikTok treats the fact that you’ve explicitly subscribed to a creator’s feed as a mere suggestion, one of many signals incorporated into its ranking system.
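To make that concrete, here is a minimal, hypothetical sketch of a weighted-signal ranking score. The signal names and weights are invented for illustration; this is not TikTok’s actual system, only the general shape of “a follow is just one signal among many.”

```python
# A hypothetical weighted-signal ranking score. Signal names and weights are
# invented for illustration; this is not TikTok's actual system.

RANKING_WEIGHTS = {
    "predicted_watch_time": 0.5,
    "predicted_like_rate": 0.2,
    "topic_match_with_user": 0.2,
    "follows_creator": 0.1,  # an explicit subscription is only a small nudge
}

def rank_score(signals: dict) -> float:
    """Combine per-video signals (each scaled 0..1) into a single score."""
    return sum(RANKING_WEIGHTS[name] * signals.get(name, 0.0) for name in RANKING_WEIGHTS)

# A video from a creator you follow can still lose to one from a stranger:
followed = rank_score({"predicted_watch_time": 0.3, "follows_creator": 1.0})
stranger = rank_score({"predicted_watch_time": 0.9, "topic_match_with_user": 0.8})
print(followed, stranger)  # 0.25 vs. 0.61: the stranger's video wins
```

In a model like this, following someone nudges a video’s score but cannot override the engagement predictions that dominate it, which is why subscribers can miss new uploads entirely.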

For TikTok creators—and creators on other platforms where there’s no guarantee that your subscribers will actually be shown your videos—understanding “the algorithm” is the difference between getting paid for your work and not.

But these platforms will not explain how their algorithms work, or which words or phrases trigger downranking. As Lorenz writes, “TikTok creators have created shared Google docs with lists of hundreds of words they believe the app’s moderation systems deem problematic. Other users keep a running tally of terms they believe have throttled certain videos, trying to reverse engineer the system” (the website Zuck Got Me For chronicles innocuous content that Instagram’s filters blocked without explanation).
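In effect, those shared docs and running tallies are crude statistics: compare how posts perform with and without a suspect term. A rough, hypothetical sketch of that kind of tally (the video records and terms below are invented placeholders) might look like this:

```python
# Compare average views of videos whose captions contain a suspect term
# against those that don't. All data here is invented for illustration.

from statistics import mean

videos = [
    {"caption": "vlog about term_a and my week", "views": 12_000},
    {"caption": "cooking video, plain caption", "views": 140_000},
    {"caption": "storytime mentioning term_a and term_b", "views": 9_000},
    {"caption": "dance video, nothing unusual", "views": 95_000},
]

suspect_terms = ["term_a", "term_b"]

for term in suspect_terms:
    hits = [v["views"] for v in videos if term in v["caption"]]
    misses = [v["views"] for v in videos if term not in v["caption"]]
    if hits and misses:
        ratio = mean(hits) / mean(misses)
        print(f"{term}: captions using it average {ratio:.0%} of the views of captions that don't")
```

Tallies like this can only ever yield folk theories: without access to the actual moderation rules, correlation is all creators have to go on.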

The people who create the materials that make platforms like YouTube, Facebook, Twitter, Snap, Instagram, and TikTok valuable have dreamed up lots of ways to turn attention into groceries and rent money, and they have convinced billions of platform users to sign up to get their creations when they’re uploaded. But those subscribers can only pay attention to those creations if the algorithm decides to include them, which means that creators only get to eat and pay the rent if they please the algorithm.

Unfortunately, the platforms refuse to disclose how their recommendation systems work. They say that revealing the criteria by which the system decides when to promote or bury a work would allow spammers and scammers to abuse the system.

Frankly, this is a weird argument. In information security practice, “security through obscurity” is considered a fool’s errand. The gold standard for a security system is one that works even if your adversary understands it. Content moderation is the only major domain where “if I told you how it worked, it would stop working” is considered a reasonable proposition. 

This is especially vexing for the creators who won’t get compensated for their creative work when an algorithmic misfire buries it: for them, “I can’t tell you how the system works or you might cheat” is like your boss saying “I can’t tell you what your job is, or you might trick me into thinking you’re a good employee.” 

That’s where Tracking Exposed comes in: Tracking Exposed is a small collective of European engineers and designers who systematically probe social media algorithms to replace the folk theories that inform algospeak with hard data about what the platforms up- and down-rank.

Tracking Exposed asks users to install browser plugins that anonymously analyze the recommendation systems behind Facebook, Amazon, TikTok, YouTube, and Pornhub (because sex work is work). This data is combined with data gleaned from automated testing of those systems, with the goal of understanding how each ranking system matches users’ inferred tastes with the materials that creators make, and of making that process legible to all users.
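As an illustration of the underlying idea only (this is not Tracking Exposed’s actual code), the comparison boils down to gathering the recommendation lists observed by real, anonymized sessions and by fresh scripted profiles, then checking which items show up everywhere and which appear only for particular users:

```python
# Which recommendations are pushed to every profile (platform-wide promotion)
# versus only some profiles (personalization)? All session data is invented.

from collections import Counter

user_sessions = [   # video IDs recommended to real, anonymized sessions
    {"v1", "v2", "v3", "v9"},
    {"v2", "v4", "v7", "v9"},
    {"v2", "v5", "v8", "v9"},
]
clean_sessions = [  # video IDs recommended to fresh, scripted profiles
    {"v2", "v9", "v10", "v11"},
    {"v2", "v9", "v12", "v13"},
]

all_sessions = user_sessions + clean_sessions
counts = Counter()
for session in all_sessions:
    counts.update(session)

universal = sorted(vid for vid, n in counts.items() if n == len(all_sessions))
print("Shown to every profile (likely platform-wide promotion):", universal)
print("Shown to only one profile (likely personalization):",
      sorted(vid for vid, n in counts.items() if n == 1))
```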

But understanding the way that these recommendation systems work is just for starters. The next stage—letting users alter the recommendation system—is where things get really interesting. 

YouChoose is another plug-in from Tracking Exposed: it replaces the YouTube recommendations in your browser with recommendations drawn from many services across the internet, selected according to criteria that you choose (hence the name).
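Conceptually (a toy sketch, not the extension’s actual code), that means pooling candidate videos from several sources and ordering them by whatever the user cares about instead of by predicted engagement:

```python
# Pool candidates from multiple sources, then sort by user-chosen criteria
# rather than by the platform's engagement prediction. Data is illustrative.

candidates = [
    {"title": "Deep-dive lecture", "source": "peertube", "minutes": 42, "engagement": 0.2},
    {"title": "Outrage clip", "source": "youtube", "minutes": 3, "engagement": 0.9},
    {"title": "How-to tutorial", "source": "youtube", "minutes": 15, "engagement": 0.5},
]

def user_criteria(video: dict) -> tuple:
    """The user's own ordering: prefer longer, slower material over engagement bait."""
    return (video["minutes"], -video["engagement"])

for video in sorted(candidates, key=user_criteria, reverse=True):
    print(f"{video['source']}: {video['title']}")
```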

Tracking Exposed’s suite of tools is a great example of contemporary adversarial interoperability (AKA “Competitive Compatibility” or “comcom”). Giving users and creators the power to understand and reconfigure the recommendation systems that produce their feed—or feed their families—is a profoundly empowering vision.

The benefits of probing and analyzing recommendation systems don’t stop with helping creative workers and their audiences. Tracking Exposed’s other high-profile work includes a study of how TikTok promotes pro-war content and demotes anti-war content in Russia, and an analysis quantifying the role that political disinformation on Facebook played in the outcome of the 2021 elections in the Netherlands.

The platforms tell us that they need house rules to make their conversational spaces thrive, and that’s absolutely true. But then they hide those rules, and punish users who break them. Remember when OCA cofounder Cecelia Gray said that her members tie themselves in knots “to keep from offending these all-seeing, all-knowing TikTok gods?” 

They’re not gods, even if they act like them. These corporations should make their policies legible to audiences and creators by adopting the Santa Clara Principles.

But creators and audiences shouldn’t have to wait for these corporations that think they’re gods to descend from the heavens and deign to explain themselves to the poor mortals who use their platforms. Comcom tools like Tracking Exposed let us demand an explanation from the gods, and extract that explanation ourselves if the gods refuse.