This post is part of a series on Mastodon and the fediverse. We also have posts on what the fediverse is, security and privacy on Mastodon, and how to make a Mastodon account. You can follow EFF on Mastodon here.

Something remarkable is happening. For the past two weeks, people have been leaving Twitter. Many others are reducing their reliance on it. Great numbers of ex-Twitter users and employees are making a new home in the “fediverse,” fleeing the chaos of Elon Musk’s takeover. This exodus includes prominent figures from civil society, tech law and policy, business and journalism.  It also represents a rare opportunity to make a better corner of the internet…if we don’t screw it up.

The fediverse isn’t a single, gigantic social media platform like Facebook or Twitter. It’s an expanding ecosystem of interconnected social media sites and services that let people interact with each other no matter which one of these sites and services they have an account with. 

That means that people can tailor and better control their experience of social media, and be less reliant on a monoculture sown by a handful of tech giants. 

The major platforms have already screwed it up, but now we have the chance to get it right and build something better. 

Today’s most popular fediverse service is called Mastodon. Mastodon is a Twitter-like service anyone can host and alter to suit their needs. Each server (or “instance”) can experiment and build its own experience for users, and those users aren’t stuck using services they don’t like just because their contacts are on that service. 

Mastodon is just one corner of the fediverse, and it might not be for everyone. More importantly, Mastodon runs on an open protocol called ActivityPub, a powerful and flexible way to link up all kinds of services and systems. This means all the features of Mastodon are just a sliver of a vast universe of interoperable services.
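To give a flavor of how that linking works, here is a minimal sketch, in Python, of the discovery step most fediverse servers use: resolving a handle to its ActivityPub actor document via WebFinger, then fetching the actor itself. The handle and domain below are hypothetical placeholders, and some instances require signed ("authorized") fetches that this unauthenticated request won't satisfy.

```python
import requests

def resolve_actor(handle: str) -> dict:
    """Resolve a handle like 'someone@example.social' to its ActivityPub actor document."""
    user, domain = handle.lstrip("@").split("@", 1)

    # Step 1: WebFinger lookup (RFC 7033) tells us where the actor lives.
    webfinger = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    ).json()
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("rel") == "self" and link.get("type") == "application/activity+json"
    )

    # Step 2: fetch the actor document; its "inbox" and "outbox" are the
    # endpoints other servers use to exchange posts with this account.
    return requests.get(
        actor_url,
        headers={"Accept": "application/activity+json"},
        timeout=10,
    ).json()

if __name__ == "__main__":
    actor = resolve_actor("someone@example.social")  # hypothetical handle
    print(actor.get("inbox"), actor.get("outbox"))
```

Because this discovery and delivery layer is an open standard rather than one company's private API, any service that speaks it can participate on equal terms.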

The fediverse is an evolving project, and it won’t solve all of the challenges that we’ve seen with big social media platforms. Like other distributed systems, it has some drawbacks and complications. But a federated social media ecosystem offers possible escape hatches from some of the more serious problems we have been experiencing in the centralized, platform-dominated social media world.

To be clear: no technology can save us from ourselves, but building a more interoperable social media environment may be a chance to have a do-over on the current lock-in model. It could be awesome, if we don’t screw it up.  

Social Media Platforms Already Screwed It Up

Up until a few weeks ago, most social media users were trapped in a choice between fiefdoms. Many of the problems users contend with online are downstream of this concentration.

Take privacy: the default with incumbent platforms is usually an all-or-nothing bargain where you accept a platform’s terms or delete your account. The privacy dashboards buried deep in the platform’s settings are a way to tinker in the margins, but even if you untick every box, the big commercial services still harvest vast amounts of your data. To rely on these major platforms is to lose critical autonomy over your privacy, your security, and your free expression.

This handful of companies also shares a single business model, based upon tracking us. This invasion of privacy is creepy on its own, but worse, the vast, frequently unnecessary amounts and kinds of data being collected (your location, your friends and other contacts, your thoughts, and more) are often shared, leaked, and sold. That data is also used to draw inferences about you that can deeply impact your life.

Even if you don’t mind having your data harvested, the mere act of collecting all of this sensitive information in one place makes for a toxic asset. A single bad lapse in security can compromise the privacy and safety of hundreds of millions of people. And once gathered, the information can be shared or demanded by law enforcement. Law enforcement access is even more worrisome in post-Dobbs America, where we already see criminal prosecutions based in part upon people’s social media activities. 

We’re also exhausted by social media’s often parasitic role in our lives. Many platforms are optimized to keep us scrolling and posting, and to prevent us from looking away. There’s precious little you can do to turn off these enticements and suggestions, despite the growing recognition that they can have a detrimental effect on our mental health and on our public discourse.  Dis- and misinformation, harassment and bullying have thrived in this environment. 

There’s also the impossible task of global content moderation at scale. Content moderation fails on two fronts: first, users all over the world have seen that platforms fail to remove extremely harmful content, including disinformation and incitement that is forbidden by the platforms’ own policies. At the same time, platforms improperly remove numerous forms of vital expression, especially from those with little social power. To add insult to injury, users are given few options for appeal or restoration. 

These failures have triggered a mounting backlash.  On both sides of the U.S. political spectrum, there’s been a flurry of ill-considered legislation aimed at regulating social media moderation practices. Outside of the U.S. we’ve seen multiple “online harms” proposals that are likely to make things worse for users, especially the most marginalized and vulnerable, and don’t meaningfully give everyday people more say over their online life. In some places, such as Turkey, bad legislation is already a reality.  

How the Fediverse Can Get It Right

You don’t fix a dictatorship by getting a better dictator. You have to get rid of the dictator. This moment offers the promise of moving to a better and more democratic social media landscape.

Instead of just fighting to improve social media, digital rights advocates, instance operators, coders and others have an opportunity to build atop an interoperable, open protocol, breaking out of the platform lock-in world we’re in now. This fresh start could spur more innovation and resilience for communities online—but only if we make careful choices today.  

To be clear: there is nothing magical about federated worlds. If federated social media is better than the centralized incumbents, it will be because people made a conscious choice to make it better, not because of any technological determinism. Open, decentralized systems offer new choices toward a better online world, but it’s up to us to make those choices.

Here are some choices we hope that the operators and users of federated systems will make: 

  1. Adopt the Santa Clara Principles on content moderation:  The shift to smaller federated instances creates more opportunities for better transparency, due process and accountability for content moderation. EFF, along with a broad international coalition of NGOs, has developed a set of principles for content hosts that support the basic human rights of users. We hope that most of the fediverse makes these recommendations for user protections the baseline and even exceeds them, especially for larger hosts in the network.  
  2. Community and local control: The fediverse is set up to facilitate community and local control, so we’ll be watching to see how that develops. Mastodon instances already have very different political ideologies and house rules. Though not part of the ActivityPub network, Gab and TruthSocial are built on forks of Mastodon. We’re already seeing the fediverse self-sorting, with some services choosing to connect to or block others based on their users’ preferences. There are bold experiments in democratic control of instances by users. A social internet where users and communities get to set their own rules can give us better outcomes than resting our hopes on the whims of shareholders or one rich guy with a bruised ego.
  3. Innovation in content moderation: The fediverse itself can’t ban users, but the owners of each server have moderation tools and shared blocklists to cater to the experience their users expect. We’re already seeing these approaches improve. Collaboration in moderation tools can facilitate both cooperation and healthy competition among instances based on what rules they set and how they enforce them. When it comes to protecting their users from bad actors, operators shouldn’t need to start from scratch, but they should preserve the option to make their own choices, even if those are different from other instances or blocklist maintainers. And users can then choose which operators to rely upon.
  4. Lots of application options: Mastodon is the current application of choice for many, but it’s not the only possibility. ActivityPub supports many different application strategies. We’ll be watching to see if innovation continues both on Mastodon and beyond it, so communities can develop different ways of using social media, while still maintaining the ability to connect through federation, as well as disconnect when those relationships break down. 
  5. Remixability: Competitors, researchers, and users should be able to use platforms in creative and unexpected ways, and shouldn’t face legal repercussions for building an add-on just because a service does not approve of it or because it competes with it. The free, open nature of ActivityPub gives us a running start, but we should be on the lookout for efforts to fence in users and fence out innovators. Importantly, this tinkering shouldn’t be a free-for-all: it should be limited by laws protecting users’ privacy and security. We still need strong privacy laws when we move to the fediverse and we still don’t have them. 
  6. Lots of financial support models: The current fediverse mostly runs on volunteer creativity, time, and love. That’s terrific, but as many more people join, the burdens on system operators increase. We want to see a wide range of support models develop: community-supported models; academic institutions; municipal and other governmental support; and philanthropic support, just to name a few. Business models could include subscriptions, contextual ads, or something else, but (no surprise) we’d like to see behavioral tracking ads bite the dust entirely.
  7. Global accessibility: A new global social media paradigm needs to be open to everyone. This means putting an emphasis on all kinds of accessibility. This includes people with visual or other disabilities, but also must include communities in the global south who are often overlooked by developers in the north. Making it easy to implement new languages or features which serve a particular accessibility or cultural need is essential, as are operators making the choice to offer these accessible features.  
  8. Resisting government interference/Have Your Users' Backs: When governments, at home or abroad, crack down on instances, users should know what to expect from site operators and have a clear contingency plan. This could mean a seamless portability process that lets users easily move between instances as they’re blocked or removed, or solutions that allow servers to collaborate in resisting these governmental forces when necessary. EFF’s 2017 Who Has Your Back safeguards are a good place to start.
  9. True federation: Users of similar services should be able to communicate with one another across platform borders. Services that choose to plug in to the fediverse should make it possible for their users to interact with users of competing services on equal terms, and should not downgrade security, privacy, or other essential features for those coming in from outside.
  10. Interoperability and preventing the next lock-in: The flip side of supporting true federation is avoiding lock-in. Let’s not abolish today’s dictators only to come under the thumb of future ones. Even more so, let’s be sure the current tech giants don’t use their market power to take over the fediverse and lock us all into their particular instance. We must be ready to fight “embrace, extend, and extinguish” and other legal and technical lock-in strategies that companies like Microsoft and Facebook have tried in the past. Here are some principles to stand up for:
    • Stopping Anticompetitive Behavior:  New services should not lock out competitors and follow-on innovators simply to avoid competition. They must not abuse the CFAA, DMCA Section 1201, or overreaching terms of service. Today’s tech giants like to cloak their anticompetitive bullying in the language of concern over privacy, content moderation and security. Privacy-washing and security-washing have no place in a federated, open internet.  
    • Portability:  It should be simple for a user to migrate their contacts, the content they’ve created, and other valuable material from one service to a competing service, and to delete their data from a platform that they’ve chosen to leave. Users should not be locked into accounts.
    • Delegability: Users of a service should be able to access it using custom tools and interfaces developed by rivals, tinkerers, or the users themselves. Services should allow users to delegate core functions, like reading news feeds, sending messages, and engaging with content, to third-party clients or services; a minimal sketch of what that can look like today appears after this list.
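
To make "delegability" concrete, here is a minimal sketch (not an official EFF or Mastodon tool) of a third-party client reading a user's home timeline through Mastodon's documented REST API. The instance URL and access token below are placeholders you would supply yourself; the token comes from the instance's normal OAuth app-registration flow, and the instance's operators and users, not a single platform owner, decide which clients to allow.

```python
import requests

INSTANCE = "https://example.social"   # hypothetical instance
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"    # issued to your client via the instance's OAuth flow

def home_timeline(limit: int = 20) -> list:
    """Fetch recent posts from the authenticated user's home timeline."""
    resp = requests.get(
        f"{INSTANCE}/api/v1/timelines/home",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for status in home_timeline():
        # Each status is a JSON object; "account" and "content" are standard fields.
        print(status["account"]["acct"], "-", status["content"][:80])
```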

Up until very recently, it was easier to imagine the end of the internet than it was to imagine the end of the tech giants. The problems of living under a system dominated by unaccountable, vast corporations seemed inescapable. But growth has stagnated for these centralized platforms, and Twitter is in the midst of an ugly meltdown. It won’t be the last service to disintegrate into a thrashing mess. 

Our hearts go out to the thousands of workers mistreated or let go by the incumbent players. The major platforms have already screwed it up, but now we have the chance to get it right and build something better.