Billionaire Elon Musk says Twitter can be an “incredibly valuable service to the world,” a global forum where ideas and debates flourish. Yet, much of what he has done since taking over the company suggests that he doesn’t understand how to accomplish this, doesn’t appreciate the impact of his decisions on users—especially the most vulnerable—or doesn’t care.

Step by step, from the firing of top trust and safety executives and content moderation staff to the disastrous rollout and rollback of the $8 blue check program, Musk’s reign at Twitter has already increased risks to users—especially those in crisis zones around the world who flocked to Twitter for expression during unrest—by unraveling guardrails against misinformation, harassment, and censorship.

Hate speech will remain on the platform but will be “max deboosted,” Musk said, presumably meaning that Twitter’s algorithms will not promote it in users’ feeds. It’s not clear how that categorization will be made or what speech will be deemed hateful.

“Elon has shown that his only priority with Twitter users is how to monetize them,” said an unidentified company lawyer on the privacy team, in a Twitter Slack post obtained by The Verge. “I do not believe he cares about the human rights activists, the dissidents, our users in un-monetizable regions, and all the other users who have made Twitter the global town square you have all spent so long building, and we all love.”

We’ve seen an exodus of Twitter executives on the front lines of protecting safety, security, speech, and accessibility. Some were fired, others resigned. Gone are Yoel Roth, Head of Trust and Safety; Lea Kissner, Chief Information Security Officer; Damien Kieran, Chief Privacy Officer; Marianne Fogarty, Chief Compliance Officer; Raj Singh, Human Rights Counsel; and Gerard K. Cohen, Engineering Manager for Accessibility. Half of Twitter’s 7,500 employees were let go, with trust and safety departments hit the hardest. A second wave of global job cuts reportedly hit over 4,000 outside contractors, many of whom worked as content moderators battling misinformation on the platform in the U.S. and abroad. As many as 1,200 staffers resigned late last week after Musk gave employees a deadline to decide whether to stay or leave.

Along the way, Musk fired the entire human rights team, a group tasked with ensuring Twitter adheres to the UN Guiding Principles on Business and Human Rights. That team’s work was crucial to combating harmful content, platform manipulation, and the targeting of high-profile users in conflict zones, including in Ethiopia, Afghanistan, and Ukraine. These teams were also involved in ensuring that Twitter resisted censorship demands from authoritarian countries that don’t comport with human rights standards. Such demands are on the rise; in the latter half of 2021, the company received a record 50,000 legal content takedown demands.

What is worse, Musk’s vow to “hew close to the laws of countries in which Twitter operates” could mean that the company will begin complying with censorship policies and demands for user data that it has previously withstood.

For example, Qatar—whose government is one of Musk’s financial backers—has a law that threatens imprisonment or fines to “anyone who broadcasts, publishes, or republishes false or biased rumors, statements, or news, or inflammatory propaganda, domestically or abroad, with the intent to harm national interests, stir up public opinion, or infringe on the social system or the public system of the state.” The broad law, which has been condemned by Amnesty International, is ripe for abuse and creates a chilling environment for speech.

And while Twitter’s moderation policies have been far from perfect, the company has often stood up for its users. For example, when authorities in India pressured Twitter to block accounts that criticized the government, including those of activists, journalists, and politicians, the company pushed back, even filing a lawsuit challenging the government’s demand to remove 39 tweets and accounts. Given that Musk fired 90 percent of Twitter’s 200 staffers in India earlier this month, will Twitter continue to defend the case?

Even before the layoffs, some employees reportedly lost access to the internal tools used for content moderation and policy enforcement, raising questions about whether moderators could fend off misinformation ahead of the November 8 U.S. midterm elections. It’s no coincidence that the platform experienced a surge in racist slurs in the first few days after Musk’s $44 billion acquisition.

Another problem is Twitter Blue, a revamped subscription service that gives users a blue check mark and early access to new features for $7.99 a month. Pre-Musk, a blue check mark indicated that Twitter had independently verified the account as belonging to the person or organization—celebrities and journalists, but also activists and artists—it claimed to represent. It was a way to combat fake accounts and misinformation and garner trust in the platform. Musk wants to make it available to anyone willing to pay for it.

The new Twitter Blue backfired almost immediately, as users exploited it to impersonate people, governments, and companies at will. Some of the fakes were funny; others, such as fake airline customer support accounts that tried to lure in Twitter users seeking help from real airlines, were not.

Twitter suspended many of those accounts, but not before anti-trans trolls, far-right extremists, and conspiracy mongers, some of whom had been kicked off Twitter in the past for hateful content and misinformation, purchased blue check marks and picked right up where they left off. The program was temporarily suspended following the wave of abuse.

Whenever it’s resurrected, and even if it’s not actively abused, the Twitter Blue pay-to-play model will disproportionately affect people and groups that can’t afford $96 a year and undercut their ability to be heard. Blue checks were a sign of trustworthiness for journalists, human rights defenders, and activists, especially in countries with authoritarian regimes where Twitter has been a vital source of information and communication. Even worse, according to Musk, people who don’t pay will be harder to find on the platform: paid accounts will receive priority ranking and appear first in search, replies, and mentions.

Also in the works is a content moderation council that will represent “widely diverse views.” In early November Musk met with officials from civil rights organizations, including the National Association for the Advancement of Colored People, Color of Change, and the Anti-Defamation League, saying he would restore content moderation tools that had been blocked from staff and inviting them to join the council. But marginalized communities outside of the United States have also relied on Twitter to get their voices heard. Will anyone on the council have the expertise and credibility to speak on their behalf?

Musk said in late October that no major content decisions or account reinstatements would occur until the council was formed. He has yet to announce that any such council exists, but on Nov. 20 he reinstated high-profile people kicked off the platform for hate speech and misinformation, including former President Donald Trump—who was let back on Twitter after Musk polled users—Kanye West, and the Babylon Bee, which was banned for anti-trans comments.

There is one potential positive development for Twitter users around the world: it appears Musk might be making good on his promise that Twitter direct messages will be end-to-end encrypted. That would enable Twitter users to communicate more safely without leaving the platform.

But it doesn’t overshadow or outweigh the potential harms to the site’s most vulnerable users. Prioritizing the monetization of users will inevitably leave behind millions of Twitter users in unmonetizable regions and ensure that their voices will be relegated to the bottom of the feed, where few will be able to find them.