This past weekend, Facebook CEO Mark Zuckerberg took to the pages of the Washington Post to ask governments and regulators to play a more active role in policing the Internet, and to offer some ideas for how they should do so. As the New York Times noted, Zuckerberg’s comments were doubtless intended to stave off ideas Facebook would like even less—but that doesn’t make them good ones.

Here we look at two of Zuckerberg’s ideas for “standardized” or “global” rules for the entire Internet: platform censorship, and data privacy laws.

Calling in the Speech Police

Let’s start with his first idea: a “standardized approach” to “harmful content” online, whereby third-party bodies—let’s call them speech police—decide what content is acceptable and what is not, and companies are required to “build systems” to shut down as much of the latter as possible. Facebook is already inviting government regulators to help it do so on its own platform—and apparently thinks everyone else should do the same.

There are at least four fundamental problems with this idea.

First, it is extremely difficult to define “harmful content,” much less implement standards consistently and fairly for billions of users, across the entire spectrum of contemporary thought and belief. Zuckerberg’s own company’s efforts to do so show just how fraught that project is.

All of the major platforms already set forth rules for their users. They tend to be complex, covering everything from terrorism and hate speech to copyright and impersonation. Most platforms use a version of community reporting. Violations of these rules can prompt takedowns and account suspensions or closures. And we have well over a decade of evidence about how these rules are used and misused.
We’ve seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. Museums have had works of art taken down for “suggestive content.” And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech.

Platform censorship has also swept up images and videos that document atrocities and make us aware of the world outside of our own communities. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam War photo.

If individual companies, some with massive resources, can’t get this right, we have no reason to imagine that an independent body will do much better.

Second, as the above suggests, requiring companies to build systems that take down only “harmful content” is a dangerous exercise in magical thinking. No algorithm or group of moderators can perfectly differentiate between speech that should be protected and speech that should be erased, not least because a great deal of problematic content sits in the ambiguous territory between disagreeable political speech and abuse, between fabricated propaganda and legitimate opinion, or between things that are legal in some jurisdictions and illegal in others—or is simply material some users want to read and others don’t.

Third, while the free and open Internet has never been fully free or open, at root, the Internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world, anonymously or not, to tell their story, organize, educate, and learn. Moderated forums can be valuable to many people, but there must also be a place on the Internet for unmoderated communications, where content is controlled neither by the government nor a large corporation. Mandating a standardized approach across all sharing services would eliminate that possibility—and with it a core promise of the Internet.

Last but not least, as Zuckerberg should know given the phalanx of smart lawyers he employs, regulations along the lines he suggests would violate the First Amendment in the U.S. They could also run afoul of an existing international standard for freedom of expression: Article 19 of the Universal Declaration of Human Rights, which states that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Internationally imposed “harmful content” standards, however well-intentioned, will inevitably shut down the sharing of a wide range of opinion and information.

That's why the United Nations Special Rapporteur on freedom of expression reminded governments last year that they should "only seek to restrict content pursuant to an order by an independent and impartial judicial authority, and in accordance with due process and standards of legality, necessity and legitimacy." Zuckerberg's idea of cozy agreement between multiple governments and the giant platforms does not reach that standard.

States, Zuckerberg is Coming for Your Data Privacy Laws

Zuckerberg also calls for a “common global framework” for data privacy laws, “rather than regulation that varies significantly by country and state.” There are real benefits to a uniform standard, rather than forcing companies to comply with many different state and federal laws. However, we’re not sure that’s all Zuckerberg is saying here: in the U.S., for example, it’s very much in Facebook’s interest to push for a federal data privacy law that would preempt stronger existing state laws.

Several state laws have already created strong protections for user privacy. Three particularly strong examples are California’s Consumer Privacy Act, Illinois’ Biometric Information Privacy Act, and Vermont’s Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy.

And Zuckerberg’s point here isn’t just about privacy; it’s also about competition. Facebook was able to achieve its current size thanks in part to a lack of data privacy laws in its early days. Imposing a one-size-fits-all standard on companies and organizations of different sizes, with different resources, in different places would put would-be competitors at a disadvantage that Facebook never had to overcome. Unsurprisingly, Zuckerberg's vision for Internet regulation prioritizes Facebook's business interests above those of its potential competitors.

If governments and regulators want to explore new rules for the Internet, Mark Zuckerberg is the last person they should ask for advice. Instead, they should talk to users, small innovators and platforms, engineers (including the people who built the Internet), civil society, educators, activists, and journalists—all of whom depend on robust protections for both privacy and the freedom to express and communicate without running through a gauntlet of gatekeepers.