The UK government has had more than a year to revise its Online Safety Bill into a proposal that wouldn’t harm users’ basic rights. It has failed to do so, and the bill should be scrapped. The current bill is a threat to free expression, and it undermines the encryption that we all rely on for security and privacy online. 

The government intended to advance and vote on the Online Safety Bill last month, but the scheduled vote was postponed until a new UK Prime Minister is chosen. Members of Parliament should take this opportunity to insist that the bill be tossed out entirely. 

Subjective Standards for Censorship 

If the Online Safety Bill passes, the UK government will be able to directly silence user speech, and even imprison those who publish messages that it doesn’t like. The bill empowers the UK’s Office of Communications (OFCOM) to levy heavy fines or even block access to sites that offend people. We said last year that those powers raise serious concerns about freedom of expression. Since then, the bill has been amended, and it’s gotten worse. 

People shouldn’t be fined or thrown in jail because a government official finds their speech offensive. In the U.S., the First Amendment prevents that. But UK residents can already be punished for online statements that a court deems “grossly offensive,” under the Communications Act 2003. If the Online Safety Bill passes, it would expand the potential scope of such cases. It would also significantly deviate from the E.U.’s new internet legislation, the Digital Services Act, which avoids transforming social networks and other services into censorship tools.

Section 10 of the revised bill even authorizes jail time of up to two years for anyone whose social media message could cause “psychological harm amounting to at least serious distress.” The message doesn’t even have to cause harm: if the authorities believe that the offender intended to cause harm, and that there was a substantial risk of harm, that’s enough for a prosecution. There’s also a separate crime of transmitting “false communications,” punishable by fines or up to 51 weeks of imprisonment. 

The problem here should be obvious: these are utterly subjective criteria. People disagree all the time about what constitutes a false statement. Determining which statements carry a “real and substantial risk” of causing psychological harm is the epitome of a subjective question, as is deciding who has a “reasonable excuse” for making such a statement. This lack of legal certainty casts doubt on whether the UK’s Online Safety Bill meets international human rights standards.

The few exceptions in the section appear to be carve-outs for large media concerns. For instance, recognized news publishers are exempt from the section on communications offenses. So is anyone “showing a film made for cinema to members of the public.” 

The exceptions are telling. The UK’s proposed new censors at OFCOM are making it clear that they’ll never enforce these rules against corporate media concerns; it’s only small media creators, activists, citizen journalists, and everyday users who will be subject to the extra scrutiny and the accompanying punishments. 

Online platforms will also face massive liability if they miss OFCOM’s deadlines for removing images and messages related to terrorism or child abuse. But it is extremely difficult for human reviewers to correctly distinguish between activism, counter-speech, and extremist content. Algorithms do an even worse job. When governments around the world pressure websites to quickly remove content they deem “terrorist,” it results in censorship. The first victims of this type of censorship are usually human rights groups seeking to document abuses and war. And while the bill does require online service providers to consider the importance of journalistic freedom of expression, the safeguards are onerous and weak. 

Another Attack on Encryption

The bill also empowers OFCOM to order online services to “use accredited technology”—in other words, government-approved software—to find child abuse images (Section 104). Those orders can be issued against online services that use end-to-end encryption, meaning they currently don’t have any technical way to inspect user messages. This part of the bill is a clear push by the bill’s sponsors to get companies to either abandon or compromise their encryption systems. 

Unfortunately, we’ve seen this pattern before. Unable to win public support for the idea of police scanning every message online, some lawmakers in liberal democracies have turned to workarounds. They have claimed that certain types of encryption backdoors are needed to inspect files for the worst crimes, like child abuse. And they’ve claimed, falsely, that certain methods of inspecting user files and messages, like client-side scanning, don’t break encryption at all. We saw this in the U.S. in 2020 with the EARN IT Act, last year with Apple’s proposed client-side scanning system, and this year with a similar system proposed in the E.U. 
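
To make concrete why security experts say such schemes bypass, rather than preserve, end-to-end encryption, here is a minimal, purely illustrative sketch in Python. It is not any real vendor’s or government’s system; the names WATCHLIST_HASHES, report_to_authority, and the stand-in encrypt_end_to_end function are all hypothetical placeholders.

import hashlib

# Hypothetical database of content hashes the client is required to check.
WATCHLIST_HASHES = {
    hashlib.sha256(b"example-flagged-content").hexdigest(),
}

def encrypt_end_to_end(plaintext: bytes) -> bytes:
    # Stand-in for a real end-to-end encryption step (e.g. a Signal-style
    # protocol). Deliberately trivial; the details are irrelevant here.
    return plaintext[::-1]

def report_to_authority(plaintext: bytes) -> None:
    # Hypothetical reporting hook that a scanning order would mandate.
    print("match found; content would be forwarded for review")

def send_message(plaintext: bytes) -> bytes:
    # The check runs on the plaintext, on the user's device, before any
    # encryption is applied. The cipher itself is untouched, but the
    # content is inspected and can be reported anyway.
    if hashlib.sha256(plaintext).hexdigest() in WATCHLIST_HASHES:
        report_to_authority(plaintext)
    return encrypt_end_to_end(plaintext)

send_message(b"an ordinary private message")   # sent normally
send_message(b"example-flagged-content")       # triggers the hypothetical report

The point of the sketch is the ordering: the scan happens on the device, against the plaintext, before encryption ever takes place. The messages are therefore no longer confidential between sender and recipient, even though the encryption algorithm itself is never mathematically “broken.”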

These types of systems create more vulnerabilities that endanger the rights of all users, including children. Security experts and NGOs have spoken clearly about this issue, and asked for the anti-encryption sections of this bill to be withdrawn, but the bill’s sponsors have unfortunately not listened. 

If it passes, the censorious, anti-encryption Online Safety Bill won’t just affect the UK—it will be a blueprint for repression around the world. The next UK Prime Minister should abandon the bill in its entirety. If they won’t, Parliament should vote to reject it.