2022 marked an important year for digital rights across the European Union as the landmark Digital Services Act (DSA) came into force on 16 November, seeking to foster a safer and more competitive digital space.
The DSA overhauls the EU’s core platform regulation, the e-Commerce Directive, and is intended to be an important tool in making the internet a fairer place by setting out new legal responsibilities for online platforms and by informing users why content is removed and what they can do about it. The powers of Big Tech are also reined in: the DSA requires “very large online platforms” (VLOPs) to comply with far-reaching obligations and to responsibly tackle systemic risks and abuse on their platforms. These risks cover a variety of aspects, including the dissemination of illegal content, disinformation, and negative impacts on fundamental rights. VLOPs also face oversight through independent audits, which will assess whether platforms respect their obligations under the DSA.
Whilst the obligations placed on intermediary services depend on the service’s role, size, and impact in the online ecosystem, the DSA introduces broad protections for user privacy by prohibiting platforms from targeting advertising based on sensitive user information, such as ethnicity or sexual orientation. More broadly, the DSA increases transparency around the ads users see in their feeds: platforms must place a clear label on every ad, with information about the buyer of the ad and other details. Despite being in its infancy, this provision is already creating tension, as companies like Twitter – whose primary source of income is ad revenue – have publicly affirmed their intention to further amplify targeted ads on the platform, in potential contravention of the DSA.
The DSA’s emphasis on greater transparency and user rights also includes requirements on platforms to explain their content curation algorithms in more detail and in user-friendly language. This aims to ensure that users can better understand how content decisions – which should be non-arbitrary – are made, and how they can pursue reinstatement should platforms make mistakes. The DSA also requires platforms to give users the option to choose a content curation algorithm that is not based on profiling.
By and large – and this is the right approach to platform governance regulation – the DSA doesn’t tell social media platforms what speech they can and can’t publish. Instead, it focuses on making processes and content moderation transparent to users, and requires platforms to take concerns about safety and the protection of fundamental rights seriously.
Moreover, the DSA largely preserves the EU’s system of limited liability for online intermediaries, which means that platforms cannot be held responsible for user content provided that they remove content they actually “know” to be illegal. After extensive deliberation, EU lawmakers rejected takedown deadlines that would have suppressed legal, valuable, and benign speech, and EFF helped to ensure that the final language steered clear of intrusive filter obligations. This will enhance user rights online: without intermediary liability protections, users become subject to harmful profiling, stifled free speech, and a system that often fosters a pernicious culture of self-censorship. However, new due diligence standards could still encourage platforms to over-remove, whilst other requirements seek to moderate platforms’ actions against user speech. We will be watching closely to see how this plays out in practice.
Many believe that the DSA could become a new gold standard for regulators around the world. But the DSA isn’t all good news, and some aspects may be a good fit for Europe but not for other parts of the world. One particularly concerning omission from the DSA is an express protection for anonymous speech. Instead, the DSA provides a fast-track procedure for law enforcement authorities to take on the role of “trusted flaggers”, uncovering data about anonymous speakers and flagging allegedly illegal content, which platforms then become obligated to remove quickly. Issues with government involvement in content moderation are pervasive, and whilst trusted flaggers are not new, the DSA’s system could have a significant negative impact on the rights of users, in particular privacy and free speech.
Since the DSA was first introduced by the European Commission in December 2020, EFF has fought for protections in four key areas: platform liability, interoperability mandates, procedural justice, and user control. And our message to the EU has remained clear: Preserve what works. Fix what is broken. And put users back in control.
Yet despite the DSA finally passing, our work has just begun. The success of the DSA’s pledge to create a user-protective online environment will depend on how social media platforms interpret their new obligations, and how European Union authorities enforce the regulation. Respect for the EU’s Fundamental Rights Charter and the inclusion of digital rights groups and marginalized communities in the implementation process are crucial to ensuring that the DSA becomes a positive model for legislation on digital rights – both inside and outside the EU’s borders. And as racist language proliferates across platforms like Twitter, and lawful speech is arbitrarily removed at the request of law enforcement on other services, user-centered and transparent content governance processes are more pertinent than ever.