Rewriting the legal pillars of the Internet is a popular sport these days. Frustration at Big Tech, among other things, has led to a flurry of proposals to change long-standing laws, like Section 230, Section 512 of the DMCA, and the E-Commerce Directive, that help shield online intermediaries from potential liability for what their users say or do, or for their content moderation decisions.

If anyone tells you revising these laws will be easy, they are, at best, gravely mistaken. For decades, Internet users – companies, news organizations, creators of all stripes, political activists, nonprofits, libraries, educators, governments and regular humans looking to connect – have relied on these protections. At the same time, some of the platforms and services that help make all that possible have hosted and amplified a great deal of harmful content and activity. Dealing with the latter without harming the former is an incredibly hard challenge. As a general matter, the best starting point is to ask: “Are intermediary protections the problem? Is my solution going to fix that problem? Can I mitigate the inevitable collateral effects?” The answer to all three should be a firm “Yes.” If so, the idea might be worth pursuing. If not, back to the drawing board.

That’s the short version. Here’s a little more detail about what EFF asks when policymakers come knocking.

What’s it trying to accomplish?

This may seem obvious, but it’s important to understand the goal of the proposal and then match that goal to its likely actual impacts. For example, if the stated goal is to “rein in Big Tech,” we consider whether the plan might actually impede competition from smaller tech companies. If the stated goal is to prevent harassment, we want to be sure the proposal won’t discourage platforms from moderating content to cut down on harassment, and we consider whether it will encourage overbroad censorship of non-harassing speech. In addition, we pay attention to whether the goal is consistent with EFF’s mission: to ensure that technology supports freedom, justice, and innovation for everyone.

Is it constitutional?

Too many policymakers seem to care too little about this detail – they’ll leave it for others to fight out in the courts. Since EFF is likely to be doing the fighting, we want to plan ahead – and help others do the same. Call us crazy, but we also think voters care about staying within the boundaries set by the Constitution, and about whether their representatives are wasting time (and public money) on initiatives that won’t survive judicial review.

Is it necessary – meaning, are intermediary protections the problem?

It’s popular right now to blame social media platforms for a host of ills. Sometimes that blame is deserved. And sometimes it’s not. Critics of intermediary liability protections too often forget that the law already affords rights and remedies to victims of harmful speech when it causes injury, and that the problem may stem from a failure to enforce existing laws against the users who violate them. State criminal penalties apply to both stalking and harassment, and a panoply of civil and criminal statutes address conduct that causes physical harm to an individual. Moreover, if an Internet company discovers that people are using its platforms to distribute child sexual abuse material, it must provide that information to the National Center for Missing and Exploited Children and cooperate with law enforcement investigations. Finally, law enforcement sometimes prefers to keep certain intermediaries active so that it can better investigate and track people who are using the platform to engage in illegal conduct.

If law enforcement lacks the resources to follow up on reports of harassment and abuse, or lacks a clear understanding of, or commitment to, enforcing the relevant laws when violations occur in the digital space, that’s a problem that needs fixing, immediately. But the solution probably doesn’t start or end with a person screening content in a cubicle, much less an algorithm attempting to do the same.

In addition to criminal charges, victims can use defamation, false light, intentional infliction of emotional distress, common law privacy, interference with economic advantage, fraud, anti-discrimination laws, and other civil causes of action to seek redress against the original author of the offending speech. They can also sue a platform if the platform owner is itself authoring the illegal content.

As for the platforms themselves, intermediary protections often contain important exceptions. To take just a few examples: Section 512 does not limit liability for service providers’ own infringing activities, and it requires them to take action when they have knowledge of infringement by their users. Section 230 does not provide immunity against prosecutions under federal criminal law, or against liability based on copyright law or certain sex trafficking laws. (Backers of SESTA/FOSTA, the last Section 230 “reform,” pointed to Backpage.com as a primary target, but the FBI shut down the site without any help from that law.) Nor does Section 230 provide immunity against civil or state criminal liability where the company is responsible, in whole or in part, for the creation or development of the information at issue. Nor does Section 230 immunize certain intermediary involvement with advertising, e.g., if a platform requires advertisers to choose ad recipients based on their protected status.

Against this backdrop, we ask: are intermediaries at fault here and, if so, are they beyond the reach of existing law? Will the proposed change help alleviate the problem in a practical way? Might targeting intermediaries impede enforcement under existing laws, such as by making it hard for law enforcement to locate and gather evidence about criminal wrongdoers?

Will it cause collateral damage? If so, can that damage be adequately mitigated?

As a civil liberties organization, one of the main reasons EFF defends limits on intermediary liability is that we know the crucial role intermediaries play in empowering Internet users who rely on those services to communicate. Attempts to change platform behavior by undermining Section 230 or Section 512, for example, may actually harm lawful users who rely on those platforms to connect, organize, and learn. This is a special risk for historically marginalized communities, which often lack a voice in traditional media and often find themselves improperly targeted by content moderation systems. The ultimate beneficiaries of limits on intermediary liability are all of us who want those intermediaries to exist so that we can post things without having to code and host them ourselves, and so that we can read, watch, and re-use content that others create.

Further, we are always mindful that intermediary liability protections are not limited to brand-name “tech companies” of any size. Section 230, by its language, provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and decisions to transmit it without moderation. “User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website that hosts other people’s comments, posts another person’s op-ed to message boards or newsgroups, or forwards email written by someone else. A user can be an individual, a nonprofit organization, a university, a small brick-and-mortar business, or, yes, a “tech company.” And Section 512 protects a wide range of services, from your ISP to Twitter to the Internet Archive to a hobby site like Ravelry.

Against this backdrop, we ask: Who will be affected by the law? How will they respond?

For example, will intermediaries seek to limit their liability by censoring or curtailing lawful speech and activity? Will the proposal require intermediaries to screen or filter content before it is published? Will the proposal directly or indirectly compel intermediaries to remove or block user content, accounts, whole sections of websites, or entire features or services? Will intermediaries shut down altogether? Will the cost of compliance become a barrier to entry for new competitors, further entrenching existing gatekeepers? Will the proposal empower a heckler’s veto, where a single notice or flag claiming that an intermediary is being used for illegal purposes results in third-party liability if the intermediary doesn’t take action?

If the answer to any of these is yes, does the proposal include adequate remediation measures? For example (focusing just on competition), if compliance could make it difficult for smaller companies to compete or for alternatives to emerge, does the proposal include mitigation measures? Will those measures be effective?

What expertise is needed to evaluate this proposal? Do we have it? Can we get it?

One of the many lessons of SESTA/FOSTA was that it’s hard to assess collateral effects if you don’t ask the right people. We asked sex workers and child safety experts what they thought about SESTA/FOSTA. They told us it was dangerous. They were right.

We take the same approach with the proposals coming in now. Do we understand the technological implications? For example, some proposed changes to Section 230 protections in the context of online advertising might effectively force systemic changes that will be both expensive and obsolete in a few years. Some might make a lot of sense and not be too burdensome for some services. Others might simply be difficult to assess without deeper knowledge of how the advertising system works now and will likely work in the future. Some implications for users might not be clear to us, so it’s especially important to seek out potentially affected communities and ensure they have a meaningful opportunity to consult and be heard on the impacts of any proposal. We try to make sure we know what questions to ask – and to recognize what we don’t know.

Online Intermediary Liability Reform Is Hard

Many intermediary liability reform proposals are little more than vaporware from policymakers who seem bent on willfully misunderstanding how intermediary protections and even the Internet work. But some are more serious, and deserve consideration and review. The above questions should help guide that process.
