Some of the biggest names in the U.S. entertainment industry have expressed a recent interest in a topic that’s seemingly far from their core business: shutting down online prostitution. Disney, for instance, recently wrote to key U.S. senators expressing its support for SESTA, a bill that was originally aimed at sex traffickers. For its part, 20th Century Fox told the same senators that anyone doing business online “has a civic responsibility to help stem illicit and illegal activity.”
Late last year, the bill the entertainment companies supported morphed from SESTA into FOSTA, and then into a kind of Frankenstein bill that combines the worst aspects of both. The bill still does nothing to catch or punish traffickers, or provide help to victims of sex trafficking.
As noted by Freedom Network USA, the largest coalition of organizations working to fight human trafficking, law enforcement already has the ability to go after sex traffickers and anyone who helps them. Responsible web operators can help in that task. The civil liabilities imposed by FOSTA could actually harm the hunt for perpetrators.
Freedom Network suggests the better approach would be to provide services and support to victims, but that’s not what FOSTA does. What it does do is offer a powerful incentive for online platforms to police the speech of users and advertisers. A perceived violation of a state’s anti-trafficking laws could lead to authorities seeking civil or criminal penalties, or a barrage of lawsuits.
So, why are movie studios involved at all in this debate? Hollywood is lobbying for laws that will force online intermediaries to shut down user speech. That’s what they’ve been seeking since practically the beginning of the Internet.
A Brief History of Safe Harbors
The Internet as we know it is underpinned by two critical laws that have allowed user speech to blossom: Section 230 of the Communications Decency Act, and 17 U.S. Code § 512, which outlines the “safe harbor” provisions of the Digital Millennium Copyright Act, or DMCA.
Section 230 prevents online platforms from being held liable, in many cases, for their users’ speech. Platforms are free to moderate speech in a way that works for them—removing spam or trolling comments, for instance—without being compelled to read each comment, or view each video, a task that’s simply impossible on sites with thousands or millions of users.
Similarly, the DMCA safe harbor shields the same service providers from copyright damages based on user infringement, as long as they follow certain guidelines. The two laws work together to send a clear message: in the online world, users are responsible for their own actions and speech, and online platforms can mediate that speech—or not—as fits the needs of their community.
For two decades now, Section 230 and the DMCA have complemented each other, allowing for an explosion of online creativity. Without the DMCA safe harbor, small businesses could face bankruptcy over the copyright infringement of a few users. And without Section 230, the same businesses could be sued for a vast array of user misbehavior that they didn’t even know about. Lawsuits for libel or invasion of privacy, for instance, could be aimed at the platform, rather than the person who actually committed those acts.
Without these key legal protections, many sites would make the safe choice and simply choose to not host free and unfettered discussions. Others might begin to police user content overzealously, removing or blocking lots of lawful speech for fear of letting something illegal slip through. The safe harbors keep the focus for any online wrongdoing on the actual wrongdoer, whether it’s a civil violation like copyright infringement, or criminal acts.
It’s hardly a free-for-all for the companies protected by the safe harbors, which have significant limits. Online platforms that edit or direct user speech that violates the law, for instance, can’t avail themselves of Section 230 protections. It’s fine to run online advertisements, but sites that help users post ads for illegal or discriminatory content can be, and have been, held accountable.
Section 230 doesn’t offer any shield against federal criminal law, and one doesn’t have to look far to find website operators that have been punished under those laws. The operator of the online marketplace Silk Road, for instance, was convicted of federal drug trafficking offenses.
Nor does protection accrue to websites that make contributions, even small ones, to illegal content. An online housing website, Roommates.com, lost Section 230 protection simply because it required users to answer questions that could be used in housing discrimination. While EFF has long expressed concerns about the free speech implications of the 2008 Fair Housing Council v. Roommates.com decision, it remains the law and demonstrates that Section 230 is far from a free pass.
Likewise, the DMCA safe harbors only apply if an online platform complies with numerous requirements, including implementing a repeat-infringer policy and responding to notices of infringement by taking down content.
Towards a Filtered Net?
For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.
The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”
That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet policed by “copyright bots” is what major film studios and record labels have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.
Software filters can provide a useful first take in moderating content, but they need proper supervision from humans. Bots still can’t determine when use of copyrighted material is fair use, for instance, which is why a best practice is to always let human creators dispute the determination of an automated filter.
Similarly, it’s unlikely that an automated filter will be able to determine the nuanced difference between actual online sex-trafficking and a discussion about sex-trafficking. Knocking down safe harbors will lead to an over-reliance on flawed filters, which can easily silence the wrong people.
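To see why context-blind filtering silences the wrong people, consider a minimal sketch of the kind of keyword matching such filters rely on. The term list and sample posts below are hypothetical illustrations, not any real platform’s filter:

```python
# A deliberately naive keyword filter, illustrating why automated
# moderation without human review over-blocks. The blocked-term list
# and example posts are invented for this sketch.

BLOCKED_TERMS = {"sex trafficking", "escort"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked by the keyword list."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# A hypothetical illicit ad and a lawful news discussion both contain
# a blocked term, so the filter treats them identically.
ad = "Escort services available tonight"
commentary = "Read our new report on how survivors escape sex trafficking"

print(naive_filter(ad))          # True: the ad is flagged
print(naive_filter(commentary))  # True: the lawful discussion is flagged too
```

The filter has no notion of context or intent: advocacy, journalism, and survivor testimony match the same strings as the content the law targets, which is exactly the over-blocking problem described above.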
Those filters would create a huge barrier to entry for startups, non-profits, and hobbyists. And at the end of the day, they’d hurt free speech. Saying that new technology can produce a successful filter is a fallacy—bots simply can’t do fair use.
So when Hollywood and entrenched tech interests suddenly take a new interest in the problem of sex trafficking, it’s fair to wonder why. After all, an Internet subject to corporate filters will make it harder, not easier, to hunt down and prosecute sex traffickers.
Punching a hole in safe harbors to reshape the Internet has been the project, in many different forms, for more than a decade now. The FOSTA bill, if it passes the Senate, will be the first major success in dismantling a safe harbor. But don’t count on it to be the last.
Stop SESTA and FOSTA