Why are we telling these stories? Because Alice is under attack. A few loud voices in the patent lobby want to amend the law to bring back these stupid patents. It’s time to tell the stories of the individuals and businesses that have been sued or threatened with patents that shouldn’t have been issued in the first place.
Several US senators spoke out this week on the importance of net neutrality to innovation and free speech. They are right. The Internet has become our public square, our newspaper, our megaphone. The Federal Communications Commission is trying to turn it into something more akin to commercial cable TV, and we all have to work together to stop it.
What makes the Internet revolutionary is the ability of every user to create news and culture and participate in conversations with people all across the globe. Mass consumption of entertainment products may be big business and may even help drive adoption, but it’s not new and empowering like the opportunity to participate in speech on an infinite variety of topics. As the Supreme Court recently observed, Internet platforms “can provide perhaps the most powerful mechanism available to a private citizen to make his or her voice heard.” Seven in ten American adults regularly use at least one Internet social networking service. Facebook alone has more than 1.79 billion monthly active users around the world. Twitter has over 310 million monthly active users who publish more than 500 million tweets each day. Instagram has over 600 million monthly users who upload over 95 million photos every day. Snapchat has over 100 million daily users who send and watch over 10 billion videos per day. And that’s just a small sampling of the commercial Internet platforms many of us use every day. Millions more log into sites like Wikipedia, the Internet Archive, news outlets, government services and local libraries to access a wealth of information and culture.
Most importantly, the Internet has played an increasingly vital role in political expression and organizing. Conservative activists from around the country coalesced over various social networking platforms to form the Tea Party movement. The Black Lives Matter movement used Twitter to help spark a national conversation on racial inequality. The Standing Rock Sioux used Twitter, Facebook, Instagram, and YouTube to galvanize national support for their protests against the Dakota Access Pipeline and its threat to their drinking water. Earlier this year organizers used Facebook and Twitter to share information, plan events, and motivate participation in the Women’s March.
What does this have to do with net neutrality? Simple: all of these services depend on the existence of open communications protocols that let us innovate without having to ask permission from any company or government.
The Internet was built on the simple but powerful idea that while you may need to pay a service provider for Internet access, that provider doesn’t get to shape what you access – or who has access to you. Anyone who wants to offer a new Internet service can, without paying extra fees to any provider. Users, in turn, can make their own choices about which services they want to use – including the next Twitter/Facebook/Snapchat that’s being created in someone’s basement right now.
In 2014, that powerful idea motivated millions of Internet users to band together and demand that the FCC enact clear, legally sound rules to prevent broadband providers from taking advantage of their power as gatekeepers to engage in unfair practices like paid prioritization, blocking, and other forms of data discrimination. We know that such practices could transform this extraordinary engine for civic discourse into something more like cable TV, where providers and content owners bargain over what content will be available at full speed and what will be throttled.
In 2015, the FCC answered our call and adopted the Open Internet Order to protect net neutrality. In 2016, the D.C. Circuit Court of Appeals upheld it – in contrast to prior FCC efforts that rested on shaky legal theories. But the new FCC Chairman, Ajit Pai, wants to reverse course. He’s calling on the public to comment on whether we even need open Internet rules in the first place, and threatening to eliminate net neutrality protections altogether by dismantling the legal structure on which they depend. He’s doing so despite widespread public support for those protections, and despite the fact that net neutrality has been the rule of the Internet from its inception, backed by a combination of legal requirements and cultural norms that are now in danger of being eliminated.
We can’t let that happen. We still have an open Internet that lets us make ourselves heard, so Let’s Make. Ourselves. Heard. The millions of Internet users who fought for Net Neutrality in 2014, and the millions more who have been mobilized in the intervening years, need to send a simple message to Chairman Pai and his backers in Congress and the Trump Administration: Don't let big cable mess with our Internet.
If the federal government wants to compel an online service provider, like Yahoo or Google, to turn over your email, they need a warrant. That's the industry-accepted best practice, implemented by nearly every major service provider. More importantly, it's what the Fourth Amendment requires.
The Securities and Exchange Commission (SEC), the federal agency charged with enforcing federal securities laws, seems to think it falls outside the warrant requirement. In a civil case currently pending in Maryland, the agency asked a federal judge to compel Yahoo to comply with an administrative subpoena—read, not a warrant—it sent to the company, which would require the company to turn over the emails of one of its users. An administrative subpoena lacks the privacy safeguards of a warrant, including a higher standard justifying government access (i.e., probable cause) and prior review by a judge.
Yahoo fought back, refusing to comply with the subpoena and opposing the SEC's motion. Last week, EFF, joined by our friends at CDT, filed an amicus brief in support of Yahoo. Our brief made a simple point: if the federal government wants to compel a third-party provider to turn over a user's email, it needs a warrant. That rule applies to the SEC, just as it does to any other federal or state government agency.
The SEC's position isn't a new one. It has long claimed a right to access email content from providers without a warrant. In fact, the SEC has been one of the primary obstacles to passing an update to the Electronic Communications Privacy Act (ECPA), the federal law that governs government access to emails and other content stored in the cloud. But this is the first time (as far as we know) that the SEC has tested its theory in court.
Fortunately, even though the SEC has so far been successful in blocking attempts to amend ECPA, the agency still has to contend with the Constitution. As we explained in our brief, because users have a reasonable expectation of privacy in their email stored with online service providers (a point SEC wisely conceded), the Fourth Amendment requires the agency to obtain a warrant—or to rely on an exception to the warrant requirement—in order to intrude upon that privacy.
The SEC argues that, as a civil law enforcement agency, it lacks the power to obtain a warrant by itself. But as we pointed out, whenever there is a criminal component to an investigation—as is the case here—the SEC can coordinate with the Justice Department to obtain a warrant. Apparently, the SEC is concerned that, in purely civil cases, when it can't work with the Justice Department to obtain a warrant, companies or individuals may be able to shield their emails from disclosure. But civil litigation offers a variety of levers for the SEC to pull in order to obtain the same or similar information, without compelling its disclosure from a third-party service provider.
Ultimately, our constitutional privacy rights shouldn't be diminished just because the SEC wants to conduct its investigations more efficiently. The hearing in the case is scheduled for Friday, June 30. We hope the court will send a clear message to government agencies: if you want to compel a third-party provider to turn over email content, get a warrant.
When looking at a proposed policy regulating Internet businesses, here’s a good question to ask yourself: would this bar new companies from competing with the current big players? Google will probably be fine, but what about the next Google?
In the past few years, some large movie studios and record labels have been promoting a proposal that would effectively require user-generated media platforms to use copyright bots similar to YouTube’s infamous Content ID system. Today’s YouTube will have no trouble complying, but imagine if such requirements had been in place when YouTube was a three-person company. If copyright bots become the law, the barrier to entry for new social media companies will get a lot higher.
A Brief History of Copyright Bots
In many ways, the history of copyright bots is really the history of Content ID. Content ID was not the first bot on the market, but it’s the template for what major film studios and record labels have come to expect of content platforms.
When Google acquired YouTube in 2006, the platform was under heavy fire from major film studios and record labels, which complained in court and in Congress that the platform enabled widespread copyright infringement. YouTube complied with all of the requirements that the Digital Millennium Copyright Act (DMCA) puts on content platforms—including following the notice-and-takedown procedure when rights holders accuse their users of infringement. The DMCA essentially offers content platforms a trade—if they do their part to tackle infringing activity, they’re sheltered from copyright liability under the DMCA safe harbor rules. Hollywood agreed to those rules back in 1998, but now it wanted to rewrite the deal.
In response to legal and commercial pressure from content industries, Google developed Content ID, a program that goes beyond YouTube’s DMCA obligations. Content ID doesn’t replace notice-and-takedown; it creates a system for proactive filtering that often lets rights holders remove allegedly infringing content without even having to send a DMCA takedown request.
Rights holders submit large databases of video and audio fingerprints, and YouTube patrols new uploads for closely matching content. Rights holders can choose to have YouTube automatically remove or monetize videos, or they can review them manually and decide what they want YouTube to do with them. There’s a built-in appeals process (which includes escalation to a DMCA takedown, with the fair use consideration the DMCA requires), but it has problems of its own.
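To make the matching workflow described above concrete, here is a minimal, hypothetical sketch of how a fingerprint-matching system could operate. Real systems like Content ID use proprietary perceptual fingerprints that survive re-encoding, cropping, and pitch-shifting; the plain chunk hashes, names, and threshold below are illustrative assumptions, not YouTube's actual method.

```python
import hashlib

CHUNK_SIZE = 4096  # bytes per fingerprinted chunk (illustrative value)

def fingerprint(data: bytes) -> set[str]:
    """Split media bytes into fixed-size chunks and hash each one."""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    }

def match_upload(upload: bytes, reference_db: dict[str, set[str]],
                 threshold: float = 0.8) -> list[str]:
    """Return IDs of reference works whose fingerprints overlap the upload.

    A work is flagged when enough of its registered fingerprint chunks
    appear in the upload; the flag would then trigger a policy the rights
    holder chose in advance (block, monetize, or track).
    """
    upload_fp = fingerprint(upload)
    matches = []
    for work_id, ref_fp in reference_db.items():
        overlap = len(upload_fp & ref_fp) / max(len(ref_fp), 1)
        if overlap >= threshold:
            matches.append(work_id)
    return matches

# A rights holder registers a work; an upload containing it is flagged.
song = bytes(range(256)) * 100
db = {"song-123": fingerprint(song)}
print(match_upload(song, db))                  # the registered work matches
print(match_upload(b"unrelated content", db))  # an unrelated upload does not
```

Even this toy version shows why such systems struggle with fair use: the matcher sees only how much content overlaps, not why it was used, so a critique or parody that quotes a work can score the same as wholesale copying.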
For better or worse, Content ID changed YouTube. It bought the company some goodwill with big content owners, many of which have now become prolific YouTube adopters.
Writing Bots into the Law
But the success of Content ID has led some rights holders to the dangerous notion that filtering alone can end the copyright wars. Now, copyright bots have begun to show up all over the Internet—often in places where they make no sense, like your private videos on Facebook. And it appears that some major content owners won’t be satisfied until web platforms have no choice but to adopt systems like Content ID – in other words, turning a voluntary system into a mandate.
Over the past few years, lobbyists representing large content owners both in the U.S. and in Europe have begun to demand mandatory filtering. These proposals vary, but their goals are the same: a world where social media platforms are vulnerable to massive copyright infringement damages unless they go to extreme measures to police their members’ uploads for potential infringement. The Chinese government has gone all-in on copyright filtering, partnering with Hollywood to scan not just people’s social media posts but even their private devices.
For the record, copyright bots can raise major problems even when they aren’t compelled by law. In principle, bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow many instances of fair use. What’s more, their appeals and conflict-resolution systems are often completely opaque to users and seem designed to favor large content companies.
Still, there’s a world of difference between platforms implementing copyright bots as a business decision and being forced to do so by governments. The latter creates a huge, expensive hurdle for a company to clear before it can ever compete in the market.
Narrow Regulations and Broad Patents
It gets worse. When companies are given only narrow space in which to compete and innovate, it becomes easier for incumbents to set legal traps within those boundaries.
It might be tempting to think that software patents on copyright filtering will incentivize innovation in filtering, thus making copyright bots more accessible to small platforms. But a patent as broad and generic as Microsoft's risks cutting off innovation well short of that goal: overbroad patents blanket an entire field, rarely disclosing any information of value about the underlying technology.
Business regulations should provide companies wide berth to innovate, experiment, and differentiate themselves from competitors. Patents should cover specific, narrowly defined inventions. Narrow regulations and broad patents are a dangerous combination.
Keep Safe Harbors Safe
Safe harbor protections are essential to how today’s Internet works—without them, many Internet companies would simply be exposed to too much legal risk to operate. Safe harbors have given us the entire social media boom and many other Internet technologies that we take for granted every day.
So any proposal that makes it more burdensome to comply with safe harbor requirements should be examined closely to make sure that it doesn’t close the market to new competitors. Mandatory copyright filtering is likely to do exactly that.
If the kind of laws big media companies are proposing today had been in place 12 years ago, it’s doubtful that YouTube could have survived its early days as a startup. And if those laws get implemented today, new players will need tremendous resources just to get started. Mandatory filtering would create a narrower playing field for Internet businesses and let the most successful players use legal tricks to maintain their advantages. It’s a bad idea.
The field of machine learning and artificial intelligence is making rapid progress. Many people are starting to ask what a world with intelligent computers will look like. But what is the ratio of hype to real progress? What kinds of problems have been well solved by current machine learning techniques, which ones are close to being solved, and which ones remain exceptionally hard?
There isn’t currently a good single place to find the state of the art on well-specified machine learning metrics, let alone the many problems in artificial intelligence that are still so hard that there are no good datasets and benchmarks to keep track of them yet. So we are trying to make one. Today, we’re launching the EFF AI Progress Measurement experiment, and encouraging machine learning researchers to give us feedback and contribute to the effort.
We have drawn data from a number of sources: blog posts that report on snapshots of progress; websites that try to collate data on specific subfields of machine learning; and review articles. Where those sources didn’t have coverage, we’ve gone to the research literature itself and gathered data.
What we have thus far is an experiment, and we’d like to know: Is this information useful to the machine learning community? What important problems, datasets, and results are we missing?
EFF’s interest in AI progress is primarily from a policy perspective. We want to know what types of AI we need to start engaging with on legal, political, and technical safety fronts. Beyond that, we’re also just excited to see how many things computers are learning to do over time.
Given that machine learning tools and AI techniques are increasingly part of our everyday lives, it is critical that journalists, policy makers, and technology users understand the state of the field. When improperly designed or deployed, machine learning methods can violate privacy, threaten safety, and perpetuate inequality and injustice. Stakeholders must be able to anticipate such risks and policy questions before they arise, rather than playing catch-up with the technology. To this end, it’s part of the responsibility of researchers, engineers, and developers in the field to help make information about their life-changing research widely available and understandable. We hope you’ll join us.
EFF has just launched the Summer Security Camp, a two-week membership drive that challenges people everywhere to gather ‘round the online rights movement and prepare for the privacy and free speech challenges in their paths.
Through the 4th of July, anyone can join EFF or renew as a Silicon level member for just $20 and receive a set of miniature field guides with shareable security tips covering these crucially relevant issues:
Border Search: know your rights and defend personal data at the border.
The EFF site contains extensive analysis of these topics and much more, but the Summer Security Camp's printed pocket guides distill some of the most important information to help keep you safe on the go, come what may. Members will have access to home-printable versions of these tips to share with friends and family because, as we know, privacy is a team sport and everyone wins.
As a bonus, participants will receive a special edition embroidered patch to help them show support for the cause. Think of it as a digital civil liberties merit badge.
Threats to privacy and free expression abound, but EFF doesn’t believe in the no-win scenario. We work every day to defend user rights and empower you with knowledge that you can share in your community. The more prepared we are and the more we can count on each other, the stronger we’ll be. Let’s take a stand for online rights today!
The Supreme Court’s unanimous decision in Matal v. Tam striking down the trademark non-disparagement requirement as unconstitutional is a big victory for the First Amendment. First, the Court strongly pushed back against the expansion of the government-speech doctrine, perhaps the biggest current threat to free speech jurisprudence. Second, the Court strengthened a position EFF has long advocated—that intellectual property rights and First Amendment rights must be balanced against each other rather than weighted in favor of the former.
The case arose when the band The Slants was denied a federal trademark based on a federal law that prohibits the registration of a trademark that may “disparage. . . or bring into contemp[t] or disrepute” any “persons, living or dead.” The Court found that provision violated the First Amendment. It may no longer be used as a basis for denying trademark registration.
Pushing Back on the Dangerous Government-Speech Doctrine
The Government’s primary argument in defense of the disparaging trademark ban was that registered trademarks were “government speech,” not the speech of the trademark owner. That is, in denying registration, the government was not punishing The Slants because it disagreed with the viewpoint the mark expressed; rather, the government was simply choosing not to include disparaging terms in its own speech.
The government-speech doctrine is unique among First Amendment law in that it is the only situation in which the government may discriminate on the basis of the speaker’s viewpoint. In its most basic application, it is noncontroversial: the government itself may adopt policy positions and promote them without having to equally promote opposing policies advocating the opposite viewpoint. In all other contexts, the government cannot deny a speaker access to a forum or otherwise punish them because of a disagreement with the views expressed.
As the Court recognized in Matal, the government-speech doctrine “is susceptible to dangerous misuse. If private speech could be passed off as government speech by simply affixing a government seal of approval, government could silence or muffle the expression of disfavored viewpoints. For this reason, we must exercise great caution before extending our government-speech precedents.”
Significantly, the Court put a stop to what many saw as a gradual expansion of the government-speech doctrine through its previous decisions. The Court characterized its most recent government-speech decision, Walker v. Texas Div., Sons of Confederate Veterans, Inc., in which it held that a state’s specialty license plate program was government-speech, as “likely mark[ing] the outer bounds of the government-speech doctrine.”
The Court thus resoundingly rejected the government’s argument in Matal, explaining that it “would constitute a huge and dangerous extension of the government-speech doctrine.” It characterized the government’s position as “far-fetched” and not even “remotely support[ed]” by any of the Court’s previous government-speech decisions. Trademark registration does not bear any of the hallmarks of government-speech. Rather than articulating an official position by registering various trademarks, often of conflicting views, “the Government is babbling prodigiously and incoherently.” Moreover, “[t]rademarks have not traditionally been used to convey a Government message” and “there is no evidence the public associates the contents of trademarks with the Federal Government.”
Also highly significant to First Amendment doctrine, a plurality of the Court limited another aspect of its government-speech jurisprudence. In several cases, the Court has held that speech by private speakers but subsidized by the government may also be government speech, and thus the provision of the subsidy may be subject to viewpoint discrimination without offending the First Amendment. But in Matal, the four justices rejected this argument and sharply limited these subsidy cases to those in which the government makes cash payments for speech, not any other kind of subsidy.
Reasserting a Better Balance Between Free Speech and Trademark Law
The Court also reaffirmed that trademarks are expressive and imbued with First Amendment protections.
Perhaps the most worrisome implication of the Government’s argument concerned the system of copyright registration. If federal registration makes a trademark government speech and thus eliminates all First Amendment protection, would the registration of the copyright for a book produce a similar transformation? The justices unanimously rejected the government’s suggestion that trademarks could be distinguished from copyright on the ground that they are not expressive:
The Government attempts to distinguish copyright on the ground that it is “‘the engine of free expression,’” Brief for Petitioner 47 (quoting Eldred v. Ashcroft, 537 U. S. 186, 219 (2003)), but as this case illustrates, trademarks often have an expressive content. Companies spend huge amounts to create and publicize trademarks that convey a message. It is true that the necessary brevity of trademarks limits what they can say. But powerful messages can sometimes be conveyed in just a few words.
In addition, the Court explained that the government does not have a greater ability to discriminate against disfavored viewpoints in registering trademarks merely because trademarks are “commercial speech.” Although commercial speech in many contexts gets somewhat diminished First Amendment protections, even commercial speech is not subject to the government’s viewpoint discrimination.
The U.S. Supreme Court, in Packingham v. North Carolina, unanimously struck down a state law that banned registered sex offenders (RSOs) from using all Internet social media, holding that the law violated the First Amendment.
EFF and our allies Public Knowledge and the Center for Democracy & Technology filed an amicus brief urging this result. The Court cited our brief for three propositions regarding the extraordinary consequences of banishing people from all Internet social media:
Seven in ten American adults use at least one Internet social networking service.
One of them, Facebook, has 1.79 billion active users.
All Governors and nearly all members of Congress use social media to communicate with their constituents.
The Court also cited our brief for the proposition that the broadly worded law might bar access not just to commonplace social media websites, but also to other websites like Amazon.com, Washingtonpost.com, and Webmd.com. Our brief was written by Professor David Post, as well as Jonathan Sherman, Perry M. Grossman, and Henry Bluestone Smith of Boies, Schiller & Flexner LLP.
Both Justice Kennedy’s majority opinion and Justice Alito’s concurrence in the judgment assumed without deciding that the law was content neutral, and thus applied the intermediate scrutiny test used for content neutral laws. Both opinions therefore required the government to prove that the law was narrowly tailored, meaning the law does not burden substantially more speech than necessary to achieve the government’s goal of protecting children. Both concluded that the law failed this test, because it banished RSOs from all Internet social media.
Several statements from the Court’s opinion (which Justice Alito’s opinion did not join) will be critical in deciding all manner of future cases applying the First Amendment to the Internet:
“Cyberspace . . . in general” and “social media in particular” are “the most important places (in a spatial sense) for the exchange of views.”
Internet social media “can provide perhaps the most powerful mechanism available to a private citizen to make his or her voice heard.”
“Even convicted criminals—and in some instances especially convicted criminals—might receive legitimate benefits from these means for access to the world of ideas, in particular if they seek to reform and to pursue lawful and rewarding lives.”
In addition to opposing the banishment of RSOs from all Internet social media, EFF also has long opposed government efforts to strip RSOs of their right to anonymous speech on the Internet, and efforts to force RSOs to wear location-tracking shackles every moment for the rest of their lives.
EFF opposes laws like these that burden the digital liberties of RSOs for three reasons. First, digital liberty is a fundamental human right that all people should enjoy. Second, government often imposes new technological burdens on “the worst of the worst,” and then expands those burdens to other populations. Third, the government has designated nearly one million people as RSOs, including many non-dangerous people.
The Court’s decision in Packingham strengthens the First Amendment rights of all people to participate in the Internet.