When looking at a proposed policy regulating Internet businesses, here’s a good question to ask yourself: would this bar new companies from competing with the current big players? Google will probably be fine, but what about the next Google?
In the past few years, some large movie studios and record labels have been promoting a proposal that would effectively require user-generated media platforms to use copyright bots similar to YouTube’s infamous Content ID system. Today’s YouTube will have no trouble complying, but imagine if such requirements had been in place when YouTube was a three-person company. If copyright bots become the law, the barrier to entry for new social media companies will get a lot higher.
A Brief History of Copyright Bots
In many ways, the history of copyright bots is really the history of Content ID. Content ID was not the first bot on the market, but it’s the template for what major film studios and record labels have come to expect of content platforms.
When Google acquired YouTube in 2006, the platform was under heavy fire from major film studios and record labels, which complained in court and in Congress that the platform enabled widespread copyright infringement. YouTube complied with all of the requirements that the Digital Millennium Copyright Act (DMCA) puts on content platforms—including following the notice-and-takedown procedure when rights holders accuse their users of infringement. The DMCA essentially offers content platforms a trade—if they do their part to tackle infringing activity, they’re sheltered from copyright liability under the DMCA safe harbor rules. Hollywood agreed to those rules back in 1998, but now it wanted to rewrite the deal.
In response to legal and commercial pressure from content industries, Google developed Content ID, a program that goes beyond YouTube’s DMCA obligations. Content ID doesn’t replace notice-and-takedown; it creates a system for proactive filtering that often lets rights holders remove allegedly infringing content without even having to send a DMCA takedown request.
Rights holders submit large databases of video and audio fingerprints, and YouTube patrols new uploads for closely matching content. Rights holders can choose to have YouTube automatically remove or monetize videos, or they can review them manually and decide what they want YouTube to do with them. There’s a built-in appeals process (which includes escalation to a DMCA takedown, with the fair use consideration the DMCA requires), but it has problems of its own.
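The matching process described above can be sketched in miniature. This is a toy illustration only, not Content ID's actual algorithm, which is proprietary; real systems compute perceptual fingerprints of audio and video rather than hashing raw sample chunks, and the names `fingerprint` and `FingerprintIndex` are hypothetical:

```python
from collections import defaultdict

CHUNK = 4   # samples per fingerprint chunk
QUANT = 10  # quantization step, to tolerate small differences in copies

def fingerprint(samples):
    """Reduce a signal to a list of coarse hashes, one per chunk."""
    hashes = []
    for i in range(0, len(samples) - CHUNK + 1, CHUNK):
        chunk = tuple(s // QUANT for s in samples[i:i + CHUNK])
        hashes.append(hash(chunk))
    return hashes

class FingerprintIndex:
    """Reference database built from rights holders' submissions."""
    def __init__(self):
        self.index = defaultdict(set)  # hash -> {work ids containing it}

    def add_work(self, work_id, samples):
        for h in fingerprint(samples):
            self.index[h].add(work_id)

    def match(self, samples, threshold=0.8):
        """Return work ids whose hashes cover >= threshold of the upload."""
        hashes = fingerprint(samples)
        hits = defaultdict(int)
        for h in hashes:
            for work_id in self.index.get(h, ()):
                hits[work_id] += 1
        return {w for w, n in hits.items() if n / len(hashes) >= threshold}

db = FingerprintIndex()
db.add_work("song_a", [12, 15, 18, 22, 95, 96, 97, 99])
upload = [13, 15, 19, 22, 95, 97, 98, 99]  # slightly altered copy
print(db.match(upload))  # → {'song_a'}
```

Even this toy shows why the economics favor incumbents: the value of the system is in the scale of the reference index and the upload-scanning pipeline, both of which a three-person startup would have to build and operate before serving its first user.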
For better or worse, Content ID changed YouTube. It bought the company some goodwill with big content owners, many of which have now become prolific YouTube adopters.
Writing Bots into the Law
But the success of Content ID has led some rights holders to the dangerous notion that filtering alone can end the copyright wars. Now, copyright bots have begun to show up all over the Internet—often in places where they make no sense, like your private videos on Facebook. And it appears that some major content owners won’t be satisfied until web platforms have no choice but to adopt systems like Content ID – in other words, turning a voluntary system into a mandate.
Over the past few years, lobbyists representing large content owners both in the U.S. and in Europe have begun to demand mandatory filtering. These proposals vary, but their goals are the same: a world where social media platforms are vulnerable to massive copyright infringement damages unless they go to extreme measures to police their members’ uploads for potential infringement. The Chinese government has gone all-in on copyright filtering, partnering with Hollywood to scan not just people’s social media posts but even their private devices.
For the record, copyright bots can raise major problems even when they aren’t compelled by law. In principle, bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow many instances of fair use. What’s more, their appeals and conflict-resolution systems are often completely opaque to users and seem designed to favor large content companies.
Still, there’s a world of difference between platforms implementing copyright bots as a business decision and being forced to do so by governments. The latter creates a huge, expensive hurdle that a company must clear before it can ever compete in the market.
Narrow Regulations and Broad Patents
It gets worse. When companies are given only narrow space in which to compete and innovate, it becomes easier for incumbents to set legal traps within those boundaries.
It might be tempting to think that software patents on copyright filtering will incentivize innovation in filtering, thus making copyright bots more accessible to small platforms. But a patent as broad and generic as Microsoft's risks cutting off innovation well short of that goal: overbroad patents blanket an entire field, rarely disclosing any information of value about the underlying technology.
Business regulations should provide companies wide berth to innovate, experiment, and differentiate themselves from competitors. Patents should cover specific, narrowly defined inventions. Narrow regulations and broad patents are a dangerous combination.
Keep Safe Harbors Safe
Safe harbor protections are essential to how today’s Internet works—without them, many Internet companies would simply be exposed to too much legal risk to operate. Safe harbors have given us the entire social media boom and many other Internet technologies that we take for granted every day.
So any proposal that makes it more burdensome to comply with safe harbor requirements should be examined closely to make sure that it doesn’t close the market to new competitors. Mandatory copyright filtering is likely to do exactly that.
If the kind of laws big media companies are proposing today had been in place 12 years ago, it’s doubtful that YouTube could have survived its early days as a startup. And if those laws get implemented today, new players will need tremendous resources just to get started. Mandatory filtering would create a narrower playing field for Internet businesses and let the most successful players use legal tricks to maintain their advantages. It’s a bad idea.
The field of machine learning and artificial intelligence is making rapid progress. Many people are starting to ask what a world with intelligent computers will look like. But what is the ratio of hype to real progress? What kinds of problems have been well solved by current machine learning techniques, which ones are close to being solved, and which ones remain exceptionally hard?
There isn’t currently a good single place to find the state of the art on well-specified machine learning metrics, let alone the many problems in artificial intelligence that are still so hard that there are no good datasets and benchmarks to keep track of them yet. So we are trying to make one. Today, we’re launching the EFF AI Progress Measurement experiment, and encouraging machine learning researchers to give us feedback and contribute to the effort.
We have drawn data from a number of sources: blog posts that report on snapshots of progress; websites that try to collate data on specific subfields of machine learning; and review articles. Where those sources didn’t have coverage, we’ve gone to the research literature itself and gathered data.
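Data gathered this way might be organized along the lines sketched below. This is a hypothetical illustration of the idea, not EFF's actual schema; the field names, helper function, and the example records are all assumptions made for the sketch:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Measurement:
    metric: str   # a well-specified benchmark, e.g. an error rate
    value: float  # the reported result (lower is better here)
    source: str   # the paper, blog post, or leaderboard it came from
    when: date    # when the result was reported

def state_of_the_art(measurements, metric, lower_is_better=True):
    """Best reported result so far for one metric, or None if untracked."""
    relevant = [m for m in measurements if m.metric == metric]
    if not relevant:
        return None  # a problem so hard that no benchmark tracks it yet
    pick = min if lower_is_better else max
    return pick(relevant, key=lambda m: m.value)

data = [
    Measurement("image-top5-error", 0.066, "paper A", date(2014, 9, 1)),
    Measurement("image-top5-error", 0.035, "paper B", date(2015, 12, 10)),
]
best = state_of_the_art(data, "image-top5-error")
print(best.value)  # 0.035
```

The `None` branch is the interesting one for this project: it represents exactly the problems mentioned above that are still so hard there are no good datasets or benchmarks to track.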
What we have thus far is an experiment, and we’d like to know: Is this information useful to the machine learning community? What important problems, datasets, and results are we missing?
EFF’s interest in AI progress is primarily from a policy perspective. We want to know what types of AI we need to start engaging with on legal, political, and technical safety fronts. Beyond that, we’re also just excited to see how many things computers are learning to do over time.
Given that machine learning tools and AI techniques are increasingly part of our everyday lives, it is critical that journalists, policy makers, and technology users understand the state of the field. When improperly designed or deployed, machine learning methods can violate privacy, threaten safety, and perpetuate inequality and injustice. Stakeholders must be able to anticipate such risks and policy questions before they arise, rather than playing catch-up with the technology. To this end, it’s part of the responsibility of researchers, engineers, and developers in the field to help make information about their life-changing research widely available and understandable. We hope you’ll join us.
EFF has just launched the Summer Security Camp, a two-week membership drive that challenges people everywhere to gather ‘round the online rights movement and prepare for the privacy and free speech challenges in their paths.
Through the 4th of July, anyone can join EFF or renew as a Silicon level member for just $20 and receive a set of miniature field guides with shareable security tips covering these crucially relevant issues:
Border Search: know your rights and defend personal data at the border.
The EFF site contains extensive analysis of these topics and much more, but the Summer Security Camp's printed pocket guides distill some of the most important information to help keep you safe on the go, come what may. Members will have access to home-printable versions of these tips to share with friends and family because, as we know, privacy is a team sport and everyone wins.
As a bonus, participants will receive a special edition embroidered patch to help them show support for the cause. Think of it as a digital civil liberties merit badge.
Threats to privacy and free expression abound, but EFF doesn’t believe in the no-win scenario. We work every day to defend user rights and empower you with knowledge that you can share in your community. The more prepared we are and the more we can count on each other, the stronger we’ll be. Let’s take a stand for online rights today!
The Supreme Court’s unanimous decision in Matal v. Tam striking down the trademark non-disparagement requirement as unconstitutional is a big victory for the First Amendment. First, the Court strongly pushed back against the expansion of the government-speech doctrine, perhaps the biggest current threat to free speech jurisprudence. Second, the Court strengthened a position EFF has long advocated—that intellectual property rights and First Amendment rights must be balanced against each other rather than weighted in favor of the former.
The case arose when the band The Slants was denied a federal trademark based on a federal law that prohibits the registration of a trademark that may “disparage. . . or bring into contemp[t] or disrepute” any “persons, living or dead.” The Court found that provision violated the First Amendment. It may no longer be used as a basis for denying trademark registration.
Pushing Back on the Dangerous Government-Speech Doctrine
The Government’s primary argument in defense of the disparaging trademark ban was that registered trademarks were “government-speech,” not the speech of the trademark owner. That is, in denying registration, the government was not punishing The Slants because it disagreed with the viewpoint the mark expressed; rather, the government was simply choosing not to include disparaging terms in its own speech.
The government-speech doctrine is unique in First Amendment law in that it is the only situation in which the government may discriminate on the basis of the speaker’s viewpoint. In its most basic application, it is noncontroversial: the government itself may adopt policy positions and promote them without having to equally promote opposing viewpoints. In all other contexts, the government cannot deny a speaker access to a forum or otherwise punish them because of a disagreement with the views expressed.
As the Court recognized in Matal, the government-speech doctrine “is susceptible to dangerous misuse. If private speech could be passed off as government speech by simply affixing a government seal of approval, government could silence or muffle the expression of disfavored viewpoints. For this reason, we must exercise great caution before extending our government-speech precedents.”
Significantly, the Court put a stop to what many saw as a gradual expansion of the government-speech doctrine through its previous decisions. The Court characterized its most recent government-speech decision, Walker v. Texas Div., Sons of Confederate Veterans, Inc., in which it held that a state’s specialty license plate program was government-speech, as “likely mark[ing] the outer bounds of the government-speech doctrine.”
The Court thus resoundingly rejected the government’s argument in Matal, explaining that it “would constitute a huge and dangerous extension of the government-speech doctrine.” It characterized the government’s position as “far-fetched” and not even “remotely support[ed]” by any of the Court’s previous government-speech decisions. Trademark registration does not bear any of the hallmarks of government-speech. Rather than articulating an official position by registering various trademarks, often of conflicting views, “the Government is babbling prodigiously and incoherently.” Moreover, “[t]rademarks have not traditionally been used to convey a Government message” and “there is no evidence the public associates the contents of trademarks with the Federal Government.”
Also highly significant to First Amendment doctrine, a plurality of the Court limited another aspect of its government-speech jurisprudence. In several cases, the Court has held that speech by private speakers that is subsidized by the government may also be government speech, and thus the provision of the subsidy may involve viewpoint discrimination without offending the First Amendment. But in Matal, four justices rejected this argument and sharply limited those subsidy cases to ones in which the government makes cash payments for speech, not any other kind of subsidy.
Reasserting a Better Balance Between Free Speech and Trademark Law
The Court also reaffirmed that trademarks are expressive and imbued with First Amendment protections.
Perhaps the most worrisome implication of the Government’s argument concerned the system of copyright registration. If federal registration makes a trademark government speech and thus eliminates all First Amendment protection, would the registration of the copyright for a book produce a similar transformation? The justices unanimously rejected the government’s suggestion that trademarks could be distinguished from copyright on the ground that they are not expressive:
The Government attempts to distinguish copyright on the ground that it is “‘the engine of free expression,’” Brief for Petitioner 47 (quoting Eldred v. Ashcroft, 537 U. S. 186, 219 (2003)), but as this case illustrates, trademarks often have an expressive content. Companies spend huge amounts to create and publicize trademarks that convey a message. It is true that the necessary brevity of trademarks limits what they can say. But powerful messages can sometimes be conveyed in just a few words.
In addition, the Court explained that the government does not have a greater ability to discriminate against disfavored viewpoints in registering trademarks merely because trademarks are “commercial speech.” Although commercial speech in many contexts gets somewhat diminished First Amendment protections, even commercial speech is not subject to the government’s viewpoint discrimination.
The U.S. Supreme Court, in Packingham v. North Carolina, unanimously struck down a state law that banned registered sex offenders (RSOs) from using all Internet social media, holding that the law violated the First Amendment.
EFF and our allies Public Knowledge and the Center for Democracy & Technology filed an amicus brief urging this result. The Court cited our brief for three propositions regarding the extraordinary consequences of banishing people from all Internet social media:
Seven in ten American adults use at least one Internet social networking service.
One of them, Facebook, has 1.79 billion active users.
All Governors and nearly all members of Congress use social media to communicate with their constituents.
The Court also cited our brief for the proposition that the broadly worded law might bar access not just to commonplace social media websites, but also to other websites like Amazon.com, Washingtonpost.com, and Webmd.com. Our brief was written by Professor David Post, as well as Jonathan Sherman, Perry M. Grossman, and Henry Bluestone Smith of Boies, Schiller & Flexner LLP.
Both Justice Kennedy’s majority opinion and Justice Alito’s concurrence in the judgment assumed without deciding that the law was content neutral, and thus applied the intermediate scrutiny test used for content neutral laws. Both opinions therefore required the government to prove that the law was narrowly tailored, meaning the law does not burden substantially more speech than necessary to achieve the government’s goal of protecting children. Both concluded that the law failed this test, because it banished RSOs from all Internet social media.
Several statements from the Court’s opinion (which Justice Alito’s opinion did not join) will be critical in deciding all manner of future cases applying the First Amendment to the Internet:
“Cyberspace . . . in general” and “social media in particular” are “the most important places (in a spatial sense) for the exchange of views.”
Internet social media “can provide perhaps the most powerful mechanism available to a private citizen to make his or her voice heard.”
“Even convicted criminals—and in some instances especially convicted criminals—might receive legitimate benefits from these means for access to the world of ideas, in particular if they seek to reform and to pursue lawful and rewarding lives.”
In addition to opposing the banishment of RSOs from all Internet social media, EFF also has long opposed government efforts to strip RSOs of their right to anonymous speech on the Internet, and efforts to force RSOs to wear location-tracking shackles every moment for the rest of their lives.
EFF opposes laws like these that burden the digital liberties of RSOs for three reasons. First, digital liberty is a fundamental human right that all people should enjoy. Second, government often imposes new technological burdens on “the worst of the worst,” and then expands those burdens to other populations. Third, the government has designated nearly one million people as RSOs, including many non-dangerous people.
The Court’s decision in Packingham strengthens the First Amendment rights of all people to participate in the Internet.
Californians now have a chance to reclaim crucial online privacy protections.
Earlier this year, Congress narrowly voted to repeal federal privacy rules that kept your ISP from selling information about who you are and what you do online without your permission. Today, California legislators are introducing new state legislation—the California Broadband Internet Privacy Act, A.B. 375 (Chau)— that would effectively reinstate those rules for Internet users in California.
ISPs are our gatekeepers to the Internet, and we shouldn’t have to sacrifice our privacy to these companies just to get online.
The wildly unpopular vote in Congress earlier this year undid years of work at the FCC to create online privacy rules that codified and expanded on long-standing privacy protections. The updated rules, set to go into effect in late 2017, were necessary to protect personal information revealed to your ISP.
Without these privacy protections, ISPs like Comcast, AT&T, and Verizon—companies that you already pay to access the Internet—will have free rein to make even more money off of you by selling information about what you look at, what you buy, who you talk to, and more, online. And that ability to sell your information without your permission will open the door to measures that further harm your privacy and security on the Internet.
Because of the tool Congress used to repeal the FCC’s online privacy rules—called a Congressional Review Act resolution—the FCC can’t write similar rules in the future. And thanks to the current legal landscape, there’s no other federal agency that can protect Internet users from privacy violations by their ISPs.
That means state legislatures are the best place for Internet users to fight to reinstate their privacy rights. Eighteen other states have already taken steps to consider similar measures, with Oregon’s HB 2813 (Williamson, Clem, Sanchez, and Marsh) being the most recent bill to be introduced.
The ISPs fought hard—using debunked industry talking points—to win in Congress. And since the rules were repealed at a federal level, ISPs have “pledged” to protect their customers’ privacy. But we know that their promises leave plenty of room for them to sell your information.
With A.B. 375, we have a chance to protect our privacy from ISPs’ privacy violations in California.
UPDATE (6/20/17): An earlier version of this post incorrectly identified the number of states with broadband privacy measures. In addition to California, eighteen states have introduced or considered broadband privacy legislation.
The U.S. government’s foreign surveillance law is so secret that not even a service provider challenging an order issued by a secret court has access to it.
That Kafkaesque episode, denying one party access to the very law being used against it, came to light this week in a FISC opinion that EFF obtained as part of a FOIA lawsuit we filed in 2016.
The opinion [.pdf] shows that in 2014, the Foreign Intelligence Surveillance Court (FISC) denied a service provider’s request for other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider’s cooperation.
The ruling concerned the provider’s ultimately unsuccessful challenge to a surveillance directive it received under Section 702, the warrantless surveillance authority set to expire this year.
The decision is startling because it shows how secrecy endangers one of the most fundamental principles of our legal system: everyone gets to know what the law is. Apparently, that principle does not extend to the FISC.
The provider’s request came in the middle of legal briefing by the provider and the DOJ over its challenge to a 702 order. After the Department of Justice cited two earlier FISC opinions that were not public at the time, one from 2014 and one from 2008, the provider asked the court for access to those rulings.
The provider argued that without being able to review those earlier decisions, it could not fully understand the court’s prior rulings, much less respond effectively to the DOJ’s argument. The provider also argued that because it was represented by attorneys with Top Secret security clearances, they could review the rulings without posing a risk to national security.
The court disagreed on several fronts. It found that the court’s rules and Section 702 prohibited releasing the documents. It also rejected the provider’s claim that the Constitution’s Due Process Clause entitled it to access the documents.
Beyond what the Due Process Clause compels, the opinion says, the court was satisfied that withholding the requested opinions did not offend common-sense fairness. That is because the court believed the Department of Justice had accurately represented those decisions in its legal briefs and had not misled the provider about what the rulings said.
The court also said that even if the opinions were released, they would be of little help, if any, to the merits of the provider’s arguments.
Despite the court’s view, there is nothing fair about concealing important legal decisions, ones that likely interpreted or made law, from one party to a legal dispute.
The court’s decision is akin to letting one party read and cite a Supreme Court case while prohibiting the other side from doing the same. It fundamentally disadvantages one side of a legal fight, while also denying it access to the case to verify that the party with knowledge of it is representing the decision accurately.
In the provider’s case, the deck was always stacked against its ability to challenge the 702 order. The FISC traditionally hears from only one side, the Executive Branch, and it is generally sympathetic to national security claims.
Although recent changes to the FISC brought about by the USA FREEDOM Act, including the ability of outside parties to argue before the court, are steps in the right direction, the Department of Justice still holds many advantages.
In the provider’s case, the trump card was that the DOJ’s attorneys got to read and rely on cases the provider never got to see.
To be sure, this unfair outcome is not entirely the FISC’s fault. As the ruling notes, Congress has provided few, if any, avenues for a party challenging secret surveillance orders to obtain FISC documents and rulings directly relevant to its case.
With Section 702 set to expire this year, Congress should recognize that the court system it established to approve surveillance orders and hear challenges to those orders bears little resemblance to our broader justice system. That inequity corrupts our fundamental democratic principles, and it is one more reason Congress should end Section 702.