When legal issues light up the Internet, people turn to EFF for answers. Whether it’s attacks on coders' rights, overreaching copyright claims online, or governments' efforts to censor or spy on people, we are often among the first to hear about troubling events online, and we're frequently the first place people turn to for legal help.
So why are there times when EFF is involved in an important case but is silent or gives only limited information about it? Usually it’s for one of three reasons: to protect the people who have asked us for help, because of a specific court requirement, or because we’re still putting a legal strategy into place.
First, the legal protections for attorney/client communications and attorney work product allow lawyers and their prospective or existing clients to speak frankly with each other and to honestly evaluate the strengths and weaknesses of their cases. But these communications and notes must be kept strictly confidential in order to remain protected. If the confidentiality is broken, the person or a person's attorney can be required to reveal their communications, legal strategies, and evaluations to their opponents – including to prosecutors who can put them in jail or opposing civil lawyers. Breaching these privileges can hurt the people who ask us for help and undermine our chances of winning a case, so we are very careful to avoid doing so.
Other times, a court limits our ability to speak. A recent example of this is when the government demanded information from Twitter as part of its Wikileaks investigation, where we were subjected to a court sealing order. In this Twitter records case, we are representing Birgitta Jonsdottir, one of the Twitter users whose records are being sought by the U.S. government. Initially, the fact of our representation was the only thing we could acknowledge publicly. The court documents in the case were filed under seal, and we could not even discuss the hearing we were preparing for, leading to many awkward and frustrating conversations with EFF members as well as reporters. However, we asked the judge to unseal the court records, and she ultimately did unseal nearly all filed documents in the case to be released to the public, including news of the hearing. In such cases, we press as hard as we can to get the legal proceedings made public, especially for cases involving important personal privacy and free speech implications.
Finally, there are times when we are simply not finished investigating a case to determine whether to take it, or are taking the initial steps to put a strategy into place. Here’s a page outlining some of the things we consider when making those decisions. This often involves not only gathering background information, but also conducting a legal and technological analysis of the situation. We also try to help people find other lawyers for cases we can't take. While working through this process, the worst thing we could do is to talk publicly before a legal strategy is in place and before EFF has solidified our role. This is especially true when the legal situation is in flux, as when emergency legal relief is sought or when some of the people potentially involved have not yet been notified or identified. We’ve had a few of those recently – close watchers of EFF may have some guesses about specific instances where this has been the situation.
However, none of this should keep EFF members, the press, or the public from emailing us at email@example.com when something is happening that potentially requires EFF's involvement. EFF members and the general public are an essential part of our early warning system – a form of crowdsourcing that helps us have a much broader view of what’s going on and where the important cases are occurring. But we hope you will understand if we answer your call or email with limited detail or if we hold back from commenting extensively in the press or on our blog. We believe strongly that everyone’s rights online should be vigorously protected, and sometimes that requires us to be silent.
Further proof that the recording industry’s oft-repeated claims of the downfall of the entire music industry hold no water: a new report finding that filesharing has led directly to "reduced costs of bringing works to market and a growing role of independent labels." In other words, in the past decade, we have seen more music from independent outlets and at lower prices – something that consumers and music fans should all be happy about.
The study, by University of Minnesota economist Joel Waldfogel, proves just what we’ve been saying as recently as last week – that filesharing (unauthorized or not) has led more artists to create more music, and – just as importantly – more different music. U.S. copyright law is based on a compromise recognized in the Constitution that grants authors (or artists, or musicians) a limited monopoly designed to give those authors an incentive to make their creative works. As we’ve long known and as this study makes clear yet again, even in the face of filesharing, those incentives still exist.
UPDATE (3/26/11): HTTPS is again available for those in the countries discussed below. Microsoft denies deliberately blocking access to HTTPS, blaming the problem on a bug:
We are aware of an issue that impacted some Hotmail users trying to enable HTTPS. That issue has now been resolved. Account security is a top priority for Hotmail and our support for HTTPS is worldwide – we do not intentionally limit support by region or geography and this issue was not restricted to any specific region of the world.
Microsoft appears to have turned off the always-use-HTTPS option in Hotmail for users in more than a dozen countries, including Bahrain, Morocco, Algeria, Syria, Sudan, Iran, Lebanon, Jordan, Congo, Myanmar, Nigeria, Kazakhstan, Uzbekistan, Turkmenistan, Tajikistan, and Kyrgyzstan. Hotmail users who have set their location to any of these countries receive the following error message when they attempt to turn on the always-use-HTTPS feature in order to read their mail securely:
Your Windows Live ID can't use HTTPS automatically because this feature is not available for your account type.
Microsoft debuted the always-use-HTTPS feature for Hotmail in December of 2010, in order to give users the option of always encrypting their webmail traffic and protecting their sensitive communications from malicious hackers using tools such as Firesheep, and hostile governments eavesdropping on journalists and activists. For Microsoft to take such an enormous step backwards, undermining the security of Hotmail users in countries where freedom of expression is under attack and secure communication is especially important, is deeply disturbing. We hope that this counterproductive and potentially dangerous move is merely an error that Microsoft will swiftly correct.
The good news is that the fix is very easy. Hotmail users in the affected countries can turn the always-use-HTTPS feature back on by changing the country in their profile to any of the countries in which this feature has not been disabled, such as the United States, Germany, France, Israel, or Turkey. Hotmail users who browse the web with Firefox may force the use of HTTPS by default, while using any Hotmail location setting, by installing the HTTPS Everywhere Firefox plug-in.
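HTTPS Everywhere works by rewriting requests according to per-site rulesets that map plain HTTP URLs to their HTTPS equivalents. For illustration only, a minimal ruleset in the extension's format might look like the following; this is a simplified sketch, not the actual rule shipped for Hotmail:

```xml
<!-- Illustrative ruleset: rewrite all HTTP requests for a site to HTTPS.
     Hostnames and the ruleset name here are examples, not the shipped rule. -->
<ruleset name="Hotmail (example)">
  <target host="hotmail.com" />
  <target host="*.hotmail.com" />
  <rule from="^http://" to="https://" />
</ruleset>
```

Because the rewrite happens in the browser before the request is sent, it does not depend on any server-side account setting.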
First, the London School of Economics released a paper finding that while filesharing may explain some of the decline in sales of physical copies of recorded music, the decline “should be explained by a combination of factors such as changing patterns in music consumption, decreasing disposable household incomes for leisure products and increasing sales of digital content through online platforms.” And even if the sales of recorded music are down, there is an important distinction to draw: the recording industry may be hurting, but the music industry is thriving. For example, the LSE paper points out that in the UK in 2009, the revenues from live music shows outperformed recorded music sales.
We’ve also seen more and more artists making a go of it on their own. Rebecca Black, a 13-year-old, is reportedly netting nearly $25,000 a week from digital downloads of her hit song, "Friday." The band OK Go famously made a name for itself by self-producing widely popular music videos and then leaving a big record label that failed to “recognize the basic mechanics of the Internet” by attempting to prohibit embedding of the band's video content. As the lead singer noted, "[c]urbing the viral spread of videos isn't benefiting the company’s bottom line, or the music it's there to support." Even bands with record deals are finding different ways to make money. For example, the popular band the Black Keys makes 85% of its money from live shows.
Another recent study, this one by the Social Science Research Council, delves into international aspects of "piracy," especially in emerging markets, and finds unauthorized filesharing in some developing economies has actually created opportunities for media companies to come up with innovative business models that allow legal and widespread access to media goods. For example, in India, "where large domestic film and music industries dominate the national market, [large media companies] set prices to attract mass audiences, and in some cases compete directly with pirate distribution." The impact of this cannot be overstated: in many of these emerging markets, the new business models are improving legal access to music and art that was previously unaffordable for many people.
The SSRC report also points out that, despite the content industry’s dire predictions, the media business is still thriving: "Software, DVD, and box office revenues in most middle-income countries have risen in the past decade — in some cases dramatically. Sales of CDs have fallen, but the overall music business, including performance, has grown."
Despite these realities, the policy debate continues to focus on enforcement and "strengthening intellectual property," which, SSRC rightly points out, is incredibly counterproductive and comes at a high social cost. Instead of discussing ways to make sure artists get paid for their work and fans have access to media goods, time and energy is wasted debating how to continue an enforcement policy that has failed to actually curb unauthorized filesharing.
We are encouraged to see studies like these that challenge policy makers to shift the tone of the debate to a more productive conversation about how to innovate and use new technologies to benefit artists and their fans. Because the bottom line is this: those who find ways to capitalize on new technologies will be the ones to succeed going forward.
On March 15th, an HTTPS/TLS Certificate Authority (CA) was tricked into issuing fraudulent certificates that posed a dire risk to Internet security. Based on currently available information, the incident got close to — but was not quite — an Internet-wide security meltdown. As this post will explain, these events show why we urgently need to start reinforcing the system that is currently used to authenticate and identify secure websites and email systems.
There is a post up on the Tor Project's blog by Jacob Appelbaum, analyzing the revocation of a number of HTTPS certificates last week. Patches to the major web browsers blacklisted a number of TLS certificates that were issued after hackers broke into a Certificate Authority. Appelbaum and others were able to cross-reference the blacklisted certificates' serial numbers against a comprehensive collection of Certificate Revocation Lists (these CRL URLs were obtained by querying EFF's SSL Observatory databases) to learn which CA had been affected.
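The cross-referencing step is conceptually simple: collect the revoked serial numbers from each CA's Certificate Revocation List and intersect them with the serial numbers the browsers blacklisted; any CRL that lists a blacklisted serial points at the affected CA. A minimal sketch of that logic, using made-up serial numbers rather than the real ones:

```python
# Toy illustration of cross-referencing browser-blacklisted certificate
# serial numbers against per-CA revocation lists. All serial numbers
# and CA names below are hypothetical.

def find_affected_cas(blacklisted_serials, crls_by_ca):
    """Return {ca_name: [matching serials]} for CRLs containing a blacklisted serial."""
    blacklisted = set(blacklisted_serials)
    affected = {}
    for ca_name, revoked_serials in crls_by_ca.items():
        hits = blacklisted & set(revoked_serials)
        if hits:
            affected[ca_name] = sorted(hits)
    return affected

# Hypothetical data: two CRLs, one of which revokes a blacklisted cert.
blacklist = [0x047ECBE9, 0x00F5C86A]
crls = {
    "UTN-USERFirst-Hardware": [0x047ECBE9, 0x12345678],
    "SomeOtherCA": [0xDEADBEEF],
}
print(find_affected_cas(blacklist, crls))
```

The real analysis worked the same way, except that the CRL URLs themselves had to be recovered first, which is where the Observatory data came in.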
The answer was the UserTrust "UTN-USERFirst-Hardware" certificate owned by Comodo, one of the largest CAs on the web. Comodo has now published a statement about the improperly issued certs, which were for extremely high-value domains including google.com, login.yahoo.com and addons.mozilla.org (this last domain could be used to trojan any system that was installing a new Firefox extension, though updates to previously installed extensions have a second layer of protection from XPI signatures). One cert was for "global trustee" — not a domain name. That was probably a malicious CA certificate that could be used to flawlessly impersonate any domain on the Web.
Comodo also said that the attack came primarily from Iranian IP addresses, and that one of the fraudulent login.yahoo.com certs was briefly deployed on a webserver in Iran.1
What should we do about these attacks?
Discussing problems with the revocation mechanisms that should (but don't) protect users who don't instantly get browser updates, Appelbaum makes the following assertion:
If the CA cannot provide even a basic level of revocation, it's clearly irresponsible to ship that CA root in a browser. Browsers should give insecure CA keys an Internet Death Sentence rather than expose the users of the browsers to known problems.
Before discussing whether or not such a dramatic conclusion is at all warranted, it is worth considering what the consequences of blacklisting Comodo's UserTrust CA certificate would have been. We used the SSL Observatory datasets to determine what had been signed by that CA certificate. The answer was that, as of August 2010, 85,440 public HTTPS certificates were signed directly by UTN-USERFirst-Hardware. Indirectly, the certificate had delegated authority to a further 50 Certificate Authorities, collectively responsible for another 120,000 domains. In the event of a revocation, at least 85,000 websites would have to scramble to obtain new SSL certificates.
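Once the Observatory's scan data is loaded into a database, the counting itself is a simple grouped query. A toy version against a throwaway SQLite table; the schema and rows here are invented for illustration, and the real dataset is a full scan of the IPv4 HTTPS space with a much richer structure:

```python
import sqlite3

# Invented miniature schema for illustration only; the real SSL
# Observatory data is far larger and structured differently.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE certs (subject TEXT, issuer TEXT)")
db.executemany(
    "INSERT INTO certs VALUES (?, ?)",
    [
        ("www.example.com", "UTN-USERFirst-Hardware"),
        ("mail.example.org", "UTN-USERFirst-Hardware"),
        ("shop.example.net", "SomeOtherCA"),
    ],
)
# Count how many certificates each CA has signed directly.
for issuer, n in db.execute(
    "SELECT issuer, COUNT(*) FROM certs GROUP BY issuer ORDER BY COUNT(*) DESC"
):
    print(issuer, n)
```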
The situation of the 120,000 other domains is more complicated — some of these are cross-certified by other root CAs or might be able to obtain such cross-certifications. In most — but not all — cases, these domains could continue to function without updating their webserver configurations or obtaining new certs.
The short answer, however, is that Comodo's UTN-USERFirst-Hardware certificate is too big to fail. If the private key for such a CA were hacked, by the Iranians or by anybody else, browsers would face a horrible choice: either blacklist the CA quickly, causing outages at tens or hundreds of thousands of secure websites and email servers; or leave all of the world's HTTPS, POP and IMAP deployments vulnerable to the hackers for an extended period of time.
Fortunately, Comodo has said that the master CA private keys in its Hardware Security Modules (HSMs) were not compromised, so we did not experience that kind of Internet-wide catastrophic security failure last week. But it's time for us to start thinking about what can be done to mitigate that risk.
Cross-checking the work of CAs
Most Certificate Authorities do good work. Some make mistakes occasionally,2 but that is normal in computer security. The real problem is a structural one: there are 1,500 CA certificates controlled by around 650 organizations,3 and every time you connect to an HTTPS webserver, or exchange email (POP/IMAP/SMTP) encrypted by TLS, you implicitly trust all of those certificate authorities!
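To get a sense of how large that implicit trust set is on your own machine, you can list the root certificates your system's default trust store exposes. A rough sketch using Python's standard library; note that what it reports depends on your OS, and roots loaded from a capath-style directory may not appear, so treat the count as a lower bound:

```python
import ssl

# Load the platform's default trust store and list the root CA
# certificates it contains. get_ca_certs() only reports certs loaded
# as files, not capath directories, so the count is a lower bound.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(len(roots), "root CA certificates loaded")
for cert in roots[:5]:
    # 'subject' is a tuple of relative distinguished name components.
    subject = dict(component[0] for component in cert["subject"])
    print(subject.get("organizationName"))
```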
What we need is a robust way to cross-check the good work that CAs currently do, to provide defense in depth and ensure (1) that a private key-compromise failure at a major CA does not lead to an Internet-wide cryptography meltdown and (2) that our software does not need to trust all of the CAs, for everything, all of the time.
For the time being, we will make just one remark about this. Many people have been touting DNSSEC PKI as a solution to the problem. While DNSSEC could be an improvement, we do not believe it is the right solution to the TLS security problem. One reason is that the DNS hierarchy is not trustworthy. Countries like the UAE and Tunisia control certificate authorities, and have a history of compromising their citizens' computer security. But these countries also control top-level DNS domains, and could control the DNSSEC entries for those ccTLDs. And the emergence of DNS manipulation by the US government also raises many concerns about whether DNSSEC will be reliable in the future.
We don't think this is an unsolvable problem. There are ways to reinforce our existing cryptographic infrastructure. And building and deploying them may not be that hard. Look for a blog post from us shortly about how we should go about doing that.
1. This is strong circumstantial evidence that the attack was perpetrated by Iranians, though it is also possible that the perpetrators used compromised systems in Iran in order to frame Iran.
3. These numbers are from the SSL Observatory. Before we performed those scans, we are not aware that anyone knew how many CAs were trusted by our browsers and operating systems, because CAs regularly delegate authority to subordinate CAs without announcing this publicly.
Yesterday’s decision rejecting the proposed settlement in the Google Books case, Authors Guild v. Google, got a number of things right. For starters, as we wrote shortly after the decision was announced, we’re glad that the court acknowledged the importance of the privacy concerns we helped to raise.
With respect to the class action analysis, the court correctly concluded that the settlement did not take account of the interests of all of the class members, such as academic authors. As UC Berkeley law professor (and EFF board member) Pamela Samuelson noted in a letter quoted in the decision,
Academic authors, almost by definition, are committed to maximizing access to knowledge. The [Authors] Guild and the [Association of American Publishers], by contrast, are institutionally committed to maximizing profits.
For example, academic authors, if they had been represented at the negotiating table, might have pushed harder for settlement terms that would have allowed readers open access to orphan works.
On the policy front, the court recognized – as do we – the extraordinary potential benefits of the settlement for readers, authors and publishers. We firmly believe that the world's books should be digitized so that the knowledge held within them can be made available to people around the world. But the court also recognized that the settlement could come at the price of undermining competition in the marketplace for digital books, giving Google a de facto monopoly over orphan books (meaning, works whose owner cannot be located). The court concluded that solving the orphan works problem is properly a matter for Congress, not private commercial parties. Sadly, Congress has thus far lacked the will to do so. Perhaps yesterday’s decision will finally spur Congress to revisit this important issue and pass comprehensive orphan works legislation that allows for mass book digitization.
That said, the court also got some things fundamentally wrong in its copyright analysis. For example, it states that “a copyright owner’s right to exclude others from using his property is fundamental and beyond dispute” and then proceeds to quote at length from the letters of numerous authors (and their descendants) who share the misguided notion that a copyright is, by definition, an exclusive right to determine how a work can be used. We respectfully disagree. Copyright law grants to authors significant powers to manage exploitation of creative works as a function of spurring the creation of more works, not as a natural or moral right. And those powers are subject to numerous important exceptions and limitations, such as the first sale and fair use doctrines. Those limits are an essential part of the copyright bargain, which seeks to encourage the growth and endurance of a vibrant culture by both rewarding authors for their creative investments and ensuring that others will have the opportunity to build on those creative achievements. Thus, as the Supreme Court has explained, such limits are "neither unfair nor unfortunate" but rather "the means by which copyright advances the progress of science and art." If the legal issues raised in the underlying lawsuit are ever litigated on the merits, let's hope this or any future judge keeps the traditional American copyright bargain firmly in mind.
The court also insists that it is “incongruous with the purpose of copyright laws” to ask copyright owners to come forward (via the proposed settlement’s opt-out procedures) to protect their rights if they object to Google's activities. Actually, that is precisely what our legal system assumes, i.e., that the copyright owner carries the burden of asserting its rights. And a good thing too, because in many cases, as the orphan works discussion demonstrates, the rightsholder may not be findable (this is especially the case when the original author has died or the publisher has gone under) or may not be interested in asserting her rights. Either way, putting the onus on the rightsholder to step forward expands the breathing space for re-use of creative works.
Finally, the court gives undue credence to the alleged concerns of foreign rightsholders, even though most non-U.S. works were excluded from the proposed settlement, concluding that whether or not the international law concerns are legally valid, “it is significant that foreign authors, publishers, and, indeed, nations would raise the issue.” If, as the court suggests, the orphan works problem can only be solved at the global level, we fear that solution is far-off indeed.
The court urges the parties to go back to the drawing board and craft an “opt-in” solution. We worry that limiting digitizers to an "opt-in" approach will result in the creation of limited digital bookstores instead of vast digital libraries. Regardless, we’ll be watching to see what the parties to the lawsuit do – and who is invited to participate in the process.
A federal district court in New York today issued a long-awaited ruling in the Google Books case, Authors Guild v. Google, rejecting the proposed settlement between the parties.
EFF participated in the case as counsel to a collection of authors and publishers, including Michael Chabon, Jonathan Lethem and Cory Doctorow, who objected to the settlement based on concerns about reader privacy. EFF worked with the ACLU and the Samuelson Clinic at University of California at Berkeley on the objection.
While noting that "[T]he privacy concerns are real," the court decided that they were not a basis, in themselves, to reject the proposed settlement. It noted that the settlement contained privacy protections for Rightsholders and also noted that Google had "committed" to certain safeguards for readers, while acknowledging that those were voluntary only. The court closed with a strong nudge to Google: "I would think that certain additional privacy protections could be incorporated, while still accommodating Google's marketing efforts."
We look forward to continuing our discussions with Google about implementing additional privacy protections in whatever form the Google Books project takes as it moves forward. In the meantime, EFF and the ACLU are also working together on digital book privacy legislation in California, which should be introduced shortly. The proposed law, which partially grew out of our negotiations with Google, will extend to digital booksellers and libraries the longstanding privacy protections against overreaching government and civil litigation demands for information about readers.
We'll have more to say soon about the Authors Guild v Google decision's potential implications for antitrust, class action and copyright law. The bottom line, for now, is that this ruling greatly increases the need for Congress to step in to fix copyright law to allow for the mass digitization of copyright works, whether under Orphan Works legislation or otherwise. The future must include wide, digital access to the world's accumulated knowledge in books. Regardless of whether this decision is appealed or the agreement is renegotiated, the ball is now in Congress' court to make it so.
Last Friday, a judge in the Nevada federal district court patiently explained why fair use disposes of Righthaven's copyright claim arising from the republication of an entire news article by a nonprofit organization. The hearing was in one of the now-250 Righthaven copyright cases. A written order, which will help set a persuasive precedent for other copyright troll cases, will be issued later.
The hearing was in Righthaven v. Center for Intercultural Organizing. Righthaven sued CIO, an Oregon non-profit organization promoting immigrant rights, alleging copyright infringement of a Las Vegas Review-Journal article. Righthaven did not create the news article, but claims the right to sue based on an assignment from the LVRJ.
The copyright troll's business model is to search for blogs and websites that include a newspaper's material, acquire the right to sue on particular articles from the paper, and then file a lawsuit without any prior notice to the defendant. Righthaven seeks the maximum damages under the Copyright Act as well as control over the domain name, but is willing to settle for four-figure sums that seem calculated to be less than the cost of defense. Meanwhile, the actual articles that Righthaven sues over remain available for no charge on the newspaper website.
At the hearing, the Judge went through each of the four factors of a fair use analysis, explaining why they favored a finding of fair use and why Righthaven's legal arguments were unavailing. The judge noted that the work was being used by Righthaven "exclusively for lawsuits", and the lawsuits were having a "chilling effect" on fair uses. Since Righthaven's use of the work "does nothing to advance the Copyright Act's purpose, which is to encourage and protect creativity," Judge Mahan was inclined to find CIO's non-commercial use to be fair even though it used the entirety of the article.
The Judge asked the defendants and amicus Professor Jason Schultz to draft the order. Righthaven will have an opportunity to comment before it is signed by the Judge and becomes final.
This case is the second case to rule on the merits of Righthaven's copyright-trolling lawsuits. The first, Righthaven v. Realty One, also found a fair use. Righthaven says it intends to appeal both cases.