On Saturday October 1st, eight countries (the United States, Australia, Canada, Japan, Morocco, New Zealand, Singapore, and South Korea) signed the Anti-Counterfeiting Trade Agreement (ACTA) in Tokyo, Japan. Three other negotiating parties (the European Union, Mexico, and Switzerland) have not yet signed the treaty, but have issued a joint statement affirming their intentions to sign it “as soon as practicable.” ACTA will remain open for signature until May 2013. While the treaty’s title might suggest that it deals only with counterfeit physical goods such as medicines, it is in fact far broader in scope. ACTA contains new potential obligations for Internet intermediaries, requiring them to police the Internet and their users, which in turn raise significant concerns for citizens’ privacy, freedom of expression, and fair use rights.
EFF was one of the first groups to raise the alarm about ACTA, when negotiations were first announced by the U.S. Trade Ambassador, the European Union, and Japan in October of 2007. From the beginning, we were deeply concerned about the lack of transparency in the negotiating process. The U.S. Trade Representative (USTR) drafted a confidentiality agreement, signed by all parties, which purported to prohibit negotiating countries from disclosing any information about ACTA. Nevertheless, several versions of the trade agreement text and accompanying negotiating documents were leaked to the public, which allowed legal scholars from the participating countries to effectively analyze the impact of ACTA on many different countries with differing legal regimes and regulatory policies. The combination of scholarly analysis and pressure from civil society has helped to rein in the treaty. Many of the most concerning specific provisions that were present in preliminary versions of ACTA, such as requirements for ISPs to adopt Three Strikes Internet disconnection policies, were eliminated from the "final" version released by the USTR in May 2011.
Controversy over ACTA in the United States is far from over. Senator Ron Wyden has sent a letter to President Obama asking why the administration believes that ACTA does not require formal approval from Congress. Wyden goes on to point out that legal scholars have repeatedly raised concerns that ACTA is not consistent with US law, and that if the USTR ratifies ACTA without Congressional consent, it may be circumventing Congress' Constitutional authority to regulate international commerce. The letter goes on to say:
The executive branch lacks Constitutional authority to enter a binding international agreement covering issues delegated by the Constitution to Congress's authority, absent Congressional approval.
Meanwhile, Brazil's parliament is debating proposed "Anti-ACTA" legislation, with provisions for the protection of net neutrality and the privacy and personal data of individuals, in direct opposition to language in ACTA which gives copyright holders carte blanche to demand traffic logs from ISPs to identify alleged offenders.
Unfortunately, rightholders' efforts to use multi-lateral treaties to enforce their intellectual property rights across the world may not end with ACTA. A leaked version of the IP chapter of the Trans-Pacific Partnership Agreement (TPP), which is currently being negotiated by nine countries (U.S., Australia, Peru, Malaysia, Vietnam, New Zealand, Chile, Singapore, and Brunei), indicates that U.S. negotiators are pushing for the adoption of copyright measures far more restrictive than ACTA. Like ACTA, TPP is being negotiated rapidly and with little transparency. Negotiating countries hope to complete the agreement by November 2011. If you are in the U.S., now is the time to contact your lawmakers and demand transparency around TPP.
Part two in a short series on EFF’s Open Source Security Audit
Our recent security audit of libpurple and related libraries got us thinking about the general problem of open source security auditing, and we wanted to share what we’ve learned. Community-supported free and open source software can be challenging from a security perspective. There is a fair amount of recent literature on this topic, and it is debatable whether openly readable source code helps defenders more than it helps attackers. But the key issue is not source code: community-based open source projects often lack the organized resources of their corporate cousins. If large corporate projects choose to prioritize security, they can usually afford to hire experts to do regular security reviews; community projects must find and coordinate volunteers with this specialized focus. In an environment where developers are stretched thin and often have a wide array of responsibilities, the search for security bugs may be less organized and lag behind. How do we combat this problem? How can we ensure good security in a world where vulnerabilities in important open source software can have disastrous consequences for users all over the world?
These are hard questions without simple answers. Yet although there are weaknesses to free, community-supported open source, there are also strengths: such projects can take advantage of crowdsourcing and open discussion, and can often ship integrated updates with less hassle thanks to friendlier, saner licensing. In order to take advantage of the strengths while mitigating the weaknesses, we think there are some design choices these projects can make to drastically cut down on the effort required for security auditing. These suggestions are by no means original, but we think they are even more important within the framework of community-supported open source.
Make the code as simple, modular, and easy to understand as possible. To take advantage of volunteer effort to crowdsource security auditing, the barrier to entry for understanding the code has to be quite low. Modularity in itself helps improve security, but it also helps people take a look at one aspect of the code without having to digest the possibly complicated way that it all hangs together.
Treat every bug as a potential security vulnerability until proven innocent. Miscategorizing a security vulnerability as benign, or disclosing its details too widely before a fix exists, can have disastrous consequences. Though publishing bugs openly helps community development and we want to encourage this practice, we advise being cognizant of certain classes of bugs that should raise a security-risk flag:
Memory bugs: wild or null pointer dereference, use after free, stack or heap corruption, etc.
User input bugs: unvalidated user input, unconstrained memory controlled by the user.
Exploit mitigation bugs: broken or missing mitigations such as ASLR, stack canaries, array bounds checking, ELF hardening, etc.
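To make the second category concrete, here is a minimal Python sketch of an unvalidated-user-input bug that might be triaged as an ordinary file-handling glitch but is in fact a path-traversal vulnerability. The function names and the `BASE_DIR` path are our own illustrative inventions, not from any audited project.

```python
import os.path

BASE_DIR = "/var/www/uploads"  # hypothetical upload directory

def unsafe_path(filename):
    # Bug: a user-controlled filename is joined without validation, so
    # an input like "../../etc/passwd" escapes BASE_DIR entirely.
    return os.path.join(BASE_DIR, filename)

def safe_path(filename):
    # Fix: normalize the joined path and confirm it stays inside
    # BASE_DIR before touching the filesystem.
    candidate = os.path.normpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path escapes upload directory")
    return candidate
```

Filed as "file not found for weird filenames," this bug looks benign; viewed as attacker-controlled input reaching the filesystem, it clearly deserves the security flag.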
Avoid using native code (i.e. C/C++) if at all possible in situations where one needs to make security guarantees; instead, opt for a Very High Level Language (VHLL) by default. Although the choice of language is a contentious issue, one can resolve the question scientifically with tests: establish quantified performance requirements, try tuning the hot spots, and try writing only small sections of native code with VHLL bindings. Native code is neither type-safe nor memory-safe, and it opens one up to an entire class of attack vectors based on vulnerabilities such as buffer overflows and double-free bugs. By choosing a VHLL, one effectively eliminates the possibility of being attacked this way.
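As a tiny illustration of the memory-safety point (names are ours, not from any real codebase): in C, an out-of-range array index is undefined behavior and the classic seed of a buffer-overflow exploit; in a VHLL, the same programming mistake surfaces as a contained, catchable error.

```python
def read_field(record, index):
    # In C, record[index] with a bad index silently reads (or writes)
    # adjacent memory. In Python, the runtime bounds-checks every
    # access and raises IndexError instead, so the worst case is a
    # handled error, not memory corruption.
    try:
        return record[index]
    except IndexError:
        return None
```

The bug (a bad index) still exists and should still be fixed, but the language has downgraded it from "remote code execution candidate" to "ordinary logic error."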
Avoid giving the user options that could compromise security, in the form of modes, dialogs, preferences, or tweaks of any sort. As security expert Ian Grigg puts it, there is “only one Mode, and it is Secure.” Ask yourself whether that checkbox to toggle secure connections is really necessary. When would a user really want to weaken security? To the extent you must allow such user preferences, make sure that the default is always secure.
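One way to honor "only one Mode, and it is Secure" in code is to expose no insecure knob at all. A minimal Python sketch (the helper name is our own): the TLS context below always verifies certificates and hostnames, and the function deliberately takes no `verify=False`-style parameter to switch that off.

```python
import ssl

def make_tls_context():
    # Secure by default and by design: ssl.create_default_context()
    # enables certificate and hostname verification, and this helper
    # offers callers no argument to disable either.
    return ssl.create_default_context()

ctx = make_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

If a legitimate need to relax verification ever arises (say, for a test harness), it belongs in a separate, loudly-named code path rather than a checkbox in the shipping UI.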
In some respects our review only scratched the surface of libpurple, GnuTLS and libxml2. In addition to encouraging developers to follow the bullet points above, we also would like to encourage security experts who rely on open source software to get involved in the security auditing effort. Your expertise is invaluable, and writing security patches is just about the nicest thing you can do.
For its 800 million users, logging out of Facebook is not something done idly. Closing the Facebook tab won’t do it. Closing your browser won’t do it unless you’ve adjusted the settings in your browser to clear cookies upon closing. And Facebook has buried the log-out button so that it isn’t apparent from your Facebook main page or profile page. This doesn’t mean that logging out of Facebook is difficult; it’s not. But this does indicate that when someone logs out of Facebook, they are doing so purposefully. They aren’t just stepping outside of Facebook; they’re closing the door behind them.
On September 25th, 2011, Nik Cubrilovic, a hacker and writer, published a blog post[1] showing that a particular Facebook session cookie wasn’t being deleted after a user logged out. He noted that the session cookie included your Facebook user ID number, which would presumably make it easy for Facebook to associate any data it collected about your web browsing with your Facebook account. Cubrilovic’s review showed that, based on what the cookies were transmitting, Facebook could easily connect some of your browsing habits to your unique Facebook account.
This set off a storm of media coverage, but much of it lacked a detailed analysis of what Facebook is actually tracking and an understanding of how this could influence pending privacy legislation in Congress.
What Does Facebook Really Track?
Facebook sets two types of cookies: session cookies and tracking cookies.
Session cookies are set when you log into Facebook and they include data like your unique Facebook user ID. They are directly associated with your Facebook account. When you log out of Facebook, the session cookies are supposed to be deleted.
Tracking cookies - also known as persistent cookies - don’t expire when you leave your Facebook account. Facebook sets one tracking cookie known as 'datr' when you visit Facebook.com, regardless of whether or not you actually have an account. Your browser sends this cookie back to Facebook every time you make a request of Facebook.com, such as when you load a page with an embedded Facebook 'like' button. This tracking takes place regardless of whether you ever interact with a Facebook 'like' button. In effect, Facebook is getting details of where you go on the Internet.
When you leave Facebook without logging out and then browse the web, you have both tracking cookies and session cookies. Under those circumstances, Facebook knows whenever you load a page with embedded content from Facebook (like a Facebook 'like' button) and also can easily connect that data back to your individual Facebook profile.
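The session/tracking distinction comes down to one attribute on the Set-Cookie header: a cookie with an Expires (or Max-Age) attribute survives browser restarts, while one without is discarded when the browsing session ends. A rough Python sketch using the standard library; the header values below are invented for illustration (only the 'datr' name comes from Facebook's actual cookies).

```python
from http.cookies import SimpleCookie

def classify_cookie(set_cookie_header):
    # Parse a Set-Cookie header and report whether the cookie is a
    # session cookie (no expiry; deleted when the browser closes) or
    # a persistent "tracking" cookie (expiry set; survives restarts).
    cookie = SimpleCookie(set_cookie_header)
    name = next(iter(cookie))
    morsel = cookie[name]
    persistent = bool(morsel["expires"] or morsel["max-age"])
    return name, ("tracking" if persistent else "session")

# Invented example headers modeled on the distinction described above.
session_hdr = "sessionid=12345; Path=/; Domain=.example.com"
tracking_hdr = "datr=abc123; Expires=Tue, 01 Oct 2013 00:00:00 GMT; Path=/"
```

You can inspect the same attribute yourself in any browser's cookie manager: session cookies show no expiration date, persistent ones do.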
Based on Cubrilovic’s recent findings, there was also a period of time when you kept a session cookie after logging out of Facebook, allowing Facebook to easily associate your web browsing history and your Facebook account. Facebook says they’ve addressed this issue, and that now all session cookies are deleted at log out.
But there have been other concerns around Facebook tracking, including an issue that has surfaced three times in the last year. Dutch doctoral candidate Arnold Rosendaal, independent security researcher Ashkan Soltani, and Stanford doctoral candidate and law student Jonathan Mayer have each discovered instances in which Facebook was setting tracking cookies on browsers of people when they visited sites other than Facebook.com. These tracking cookies were being set when individuals visited certain Facebook Connect sites, like CBSSports. As a result, people who never interacted with a Facebook.com widget, and who never visited Facebook.com, were still facing tracking by Facebook cookies.
But there’s yet another layer to this, a layer often glossed over by mainstream coverage of this issue: Facebook can track web browsing history without cookies. Facebook is able to collect data – including your IP address and a range of facts about your browser – without ever setting a cookie. They can use this data to build a record of every time you load a page with embedded Facebook content. They keep this data for 90 days and then presumably discard or otherwise anonymize it. That's a far cry from being able to shield one’s reading habits from Facebook.
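To see why cookies aren't required, consider what every HTTP request already carries. A minimal, hypothetical Python sketch of server-side correlation - the field list is illustrative and is not a description of Facebook's actual method:

```python
import hashlib

def request_fingerprint(ip, headers):
    # Combine fields that arrive with every request into a stable
    # identifier. Two requests from the same browser at the same
    # address hash to the same value, so visits can be correlated
    # across sites without storing anything on the client.
    parts = [
        ip,
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Any server receiving an embedded-content request (such as a 'like' button load) sees these fields automatically; no cookie, script, or user interaction is needed.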
For its part, Facebook admits they collected the data through the accidental setting of tracking cookies and the failure to delete session cookies upon log out - but says these were oversights. They say that the issues are now resolved. They expanded their help section and sent us this statement:
Our intentions stand in stark contrast to the many ad networks and data brokers that deliberately and, in many cases, surreptitiously track people to create profiles of their behavior, sell that content to the highest bidder, or use that content to target ads on sites across the Internet.
The Trust Gap
For users concerned about privacy, this statement is small consolation. It’s clear that Facebook does extensive cross-domain tracking, with two types of cookies and even without. With this data, Facebook could create a detailed portrait of how you use the Internet: what sites you visit, how frequently you load them, what time of day you like to access them. This could point to more than your shopping habits – it could provide a candid window into health concerns, political interests, reading habits, sexual preferences, religious affiliations, and much more.
Facebook insists they aren’t misusing the data they are collecting. The question is then: do we as Internet users trust Facebook? Do we trust them not to connect our data with our Facebook profiles, sell it to marketers, or provide it to the government upon request? If Facebook’s business model becomes less profitable in the coming years, do we trust them to continue to not connect tracking data to profiles? If the government brings pressure to bear on Facebook, do we trust Facebook to stand with users and safeguard the data they’ve collected? And, do we believe that Facebook isn’t actually connecting browsing data to profiles now, given their history of mistakes when it comes to tracking and the clear market incentive they would derive from that sort of connection?
This is the “trust gap” - the space between what Facebook promises they are doing with the data they are collecting and what we as Facebook users can reasonably trust them to do. And, when it comes to safeguarding the sensitive reading habits of millions of users, the trust gap is pretty wide.
Could Privacy Snafus Spur Privacy Legislation?
If you are uneasy with Facebook’s cross-domain tracking, you aren’t alone. This has led to a call from lawmakers as well as privacy advocates to have the FTC investigate whether Facebook deceived users by tracking logged-out users. And a group of six Facebook users has filed suit against Facebook over this issue.
This newest privacy snafu could prod legislators into moving on one of the many online privacy bills that have been introduced this year. Users’ unease with the quickly-evolving technical capabilities of companies to track users, combined with the abstruse ways in which that data can be collected (from social widgets to super cookies to fingerprinting), has resulted in a growing user demand to have Congress provide legal safeguards for individual privacy when using the Internet.
Unsurprisingly, Facebook hopes that its brand of data collection through ‘like’ buttons won’t be subject to federal regulation. According to AdAge, Facebook sent an “army of lawyers” to Washington to convince Senators McCain and Kerry to carve out exceptions to their recently introduced privacy bill so that Facebook could keep tracking its users via social widgets on other sites (dubbed the "Facebook loophole"). But while Kerry and McCain may have acquiesced to Facebook's requests, Senator Rockefeller did not. He introduced legislation that would empower the FTC to create rules to protect users from pervasive online tracking by third parties.
Facebook seems keen to influence future legislation on these issues. They recently filed paperwork to form a political action committee that will be "supporting candidates who share our goals of promoting the value of innovation to our economy while giving people the power to share and make the world more open and connected."
We hope that these efforts to influence politicians won't come at the cost of strong protections for user privacy on the Internet. As the situation currently stands, the resources available to governments and corporations to track users across the Internet far outstrip the resources of the average user to fend off such tracking. And from all appearances, self-regulation by industry is failing.
What You Can Do
If you find yourself creeped out by being tracked by Facebook on non-Facebook sites, then you have a few options to protect yourself and voice your concerns.
Adjust the settings in your browser to delete all cookies upon closing. Clear your cookies when leaving a social networking site, and log out of Facebook before browsing the web. You should consider having one browser strictly for logging into your Facebook account and one browser for the rest of your web usage.
Support privacy legislation like the Rockefeller Do Not Track bill, which will give users a voice when it comes to online tracking.
1. According to his blog, Cubrilovic has been trying to inform Facebook of these issues since November 14, 2010.
Tomorrow, October 11, Egyptian blogger Maikel Nabil Sanad will have reached the 50th day of his hunger strike. Arrested in March, Sanad was later sentenced, by a military court, to three years in prison for accusing the military of having conducted virginity tests on female protesters (a charge later found to be true) and stating that "the army and the people are not one," a statement that runs counter to much of the sentiment expressed in Tahrir Square throughout January. In August, Sanad began a hunger strike in the hopes that it would "draw public attention to his plight and force the ruling military council to reconsider what he describes as the military’s 'discriminatory' policies," according to Shahira Amin of Index on Censorship.
Sanad himself has written from prison, sending missives via the site MidEast Youth. Sanad's father also recently wrote a letter of support for his son, citing his mental and physical state and calling for his immediate release.
A call for free expression in Egypt

In post-revolutionary Egypt, free expression is not yet guaranteed. Numerous activists have been investigated by the ruling Supreme Council of Armed Forces (SCAF), while, between February and September, 11,879 people were tried or investigated by military courts. Though Sanad's case has garnered minimal support in Egypt due to his stance on Israel (which he has supported for what he calls its "democratic values and freedom of expression"), calls for his release persist. Paraphrasing Evelyn Beatrice Hall, Professor Rasha Abdulla of the American University in Cairo recently wrote that, while she does not support Sanad's points of view, "as someone who has always been a staunch supporter of freedom of thought and expression, I will defend to the death his right to say them." Yesterday, Field Marshal Mohamed Hussein Tantawi vowed to end military trials "with notable exceptions," which many see as too little, too late. Among the exceptions is the crime of "spreading false information about the military," the same crime for which Sanad was initially charged.
EFF reiterates our call for the immediate release of Maikel Nabil Sanad. If Sanad remains in prison, he will die.
EFF joins millions around the world in mourning the passing of Steve Jobs. Steve was an extraordinary innovator who changed how we think about, develop, use, and experience new technologies, music, and ideas. While we've sometimes found ourselves frustrated with some of Apple's business strategies, we here at EFF have always had tremendous respect for Steve's creative genius and commitment to making products that were powerful, accessible, and elegant. His imagination and vision changed the world. He will be missed.
The European Parliament today formally recognized what has become increasingly clear: some European tech companies have been selling to repressive governments the tools used to surveil democracy activists. In response, it passed a resolution to bar overseas sales of systems that monitor phone calls and text messages, or provide targeted Internet surveillance, if they are used to violate democratic principles, human rights or freedom of speech.
According to Bloomberg, the decision came after a Bloomberg report in August that "a monitoring system sold and maintained by European companies had generated text-message transcripts used in the interrogation of a human-rights activist tortured in Bahrain." The legislation reportedly leaves enforcement to the EU’s 27 member nations.
But European companies aren't the only ones. Recently Narus, a Boeing subsidiary based in Silicon Valley, was revealed to have sold Egypt sophisticated equipment used for surveillance. (Note: EFF watchers will recognize Narus as one of the companies whose equipment is in the AT&T “secret room” used to help the NSA conduct warrantless surveillance in the U.S., at the heart of our Jewel and Hepting cases.)
And it's not just a problem in the Middle East. Cisco Systems is facing litigation in both Maryland and California based on their sales of surveillance equipment used by China to allegedly track, monitor and otherwise facilitate the arrest, detention or disappearance of human rights activists and religious minorities who have been subjected to gross human rights violations.
Despite the “head in the sand” approach of some tech companies, this concern is real and is not going away. Members of the U.S. Congress, such as Republican Representatives Chris Smith and Mary Bono and Democratic Senator Richard Durbin, are also watching closely.
It’s time for tech companies to step up and ensure that they aren’t wittingly or unwittingly assisting in the commission of gross human rights violations. While there may be many ways to accomplish this, a simple step would be for companies to voluntarily adopt a robust "know your customer" approach. First, companies selling these specialized surveillance technologies to repressive foreign governments need to take affirmative steps to know who they are selling to and what the technology will be used for, especially when they are providing ongoing service or customization of the systems. The U.S. State Department already publishes annual human rights reports about countries around the world, and other objective resources are readily available, including from EFF. This wouldn't be much more of a burden than what these sophisticated companies already must do to comply with laws like the Foreign Corrupt Practices Act and the U.S. export restrictions. Second, companies need to refrain from participating in transactions where there is either objective evidence or credible concerns that the technologies or services are being used, or will be used, to facilitate human rights violations.
We'll be writing more about this. But the message from the EU Parliament is clear: Tech companies need to stop participating in human rights abuses around the world by selling tools that repressive governments need to commit them. Tech companies need to stop serving as "repression's little helpers."
After months of work, and spurred by an initial report by Professor Ted Byfield of New School University's Parsons New School for Design, we’re happy to report a security vulnerability fix in a product called Safe•Connect.
While the immediate story is good, the underlying context should raise real concerns about the dangers inherent in the ongoing obsession of Congress and the content industry with pressuring intermediaries, especially universities, to use their status as network operators to require individuals to install monitoring software like Safe•Connect on their computers in order to appease the content industry. As Stewart Baker, then the Department of Homeland Security’s policy czar warned during a similar incident involving the Sony Rootkit: "It’s very important to remember that it’s your intellectual property — it’s not your computer. And in the pursuit of protection of intellectual property, it’s important not to defeat or undermine the security measures that people need to adopt in these days."
Network administrators have been interested for years in software meant to enforce rules on other people's computers connected to a network – a technology called Network Access Control (NAC). NAC software runs as an agent on behalf of the network administrator, reporting back information about how the computer is configured, examining its security policies, and, in some cases, making changes. We might describe such software as spyware that network operators ask users to install on their computers, although the Safe•Connect system does not appear to be configured to report back on the content a user stores on his or her computer. Why do network operators want this power? There are many possible reasons, but, most often, it's aimed at making sure the network users have taken security precautions and applied software updates that the network operator considers necessary. Such enforcement software sometimes requires administrative privileges on the users' computer, and in any case its use raises serious questions about computer users' autonomy and right to control and make decisions about their own computers.
In an academic environment, the use of this software on non-university-owned computers — like the personal machines owned by students, teachers and campus visitors — is sometimes controversial. Although it might be used largely in users' own interest, especially when it helps remind less-sophisticated users to apply software upgrades they might otherwise neglect, it can also introduce security and privacy threats of its own. At a minimum, schools should examine this type of software skeptically and should give sophisticated users a way to opt out of installing it. Unfortunately, one source of pressure overshadowing universities' decision-making in this area lately has been Congressional attention to copyright enforcement.
While the RIAA has abandoned its ineffective litigation campaigns, it and the MPAA have stepped up their efforts to lobby Congress, pressure intermediaries, and lobby Congress to pressure intermediaries to take ever more draconian steps to try to stop copyright infringement. In particular, colleges and universities have always been popular targets for both Big Content and Congress. In addition to threatening letters, ill-advised lawsuits, and propaganda campaigns, anti-P2P zealots have embraced technological “solutions” such as Audible Magic’s CopySense. EFF’s technologists believe these technologies are fundamentally flawed: they are expensive, easily circumvented, and ultimately ineffective. However, the drumbeat coming from Congress may be deterring some universities from looking critically at these technologies, instead encouraging them to adopt quick fixes.
Safe•Connect Security Vulnerability
Enter Safe•Connect, a product developed by Impulse Point, LLC. Safe•Connect is one of a breed of NAC products designed to keep private networks—particularly college and university networks—“clean.” Impulse Point markets Safe•Connect as capable of enforcing compliance with security policies set by the school’s network administrators. In addition to keeping students’, teachers’, and campus visitors’ anti-virus software updated and their operating systems patched (security measures that users might be neglecting), the technology is marketed, and in some cases used by schools, to prevent those on campus from running certain peer-to-peer software over the school’s network resources. In other cases, the technology “warns” those on campus who are running P2P software, making sure they know that Big Brother is watching.
It was New School University’s requirement that students and faculty install Safe•Connect on their own computers that led Professor Byfield, a professor of Art, Media and Technology, to raise his initial concerns. Starting with Professor Byfield’s work, and especially curious about Impulse Point’s claimed ability to notify users about and block peer-to-peer systems, EFF and researchers at the University of Michigan started investigating. We obtained a copy of the Policy Key, the application from Safe•Connect that universities require each student, faculty or visitor to install on her personal computer before she is allowed access to the Internet over the university network. After a bit of reverse engineering, the researchers found that an older but widely-distributed version of the Policy Key would accept purported “updates” from a local server with no authentication. So a knowledgeable attacker, even on a non-university network, could pretend to be this server and substitute malicious software of their choice, disguised as Policy Key updates. That means users who ran this version of the Policy Key on their systems could be vulnerable to attacks from strangers even after leaving the universities that originally asked them to install it. This goes to show that asking people to install software just to be allowed onto a network can come with its own set of security risks, since bugs in that software constitute new ways onto users' machines. (The MacOS X Policy Key version also ran as root with improperly-set file permissions, which would let any other software on a MacOS system with the Policy Key installed gain administrative privileges and take over the system.)
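The missing ingredient was authentication of updates before installing them. A minimal Python sketch of the kind of check that defeats this attack - the key handling here is deliberately simplified (a shared-secret MAC stands in for the public-key signatures a real updater should use), and all names are our own:

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned when the agent is installed.
UPDATE_KEY = b"secret provisioned at install time"

def sign_update(payload):
    # The vendor's update server would compute this MAC over each release.
    return hmac.new(UPDATE_KEY, payload, hashlib.sha256).hexdigest()

def apply_update(payload, signature):
    # The vulnerable Policy Key skipped a check like this one and
    # accepted any "update" a local server offered. compare_digest
    # avoids timing side channels when comparing MACs.
    if not hmac.compare_digest(sign_update(payload), signature):
        raise ValueError("update not authenticated; refusing to install")
    return True  # a real agent would now install the payload
```

Without such a check, anyone who can answer the agent's update request - say, an attacker on the same coffee-shop network - can hand it arbitrary code to run.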
Concerned about the thousands of students, faculty and campus visitors who—whether in the name of network security or intellectual property protection—were required to install and run vulnerable software, EFF and the researchers contacted Impulse Point. To their credit, the Safe•Connect developers responded promptly. They pointed out that the vulnerabilities had already been fixed in newer versions distributed to returning students and staff, and they then delivered a security patch to their university customers for machines still running older versions of the software. Impulse Point has also committed to implementing a plan to reach those (such as graduating seniors, departed staff, and campus visitors) who were not otherwise likely to get automatic updates.
Bullet Dodged, But Underlying Problems Remain
Overall, we were pleased with Impulse Point’s openness and the speed with which it responded to us. It was a refreshing change from the hostility with which some technology companies respond to security vulnerabilities. We also have no reason to believe any of the identified vulnerabilities were ever exploited in the wild.
But the underlying problem remains: Big Content’s relentless crusade against P2P technology has unintended consequences. Just as the RIAA’s lawsuits embroiled a number of innocent people in expensive litigation and Congress’ DMCA takedown procedures often chill speech protected by fair use, these technological “solutions” can cause collateral damage. The pressure to require students, professors and campus visitors to install and run software on their computers as a way to “protect” the content industry is wrong, and can be dangerous. Even in the context of protecting network security, requiring everyone on campus to run programs that either run as root or can be adapted or manipulated from afar is troubling, but as a quixotic attempt to deter copyright infringement, it definitely goes too far.
A Virginia district court is the latest to call out a copyright troll for using a business model designed to be little more than a shakedown operation to extract quick and easy settlements from hundreds of thousands of John Doe defendants. Judge Gibney says it far better than we could:
The Court currently has three similar cases before it, all brought by the same attorney. The suits are virtually identical in their terms, but filed on behalf of different film production companies. In all three, the plaintiffs sought, and the Court granted, expedited discovery allowing the plaintiffs to subpoena information from ISPs to identify the Doe defendants. According to some of the defendants, the plaintiffs then contacted the John Does, alerting them to this lawsuit and their potential liability. Some defendants have indicated that the plaintiff has contacted them directly with harassing telephone calls, demanding $2,900 in compensation to end the litigation. When any of the defendants have filed a motion to dismiss or sever themselves from the litigation, however, the plaintiffs have immediately voluntarily dismissed them as parties to prevent the defendants from bringing their motions before the Court for resolution.
This course of conduct indicates that the plaintiffs have used the offices of the Court as an inexpensive means to gain the Doe defendants' personal information and coerce payment from them. The plaintiffs seemingly have no interest in actually litigating the cases, but rather simply have used the Court and its subpoena powers to obtain sufficient information to shake down the John Does. Whenever the suggestion of a ruling on the merits of the claims appears on the horizon, the plaintiffs drop the John Doe threatening to litigate the matter in order to avoid the actual cost of litigation and an actual decision on the merits.
The plaintiffs' conduct in these cases indicates an improper purpose for the suits. In addition, the joinder of unrelated defendants does not seem to be warranted by existing law or a non-frivolous extension of existing law.
The Virginia court ordered the plaintiff to show why it should not be sanctioned for this behavior, and also ordered it to “immediately” notify the subpoena recipients (the ISPs) that the subpoenas had been quashed and all defendants but one severed from the case. Also of note, the court ordered the plaintiff to file, under seal, copies of all notices sent to all defendants. It’s unclear what, if anything, the court will do with that information, but we’re hopeful it will help notify the Doe defendants that they’ve been severed from the suit.
The Eastern District of Virginia orders join a couple of other positive recent rulings. In Texas, repeat plaintiff’s lawyer Evan Stone was scolded by Judge McBryde for not “display[ing] the slightest degree of candor” by failing to disclose that he has:
filed at least sixteen lawsuits similar to the instant action in [another] division of this court, that each of those lawsuits was summarily dismissed, principally for improper joinder of the defendants, and that discovery of the kind, and under the conditions, sought by, and granted to, plaintiff in this action was inappropriate.
And in the Northern District of California, Magistrate Judge Grewal severed all but one of 5,041 Doe defendants, stating that:
As the court has come to learn in yet another of the recent “mass copyright” cases, subscriber information appears to be only the first step in the much longer, much more intrusive investigation required to uncover the identity of each Doe Defendant. The reason is simple: an IP address exposed by a wireless router might be used by the subscriber paying for the address, but it might not. Roommates, housemates, neighbors, visitors, employees or others less welcome might also use the same address.
We applaud these judges for calling these cases what they really are – little more than a shakedown scheme – and for stopping plaintiffs from running roughshod over due process in order to extort settlements.