We all know that HTML links are the heart of the World Wide Web. What many don’t appreciate is that legal liability for linking varies greatly across countries. Given the importance of linking to the World Wide Web, whether websites can be held liable for copyright infringement for linking to potentially infringing material is a key issue. While US copyright law has a safe harbor for websites that provide location tools, the European framework for e-commerce has no specific limitation on liability for websites that provide links. As a result, courts in different EU member states have developed different standards for linking liability. In recent years, Spanish courts have issued several inconsistent rulings on whether websites containing links to potentially copyright-infringing material on peer-to-peer networks violate copyright owners’ exclusive right under Spanish law to make copyrighted works available. But a recent decision of the influential Court of Appeals of Barcelona (Audiencia Provincial de Barcelona) in the Indice-web case has clarified that merely providing a link is not "making available" content, and does not infringe copyright.
The case was brought by the Spanish collecting society Sociedad General de Autores y Editores (SGAE), which sued the owner of Indice-web, a website that provided, among other content, links to potentially copyright infringing content that could be downloaded with P2P software. The court at first instance found that Indice-web was not liable for copyright infringement because it did not host any copyright-infringing content and merely operated as an index of websites, providing only links. If viewers chose to click on the links and download particular content, the content would be transmitted by the third party web server and reproduced on the user’s computer, without any involvement of Indice-web. On that basis, the court denied the provisional measures requested by SGAE—an injunction ordering immediate cessation of making available links to musical works in SGAE's repertoire without permission; seizure of all the proceeds earned by the defendant in the marketing of Indice-web; and the suspension of the services provided by the upstream host of Indice-web.
The court noted that Indice-web merely acts as a guide for users by providing a link to works that could later be downloaded or exchanged through P2P programs. The court also noted that Spanish law does not forbid such guidance or orientation. In this case, the court held that "the linking system does not constitute distribution, nor reproduction nor public communication," under Spanish law.
SGAE appealed the ruling, arguing that the first court’s decision only analyzed the defendant’s conduct regarding provision of links to content accessible via P2P networks, but did not consider other possible bases for copyright infringement liability such as providing assistance for direct downloads and unauthorized streaming of copyrighted works hosted on a third party server. The Court of Appeals declined to rule on those questions because they had not been raised by SGAE at first instance. The court clarified that the main issue in question was whether placing a link pointing to content stored on a different server constituted impermissible reproduction, "making available," or communication to the public, under Spain’s copyright law.
The Court of Appeals reaffirmed the reasoning of the previous court and ruled that Indice-web did not violate copyright because it merely provides links and does not participate in hosting or the transmission of potentially copyright-infringing content. It found that: "Providing a link does not imply making available the protected work according to letter i) of article 20.2 of the Intellectual Property Act, and in such sense does not qualify as public communication. Making available the protected work occurs in the computers where the protected work is hosted and where it can be downloaded through P2P networks. In such sense, [it is those] users who make available the protected work." The court also found that Indice-web was not engaged in advertising, or any for-profit activities.
Although this ruling is not directly binding outside Spain, it is important because it comes from the influential Barcelona Court of Appeals and clarifies the previously inconsistent Spanish rulings. Given the fundamental importance of linking to the World Wide Web, we are heartened to see that the Barcelona Court of Appeals and the court of first instance understand Internet architecture and the important policy issues this case raises. We hope other European courts will take a similarly thoughtful approach to these issues going forward.
In countries across the world, IP rightholders are pushing website blocking as the latest weapon against online copyright infringement. United Nations human rights experts, security engineers, law professors and others are pushing back, noting both the enormous collateral damage such blocking can cause and the likelihood that it will do little to actually curb infringement.
Against this background, the UK government’s announcement on Wednesday that it will not go forward with a highly controversial website blocking regime - at least for the time being - is an important step in the right direction. Unfortunately, this is unlikely to be the end of the debate in the UK, if Freedom of Information documents and news reports about comments made by UK Minister for Communications, Culture and the Creative Industries Ed Vaizey are any indication.
The UK government’s decision was based on a report by UK regulator Ofcom, dated 27 May 2011, but released on Wednesday as part of the UK government’s welcome response to the landmark Hargreaves report from May. As we saw in last week’s Newzbin2 judgment, rightsholders in the UK already have the ability under existing law to obtain court injunctions requiring ISPs to block websites proven to have infringed copyright, but the reserved blocking powers in sections 17-18 of the 2010 Digital Economy Act are broader and more controversial. That’s why the UK Department for Culture, Media and Sport asked Ofcom in February to review whether those website blocking provisions were workable. Ofcom concluded they weren’t, but did not reject use of website-blocking altogether.
Ofcom’s report considers the technical feasibility of four techniques that Internet intermediaries could use to block sites (Internet Protocol blocking, Domain Name System alteration, URL blocking, and Packet Inspection of network traffic) against seven criteria: speed of implementation; cost; blocking effectiveness; difficulty of circumvention by users and countermeasures ISPs could take; ease of administrative or judicial process; the integrity of network performance; and the level of granularity of blocking that is possible and the corresponding impact on legitimate services. Some of this analysis was redacted in the report released by the UK DCMS on Wednesday, but an unredacted version of the Ofcom report has now been posted here.
Ofcom concludes that while it is feasible to “constrain access to prohibited locations on the Internet” using these techniques alone or in combination, “none of the techniques is 100% effective; each carries different costs and has a different impact on network performance and the risk of over-blocking” and that “[f]or all blocking methods circumvention by site operators and Internet users is technically possible and would be relatively straightforward by determined users.”
This, of course, is not news to anyone who has taken the time to investigate what is involved in website blocking.
Despite all that, Ofcom concludes that website blocking could form “part of a broader package of measures to tackle infringement”:
“Although imperfect and technically challenging, site blocking could nevertheless raise the costs and undermine the viability of at least some infringing sites, while also introducing barriers for users wishing to infringe. Site blocking is likely to deter casual and unintentional infringers and by requiring some degree of active circumvention raise the threshold even for determined infringers.”
The report goes on to suggest that if blocking is to be implemented, DNS blocking is the preferable approach because it would cause the least delay and cost – two of the key concerns voiced by copyright holders. It also suggests augmenting this by requiring search engines to delist websites.
However, Ofcom notes that DNS blocking is at best a short term solution because implementation of DNS Security Extensions (DNSSEC) will shortly make DNS blocking easier to detect and hence less effective. It therefore recommends Packet Inspection as a longer term solution but highlights that it is technically complicated (and therefore slower to implement), expensive (translation: Internet access costs are likely to go up as costs are passed onto subscribers), and raises a number of pesky legal questions – such as whether DPI is compatible with UK privacy and data protection law. (We’d like to read that legal opinion).
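Mechanically, DNS blocking is a small change at the resolver: names on a blocklist get a forged "sinkhole" answer instead of the real one. The sketch below is our own illustration, not Ofcom's or any ISP's implementation; the blocklist entry and addresses are invented (reserved documentation values).

```python
BLOCKLIST = {"infringing.example"}   # invented blocklist entry
SINKHOLE = "192.0.2.1"               # invented address of a "blocked" landing page

def blocking_resolver(name, real_lookup):
    """A resolver that answers honestly, except that blocked names
    get the sinkhole address instead of their real one."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE
    return real_lookup(name)

# Demonstrate with a stubbed-in honest lookup table:
honest = {"allowed.example": "203.0.113.7"}.get
print(blocking_resolver("allowed.example", honest))     # real answer
print(blocking_resolver("infringing.example", honest))  # sinkhole answer
```

This also shows why DNSSEC undermines the technique: a validating client can detect that the sinkhole answer was not signed by the domain's real owner.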
After IP rightsholders’ intense lobbying for PIPA in the US and the concerted push in international fora (such as WIPO and the OECD) for Internet intermediaries to act as copyright police and engage in website blocking, it is heartening to see the UK regulator’s sensible discussion of the technological limitations, and awareness of the public policy implications of ordering Internet intermediaries to comb through our online communications.
We welcome the UK government’s announcement, and commend its commitment to evidence-based policy-making. Let’s hope that policymakers across the world take the time to understand the implications for all Internet users’ security, for human rights and the rule of law, and the future of the open global Internet of telling DNS servers to “lose” parts of the Internet in the name of enforcing copyright holders’ private rights.
In a cursory opinion issued today that left us scratching our heads, a federal judge has ruled that the government does not have to return a domain name seized by Immigration and Customs Enforcement (ICE), because its seizure did not create a substantial hardship. Really?
Puerto 80 is the Spanish company behind popular sports streaming sites Rojadirecta.com and Rojadirecta.org, which were both seized by U.S. ICE earlier this year -- even though a Spanish court found they did not violate copyright law. Puerto 80 filed a petition to have the sites released pending a trial on the merits of the case. The petition explained that the government's seizure and continued control of the site was seriously damaging Puerto 80's business and also infringed on its readers' First Amendment right to access its content. EFF, with co-amici Public Knowledge and Center for Democracy and Technology, submitted an amicus brief that elaborated on the First Amendment issues.
Puerto 80's petition explained that while the company can host content elsewhere, its usual visitors might not know how to find it. Too bad, said the court. "Rojadirecta has a large internet presence and can simply distribute information about the seizure and its new domain to its customers," it declared. Perhaps the court thinks Puerto 80 should buy some Google ads? Would the court come to the same conclusion if the site in question were youtube.com? (Maybe so, which is even more frightening).
And the court's First Amendment analysis is flatly wrong. Puerto 80 (and EFF) explained to the court that cutting off access to the site also meant cutting off access to clearly legal content, such as discussion forums. The court dismissed these concerns with a wave:
Although some discussion may take place in the forums, the fact that visitors must now go to other websites to partake in the same discussions is clearly not the kind of substantial hardship that Congress intended to ameliorate in enacting § 983 [the statute that allows for the return of seized property].
Here's the thing: the Supreme Court doesn't agree. The fact that you can get information via a second route does not mean that there is no speech problem with shutting down the first one. In a 1939 case, Schneider v. New Jersey, for example, the Supreme Court held that
one is not to have the exercise of his liberty of expression in appropriate places abridged on the plea that it may be exercised elsewhere.
It repeated this basic tenet some forty years later in Va. State Bd. of Pharmacy v. Va. Citizens Consumer Council, Inc.:
We are aware of no general principle that freedom of speech may be abridged when the speaker’s listeners could come by his message by some other means . . . .
As if misapplying the relevant substantive First Amendment analysis weren't bad enough, the court failed to even address the fatal procedural First Amendment flaws inherent in the seizure process: namely, that a mere finding of "probable cause" does not and cannot justify a prior restraint. How the court can conclude that the seizure satisfies the First Amendment in this regard is a mystery.
This ruling is profoundly disappointing, to say the least. And it certainly doesn't bode well for the rights of folks whose websites might be targeted under the PROTECT-IP Act now pending in Congress.
UPDATE, 8/25/11: This post has been revised in a couple of places; the changes are explained further here.
Earlier this year, two research papers reported the observation of strange phenomena in the Domain Name System (DNS) at several US ISPs. On these ISPs' networks, some or all traffic to major search engines, including Bing, Yahoo! and (sometimes) Google, is being directed to mysterious third party proxies.
A report in New Scientist today documents that the traffic is being rerouted through a company called Paxfire. This blog post, coauthored with one of the teams that discovered the phenomenon, will explain the situation in more detail.
Who is rerouting this search traffic?
The proxies in question are operated either directly by Paxfire, or by the ISPs using web proxies provided by Paxfire. Major users of the Paxfire system include Cavalier, Cogent, Frontier, Fuse, DirecPC, RCN, and Wide Open West. Charter also used Paxfire in the past, but appears to have discontinued this practice.
Why do they do this?
In short, the purpose appears to be monetization of users' searches. ICSI Networking's investigation has revealed that Paxfire's HTTP proxies selectively siphon search requests out of the proxied traffic flows and redirect them through one or more affiliate marketing programs, presumably resulting in commission payments to Paxfire and the ISPs involved. The affiliate programs involved include Commission Junction, the Google Affiliate Network, LinkShare, and Ask.com. When looking up brand names such as "apple", "dell", "groupon", and "wsj", the affiliate programs direct the queries to the corresponding brands' websites or to search assistance pages instead of providing the intended search engine results page.
What can I do about it?
If you want to know if the network you're currently on is subject to this type of traffic redirection, you can run a Netalyzr test. And the best protection against the privacy and security risks created by this type of hijacking is to visit sites using HTTPS rather than HTTP, which can easily be achieved using EFF's HTTPS Everywhere Firefox extension.
More technical details below...
A detailed explanation
For most users of the World Wide Web, visiting a website means clicking on a link to the site or entering the site's name into their browser, and receiving the corresponding page from the site. Users generally assume that the site's name is identical to the site itself, and essentially trust the site's authenticity if it looks as usual and the browser does not pop up phishing warnings or other signs of trouble. Paxfire's misdirection of search traffic undermines this trust.
The ICSI Networking group develops and operates the ICSI Netalyzr, a tool that tests the characteristics of users' Internet connections. Netalyzr's measurements show that approximately a dozen US Internet Service Providers (ISPs), including DirecPC, Frontier, Hughes, and Wide Open West, deliberately and with no visible indication route thousands of users' entire web search traffic via Paxfire's web proxies.
To explain these redirections further, we first need to delve into the workings of the Internet a bit. Since the Internet does not route traffic to names but to network addresses, contacting a website involves translating the site's name (say "www.google.com") to the IP address (say 184.108.40.206) of a computer that runs Google's web server. It is to this address that the browser actually sends its request. The Domain Name System (DNS) is in charge of facilitating this mapping of names to addresses. It is the Internet's equivalent of a telephone book.
Usually, ISPs provide DNS servers (directory assistance, essentially) for their users. When a user's computer asks to map a name to an IP address, the user's system contacts the ISP's DNS server, which looks up the correct IP address for the name and returns it to the user. As currently implemented, this process does not provide any guaranteed correctness. In essence, users must trust their ISP's DNS servers to correctly return IP addresses that indeed belong to the site the user intends to visit. In some instances, however, this trust may not be warranted.
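The lookup step described above is easy to observe directly. Here is a minimal Python sketch using the standard library; it asks whatever resolver the operating system is configured with (typically the ISP's), exactly as a browser would, and simply trusts the answer:

```python
import socket

def resolve(hostname):
    """Ask the system's configured DNS resolver to map a hostname
    to IP addresses, as a browser does before connecting."""
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    # Each result ends with a sockaddr tuple; its first element is the address.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally, without touching the network.
print(resolve("localhost"))
```

Nothing in this exchange proves that the returned addresses actually belong to the site's operator; that is precisely the trust an ISP's resolver (or a proxy in its path) can abuse.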
For a while now, a number of ISPs have worked in cooperation with Paxfire and similar businesses like Barefruit and Golog to profit from mistakes that users make when typing names into their browsers. Paxfire provides a product for ISPs that rewrites DNS errors (effectively conveying "the name you asked for doesn't exist") to responses sending users to search pages that host advertisements, for which Paxfire then shares the corresponding ad-related revenue with the ISPs. This practice has already been controversial.
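One way to notice this kind of error rewriting is to resolve a name that should not exist: an honest resolver returns a "no such domain" error, while a rewriting resolver returns the address of an ad server. The Python sketch below is our own rough illustration of the idea, not how Netalyzr actually tests; the probe-name scheme is an assumption:

```python
import random
import socket
import string

def random_probe_name():
    # A long random label under .com is overwhelmingly unlikely to exist.
    label = "".join(random.choices(string.ascii_lowercase, k=20))
    return label + ".com"

def rewrites_nxdomain(lookup=socket.gethostbyname):
    """Return True if the resolver hands back an address for a name
    that should produce a 'no such domain' error."""
    try:
        lookup(random_probe_name())
    except OSError:
        return False   # honest error (NXDOMAIN or lookup failure)
    return True        # got an address for a nonexistent name

# Demonstrate with a stub that behaves like a rewriting resolver;
# calling rewrites_nxdomain() with no argument probes the real system resolver.
print(rewrites_nxdomain(lambda name: "192.0.2.1"))
```

A single probe can of course be unlucky (the random name could, in principle, exist), so a real test would repeat this with several probe names.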
Rerouting of requests to and responses from search engines
Paxfire's product also includes an optional, unadvertised, and more alarming feature that drastically expands Paxfire's window into users' traffic. Instead of activating only upon error, this product redirects the customers' entire web search traffic destined for Yahoo!, Bing, and sometimes Google, to a small number of separate web traffic proxies.
These proxies receive, examine and process all search terms and results, but only log a small subset of search queries that were entered into a browser search box and are related to major trademark holders, mostly forwarding the rest to and from the intended search engines. This allows Paxfire's code to examine the queries and responses, selecting out those that are of relevance to its business and building up corresponding profiles, a process on which Paxfire holds a patent. It also puts Paxfire in a position to modify the underlying traffic if it decides to.
Under specific conditions, the Paxfire proxies do not merely relay traffic to and from the search engines. When the user initiates searches for specific keywords from the browser's URL bar or search bar, the proxy no longer relays the query to the intended search engine, but instead redirects the browser's request through affiliate networks, as the equivalent of a click on advertisements. Using the names of popular websites, we have so far identified 170 brand-related keywords that trigger redirections via affiliate programs, landing the user either on the brands' sites or on search assistance pages unrelated to the intended search engine results page.
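The behavior described above amounts to a keyword gate inside the proxy. The following is a hypothetical reconstruction for illustration, not Paxfire's actual code; the keyword list and the affiliate URL format are invented:

```python
# Hypothetical brand keywords that trigger affiliate redirection.
BRAND_KEYWORDS = {"apple", "dell", "groupon", "wsj"}

def route_search(query, intended_engine_url):
    """Decide where a proxied search request goes: a brand keyword is
    turned into an affiliate 'click', anything else passes through."""
    term = query.strip().lower()
    if term in BRAND_KEYWORDS:
        # Invented affiliate-network URL format, for illustration only.
        return "http://affiliate.example/click?brand=" + term
    # Non-brand queries are forwarded to the intended search engine.
    return intended_engine_url + "?q=" + query

print(route_search("dell", "http://search.example/search"))
print(route_search("dns hijacking", "http://search.example/search"))
```

The point of the sketch is the asymmetry: most queries pass through untouched, which is exactly what makes the monetized minority hard for users to notice.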
The subset of customers affected varies from temporally localized deployments to apparently entire customer bases. The DNS-based redirection operates in a surgical fashion, affecting only search engines but not other services such as Google Maps or Yahoo! Mail, and remains completely invisible to the user. The treatment of Google queries varies. Charter and Cogent appear to redirect only Bing and Yahoo, while DirecPC, Frontier and Wide Open West also used to redirect Google to Paxfire proxies located within their own networks. Google has recently put significant pressure on the ISPs to get them to stop redirecting Google searches. As of August 2011, all major ISPs involved have stopped proxying Google, but they still proxy Yahoo and Bing.
Last month, we wrote about Cisco’s plans to help the Chinese government build a massive camera surveillance network in the city of Chongqing. This is the same company that sold equipment to China to build the Great Firewall, which prevents Chinese Internet users from accessing much of the Internet, including online references to the Tiananmen Square protests, information on China’s human rights abuses, and social media sites such as Facebook and Twitter.
Reports indicate that Cisco has also customized its technology to help China with surveillance of political activists. We've had our eye on Cisco for years; in 2010, they were at the top of our list of "companies of interest" selling surveillance technologies to repressive regimes.
A lawsuit brought by Ward & Ward, PLLC against Cisco Systems, Inc., alleges that the company knowingly enabled the Chinese Communist Party’s harassment, arrest, and torture of Chinese political activists. Yesterday, as outlined in a blog post by his lawyers, one of the plaintiffs in the lawsuit, dissident writer Du Daobin, was questioned by Party officials regarding his involvement.
According to his lawyers, "Mr. Du's persecution began in 2003, when he was arrested while his house was raided by Chinese authorities. On June 11, 2004, he was charged with 'inciting to subvert state power' and was sentenced to three years in prison for posting pro-democracy articles online. Instead of immediately serving that sentence, he was placed under probation for four years, after which it was determined that he violated the terms of his probation and was then forced to serve his original three year prison sentence. During his imprisonment, Mr. Du was subjected to extreme physical and psychological torture. By the time of his release in 2010, Du was suffering from extreme malnutrition, cardiac issues, could no longer walk without assistance, and was dependent on a wheelchair."
Mr. Du is once again under threat for challenging an American company’s policies and speaking out against censorship in China. EFF has created a petition calling on Cisco to use its influence to tell the Chinese government not to commit further human rights abuses on the company’s behalf. We also call on Cisco to stop selling tools of repression in China and elsewhere around the world.
Two weeks ago, the Mexican newspaper El Milenio reported on a U.S. Department of Homeland Security (DHS) Office of Operations Coordination and Planning (OPC) initiative to monitor social media sites, blogs, and forums throughout the world. The document discloses how OPC’s National Operations Center (NOC) plans to initiate systematic monitoring of publicly available online data including “information posted by individual account users” on social media.
The NOC monitors, collects and fuses information from a variety of sources to provide a “real-time snap shot of the [U.S.] nation’s threat environment at any moment.” The NOC also coordinates information sharing to “help deter, detect, and prevent terrorist acts and to manage [U.S.] domestic incidents.” The NOC has initiated systemic monitoring of publicly available, user-generated data to follow real-time developments in U.S. crisis activities such as natural disasters as well as to help corroborate data received through official sources with ‘on-the-ground’ input.
The monitoring program appears to have its basis in a similar program used by NOC in its Haitian disaster relief efforts, where information from social media sources provided a vital source of real-time input that assisted NOC’s response, recovery and rebuilding efforts surrounding the 2010 earthquake. The new initiative attempts to leverage similar information sources in assessing and responding to a broader range of crisis activities, including terrorism, cybersecurity, nuclear and other disasters, health epidemics, domestic security, and border threats. While the addition of real-time social media sources can be extremely beneficial in disaster relief-type efforts, the breadth of activities covered by the initiative as well as the keywords and websites scheduled for systemic monitoring raise potential concerns, and the safeguards put in place by the initiative may not be sufficient to address these.
The NOC report, entitled “Privacy Impact Assessment of Publicly Available Social Media Monitoring and Situational Awareness Initiative”, reveals that NOC’s team of data miners is gathering, storing, analyzing, and sharing “de-identified” online information. The sources of information are “members of the public...first responders, press, volunteers, and others” who provide online publicly available information. To collect the information, the NOC monitors search terms such as “United Nations”, “law enforcement”, “anthrax”, “Mexico”, “Calderon”, “Colombia”, “marijuana”, “drug war”, “illegal immigrants”, “Yemen”, “pirates”, “tsunami”, “earthquake”, “airport”, “body scanner”, “hacker”, “DDOS”, “cybersecurity”, "2600" and “social media”. The report also contains a list of sites targeted for monitoring, including numerous blogs and news sites, as well as Wikileaks, Technorati, Global Voices Online, Facebook and Twitter. As the report was released in January 2011, this monitoring may already be taking place.
While the monitoring envisioned by the report is broad in scope, the initiative includes a number of safeguards that attempt to address privacy concerns. But these safeguards do not go far enough. Furthermore, while the NOC is attempting to limit the circumstances under which agents are permitted to collect or disclose personal data, these limitations only apply to DHS agents operating under this specific initiative. DHS “may use social media for other purposes including...law enforcement, intelligence, and other operations...” Other U.S. government agencies and initiatives have different rules and regulations that are subject to change.
With respect to the safeguards, NOC agents on social networks are prohibited from “post[ing] information, actively seek[ing] to connect..., accept[ing]... invitations to connect, or interact[ing] with others” including, presumably, responding to messages sent by other users. It is not clear, however, that this prohibition is sustainable in light of the NOC's objective. For example, NOC agents are authorized to “establish user names and passwords to form profiles and follow relevant government, media, and subject matter experts on social media sites.” Social networking sites are premised on the concept of “interacting with others.” Distinctions such as ‘following’ a user on Twitter and ‘connecting’ with such a user are not clear-cut.
Genuine attempts are being made to limit monitoring to publicly available information while excluding private sources. For example, agents may be prohibited from collecting information found on Facebook profiles which are restricted to “friends only.” However, problems may arise with respect to more ambiguous “semi-public” spaces that are emerging in many online venues. If NOC agents are authorized to “follow” a user on Twitter, are they allowed to “friend” a Facebook (or Google+) user whose profile contains purely public “relevant government, media, and subject matter”? What about information posted by other people following that user under the extended “friends of friends” setting? The NOC initiative may find it difficult to navigate such distinctions.
Monitoring of purely public online information to assess situational threats can also lead to abuse. During the G20 meeting in Toronto, Canada, police monitoring of real-time on the ground social media interactions was used to locate and arrest large numbers of peaceful protesters. As noted by Constable Drummond, a law enforcement agent deeply involved in Canadian G20 social media surveillance efforts:
“...people have a tendency to have tunnel vision when posting things on sites, feeling faceless and untraceable. It is with those postings that we were able to use our talent and use the information posted to our advantage. It allowed our officers to monitor public sites that protestors were using to share information.”
In the lead up to the G20 in Pittsburgh, two individuals were arrested for broadcasting police positions on Twitter in an attempt to help peaceful protesters. In the UK, Paul Chambers, a 27-year-old accountant, was convicted of “menacing” for posting a joke on his Twitter feed which was taken by government agents to be an airport security threat. As Chambers used the NOC-listed search term ‘airport’ in his joke, it may have come to NOC’s attention had it been tweeted in the U.S.
The report reminds individuals that if they do not want the NOC to collect their public data, they should not make it public in the first place: “[a]ny information posted publicly can be used by the NOC.” It places the responsibility of protecting privacy on end users, stating that “primary account holder[s] should be able to redress any [privacy] concerns through the third party social media service [and] should consult the privacy policies of the services they subscribe to for more information.” Moreover, DHS considers publication of the report as sufficient ‘notice’ to users that their public data may be monitored.
Unfortunately, following these policies is not as simple as it seems. Studies have shown that privacy policies are “hard to read” and are “read infrequently”, and even educated Facebook users who were concerned about privacy had trouble limiting data sharing with third parties. Moreover, privacy policies are nearly always subject to change. Facebook’s privacy policies have morphed continuously over the years, and have eroded privacy by making previously private information publicly available to everyone. Due to constantly shifting privacy settings, it is not clear that the NOC’s definitions of ‘public’ and ‘private’ align with user expectations.
Once NOC has identified useful raw online data for the DHS, attempts are made to “extract only the pertinent, authorized information and put it into a specific web application.” The report explicitly emphasizes that the data extracted from the raw information is to be “free of personal identifiable information”, and efforts are made to carry out this objective. The report claims that if personal data is collected beyond what is authorized, the NOC will immediately redact this information. This “de-identified” information will be shared with federal and state governments when “appropriate”, as well as with the private sector and foreign governments as “otherwise authorized by law.”
This raises concerns, however, as there is significant research (read here, here, here, and here) demonstrating that de-identification is not always effective. With enough information, individuals can often be “re-identified” through complex computational systems. The details of the actual techniques of the de-identification process deserve broader debate that is open to public scrutiny.
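The classic demonstration of this re-identification risk is a join on quasi-identifiers: fields like ZIP code and birth date that survive "de-identification" can single a person out when matched against an outside dataset. A toy Python illustration, with entirely made-up data:

```python
# "De-identified" records: names removed, but quasi-identifiers kept.
deidentified = [
    {"zip": "20001", "birth": "1970-03-14", "note": "attended protest"},
    {"zip": "20001", "birth": "1985-11-02", "note": "posted about airport"},
]

# Publicly available auxiliary data (e.g. a voter roll) with names attached.
voter_roll = [
    {"name": "A. Smith", "zip": "20001", "birth": "1970-03-14"},
    {"name": "B. Jones", "zip": "20005", "birth": "1985-11-02"},
]

def reidentify(records, auxiliary):
    """Match 'anonymous' records to named people whenever the
    (zip, birth) pair is unique in the auxiliary data."""
    matches = []
    for rec in records:
        hits = [p for p in auxiliary
                if p["zip"] == rec["zip"] and p["birth"] == rec["birth"]]
        if len(hits) == 1:  # a unique match means re-identification
            matches.append((hits[0]["name"], rec["note"]))
    return matches

print(reidentify(deidentified, voter_roll))
```

In this toy example one record is re-identified despite carrying no name at all; the research cited above shows the same trick working at scale on real datasets.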
This newly discovered initiative is part of a broader trend of monitoring and using online information in various investigative contexts. What should users both inside and outside the US learn from these discoveries? As always Internet users should certainly think carefully before posting information about themselves on public sites and remember that privacy policies are constantly subject to change. Not only do we know that the government is watching, we have some clues as to how it is doing it.
In a major blow to one of the most pernicious copyright trolls now operating, the US Copyright Group (USCG), federal judge Robert Wilkins of the District of Columbia has effectively dismissed thousands of Doe defendants due to lack of jurisdiction.
The ruling, which partially echoes arguments EFF has made in cases around the country, comes in a mass copyright case that was notable for just how very massive it was -- 23,322 Doe defendants. The plaintiff in the case, represented by USCG, is Nu Image, a California corporation that claims to own the rights to the movie "The Expendables." Following the normal protocol in these cases, Nu Image/USCG filed a copyright infringement complaint against anonymous BitTorrent users who had allegedly downloaded the movie, listing their supposed IP addresses, and then asked the court for permission to subpoena their identities. The court initially granted the request. Two months later, however, when it learned that Nu Image/USCG hadn't gotten around to issuing a single subpoena and that the vast majority of the defendants likely did not reside in D.C., the court ordered Nu Image/USCG to explain why the suit should proceed there.
Nu Image/USCG responded with the now-familiar theories that courts apply a liberal standard to "jurisdictional discovery" -- meaning, initial investigations to determine where a person can be sued -- and, besides, some of the Does who live outside DC might have committed infringement there. Not good enough, said the court:
The Court’s broad discretion includes imposing reasonable limitations on discovery, particularly where, as here, the Court has a duty to prevent undue burden, harassment, and expense of third parties. . . . Furthermore, while jurisdictional discovery is liberally granted, a plaintiff is not entitled to take it solely because he requests it—he still must make the requisite showing of good cause.
Applying a variety of standards, including a copyright-specific provision that ties jurisdiction to the residency of the defendant, the court concluded that Nu Image/USCG could not establish the court's jurisdiction over any defendant that did not reside in D.C. Therefore, Nu Image/USCG could issue subpoenas only where, using generally available geolocation services, it could determine that the defendant was likely to be located there.
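The filtering the court contemplated is, mechanically, just IP geolocation. Here is a minimal Python sketch of how such a filter might work; the address blocks and region labels are invented for illustration (a real filter would query a commercial GeoIP database, which is itself only an approximation of a subscriber's location):

```python
# Sketch: keep only the Doe IP addresses whose geolocation lookup
# points at the forum district. GEO_TABLE is a made-up stand-in
# for a real GeoIP database.
import ipaddress

GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "DC",
    ipaddress.ip_network("198.51.100.0/24"): "CA",
}

def likely_region(ip):
    """Return the region for an address, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for net, region in GEO_TABLE.items():
        if addr in net:
            return region
    return None

does = ["203.0.113.7", "198.51.100.9", "192.0.2.1"]
in_dc = [ip for ip in does if likely_region(ip) == "DC"]
print(in_dc)  # only the address in the DC-mapped block survives
```

Because geolocation databases map blocks rather than individuals, this kind of filter can only establish that a defendant is *likely* in the district -- which is all the court required before a subpoena could issue.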
Wryly observing that it understood that using a single lawsuit as a vehicle to identify thousands of Does was "convenient" for Nu Image/USCG, the court noted that this approach put a significant burden on others -- including the court itself:
[T]he Court must take into account the delay and unproductive utilization of court resources in prosecuting this lawsuit if the Plaintiff is allowed to seek discovery with respect to all 23,322 putative defendants, only to result in the eventual dismissal of the vast majority of those John Does later when it is revealed that they are not District of Columbia residents.
Torrentfreak has run the numbers and concluded that just 84 of the IP addresses the plaintiffs originally submitted are likely to be connected to computers located in D.C. Thus, over 23,000 Does can breathe a sigh of relief.
Aside from the sheer number of Does affected, this decision is notable for two more reasons. First, it is based on jurisdiction. Most of the other decisions that have effectively dismissed the mass copyright cases have been based on improper joinder, or the idea that it is not fair to lump together hundreds or even thousands of people based solely on the allegation that they used the same software to share the same work (or group of works).
Second, it comes out of the District of Columbia which, due to some unfortunate legal decisions, like this one, has been perceived as a sympathetic venue for copyright trolls. This decision should help shift that perception, and fast.
It's great to see yet another federal judge recognize the problems with mass copyright litigation. Kudos to Judge Wilkins for refusing to allow USCG to play fast-and-loose with fundamental due process rights.
EFF activist Eva Galperin interviews EFF criminal defense attorney Hanni Fakhoury on the newest edition of Line Noise, the EFF podcast. Whether law enforcement wants to search your home computer, tries to browse through your smart phone at a traffic stop, or seeks to thumb through your camera at customs, you should know your rights.
Learn more about your privacy rights by reading our Know Your Rights guide, or test your skills with our quiz.
This edition of Line Noise was recorded on-site at the San Francisco studio of Bamm.tv.