In 2011, Colombian graduate student Diego Gomez shared another student’s Master’s thesis with colleagues over the Internet. After a long legal battle, Diego was able to breathe a sigh of relief today as he was cleared of the criminal charges that he faced for this harmless act of sharing scholarly research.
Diego was cleared, but his story demonstrates what can go wrong when nations enact severe penalties for copyright infringement. Even if all academic research were published freely and openly, researchers would still need to use and share copyrighted materials for educational purposes. With severe prison sentences on the line for copyright infringement, freedom of expression and intellectual freedom suffer.
Diego’s story also serves as a cautionary tale of what can happen when copyright law is broadened through international agreements. The law Diego was prosecuted under was enacted as part of a trade agreement with the United States. But as is often the case when trade agreements are used to expand copyright law, the agreement only exported the U.S.’ extreme criminal penalties; it didn’t export our broad fair use provisions. When copyright law becomes more restrictive with no account for freedom of expression, people like Diego suffer.
Diego was lucky to have the tireless support of local NGO Fundación Karisma, and allies around the world such as EFF, who brought global attention to the injustice of the criminal accusations against him. However, the prosecutor in the case has appealed the verdict, leaving Diego with possible liability continuing to hang over his head for an undetermined time to come.
There are also many other silent victims of overzealous copyright enforcement, including those who are constrained from performing useful research, who shut down websites that come under unfair attack, and who shy away from sharing with colleagues for fear of being targeted with civil or criminal charges.
Please join us today in standing up for open access, standing up for fair copyright law, and standing with Diego.
In the digital age, a lot depends on whether we actually own our stuff, and who gets to decide that in the first place.
In The End of Ownership: Personal Property in the Digital Age, Aaron Perzanowski and Jason Schultz walk us through a detailed and highly readable explanation of exactly how we’re losing our rights to own and control our media and devices, and what’s at stake for us individually and as a society. The authors carefully trace the technological changes and legal changes that have, they argue, eroded our rights to do as we please with our stuff. Among these changes are the shift towards cloud distribution and subscription models, expanding copyright and patent laws, Digital Rights Management (DRM), and use of End User License Agreements (EULAs) to assert all content is “licensed” rather than “owned.” And Perzanowski and Schultz present compelling evidence that many of us are unaware of what we’re giving up when we “buy” digital goods.
Ownership, as the authors explain, provides a lot of benefits. Most importantly, ownership of our stuff supports our individual autonomy, defined by the authors as our “sense of self-direction, that our behavior reflects our own preferences and choices rather than the dictates of some external authority.” It lets us choose what we do with the stuff that we buy – we can keep it, lend it, resell it, repair it, give it away, or modify it, without seeking anyone’s permission. Those rights have broader implications for society as a whole – when we can resell our stuff, we enable secondary and resale markets that help disseminate knowledge and technology, support intellectual privacy, and promote competition and user innovation. And they’re critical to the ability of libraries and archives to serve their missions – when a library owns the books or media in its collection, it can lend those books and media almost without restriction, and it generally will do so in a way that safeguards the intellectual privacy of its users.
These rights, long established for personal property, are safeguarded in part by copyright law’s “exhaustion doctrine.” As the authors make clear, that doctrine, which holds that some of a copyright holder’s rights to control what happens to a copy are “exhausted” when they sell the copy, is a necessary feature of copyright law’s effort to limit the powers granted to copyright holders so that overbroad copyright restrictions do not undermine the intended benefit to the public as a whole.
Throughout the book, Perzanowski and Schultz present a historical account of rights holder attempts to overcome exhaustion and exert more control over what people do with their media and devices. The authors describe book publishers’ hostile, “fearful” response to lending libraries in the 1930s:
…a group of publishers hired PR pioneer Edward Bernays….to fight against used “dollar books” and the practice of book lending. Bernays decided to run a contest to “look for a pejorative word for the book borrower, the wretch who raised hell with book sales and deprived authors of earned royalties.”…Suggested names included “bookweevil,”…”libracide,” “booklooter,” “bookbum,” “culture vulture,” … with the winning entry being “booksneak.”
Publishers weren’t alone: the authors show that both record labels and Hollywood studios fought against the rise of secondary markets for music and home video rental, respectively. Hollywood fought a particularly aggressive battle against the VCR. In the end, the authors note, Hollywood continued to “resist the home video market,” at least until it gained more control over the distribution technology.
But while historically, overzealous rights holders may have been stymied to some extent by the law’s limitation of their rights, recent technological changes have made their quest a lot easier.
“In a little more than a decade,” the authors explain, we’ve seen dramatic changes in content distribution, from tangible copies, to digital downloads, to the cloud, and now, increasingly, to subscription services. These technological changes have precipitated corresponding changes in our abilities to own the works in our libraries. While, as the authors explain, copyright law has long relied on the existence of a physical copy to draw the lines between rights holders’ and copy owners’ respective rights, “[e]ach of these shifts in distribution technology has taken us another step away from the copy-centric vision at the heart of copyright law.” Unfortunately, the law hasn’t kept up: “Even as copies escape our possession and disappear from our experience, copyright law continues to insist that without them, we only have the rights copyright holders are kind enough to grant us.”
Perzanowski and Schultz point to End User License Agreements (EULAs), with their excessive length, one-sided, take-it-or-leave-it nature, complicated legalese, and relentless insistence that what you buy is only “licensed” to you (not “owned”), as a main culprit behind the decline of ownership. They provide some pretty standout examples – including EULAs that exceed the lengths of classic works of literature, and those that claim to prevent a startling array of activity. For the authors, these EULAs
. . . create private regulatory schemes that impose all manner of obligations and restrictions, often without meaningful notice, much less assent. And in the process, licenses effectively rewrite the balance between creators and the public that our IP laws are meant to maintain. They are an effort to redefine sales, which transfer ownership to the buyer, as something more like conditional grants of access.
And unfortunately, despite their departure from some of contract law’s core principles, some courts have permitted their enforcement, “so long as the license recites the proper incantations.”
The authors are at their most poetic in their criticism of Digital Rights Management (DRM) and Section 1201 of the DMCA, perhaps the worst scourges of ownership in the book. As they point out, even in the absence of restrictive EULA terms, DRM embeds rights holders’ control directly into our technologies themselves – in our cars, our toys, our insulin pumps and heart monitors. Comparing it to Ray Bradbury’s Fahrenheit 451, they explain:
While not nearly as dramatic as flamethrowers and fighting robot dogs, the unilateral right to enforce such restrictions through DRM exerts many of the types of social control Bradbury feared. Reading, listening, and watching become contingent and surveilled. That system dramatically shifts power and autonomy away from individuals in favor of retailers and rights holders, allowing for enforcement without anything approaching due process.
As Perzanowski and Schultz explain, these shifts aren’t just about our relationship to our stuff. They recalibrate the relationship between rights holders and consumers on a broad scale:
When we say that personal property rights are being eroded or eliminated in the digital marketplace, we mean that rights to use, to control, to keep, and to transfer purchases – physical and digital – are being plucked from the bundle of rights purchasers have historically enjoyed and given instead to IP rights holders. That in turn means that those rights holders are given greater control over how each of us consume media, use our devices, interact with our friends and family, spend our money, and live our lives. Cast in these terms, it is clear that there is a looming conflict between the respective rights of consumers and IP rights holders.
The authors repeatedly remind us that who makes the decision between what is owned and what is licensed is crucial – both on the individual and societal scale. When we allow companies to define when we can own our stuff, through EULAs or Digital Rights Management, we shift crucially important decisions about how our society should work away from legislatures, courts, and public processes, to private entities with little incentive to serve our interests. And, when we don’t know exactly what we give up when we “buy” digital goods, we’re not making an informed choice. Further, when we opt for mere access over ownership, our choices have broader societal effects. The more we shift to licensing and subscription models, the harder it may become for those who would rather own their stuff to exercise that option – stores close, companies shift distribution models, and some works disappear from the market.
In the end, Perzanowski and Schultz leave us with a thread of hope that we still might see a future for ownership of digital goods. They believe that at least some courts and policy makers, and “[p]erhaps more importantly, readers, listeners, and tinkerers – ordinary people – are expressing their own reluctance to accept ownership as an artifact of some bygone predigital era.” And they provide a set of arguments and reform proposals to marshal in the fight to save ownership before it’s too late. They lay out an array of technological and legal strategies to reduce deceptive practices, curb abusive EULAs, and reform copyright law. The most thoroughly developed of these proposes a legislative restructuring of copyright exhaustion in a flexible, multi-factor format, in part modeled on the United States’ fair use doctrine. It’s a good idea, and it would probably work. But (and the authors acknowledge this) even modest attempts at reform have failed to garner the necessary support in Congress to move forward. A more ambitious proposal, like this one, seems unlikely, at least in the near term.
Overall, The End of Ownership is a deeply concerning exposition of how we’re losing valuable rights. The questions it raises about whether and how we can preserve the benefits of ownership in the digital age will likely continue to be relevant even as technology, and the law, evolve. Most critically, it asks us to rethink who we want making the decisions that shape how we live our lives. While the book tackles complex issues in law and technology, it does so in a way that’s accessible and interesting for lawyers and laypersons alike. The book’s ample real-world examples of everything from disappearing e-book libraries, to tractors, dolls, and medical devices resistant to their owners’ control bring home both the impact of abstract legal doctrines and the urgency of their reform.
To learn more about EFF’s efforts to protect your rights of ownership and autonomy, visit eff.org.
With the global and debilitating WannaCry ransomware attack dominating the news in recent weeks, it’s increasingly necessary to have a serious policy debate about disclosure and patching of vulnerabilities in hardware and software.
Although WannaCry takes advantage of a complex and collective failure in protecting key computer systems, it’s relevant to ask what the government’s role should be when it learns about new vulnerabilities. At EFF, we’ve been pushing for more transparency around the decisions the government makes to retain vulnerabilities and exploit them for “offensive purposes.”
Now, some members of Congress are taking steps towards addressing these decisions with the proposal of the Protecting Our Ability to Counter Hacking—or PATCH—Act (S.1157). The bill, introduced last week by Sens. Ron Johnson, Cory Gardner, and Brian Schatz and Reps. Blake Farenthold and Ted Lieu, is aimed at strengthening the government’s existing process for deciding whether to disclose previously unknown technological vulnerabilities it finds and uses, called the “Vulnerabilities Equities Process” (VEP).
The PATCH Act seeks to do that by establishing a board of government representatives from the intelligence community as well as more defensive-minded agencies like the Departments of Homeland Security and Commerce. The bill tasks the board with creating a new process to review and, in some cases, disclose vulnerabilities the government learns about.
The PATCH Act is a good first step in shedding some light on the VEP, but, as currently written, it has some shortcomings that would make it ineffective in stopping the kind of security failures that ultimately lead to events like the WannaCry ransomware attack. If lawmakers really want to deal with the dangers of the government holding on to vulnerabilities, the VEP must apply to classified vulnerabilities that have been leaked.
The VEP was established in 2010 by the Obama administration and was intended to require government agencies to collectively weigh the costs and benefits of disclosing these vulnerabilities to outside parties like software vendors instead of holding onto them to use for spying and law enforcement purposes.
Unfortunately, after EFF fought a long FOIA battle to obtain a copy of the written VEP policy document, we’ve learned that it went largely unused. In the meantime, agencies like the NSA and CIA suffered major thefts of their often incredibly powerful tools. In particular, the 2016 Shadow Brokers leak enabled outsiders to later develop the WannaCry ransomware using an NSA tool that the agency likened to “fishing with dynamite.”
Lawmakers should be commended for trying to codify and expand the existing process to ensure that the government is adequately considering these risks, and the PATCH Act is a welcome first step.
But there are two areas in particular where it needs to go further.
First, as described above, the current bill seems to overlook situations where the government loses control of vulnerabilities that it has decided to retain. As we’ve seen with the Shadow Brokers leaks, this is a very real possibility, one which even kept the NSA up at night, according to the Washington Post. Yet the PATCH Act specifically states that a classified vulnerability will not be considered “publicly known” if it has been “inappropriately released to the public.” That means that a stolen NSA tool can be circulating widely among third parties without triggering any sort of mandatory reconsideration of disclosure to a vendor to issue a patch. While it might be argued that other provisions of the bill implicitly account for this scenario, we’d like to see it addressed explicitly.
In addition to overlooking situations like the WannaCry ransomware attack, the bill excludes cases where the government never actually acquires information about a vulnerability and instead contracts with a third party for a “black box exploit.”
For example, in the San Bernardino case, the FBI reportedly paid a contractor a large sum of money to unlock an iPhone without ever learning details of how the exploit worked. Right now, the government apparently believes it can contract around the VEP in this way. This raises concerns about the government’s ability to adequately assess the risks of using these vulnerabilities, which is why a report written by former members of the National Security Council recommended prohibiting non-disclosure agreements with third parties entirely. At the very least, we’d like to see the bill bring more transparency to the use of vulnerabilities even when the government itself doesn’t acquire knowledge of the vulnerability.
We hope to see the bill’s authors address these concerns as it moves forward to ensure that all of the vulnerabilities known to the government are reviewed and, where appropriate, disclosed.
Could the Trans-Pacific Partnership (TPP) be coming back from the dead? It is at least a possibility, following the release of a carefully-worded statement last Sunday from an APEC Ministerial meeting in Vietnam. The statement records the agreement of the eleven remaining partners of the TPP, aside from the United States which withdrew in January, to "launch a process to assess options to bring the comprehensive, high quality Agreement into force." This assessment is to be completed by November this year, when a further APEC meeting in Vietnam is to be held.
We do know, however, that not all of the eleven countries are unified in their view about how the agreement could be brought into force. In particular, countries like Malaysia and Vietnam would like to see revisions to the treaty before they could accept a deal without the United States. This is hardly an unreasonable position, since it was the United States that pushed those countries to accept provisions such as an unreasonably long life plus 70 year copyright term, which is to no other country's benefit.
Other TPP countries, such as Japan and New Zealand, are keen to bring the deal into force without any renegotiation, which could add years of further delay to the treaty's completion. Japan also likely fears losing some of the controversial rules that it had pushed for, such as the ban on software source code audits. The country's Trade Minister, Hiroshige Seko, has been quoted as saying, "No agreement other than TPP goes so far into digital trade, intellectual property and improving customs procedures."
For now, that remains true; many of the TPP's digital rules are indeed extreme and untested. But for how much longer? Industry lobbyists are pushing for the same digital trade rules to be included in Asia's Regional Comprehensive Economic Partnership (RCEP) and in a renegotiated version of the North American Free Trade Agreement (NAFTA). Since RCEP and NAFTA together cover most of the same countries as the TPP, there will be little other rationale for the TPP to exist if lobbyists succeed in replicating its rules in those other deals.
Free Trade Rules that Benefit Users
It's worth stressing that EFF is not against free trade. If trade agreements could be used to serve users rather than to make their lives more difficult, EFF could accept or even actively support certain trade rules. For example, last week the Re:Create Coalition, of which EFF is a member, issued a statement explaining how the inclusion of fair use in trade agreements would make them more balanced than they are now. The complete statement, issued by Re:Create's Executive Director Joshua Lamel, says:
If NAFTA is renegotiated and if it includes a chapter on copyright, that chapter must have mandatory language on copyright limitations and exceptions, including fair use. The United States cannot export one-sided enforcement provisions of copyright law without their equally important partner under U.S. law: fair use.
The U.S. should also take further steps to open up and demystify its trade policy-making processes, not only to Congress but also to the public at large, by publishing text proposals and consolidated drafts throughout the negotiation of trade agreements.
The last paragraph of this statement is key: we can't trust that trade agreements will reflect users' interests unless users have a voice in their development. Whether the TPP comes back into force or not, the insistence of trade negotiators on a model of secretive, back-room policymaking will lead to the same flawed rules popping up in other agreements, to the benefit of large corporations and the detriment of ordinary users.
At this point we have no faith that the TPP would be reopened for negotiation in a way that is inclusive, transparent and balanced, and we maintain our outright opposition to the deal. RCEP is being negotiated in an equally closed process, though we are continuing to lobby negotiators about our concerns with that agreement's IP and Electronic Commerce chapters. As for NAFTA, we are urging the USTR to heed our recommendations for reform of the office's practices before negotiations commence.
The death of the TPP didn't mark the end of EFF's work on trade negotiations and digital rights, and its reanimation won't change our course either. No matter where the future of digital trade rules lies, our approach remains the same: advocating for users' rights, and fighting for the reform of closed and captured processes. Until our concerns are heard and addressed, trade negotiators can be assured that regulating users' digital lives through trade agreements isn't going to get any easier.
One of the primary justifications we hear for why patents are social goods is that they encourage innovation. Specifically, the argument goes, patents incentivize companies and individuals to invest in costly research and development that they would not otherwise invest in because they know they will be able to later charge supracompetitive prices and recoup the costs of that development.
Those who want "stronger" patents (i.e. patents that are easier to get and/or harder to invalidate) often use this rationale to justify changing patent laws to make patents more enforceable. For example, a former Judge on the Court of Appeals for the Federal Circuit recently suggested that "America is in danger because we have strangled our innovation system" by making it easier to challenge patents and show they never should have been granted. As another example, the Chief Patent Counsel at IBM argued that "The U.S. leads the software industry, but reductions in U.S. innovation prompted by uncertain patent eligibility criteria threaten our leadership" because "Patents promote innovation."
These arguments all presume that "stronger" patents mean more research and development dollars and thus more innovation. They also presume that if the U.S. doesn't provide "stronger" patents, innovation will go elsewhere.
But reality is much more complex. As one recent paper put it: "there is little evidence that stronger patent laws result in increases in [research and development] investments," at least if the yardstick is patent filings. Indeed, "we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights – either longer patent terms or broader patent rights – encourage research investments into developing new technologies."
There are good reasons to think "stronger" patents do not actually spur innovation. Patents are a double-edged sword. Although they may provide some incentive to innovate (even that premise is unclear), they also create barriers to more innovation. Patents work to prevent the development of follow-on innovation until that patent expires, delaying innovation that would have occurred, but is prevented by the grant of an artificial, government-backed monopoly.
The problem of patents impeding future innovation is exacerbated in software, where the life cycle is relatively short and innovation tends to move quickly. When a patent lasts for 20 years, software patents—especially broad and abstract software patents—have the potential to significantly delay the introduction of new innovations to the market.
Despite no "credible empirical evidence" that recent changes to patent law, including the limits on patentable subject matter reaffirmed by the U.S. Supreme Court in Alice, have done any harm to the innovation economy or innovation generally, some patent owners have been lobbying Congress to legislate the case law away. But doing so would allow patents on abstract ideas, and risks exacerbating the deadweight loss caused by too much patenting. The proposals are not minor changes. For example, if enacted they would mean that anything is patentable so long as it doesn't "exist solely in the human mind," i.e., "do it on a computer." Absent any evidence that this would mean more innovation, the recent reform proposals seem like little more than a bid by lawyers to create work for themselves.
Those rushing to ratchet up patent rights are doing so with little to no empirical basis that any such change is necessary, and it may actually end up harming the innovation economy. Congress should think twice before changing patent law so as to make patents even "stronger."
A court ruling today allowing Wikimedia’s claims challenging the constitutionality of NSA’s Upstream surveillance to go forward is good news. It shows that the court—the U.S. Court of Appeals for the Fourth Circuit—is willing to take seriously the impact mass surveillance of the Internet backbone has on ordinary people. Wikimedia's First and Fourth Amendment challenges will move on to the next phase in the case, Wikimedia Foundation v. NSA.
The news isn't all good: we disagree with the court's decision disallowing Wikimedia's other dragnet collection claims from going forward, and think the dissent got it right. In Jewel v. NSA, EFF's landmark lawsuit challenging NSA surveillance, the Ninth Circuit Court of Appeals has already ruled that our claims pass initial review. The trial court presiding over the case just last week required the government to comply with our request to provide information about the scope of the mass surveillance. Jewel v. NSA includes specific evidence of a backbone tapping location on Folsom Street in San Francisco presented by former AT&T employee Mark Klein. This level of detail and description is enough for our claims to move forward even with the Fourth Circuit’s ruling.
EFF has identified and addressed the delivery problem, and we extend our deep apologies for the delays to digital activists who use our tools.
We recently became aware that there were significant delays in delivering some of the messages sent to Congress via two of EFF’s open-source messaging tools, Democracy.io and the EFF Action Center. While we have now addressed the problem, we wanted to be transparent with the community about what happened and the steps we’ve taken to fix it.
The EFF Action Center is a tool people can use to speak out in defense of digital liberty using text prompts from EFF, including letters to Congress that users can edit and customize. Democracy.io is a free tool that we built for the world based on the same technical backend as our Action Center. It lets users send messages to their members of Congress on any topic, with as few clicks as possible. The errors we experienced only impacted letters (not petitions, tweet campaigns, or call campaigns) for a number of Representatives and a handful of Senators. We sincerely apologize to everyone who was affected by this delay.
The issue sprang from the way in which our tools handled CAPTCHAs, a type of service that website owners use to verify that a given user is a human and not a bot. Our tools work by filling out contact forms on individual congressional websites on behalf of users. When our tool bumps into a CAPTCHA, it takes a snapshot, returns it to the user, and lets the user give the correct answer to finish filling out the form. Since all of our messages to Congress are submitted by real people, this worked fine for traditional CAPTCHAs. However, a percentage of Congress members had begun using a more complicated type of CAPTCHA known as reCAPTCHA, which was beyond the technical abilities of our system.
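The relay pattern described above can be sketched in a few lines. This is a simplified illustration only, not EFF's actual code: every function and field name here is hypothetical, and a real implementation would fetch and submit actual congressional web forms.

```python
# Hypothetical sketch of the CAPTCHA relay flow: fetch the form, and if a
# traditional image CAPTCHA is present, pause and ask the human user to solve
# it before submitting. (All names are illustrative, not EFF's real code.)

def submit_congressional_form(form_fields, fetch_page, ask_user):
    """Fill a contact form, relaying any CAPTCHA image back to the user."""
    page = fetch_page()
    if page.get("captcha_image") is not None:
        # Snapshot the CAPTCHA and wait for the human's answer.
        answer = ask_user(page["captcha_image"])
        form_fields = {**form_fields, "captcha": answer}
    return {"submitted": True, "fields": form_fields}

# Example: a page with a traditional image CAPTCHA.
page = {"captcha_image": b"...png bytes..."}
result = submit_congressional_form(
    {"name": "Jane Doe"},
    fetch_page=lambda: page,
    ask_user=lambda img: "XK7Q2",  # the human reads the image and answers
)
print(result["fields"]["captcha"])  # XK7Q2
```

The key point is that a real person supplies every answer, which is why this approach worked for traditional CAPTCHAs but broke down against reCAPTCHA's interactive challenges.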
Around the same time, we had made some fundamental changes to our error-logging system. As a result, the engineers who staff and maintain Democracy.io stopped receiving notifications of delivery errors, so we unfortunately missed the fact that a portion of messages were failing.
Some messages are undeliverable due to user data errors, legislators leaving office, or other irresolvable issues. However, we have now successfully re-sent nearly all the deliverable messages that had been delayed in our system. A very small percentage of messages are still pending, but we will be delivering them over the next few weeks.
In addition to delivering the delayed messages, we’ve made some key infrastructure changes to help prevent problems like this from arising in the future and to mitigate the impact of any issues that do arise. First, we integrated an experimental API delivery for the House of Representatives called Communicating with Congress. This implementation has resolved the reCAPTCHA problems we were facing in the House of Representatives. In addition, when someone tries to send a message to one of the few Senators whose forms we cannot complete, we’ll notify the user in real time and provide a link to the Senator’s website so the user can send a message directly. Finally, we’ve improved our error logging process so that if another significant delay happens in the future, we’ll know about it right away.
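The error-logging improvement amounts to alerting when the delivery failure rate becomes abnormal, rather than relying on someone noticing missing notifications. A minimal sketch of that idea, with a hypothetical class and threshold that are not EFF's actual implementation:

```python
# Hypothetical sketch of delivery-error monitoring: count attempts and
# failures, and raise an alert once the failure rate exceeds a threshold.

class DeliveryMonitor:
    def __init__(self, alert_threshold=0.05):
        self.sent = 0
        self.failed = 0
        self.alert_threshold = alert_threshold

    def record(self, success):
        # Record one delivery attempt and whether it succeeded.
        self.sent += 1
        if not success:
            self.failed += 1

    def should_alert(self):
        # Alert as soon as failures exceed the configured share of attempts.
        return self.sent > 0 and self.failed / self.sent > self.alert_threshold

monitor = DeliveryMonitor(alert_threshold=0.05)
for ok in [True] * 90 + [False] * 10:  # simulate 10% of messages failing
    monitor.record(ok)
print(monitor.should_alert())  # True
```

With a check like this wired to a notification channel, a systemic problem such as the reCAPTCHA failures would surface right away instead of silently accumulating.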
It’s unfortunate and frustrating that many members of Congress have placed digital hurdles on constituent communications. In a more perfect democracy, we think it would be easy for constituents to simply send an email to their members of Congress and be assured that the message was received and counted. Instead, each member of Congress adopts their own form, many of them requiring users to provide information like titles, exact street address, topic areas, etc. Users who want to email their Congress members may have to hunt down and complete forms on three different websites, and they may inadvertently end up on the wrong site.
We believe that the voices of technology users should echo loudly in the halls of Congress and that timely and personal communication from constituents is vital to holding our elected officials to account. That’s why we built these tools for both the EFF community and wider world. We’re committed to continuing to improve the process of communicating with Congress, both for EFF friends speaking out in defense of digital rights and for the general public. We hope one day Congress will make it easier for constituents to reach them. Until then, we’ll do our best to help tech users find a powerful voice. We are sorry that in this instance we fell short of our goal.
Laura Poitras—the Academy and Pulitzer Prize Award-winning documentary filmmaker and journalist behind CITIZENFOUR and Risk—wants to know why she was stopped and detained at the U.S. border every time she entered the country between July 2006 and June 2012. EFF is representing Poitras in a Freedom of Information Act (FOIA) lawsuit aimed at answering this question. Since we filed the complaint in July 2015, the government has turned over hundreds of pages of highly redacted records, but it has failed to provide us with the particular justification for each withholding—as it is required to do. In March, in a win for transparency, a federal judge called foul and ordered the government to explain with particularity its rationale for withholding each document.
Poitras travels frequently for her work on documentary films. Between July 2006 and June 2012, she was routinely subject to heightened security screenings at airports around the world and stopped and detained at the U.S. border every time she entered the country—despite the fact that she is a law-abiding U.S. citizen. She’s had her laptop, camera, mobile phone, and reporter notebooks seized, and their contents copied. She was also once threatened with handcuffs for taking notes. (The border agents said her pen could be used as a weapon.) No charges were ever brought against her, and she was never given any explanation for why she was continually subjected to such treatment.
In 2014, Poitras sent FOIA requests to multiple federal agencies for any and all records naming or relating to her, including case files, surveillance records, and counterterrorism documents. But the agencies either said they had no records or simply didn’t respond. The FBI, after not responding to Poitras’ request for a year, said in May 2015 that it had located a mere six pages of relevant material but that it was withholding all six because of grand jury secrecy rules.
With EFF’s help, Poitras ultimately filed a lawsuit against the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence. In the months following the filing of the lawsuit, the government discovered and released over 1,000 pages of responsive records, some of which were on display at the Whitney Museum in New York last year as part of Poitras’ Astro Noise exhibit. But most of these records are highly redacted, so while Poitras now has some information about why she was stopped, the details remain unclear. And the government failed to provide a clear rationale for why withholding the redacted information was justified.
Court to Government: “Try Again”
We argued in a motion for summary judgment filed last fall that the government had failed to meet its burden of justifying its continued withholding of information. In an order issued last month, the Honorable Ketanji Brown Jackson agreed with us. As the court explained, the government “describes in great detail the government’s general reasons for withholding entire categories of information, but does not connect these generalized justifications to the particular documents that are being withheld in this case in any discernable fashion.” She noted that instead of providing a complete list of “document-specific justifications,” the government provided a list with “only some of the records that the agency has withheld” and even then failed to “explain the reasons that the particular exemption is being asserted with respect to any document[.]”
The court didn’t grant our motion for summary judgment, but it did order the government to go back and try again—i.e., provide both us and the court with a list describing each document redacted or withheld, noting the FOIA exemption(s) that the government thinks apply to the document, and explaining the “particularized reasons that the government believes that the asserted exemption applies to the particular document at issue.”
It’s clear the judge isn’t planning to just rubber stamp the government’s assertions in this case. Forcing the government to justify its vast withholding of documents in this case is a win for transparency. We will post updates on the case as it proceeds and as we continue our fight to shed more light on the government’s unjust and potentially chilling treatment of a journalist.