Next week, several EFF staffers will be speaking at the first-ever Silicon Valley Human Rights Conference (Rightscon) in San Francisco. The conference, organized by Access Now and sponsored by several foundations and companies, brings together some of the leading thinkers in the digital human rights space, as well as representatives of technology companies from Silicon Valley and beyond for discussions on the human rights implications of the ICT industry. The conference (tickets are still available here!) is jam-packed with excellent speakers and participants, and promises to provide new insights into solutions for the myriad problems facing Silicon Valley companies today.
Corporate Social Responsibility and the ICT Industry
Rightscon couldn't have come at a better time. Though EFF has had concerns about the tech industry's impact on and responsibility toward the human rights agenda for some time, the events of 2011--from Wikileaks to the Arab Spring--have brought the issue into the mainstream, spurring some companies to work harder to ensure human rights and causing others to panic.
In the past year, we've watched private companies like Amazon and PayPal take the law into their own hands, censoring Wikileaks, while halfway across the world Vodafone capitulated to the Egyptian government's demands, shutting down mobile service amidst a massive uprising. Add to that the spate of news that Silicon Valley companies are engaged in government surveillance and censorship in the Middle East, and this discussion becomes even more timely.
Of course, it's not all bad news. There are now numerous organizations thinking about these issues; for example, EFF is a member of the Global Network Initiative (GNI), a multi-stakeholder group focused on corporate social responsibility vis-a-vis privacy and free expression. GNI's Executive Director, Susan Morgan, will be a panelist on Workshop 2, which delves into implementation of the UN Guiding Principles on Human Rights and Business in the technology sector.
Twitter, whose General Counsel Alex MacGillivray will be among the keynote speakers at Rightscon, is another company that has taken human rights into consideration when designing its policies, particularly when it comes to free expression. Another rights-conscious company is Mozilla, which EFF has praised for its stance on privacy.
On the lists of attendees and sponsors, EFF also sees several companies about which we have grave concerns. A prime example is AT&T, which famously acted in tandem with the NSA to illegally spy on American citizens. Also amongst the participating companies is Comcast, against which the FCC issued an order (crediting EFF research) in 2008 to stop blocking peer-to-peer traffic. Skype is also on our list of companies of concern due to its surveillance capabilities, and it is one of several companies in attendance that have been ranked in EFF's Who Has Your Back? campaign (so far, the company has zero stars).
Notably absent from the list are the myriad Silicon Valley companies that provide censorship and surveillance capabilities to authoritarian regimes, among them Boeing's Narus, Cisco (sign our petition here), McAfee/Intel's SmartFilter, and H-P.
EFFers at Rightscon
EFF will be well-represented at the conference, with several staffers in attendance and three of us on panels. Here's a quick overview of what we'll be up to:
Of course, we plan to listen more than speak, and in that vein, we have a few recommendations for Rightscon participants hoping to get the most out of the experience. Here are just a few of the speakers and panels we're excited about:
Workshop 1, which is framed around the Arab revolutions, looks at the role of tech companies in enabling, supporting, or limiting free speech and civil society. We're particularly excited to see the Tunisian Internet Agency, which has our support in its fight to keep the Tunisian Internet free and open, represented on the panel alongside individuals from Google, the Economist, and AnchorFree.
Workshop 12 brings together some of the leading experts on visual content for a discussion about the role visual media platforms must play in keeping users safe. The panel, which features Sam Gregory of WITNESS; Thor Halvorssen of the Oslo Freedom Forum; Hans Eriksson of livestreaming site Bambuser; Sameer Padania of Macroscope; and Steve Grove of YouTube, will tackle anonymity, privacy, and retribution, as well as terms of service and other regulations of major video and photo-sharing platforms.
Workshop 13 touches upon an issue that has been at the forefront of EFF's focus this year: social media usage in times of crisis. This year saw activists use social networks to organize and disseminate information, and government entities shut down--or, in the case of the UK, around which the panel is framed, consider shutting down--networks. The panel features speakers from Amnesty International, Facebook, the Institute for Human Rights and Business, and the New America Foundation's Internet in a Suitcase project.
Day 2's Roundtable on The Politics of Internet Freedom is hosted by the New York Times' John Markoff and features a diverse range of panelists, including Victoria Grand of YouTube; Aaron Swartz of Demand Progress; and award-winning Lebanese human rights activist and co-founder of CyberACT, Imad Bazzi.
Be sure to talk to an EFFer at Rightscon--we're nearly always packing stickers! We look forward to seeing you there.
If you were inspired to support digital civil liberties this afternoon, you may have noticed that EFF's donation pages look different. The information you enter will now wind its way to an EFF-hosted server and populate a local installation of the first-class, open source database management product for nonprofits, CiviCRM. EFF is proud to join a growing cadre of activist organizations using CiviCRM and will continue contributing to its ongoing success.
We have a new partner for advocacy campaigns as well: SalsaLabs. When you take action as part of an advocacy campaign, SalsaLabs makes sure your letter reaches the right representative, and we make sure your data gets transferred to CiviCRM and deleted from SalsaLabs within 48 hours.
Nonprofits Deserve Better Products
The CiviCRM project began in 2006 with the lofty goal of becoming a viable alternative to proprietary database services for nonprofits. We applaud the founders’ foresight, as the choices available to nonprofits now are expensive, privacy invasive, and insecure.
This issue became personal for EFF in September of 2009. EFF, along with eight fellow nonprofit organizations, wrote and distributed to Congress a legislative primer (pdf) entitled, "Online Behavioral Tracking and Targeting Concerns and Solutions," which contained nine principles designed to apply Fair Information Practices to the collection of electronic data about individuals, including collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability.
A few days after publishing our legislative primer, EFF was contacted by someone at our membership database service, Convio, who challenged whether a nonprofit organization -- even one that espoused fair information principles -- could actually implement those principles in practice. Convio is full of "features" -- some that can be disabled, but many that cannot -- that are designed to track the behaviors of individuals who visit a nonprofit organization's system. Convio is not unique in this regard; virtually all of the membership databases currently being marketed to nonprofits are stuffed with "features" that enable those nonprofits to target their members based on online behaviors. While EFF had gone to extraordinary measures to turn off or disable many of these features, we believed our members deserved better.
Designed By and For Nonprofits
The CiviCRM community has done an excellent job creating a product that even non-technical organizations can feel comfortable working with. We have been pleasantly surprised to discover features built into CiviCRM that very much suit our preferences: reducing the time it takes to acknowledge our donors, or putting a stop sign image next to an email address that has opted out of a mailing list, for example. The CiviCRM community has given nonprofits an attractive alternative for managing and protecting the information donors entrust us to store responsibly.
The best part about using an open source solution that we host ourselves is that we can modify and extend it to fit our needs. Now that we're using CiviCRM, we have total control over the security of our data and our constituents' privacy. Instead of storing passwords in plaintext, we no longer store passwords at all. Now that we've switched to CiviCRM, we can safely force HTTPS on all parts of our membership center.
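Forcing HTTPS simply means redirecting any plain-HTTP request to its encrypted equivalent before any sensitive data is exchanged. A minimal sketch of that rewrite logic (the function name and URLs are ours for illustration, not part of CiviCRM):

```python
def force_https(url):
    """Rewrite an http:// URL to its https:// equivalent.

    In practice, the web server issues this redirect (typically
    with a 301 status and a Location header) before any
    application code runs, so credentials and donor data are
    never sent in the clear.
    """
    prefix = "http://"
    if url.startswith(prefix):
        return "https://" + url[len(prefix):]
    return url
```

A server applying this rule would answer `http://example.org/donate` with a redirect to `https://example.org/donate`, and leave already-secure URLs untouched.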
Equally important, using CiviCRM dramatically reduces the cost of supporting a robust membership program, so more dollars can go directly to the programs that actually support your digital rights. In the coming months, EFF's technologists and fundraising team will be speaking to a broader audience of nonprofits about our experiences leaving a proprietary platform, in the hopes that more are inspired to follow; first up is the Bay Area Drupal Camp Conference at UC Berkeley on Friday. And we are eager participants in the CiviCRM project: EFF is hosting a Code Sprint at our new headquarters Tuesday through Thursday this week, where developers from across the country will be adding and optimizing a number of new features.
NextGov.com is reporting that the FBI will begin rolling out its Next Generation Identification (NGI) facial recognition service as early as this January. Once NGI is fully deployed, and once each of its approximately 100 million records also includes photographs, it will become trivially easy to find and track Americans.
As we detailed in an earlier post, NGI expands the FBI’s IAFIS criminal and civil fingerprint database to include multimodal biometric identifiers such as iris scans, palm prints, photos, and voice data. The Bureau is planning to introduce each of these capabilities in phases (pdf, p.4) over the next two and a half years, starting with facial recognition in four states—Michigan, Washington, Florida, and North Carolina—this winter.
Why Should We Be Worried?
Despite the FBI’s claims to the contrary, NGI will result in a massive expansion of government data collection for both criminal and noncriminal purposes. IAFIS is already the largest biometric database in the world—it includes 70 million subjects in the criminal master file and more than 31 million civil fingerprints. Even if there are duplicate entries or some overlap between civil and criminal records, the combined number of records covers close to 1/3 the population of the United States. When NGI allows photographs and other biometric identifiers to be linked to each of those records, all easily searchable through sophisticated search tools, it will have an unprecedented impact on Americans' privacy interests.
Although IAFIS currently includes some photos, they have so far been limited specifically to mug shots linked to individual criminal records. However, according to a 2008 Privacy Impact Assessment for NGI’s Interstate Photo System, NGI will allow unlimited submission of photos and types of photos. Photos won’t be limited to frontal mug shots but may be taken from other angles and may include close-ups of scars, marks and tattoos. NGI will allow all levels of law enforcement, correctional facilities, and criminal justice agencies at the local, state, federal and even international level to submit and access photos, and will allow them to submit photos in bulk. Once the photos are in the database, they can be found easily using facial recognition and text-based searches for distinguishing characteristics.
The new NGI database will also allow law enforcement to submit public and private security camera photos that may or may not be linked to a specific person’s record. This means that anyone could end up in the database—even if they’re not involved in a crime— by just happening to be in the wrong place at the wrong time or by, for example, engaging in political protest activities in areas like Lower Manhattan that are rife with security cameras.
The biggest change in NGI will be the addition of non-criminal photos. If you apply for any type of job that requires fingerprinting or a background check, your potential employer could require you to submit a photo to the FBI. And, as the 2008 PIA notes, “expanding the photo capability within the NGI [Interstate Photo System] will also expand the searchable photos that are currently maintained in the repository.” Although noncriminal information is ostensibly kept separate from criminal, all the data will be in the NGI system, and presumably it would not be difficult to search all the data at once. The FBI does not say whether there is any way to ever have your photo removed from the database.
Technological Advancements Support Even Greater Tracking Capabilities
According to an FBI presentation on facial recognition and identification initiatives (pdf, pp. 4-5) at a biometrics conference last year, one of the FBI's goals for NGI is to be able to track people as they move from one location to another. Recent advancements in camera and surveillance technology will support this goal. For example, in a National Institute of Justice presentation (pdf, p.17) at the same 2010 biometrics conference, the agency discussed a new 3D binocular and camera that allows real-time facial acquisition and recognition at 1000 meters. The tool wirelessly transmits images to a server, which searches them against a photo database and identifies the photo's subject. As of 2010, these binoculars were already in field-testing with the Los Angeles Sheriff's Department. Presumably, the backend technology for these binoculars could be incorporated into other tools like body-mounted video cameras or the MORIS (Mobile Offender Recognition and Information System) iPhone add-on that some police officers are already using.
Private security cameras and the cameras already in use by police departments have also advanced. They are more capable of capturing the details and facial features necessary to support facial recognition-based searches, and the software supporting them allows photo manipulation that can improve the chances of matching a photo to a person already in the database. For example, Gigapixel technology, which creates a panorama photo of lots of megapixel images stitched together (like those taken by security cameras), allows anyone viewing the photo to drill down to see and tag faces from even the largest crowd photos. And image enhancement software, already in use by some local law enforcement, can adjust photos "taken in the wild" (pdf, p.10) so they work better with facial recognition searches.
Cameras are also being incorporated into more and more devices that are capable of tracking Americans and can provide that data to law enforcement. For example, one of the largest manufacturers of highway toll collection systems recently filed a patent application to incorporate cameras into the transponder that sits on the dashboard in your car. This manufacturer's transponders are already in 22 million cars, and law enforcement already uses this data to track subjects. While a patent application does not mean the company is currently manufacturing or trying to sell the devices, it certainly shows they're interested.
Data Sharing and Publicly-Available Information Will Supplement the FBI's Database
Data sharing between the FBI and other government agencies and the repurposing of photographs taken for noncriminal activities will further support the FBI's ability to track people as they move from one location to another. At least 31 states have already started using some form of facial recognition with their DMV photos, generally to stop fraud and identity theft, and the Bureau has already worked with North Carolina, one of the four states in the NGI pilot program, to track criminals using the state's DMV records. The Department of Justice came under fire earlier this year for populating the NGI database with non-criminal data from the Department of Homeland Security through the Secure Communities program and could be considering doing the same with facial-recognition ready DMV photos. Even if the FBI does not incorporate DMV photos en masse directly into NGI, the fact that most states allow law enforcement access to these records, combined with the new expansion of the FBI's own photo database, may make this point moot.
Commercial sites like Facebook that collect data and include facial recognition capabilities could also become a honeypot for the government. The FBI’s 2008 Privacy Impact Assessment stated that the NGI/IAFIS photo database does not collect information from “commercial data aggregators,” however, the PIA acknowledges this information could be collected and added to the database by other NGI users like state and local law enforcement agencies. Further, the FBI's 2010 facial recognition presentation (pdf, p.5) notes another goal of NGI is to “Identify[ ] subjects in public datasets.” If Facebook falls into the FBI’s category of a public dataset, it may have almost as much revealing information as a commercial data aggregator.
The Problem of False Positives in Large Data Sets
As the FBI's facial recognition database gets larger and as more agencies at every level of government rely on facial recognition to identify people, false positives—someone being misidentified as the perpetrator of a crime—will become a big problem. As this 2009 report (pdf) by Helen Nissenbaum and Lucas Introna notes, facial recognition
performs rather poorly in more complex attempts to identify individuals who do not voluntarily self-identify . . . Specifically, the “face in the crowd” scenario, in which a face is picked out from a crowd in an uncontrolled environment, is unlikely to become an operational reality for the foreseeable future.
(p. 3). The researchers go on to note that this is not necessarily because the technology is not good enough but because "there is not enough information (or variation) in faces to discriminate over large populations." (p.47) In layman's terms, this means that because so many people in the world look alike, the probability that any facial recognition system will regularly misidentify people becomes much higher as the data set (the population of people you are checking against) gets larger. German Federal Data Protection Commissioner Peter Schaar has noted that false positives in facial recognition systems pose a large problem for democratic societies: "[I]n the event of a genuine hunt, [they] render innocent people suspects for a time, create a need for justification on their part and make further checks by the authorities unavoidable." (p.37)
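The scaling effect the researchers describe is easy to see with a little probability. If each one-to-one comparison has some small false match rate, the chance of at least one false hit in a search grows with the size of the gallery. A quick sketch (the rate of one in a million is a hypothetical figure for illustration, and treating comparisons as independent is a simplifying assumption):

```python
def prob_false_match(fmr, n):
    """Probability of at least one false match when searching a
    gallery of n records, assuming independent comparisons with
    per-comparison false match rate fmr."""
    return 1 - (1 - fmr) ** n

# Even a very accurate matcher misfires routinely at scale:
for gallery_size in (1_000, 1_000_000, 100_000_000):
    print(gallery_size, prob_false_match(1e-6, gallery_size))
```

With a hundred-million-record database on the scale of NGI, a false hit in any given search becomes a near certainty under these assumptions, which is exactly the "innocent people rendered suspects" problem Schaar describes.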
It appears it will take a few years for the FBI to bring NGI up to its full potential. In the meantime, we will continue to monitor this troubling trend.
Proponents of pseudonymity scored a major victory today, when Google executive Vic Gundotra revealed at the Web 2.0 Summit that social networking service Google+ will begin supporting pseudonyms and other types of identity.
The news comes after several months of what has been dubbed Nymwars, in which opposing parties have debated--often heatedly--the merits of the Google+ policy requiring users to identify using their "common name." While EFF recognizes the rights of companies to determine their own policies, we have repeatedly taken the side of users who have argued that the use of a pseudonym grants them greater freedom in expressing themselves online.
According to Mashable, Google+ will be "adding features that will 'support other forms of identity' in the next few months." Mike Swift, who was present at the event, also tweeted: "Google+ will soon support pseudonyms, moving away from strict real name ID policy, says +Vic Gundotra at #w2s." Bradley Horowitz, VP for Google's social products, which includes Google+, has also alluded to the possibility that Google will drop its troublesome "common name" policy and offer support for pseudonyms. Though it is not yet clear what those features will look like, we are cautiously optimistic that Google+ will do the right thing to ensure that all of its users feel free to express themselves on the site.
EFF has grave concerns about the health of Egyptian blogger Maikel Nabil Sanad, who has now been on hunger strike for 57 days. Sanad's retrial was scheduled for October 13, but was postponed. Sanad, who was sentenced in April by a military court to three years in prison on charges of insulting the military on his blog, has stated that he will boycott any retrial.
We firmly support the statement made by Reporters Without Borders Wednesday, which reads:
“We condemn this persistence in persecuting Sanad and call for his immediate release. This military court should dismiss the charges against him. The repeated postponement of the hearings and the refusal to release him on bail are being used to prolong his detention. The original trial was unfair and violated the principles of justice. After its verdict was rightly quashed, the retrial must not be used to repeat the first trial.”
EFF reiterates our call for Sanad's immediate release.
Human Rights Organizations File Criminal Complaint Against French Company Amesys
In August, the Wall Street Journal reported that surveillance technology produced by French company Amesys (a subsidiary of Bull) was found to have been used by the Libyan government to censor and spy on its citizens. As we've previously stated, Amesys is one of several Western technology companies that exports censorship and surveillance tools to authoritarian regimes. In this case, Amesys explicitly entered into an agreement with the Libyan government to make available technology for the express purpose of intercepting communication.
On Wednesday, the International Federation for Human Rights (FIDH), along with its French member Ligue des Droits de l'Homme et du Citoyen (LDH), filed a criminal complaint, as well as an application to join the proceedings as a civil party against persons unknown before the Court of Paris concerning the responsibility of Amesys in relation to acts of torture perpetrated in Libya.
EFF applauds FIDH and LDH. We hope that a positive outcome of this suit will have a broad global impact on the issue of exporting surveillance and censorship technologies to authoritarian regimes.
Thai Government Acknowledges Lèse Majesté Law Misused
Amidst two ongoing high-profile trials, the Thai government admitted that its strict lèse majesté laws against insulting the royal family may have been "misused," reports the AFP. The admission came in response to a call from Frank La Rue, United Nations special rapporteur on freedom of expression, to reform the laws.
Though the Thai Foreign Ministry stated that the laws are not meant to stifle free expression, it added that "there have been cases where the law has been enforced in such a way that may not be in line with its purpose of protecting the dignity of the monarchy and may in some cases inadvertently affect people's freedom of expression."
On Monday, US citizen Joe Wichai Commart Gordon pleaded guilty to charges of insulting the monarchy. Gordon's alleged crime? Translating a banned, unauthorized biography of the Thai monarch and publishing it online while living in the US. Meanwhile, the trial of Prachatai editor Jiew, which resumed in September, has been suspended until February.
Violations of Thailand's lèse majesté laws can result in prison sentences of up to fifteen years. EFF commends Rapporteur La Rue for his efforts and echoes his call on the Thai government to reform its laws to ensure the rights of Thai citizens to free expression.
Sri Lanka Blocks Anti-Government News Site
News of blocked sites has emerged from Sri Lanka this week. The Committee to Protect Journalists reports that the anti-government site Lanka eNews has been unavailable since Tuesday. The site has been run from outside the country since 2010, when founder and editor Sandaruwan Senadheera went into exile in England after repeatedly receiving death threats. Earlier this year, the newspaper's Colombo headquarters even suffered an arson attack. Given these incidents, it's no surprise that Sri Lanka has a dismal record when it comes to protecting its journalists and the right to free expression.
EFF condemns the blocking of Lanka eNews and calls upon the government of Sri Lanka to respect the right of free expression and the right to information.
Amazon recently announced that the new Kindle Fire tablet will ship with a brand new browser called Silk. The Silk browser works in “cloud acceleration” mode by routing most webpage requests through servers controlled by Amazon. The idea is to capitalize on Amazon’s powerful AWS cloud servers to parallelize and hence speed up downloading web page elements, and then pass that information back to the tablet through a persistent connection using the SPDY protocol. This protocol is generally faster than the standard HTTP protocol. This split-browser idea, not unique to Amazon, is a departure from the way major browsers work today.
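The split-browser idea can be sketched in a few lines: the cloud side downloads a page's many subresources in parallel, then streams the results back to the device over one persistent connection. This is a toy illustration of the general architecture, not Amazon's actual implementation; `fetch` here is a stand-in for a real HTTP GET performed by a fast cloud server near the origin.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for an HTTP GET. In a real accelerator, cloud
    # servers with fat pipes would retrieve each resource.
    return f"<contents of {url}>"

def accelerated_load(page_resources):
    """Fetch a page's subresources (scripts, stylesheets, images)
    in parallel on the cloud side, returning results in order so
    they can be streamed back to the device over one connection."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, page_resources))
```

The speedup comes from parallelism and from keeping a single warm connection to the device, rather than having the tablet open many slow connections of its own.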
Following the announcement, security experts as well as lawmakers have raised privacy questions and concerns about Silk. After all, while in cloud acceleration mode, the user is trusting Amazon with an incredible amount of information. This is because Amazon is sitting in the middle of most communications between a user's Fire tablet on the one hand, and the website she chooses to visit on the other. This puts Amazon in a position to track a user's browsing habits and possibly sensitive content. As there were a lot of questions that the Silk announcement left unresolved, we decided to follow up with Amazon to learn more about the privacy implications.
Our conversation with Amazon allayed many of our major concerns. Cloud acceleration mode is the default setting, but Amazon has assured us it will be easy to turn off on the first page of the browser settings menu. When turned off, Silk operates as a normal web browser, sending the requests directly to the web sites you are visiting. Regarding cloud acceleration mode, here is what we found out:
Amazon does not intercept encrypted traffic, so your communications over HTTPS would not be accelerated or tracked. According to Jon Jenkins, director of Silk development, “secure web page requests (SSL) are routed directly from the Kindle Fire to the origin server and do not pass through Amazon’s EC2 servers.” In other words, no HTTPS requests will ever use cloud acceleration mode. Given the prevalence of web pages served over HTTPS, this gives Amazon good incentive to make Silk fast and usable even when cloud acceleration is off. Turning it off completely should be a viable option for users.
For the persistent SPDY connection between the device and Amazon’s servers, Amazon assures us that the only pieces of information from the device that are regularly logged are:
URL of the resource being requested
Token identifying a session
This data is logged for 30 days. The token has no identifying information about a device or user and is only used to identify a particular session. Indeed, Jenkins said, “individual identifiers like IP and MAC addresses are not associated with browsing history, and are only collected for technical troubleshooting.” We repeatedly asked if there was any way to associate the logged information with a particular user or Amazon account, and we were told that there was not, and that Amazon is not in a position to track users. No information about the outgoing requests from the AWS servers is logged. With respect to caching, Amazon follows caching headers, which offers some protection against caching sensitive information sent over HTTP.
It is good that Amazon does not receive your encrypted traffic, and does not record any identifying information about your device. And there are other benefits to user privacy that can result from cloud acceleration mode. For one, the persistent SPDY connection between the user’s tablet and Amazon’s servers is always encrypted. Accordingly, if you are using your tablet on an open Wifi network, other users on that network will not be able to spy on your browsing behavior.
Amazon does not act like an anonymizing proxy, because it does not shield your IP address from the websites you visit or strip unnecessary information out of the outgoing request. Indeed, because the XFF header is set for HTTP requests, your IP is still passed through to the websites you visit. Other headers, such as the HTTP referer header, are set as normal. Thus, the website you are visiting using Silk has access to the exact same information that it would if you were using a normal browser.
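The header behavior described above can be sketched as follows. This is a hypothetical illustration of how a non-anonymizing proxy builds the outgoing request (the function name and addresses are ours): the client's IP is appended to X-Forwarded-For, and headers like Referer pass through untouched, so the destination site learns everything it normally would.

```python
def build_forwarded_headers(client_ip, original_headers):
    """Sketch of a non-anonymizing proxy's outgoing headers:
    append the client's IP to X-Forwarded-For and pass all
    other headers (e.g. Referer) through unchanged."""
    headers = dict(original_headers)
    prior = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return headers
```

An anonymizing proxy would do the opposite: strip or replace X-Forwarded-For and drop identifying headers before contacting the site.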
Remaining Privacy Concerns
Though we are happy about some of the ways the browser protects the end user's privacy, a couple of serious privacy concerns remain that are worth pointing out.
First of all, Amazon stores URLs you visit, and these sometimes contain identifying information. To pick a prominent example, there is an opportunity to identify people through their search history with some degree of accuracy. Indeed, given the common practice employed by search engines of putting query terms in the URL as parameters, Amazon will effectively have a database of user search histories across many different search engines. As evidenced by the AOL search history debacle, there is always a chance that search queries--even if they are unlinkable to otherwise uniquely identifying data--can effectively identify individuals. It is worth noting that unlike that AOL data set, Amazon will only be able to link a set of queries to a given browsing session, not an anonymized user that persists indefinitely over time. Second, in addition to URLs, the content of the EC2 servers' cache might in some instances contain information that could identify an individual.
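Because search engines typically carry the query in a URL parameter, anyone who logs URLs is effectively logging searches. The standard library makes the point concrete; the parameter name `q` is a common convention and `search.example.com` is a hypothetical site:

```python
from urllib.parse import urlparse, parse_qs

def query_terms(url, param="q"):
    """Extract search terms from a logged URL. Many search engines
    place the user's query in a parameter such as 'q', so a plain
    URL log doubles as a search-history log."""
    return parse_qs(urlparse(url).query).get(param, [])

# A single logged URL reveals the user's query:
query_terms("https://search.example.com/search?q=rare+medical+condition")
```

Thirty days of such URLs, even keyed only to a session token, can add up to a revealing profile.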
Moreover, the data collected by Amazon provides a ripe source of users' collective browsing habits, which could be an attractive target for law enforcement. For users who are worried about these privacy issues and about putting a lot of trust in Amazon to keep their data safe, we recommend turning off cloud acceleration.
We are generally satisfied with the privacy design of Silk, and happy that the end user has control over whether to use cloud acceleration. But this new technology highlights the need for better online privacy protections. As companies continue to innovate in ways that make novel uses of--and expose much more personal data to--the internet cloud, it's critical that the legal protections for that data keep up with changes in technology. That's why we have teamed up with groups like the ACLU and companies like Google and Facebook as well as Amazon to push for a digital upgrade to the Electronic Communications Privacy Act, which was signed into law 25 years ago this week. Please get involved by signing our petition and sharing it with others.
Today, Google announced that it is switching its Search service for logged-in users over from insecure HTTP to encrypted HTTPS. This is a significant win for users: HTTPS is an essential protection against surveillance and alteration of your search traffic — whether by governments, companies, or hackers. Today's change appears to be designed to end a series of attacks that identified or tracked people based on the personalized search results Google gives them — but the protection also extends to outgoing search terms in many situations.
There is one small caveat that users should be aware of with the new encrypted-when-logged-in Google. If you click on an advertisement, and the advertiser's website is HTTP rather than HTTPS, Google will send the search terms for that specific query to the advertiser over HTTP. The encrypted.google.com domain will continue to exist and will not have that behavior: on that domain, advertisers only get to see the search that lead to a click-through if they use HTTPS. Privacy conscious users should keep using HTTPS Everywhere, which will ensure that you're always using the encrypted.google.com domain. And of course, HTTPS Everywhere will also keep protecting you if you prefer to use Google Search without being logged in.
“When everything is classified, then nothing is classified… The system becomes one to be disregarded by the cynical or the careless and to be manipulated by those intent on self-protection or self-promotion.” ~ Justice Stewart, New York Times v. United States, 1971.
Last week, the White House issued the so-called ‘WikiLeaks’ Executive Order, which mandates better security for the nation’s classified computer systems. Ensuring that the government secures its own systems is a worthy goal, but the Order fails to address an equally important problem: the American government’s addiction to overclassification, which goes far beyond what is appropriate or necessary to safeguard real secrets.
The Order, announced nine months ago, was put on “a relatively fast track” by the administration, according to Secrecy News, yet the much more meaningful changes to the classification system President Obama pledged to implement at the very beginning of his presidency have been all but ignored.
In 2009, President Obama famously promised “an unprecedented level of openness” in his administration, and a linchpin of his open government plan was an overhaul of the government’s bloated secrecy system. In a memo on classification on May 27, 2009, he directed all government agencies to aggressively tackle the problem of overclassification and find ways to reduce the number of classified documents. Included in his proposals were a National Declassification Center and “the possible restoration of the presumption against classification.”
He wrote the memo for good reason. The amount of sensitive information held by the government at the end of the Bush Administration was extraordinary, as Suffolk Law Professor Alasdair Roberts illustrates, using the largest leak in U.S. history—the WikiLeaks cache—as a starting point:
[T]he leaked State Department cables might have added up to about two gigabytes of data—one-quarter of an eight-gigabyte memory card. By comparison, it has been estimated that the outgoing Bush White House transferred 77 terabytes of data to the National Archives in 2009. That is almost 10,000 memory cards for the White House alone. The holdings of other agencies are even larger.
And the problem is even older than that. Several US Commissions, including one chaired by Senator Moynihan in the mid-90s and the 9/11 Commission in the last decade, found that unnecessary classification was rampant. EFF’s FOIA work is often thwarted by government claims under Exemption 1 of the Freedom of Information Act, which prevents the release of classified information.
Unfortunately, besides the most peripheral and cosmetic changes, government secrecy has only increased since Obama took office. Last year, as part of their Washington Post series and subsequent book Top Secret America, Dana Priest and William Arkin reported, “An estimated 854,000 people, nearly 1.5 times as many people as live in Washington, D.C., hold top-secret security clearances.” Yet incredibly, when the government released its official count as part of an intelligence community report to Congress two months ago, the number of people holding the Top Secret clearance had ballooned to 1,419,051. And the same report noted that 4.2 million people hold some level of security clearances for access to classified information.
Document classification, already at record highs under the Bush Administration, has continued to explode as well. The government classified a staggering 77 million documents in 2010, a 40% increase over the previous year.
With so much information stamped “secret,” leaks to the media are inevitable. On October 4th, the New York Times reported on just that: the “growing phenomenon” of public but classified information.
The older and larger drone program in Pakistan, for instance, is a centerpiece of American foreign policy, discussed daily in the news media — but it cannot be mentioned at a public Congressional hearing. The State Department cables published by WikiLeaks can be found on the Web with a few mouse clicks and have affected relations with dozens of countries — but American officials cannot publicly discuss them.
Nowhere was this absurdity starker than when the media reported on the death of Yemen’s alleged al-Qaeda leader Anwar al-Awlaki, a U.S. citizen, at the hands of a (classified) C.I.A. drone. The evidence against him, the panel of U.S. officials who decided he was to be put on a “kill list,” and the legal memo “authorizing” his killing were all “Top Secret,” despite the extraordinary constitutional implications of extrajudicially killing an American citizen.
While technically secret, these stories were plastered over the front pages of newspapers every day for one reason: leaks from government officials to journalists. Leaks of classified information, both helpful and damaging to administrations, have been commonplace for decades, and the Obama administration is no different.
But while high-level White House officials continually leak Top Secret information to justify their covert actions and to combat criticism, Obama’s Justice Department is also engaged in an unprecedented campaign to prosecute lower-level whistleblowers who leak information to the press in the name of the public interest. This contradicts another pledge Obama made during his 2008 campaign: to protect and strengthen whistleblower protections. His administration, in just two and a half years, has indicted five leakers under the Espionage Act. That’s more than every president since Richard Nixon—combined.
In addition, the Justice Department is currently trying to indict WikiLeaks for publishing classified information—a case that has huge First Amendment implications and could potentially criminalize portions of national security journalism.
By keeping everything “secret” and selectively prosecuting leakers, Obama is, as Glenn Greenwald put it, “trumpeting information that makes the leader and his government look good while suppressing anything with the force of criminal law that does the opposite.”
The government’s secrecy obsession has many remedies, however. J. William Leonard, George W. Bush’s former “classification czar,” thinks overclassifiers should be sanctioned. The Brennan Center just released a series of innovative proposals—from requiring a written explanation every time a document is stamped ‘secret,’ to allowing authorized clearance holders to win cash prizes for successfully challenging an improperly classified document.
Or Obama could just implement the ideas he already proposed two years ago.