If you thought passing the bar was hard, try winning one of the coveted EFF Cyberlaw Pub Quiz victory steins. Last night, the best legal minds in San Francisco scrambled to answer 7 rigorous rounds of cyberlaw trivia (one of Fenwick & West's teams pictured left). EFF's attorneys, technologists and activists worked tirelessly for weeks to construct quiz questions, delving deep into the rich canon of privacy, free speech, and intellectual property law, and then uncovering the supremely trivial facts.
For many of the contestants, winning means more than just a fancy cup: it proves that you have lived and breathed the most important digital rights cases of our time. The competition was fierce, and every team acquitted itself well in the face of tough questions.
Please join us in congratulating this year's winners:
Honorary Mention: EFF the Children for being the highest ranked (4th place) team of EFF interns in five years of trivia nights (pictured right, sporting EFF's new t-shirt).
EFF’s Cyberlaw Pub Trivia Night is an important opportunity for us to thank our friends in the legal community who help protect online freedom in the courts. Among the many firms that dedicate their time, talent and resources to the cause, we would especially like to thank Ridder, Costa, and Johnstone LLP for sponsoring this year’s Trivia Night.
Test Your Internet Law Expertise
You too can play along at home. If you read the EFF blog regularly or recently aced EFF’s Know Your Rights Quiz, you may be feeling pretty confident about your knowledge of Internet law. But could you answer seven rounds of questions like these? The winning team (pictured right) probably answered every question below without breaking a sweat. Courtesy of EFF’s 5th Annual Cyberlaw Pub Trivia Night:
2. Justice Alito in U.S. v. Jones imagined how one might have conducted surveillance comparable to GPS tracking in 1791. Which of the following was not part of his hypothetical?
(a) a tiny constable
(b) incredible fortitude and patience
(c) a hand-written writ
(d) a gigantic coach
People tend to think that digital copies of our biological features, stored in a government-run database, are a problem of a dystopian future. But governments around the world are already deploying such technologies. Several countries are collecting massive amounts of biometric data for their national identity and passport schemes—a development that raises significant civil liberties and privacy concerns. Biometric identifiers are inherently sensitive data. As European privacy watchdogs have said, biometrics irrevocably changes the relationship between body and identity, because it makes the characteristics of the human body "machine-readable" and subject to further use. This is why such identification schemes become particularly dangerous when built on unreliable biometric technologies that can misidentify individuals.
Regulators in several jurisdictions continue to romanticize the security and accuracy of automatic face, fingerprint, and iris recognition technologies. But the significant number of falsified biometric identification documents raises questions as to whether these technologies are too unreliable to prevent fraud, giving individuals and governments a false sense of security.
Automatic Face Recognition in Border Control
Biometric data from individuals’ faces has been used at various European border checks since 2007. Eleven airports in the United Kingdom now have e-passport gates that scan EU travelers’ faces and compare them to measurements of their facial features (i.e., biometrics) stored on a chip in their biometric passports. Although the error rates of state-of-the-art facial recognition technologies have fallen over the past 20 years, these technologies still cannot identify individuals with complete accuracy. In a 2011 incident, the Manchester e-passport gates let through a couple who had mixed up their passports. The UK Border Agency subsequently disabled the Manchester gates and launched an investigation.
Similar e-passport gates have been introduced in Australia and New Zealand. During early testing in Australia, the technology showed a six to eight percent error rate, and it also misidentified two men who had exchanged passports. Nevertheless, the government refused to disclose the final error rates, citing security concerns.
Digital Fingerprint Recognition
U.S. law requires visitors to submit biometrics, in the form of a digital fingerprint, to a central database when seeking a visa or when entering the country. EU law further requires all passports for the 26 countries in the Schengen area (the borderless zone within Europe) to contain digital fingerprint data on a chip.
The United Kingdom—a non-Schengen country—contemplated introducing fingerprints voluntarily as part of a biometric passport 2.0, but ultimately decided against it. The UK government was also preparing to launch a biometric national identity card, gathering fingerprints from 15,000 volunteers for the project. But the new government "didn't believe ID cards would work" and physically destroyed the pilot identity databases. In 2010, however, the UK National Policing Improvement Agency conducted a pilot test to provide police officers with digital fingerprint scanners that could remotely match individuals’ fingerprints against a central database. The outcome of this project is unknown and, when questioned, the agency refused to disclose the error rates that resulted from its tests.
In the Netherlands, database storage of digital fingerprints for travel documents was halted following questions about the reliability of the biometric technology. The Mayor of the City of Roermond reported that 21 percent of the fingerprints collected in the city could not be used to identify any individuals. In April 2011, the Dutch Minister of the Interior, in a letter to the Dutch House of Representatives, asserted that the number of false rejections (cases in which there is a "no-hit" for a lawful holder of a travel document) is too high to warrant using fingerprints for verification and identification. Currently, fingerprints are being collected only for storage on Radio Frequency Identification (RFID) chips in ID documents.
A German court recently asked the EU Court of Justice for a preliminary ruling on the legality of biometric passports with RFID chips, which are readable from a distance. The German court questioned whether the EU regulation that requires biometric passports in Europe is compatible with the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights.
In France, a report last year disclosed the questionable security of biometric passports, showing that 10 percent of biometric passports were fraudulently obtained for illegal immigrants or people looking for a new identity. Following these issues with biometric passports in various EU countries, Members of the European Parliament have queried the European Commission about the reliability of these documents.
Iris Scan Identification
In preparation for the UK’s national ID card scheme, the UK government noted that there was little research indicating the reliability of iris scan identification. The government initially relied upon unpublished and unverified results from an airport trial. There were concerns that "hard contact lenses" and "watery eyes and long eyelashes" could prevent accurate scanning. The government then asked the National Physical Laboratory (NPL) to test the technology. The NPL's chief research scientist stated in the news that “technologies like iris scanning are accurate enough for the ID cards application but only provided they are implemented properly and one has appropriate fall-back processes to deal with exceptional cases." But a study has shown that it is difficult to enroll disabled individuals into an iris database, and that the success of enrollment also varies significantly with race and age, suggesting further errors if the technology were implemented. The U.S. Department of Homeland Security has initiated additional testing of iris scanners.
In summary, governments have failed to support their claim that such technologies actually improve security; they have not proved that the technology is reliable enough to prevent fraud. Of course, reliability is only one aspect of the problems surrounding governments’ collection of biometrics, which also raises concerns about privacy, security, profiling, discrimination, and other civil liberties. EFF will continue monitoring this issue. Stay tuned!
Since March of this year, EFF has reported extensively on the ongoing campaign to use social engineering to install surveillance software that spies on Syrian activists. Syrian opposition activists have been targeted using several Trojans, including one disguised as a Skype encryption tool, which covertly install spying software onto the infected computer, as well as a multitude of phishing attacks that steal YouTube and Facebook login credentials.
As we've tracked these ongoing campaigns, patterns have emerged that link certain attacks to one another, indicating that the same actors, or groups of actors, are responsible. Many of the attacks have installed versions of the same remote access tool, DarkComet RAT, and reported back to the same IP address in Syrian address space. The latest attack covertly installs a new remote access tool, Blackshades Remote Controller, whose capabilities include keystroke logging and remote screenshots. Evidence suggests that this campaign is being carried out by the same pro-Syrian-government hackers responsible for the fake YouTube attack we reported in March, which lured Syrian activists in by advertising pro-opposition videos, stole their YouTube login credentials by asking them to log in before leaving a comment, and installed surveillance malware disguised as an Adobe Flash Player update.
This malware is distributed via Skype in the form of a “.pif” file. This sample was sent via the compromised Skype account of an officer of the Free Syrian Army. In the conversation shown in the screenshot below, a malicious link is sent claiming to be an important new video. Two hours later, a friend asks the officer if his account is ok. The officer replies that his account was compromised and that this link was sent out to various people from his address book.
Clicking on the link downloads a file called "new_new .pif." For those who would like to make sure that they have the correct sample of this malware, the md5sum is 0d1bd081974a4dcdeee55f025423a72b.
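For readers who want to perform that check themselves, the sketch below computes a file's MD5 digest with Python's standard library and compares it to the hash published above. The local filename is a hypothetical placeholder; matching the hash only confirms you have this particular sample.

```python
import hashlib

# MD5 of the malware sample published in this post.
EXPECTED_MD5 = "0d1bd081974a4dcdeee55f025423a72b"

def md5_of_file(path):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage with a local copy of the suspected sample:
# if md5_of_file("suspected_sample.pif") == EXPECTED_MD5:
#     print("File matches the known malware sample")
```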
On execution, the following files are dropped:
C:\Documents and Settings\Administrator\Templates\VSCover.exe
And C:\Documents and Settings\Administrator\Local Settings\Temp\D3D8THK.exe
Shown in the screenshot below:
If you see these files on your computer, you have been infected with BlackShades RAT. If your computer is infected, deleting the above files or using anti-virus software to remove the Trojan does not guarantee that your computer will be safe or secure. This malware gives an attacker the ability to execute arbitrary code on the infected computer. There is no guarantee that the attacker has not installed additional malicious software while in control of the machine.
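A quick way to look for these indicator files is a short script like the following. This is only a sketch: the `Administrator` profile name and drive letter will vary from machine to machine, and the absence of these files does not prove a computer is clean.

```python
import os

# Indicator paths from the analysis above; the profile name
# ("Administrator") and drive letter may differ on your machine.
SUSPECT_PATHS = [
    r"C:\Documents and Settings\Administrator\Templates\VSCover.exe",
    r"C:\Documents and Settings\Administrator\Local Settings\Temp\D3D8THK.exe",
]

def find_indicators(paths):
    """Return the subset of indicator paths that exist on this machine."""
    return [p for p in paths if os.path.exists(p)]

for path in find_indicators(SUSPECT_PATHS):
    print("Possible BlackShades indicator found:", path)
```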
Some anti-virus vendors recognize Blackshades RAT. You may try updating your anti-virus software, running it, and using it to remove the Trojan if it comes up, but the safest course of action is to re-install the OS on your computer and change the passwords to any accounts you have logged into since the time of infection.
EFF urges Syrian activists to be especially cautious when downloading files over the Internet, even in links that are purportedly sent by friends. As members of the Syrian opposition become more savvy in using encryption, satellite networks, and other tools to evade the Assad regime's extensive Internet surveillance capabilities, pro-Syrian-government malware campaigns have increased in frequency and sophistication. For Syrian activists, poor security practices can have potentially disastrous consequences.
For a detailed technical analysis, please see this blog post from Citizen Lab.
Today, EFF launched a new campaign against software patents (https://defendinnovation.org). In this campaign, we outline seven proposals that we think will address some of the greatest abuses of the current software patent system, including making sure that folks who independently arrived at an invention can’t be held liable for infringing on a software patent. But our campaign isn't just about our proposals — we also want to hear, and amplify, the views of the technical community. Many engineers, researchers, and entrepreneurs have suggested that reform is not enough and that software should not be patentable, period. We want to record these views, which is why our Defend Innovation campaign is designed to solicit comments from all of the stakeholders. We'll incorporate what we learn into a formal publication that we can take to Congress that reflects the views of innovators, academics, lawyers, CEOs, VCs, and everyone else who is concerned about the software patent system.
People who have been following the software patent space know just how flawed the current system is and how, instead of promoting new inventions, software patents are being turned against everyday inventors. It’s got creators up in arms (and rightly so) and we’ve been working for years to bring attention to this growing crisis. A lot of people want to abolish software patents altogether, while others hold out hope that reforms can help address the situation. Well, here’s the truth of it: neither reforms nor abolition of software patents will be possible unless software patents are treated differently under the law than other types of patents.
In 2008, we fought hard to get the courts to appreciate the difference between physical inventions and software inventions, submitting an amicus brief in the famous Bilski case. Unfortunately, we lost that battle – the Supreme Court wasn’t ready to get rid of software patents altogether (recently, however, the Supreme Court has signaled that it may be uncomfortable with particularly egregious software patents). Congress, too, has failed to really help. Part of the problem is that certain entrenched interests and lobbyists — particularly in pharmaceuticals and biotech, for example — have made fundamental change to the patent system nearly impossible. So it’s time to treat software differently, get those parties out of the equation, and fix the law to reflect the realities of technology and the tech community.
Regardless of whether you think software patents should be abolished altogether or just reformed, the first step is recognizing that a one-size-fits-all patents system doesn’t make sense and that we need to treat software patents differently from other types of patents. Without that, no effort – whether reform or abolition – can be successful.
This is the basis of our Defend Innovation campaign – some proposals to help address the most egregious abuses of the software patent system and a fact-finding mission to hear from concerned individuals about whether or not the system is working at all. Of course, there are many views about the best way to fix the software patent mess. We want to hear those opinions, even (especially) if they are that software patents simply don’t make sense at all. This is a serious problem and overcoming the political obstacles is not easy. That doesn’t mean we can’t and shouldn’t work together to force Congress and the legal system to take these problems seriously.
Bahrain's Minister of State for Information Affairs, Samira Rajab, has announced that the government is preparing to introduce tough new laws to combat the "misuse" of social media. Like many Gulf states, Bahrain is doubling down on state censorship in response to a year of ongoing protests connected to the Arab Spring. In case the target of this upcoming legislation was in any way unclear, Ms. Rajab went on to call out human rights activists:
"It is these activists who have labelled drowning victims as those killed by torture. They have labelled sickle cell victims as being killed by security forces and they have used these media to completely distort the true picture of Bahrain. This cannot be tolerated. The rule of law shall prevail."
Ms. Rajab justified the upcoming laws by pointing to sedition laws in the United States, United Kingdom, and France.
Meanwhile, the Bahraini government is already engaging in the kind of crackdown that the new law is supposed to enable. Activist Nabeel Rajab (no relation to the Minister of State for Information Affairs) was detained again on June 6 after complaints that he had made statements “publicly vilifying” pro-government individuals on Twitter. After the Prime Minister visited the small town of Muharraq, Mr. Rajab tweeted that he should step down, referencing the visit in his message:
[E]veryone knows you are not popular and if it weren’t for the need for money, [the Muharraq residents] would not have welcomed you.
Mr. Rajab’s attorney notes that his second detention is extraordinary even in Bahrain, since the Bahraini Code of Criminal Procedure limits pretrial detention to exceptional cases. Authorities are not supposed to detain the accused in defamation cases, and the most severe penalty has usually been a fine.
Mr. Rajab had previously been released from jail after posting bail at the end of May. That time, too, the activist had been arrested for inflammatory political comments from his Twitter account. EFF joins other groups, such as Human Rights Watch and the European-Bahraini Organization for Human Rights, in demanding the immediate and unconditional release of Mr. Rajab, as well as the dismissal of all charges against him. We remain concerned that we will see even more cases similar to this one once the new laws are passed.
This week the British government unveiled a bill that has a familiar ring to it. The Communications Data Bill would require all Internet Service Providers (ISPs) and mobile phone network providers in Britain to collect and store information on everyone’s Internet and phone activity. Essentially, the bill seeks to publicly require in the UK what EFF and many others have long maintained is happening in the US in secret – and what we have been trying to bring to public and judicial review since 2005. Put simply, it appears that both governments want to shift from surveillance of communications and communications records based on individualized suspicion and probable cause to the mass untargeted collection of communications and communications records of ordinary, non-suspect people.
This shift has profound implications for the UK, the US and any country that claims to be committed to rule of law and the protection of fundamental freedoms.
This isn’t the first time that an Executive has seized the general authority to search through private communications and papers without individualized suspicion. To the contrary, the United States was founded in large part on the rejection of “general warrants” – papers that gave the Executive (then the King) unchecked power to search colonial Americans without cause. The Fourth Amendment was adopted in part to stop these “hated writs” and to make sure that searches of the papers of Americans required a probable cause showing to a court. Indeed, John Adams noted that “the child Independence was born” when Boston merchants unsuccessfully sued to stop these unchecked powers, then being used by British customs inspectors seeking to stamp out smuggling.
The current warrantless surveillance programs on both sides of the Atlantic return us to the policies of King George III, only with a digital boost. In both, our daily digital “papers” — including intimate information such as who we are communicating with, what websites we visit (which of course includes what we’re reading) and our locations as we travel around with our cell phones — are collected and subjected to some sort of datamining. Then we’re apparently supposed to trust that no one in government will ever misuse this information, that the massive amounts of information about us won’t be subject to leak or attack, and that whatever measures are put into place to govern access to it by various government agencies will be sufficient to protect our privacy and ensure due process, fairness and security.
On that score, at least the UK government is willing to discuss the proposal publicly and allow Parliament to vote on it. But this puts the onus on the British people to tell their representatives to soundly reject it. The message to the Executive should be clear: general warrants were a bad idea in 1760, and they are still a bad idea today.
New Draft of Vietnamese Internet Decree is Still Bad News for Freedom of Expression
The Vietnamese government’s draft of a new, problematic decree to regulate domestic Internet use is expected to become law at the end of the month. The 60-article document is filled with alarmingly vague language, including prohibitions on “abusing the provision and use of the Internet and information on the web” to “oppose the Socialist Republic of Vietnam,” “undermining the grand unity of all people” and “undermining the fine customs and traditions of the nation.” It also requires Internet filtration of all such offensive content, requires real-name identification for all personal websites and profiles, and creates legal liability for intermediaries, such as blogs and ISPs, for failing to regulate third-party contributors, triggering grave concerns about the decree’s impact on domestic online service providers.
The decree furthermore attempts to require all foreign and domestic companies that provide online services to cooperate with the government to take down prohibited content. For international companies without a business presence in Vietnam, the law would “encourage” them to establish offices or representatives in the country in order to hold them accountable for implementation of the decree. In an earlier draft of the law, foreign businesses would have been required to obtain legal status and set up servers in Vietnam.
In recent years, Vietnam has stepped up its incarceration of bloggers and other alternative media voices. The country is also the third worst on Reporters Without Borders’ list of “Enemies of the Internet,” following only China and Iran.
Wave of Blogger Arrests in Oman
Over a dozen bloggers, activists, and poets have been arrested in Oman over the past couple of weeks. In many cases, the charges have not even been published, although it is commonly believed that they were arrested for having expressed controversial views online. Lawyer Bassma Mubarak al-Kayoumi has stated that the arrests violate Oman's Basic Law, which stipulates that no one can be arrested without a reason, and that an arrested person “has the right to call whomever needs to be alerted about the arrest to provide assistance.”
The latest wave of protests and subsequent arrests largely stems from the Omani government’s backpedaling on legal reforms that the Sultan had announced in the wake of last year’s popular discontent. On June 4, the public prosecutor of capital city Muscat published a statement denouncing “the recent increase in defamatory statements and calls for sedition by some people under the guise of freedom of expression,” and he expressed his intention to “take all necessary legal action against those uttering, circulating, encouraging or contributing to them.” Most recently, police arrested at least 22 protesters at a sit-in in front of the Special Section, the capital’s high-security jail, on June 11. Many of the bloggers and activists who had been arrested earlier are believed to be held in the building.
New HTTP Error Code Proposed to Signal Internet Censorship
Tim Bray, a leading Android developer at Google, has proposed the creation of a new HTTP status code to indicate that a webpage is unavailable due to legal restrictions. The suggested status code, 451, is meant to give Internet service providers the ability to serve users with more transparency. The number is an allusion to the novel Fahrenheit 451 by the late Ray Bradbury, in which all books are supposed to be banned and subsequently burned by state “firemen.”
Bray credits Terence Eden for pointing out the lack of error messages for censorship when he noticed his ISP served an HTTP 403 error when he tried to access The Pirate Bay, which is blocked by government mandate in the UK. According to World Wide Web Consortium (W3C) specifications, “the 4xx class of status code is intended for cases in which the client seems to have erred.” Currently, the most common HTTP error messages include 404 for pages that can’t be found, 401 for pages that require authentication, and 403 for pages the server refuses to serve. In the case of an ordinary client error, the server understands the request but refuses to fulfill it. In the case of official censorship or website blocking, such as the Pirate Bay restriction, the server never even sees the request; rather, the ISP intercepts the request and rejects it on legal grounds.
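As a concrete illustration of the distinction, an intermediary that participates in a legal block could return 451 instead of a generic 403. The sketch below, using only Python's standard library, serves a hypothetical blocked path with the proposed status code; the path and messages are made up for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical set of resources blocked by legal mandate.
BLOCKED_PATHS = {"/blocked-site"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            # 451 signals "unavailable for legal reasons," telling the
            # user the block is a legal restriction, not a client error.
            self.send_response(451, "Unavailable For Legal Reasons")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Blocked by legal mandate.\n")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK\n")

    def log_message(self, *args):
        pass  # keep the example quiet

# To try it locally:
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```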
Drawing attention to Internet censorship when it takes place is an essential first step in fighting for freedom of expression.
Privacy loomed large as a discussion topic at the 13th Annual Meeting of the Trans Atlantic Consumer Dialogue (TACD), an event held in Washington, D.C. last week that brought together consumer advocacy organizations and regulatory agency heavyweights from both sides of the Atlantic for some in-depth policy discussions. The TACD’s annual meeting helps foster alliances between TACD member organizations (EFF is counted among them) working in the U.S. and the EU. While the overarching group tackles such broad-ranging issues as food policy and financial services, TACD’s Information Society division has been especially concerned with protecting Americans’ and Europeans’ privacy rights in the digital era.
At an overlapping event, the Consumer Federation of America (CFA) hosted a privacy roundtable to bring consumer groups together with representatives from major tech companies and online advertising associations for a frank discussion about emerging issues in online privacy. Both forums yielded some fascinating questions and debate. Here are some of the key takeaways.
Will a Privacy Bill of Rights Move Forward in the U.S.?
Much discussion revolved around the proposed “Consumer Privacy Bill of Rights,” a policy blueprint floated by the White House this past February that seeks to establish new safeguards to protect consumer data in the digital realm. As a TACD resolution on consumer privacy points out, this issue doesn’t affect Americans alone: “In the absence of legislation, the U.S. cannot offer the EU any assurance that there will be adequate protection for the personal data stored or used by U.S. companies,” TACD noted.
In an age where it’s commonplace for third-party data brokers to buy and sell individuals’ personal information without their knowledge or consent, sound policy is sorely needed. While the White House proposal could go further in calling for limits on data collection, it nonetheless contains solid recommendations on transparency, accountability and security, and would represent an important step in the right direction. (EFF, meanwhile, has devised its own Privacy Bill of Rights recommendations for mobile users and social network users.)
Unfortunately, questions arose during the TACD meeting about whether the proposal could indeed be expected to move forward as legislation anytime soon, particularly in an election year.
Commissioner Julie Brill, who serves on the FTC, endorsed the idea of converting the White House blueprint into law during one of the conference plenary sessions. “Such rapid advances in technology and marketing have led us … to conclude we’re facing potentially serious gaps in consumer privacy protection,” she noted.
But in a closed session that followed, representatives of other U.S. government agencies faced tough questions from advocates, who voiced concerns that attempts to craft strong consumer privacy policy would be waylaid, supplanted by a concurrently launched multi-stakeholder process to hash out industry best practices on consumer privacy.
Pressed as to whether the White House policy framework had actually been committed to draft legislative language, agency representatives acknowledged that the administration had not yet taken this step. While they offered assurances that a push for legislation is still on track, they also acknowledged that the effort likely is not going to be realized this election year.
The upshot is that the multi-stakeholder process is on the front burner while the legislative effort simmers in the background. That process aims to facilitate collaboration with industry and other partners to pin down a code of best practices, and the FTC will be endowed with enforcement powers to hold companies accountable under the resulting voluntary standard.
Speaking of political campaigns: Investigative news outlet ProPublica put some pressure on Yahoo, Microsoft, and President Barack Obama’s reelection campaign this week with an article detailing how the companies are providing user data to political campaigns to facilitate sophisticated online voter targeting.
When Machines Decide
A number of fascinating conversations emerged from the CFA privacy dialogue, a forum held the following day that brought together representatives from industry, government, advocacy organizations and universities. One of the most intriguing (and perhaps chilling) was a presentation delivered by a representative from a prominent tech company who cheerfully described a world in which an "Internet of Things" could assist with decision-making, without any human intervention.
The Internet of Things may be thought of as “intimately networked” devices, people and computers “all talking to each other,” the company representative explained. While at present there are roughly 2 billion “things” (hint: most are smartphones) connected to the Internet, corporate researchers predict that the world will be swamped with a whopping 50 billion Internet-connected things by 2020.
As envisioned, these “things” will be wide-ranging in nature. They might include infrared sensors on doorways to tally the number of people entering a room, for example, or devices tasked with monitoring and controlling the power grid, or mitigating traffic congestion. It could even be a device worn by a patient to monitor blood pressure, equipped to automatically send the data back to a medical care provider. The long-term idea is to use vast amounts of collected data, sent along largely invisible networks, to enable these devices to recognize patterns over time and make decisions accordingly.
This scenario obviously raises a slew of thorny questions, but the discussion at the CFA dialogue centered on the privacy implications. Some wondered how consumers could be guaranteed agency in an intensely networked world. Others noted that it would be crucial to require adequate disclosure on who is obtaining the data that is being generated, and for what purposes it is being used. TACD, meanwhile, has also issued a resolution on the Internet of Things, which provides a useful way to think about this future scenario:
“The IoT will reveal much more about consumers’ habits, from the books they read and the medications they take to the types of transportation they use. Implementation of privacy by design will be important for the enforcement of consumer and privacy rights. In addition, the data protection principles (data collection limitations; lawful and fair collection; proportionality; finality; accuracy; transparency; right of access and rectification; confidentiality and security of processing) should be respected and implemented in the technology.”
TACD Recommendations on Consumer Privacy Rights
TACD has also issued a much broader resolution offering a set of detailed recommendations on consumer privacy in general. In it, member organizations urge the U.S. and EU governments to do the following (paraphrased and not a comprehensive list):