Back in 2007, Stephanie Lenz posted a video to YouTube of her children dancing and running around in her kitchen. Stephanie wanted to share the moment with her family and friends. But they weren't the only ones watching: a few months later, Universal Music Corp. had the video removed from YouTube, claiming that the video infringed its copyright.
With help from EFF and Keker & Van Nest, Lenz fought back. She filed a lawsuit asking a federal court to hold Universal accountable for misrepresenting that her fair use violated copyright law. Since then, Lenz has won a number of key decisions from the judge in the case, including a ruling that content owners must consider fair use before sending copyright takedown notices. On Monday, Lenz filed a motion for summary judgment asking the judge to rule that Universal violated the law.
Over the years, the case has garnered a great deal of media coverage. One reason for the interest is that Ms. Lenz was accused of infringement for doing something parents do all the time: documenting and sharing precious moments in the lives of their children. As explained in Monday's brief:
Every day, thousands of parents take pictures and make videos of their kids doing all sorts of things. Many of those pictures and videos incorporate copyrighted works in myriad ways -- a child may be wearing a t-shirt with a copyrighted character on it, or she may be standing in front of a copyrighted sculpture, or there may be copyrighted music playing in the background. This activity doesn't make parents of America copyright scofflaws. And everyone versed in copyright law (such as a major music publisher) knows why: because these examples are fair uses.
Universal filed a motion for summary judgment of its own Monday, arguing that the video was not an obvious case of fair use. (You'll notice that both of the motions have many redactions, as a protective order in the case requires some information to be treated as confidential.) A hearing on the matter is set for December 10, and hopefully we'll have a resolution to this long-running case soon.
California and other states are moving towards a "Smart Grid" -- touted by the federal government as a way to boost reliability, security, and conservation in America's electrical systems. But while a Smart Grid has great potential for consumers, there are still critical questions unanswered about the privacy and security of customers' information.
How much energy you use -- and when you use it -- can reveal surprisingly detailed information about your daily life. This wasn't true when energy usage was only measured once a month. But with shorter intervals and more frequent metering, the picture of your home life is remarkably clear. An executive with Siemens Energy recently told the Smart Grids and Cleanpower conference in Britain, "We, Siemens, have the technology to record it (energy consumption) every minute, second, microsecond, more or less live...From that we can infer how many people are in the house, what they do, whether they're upstairs, downstairs, do you have a dog, when do you habitually get up, when did you get up this morning, when do you have a shower: masses of private data." It's a virtual window into the home. The Smart Grid Interoperability Panel–Cyber Security Working Group recently issued a report on "Privacy and the Smart Grid" that covered these issues in depth.
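To make concrete just how revealing interval data can be, here is a toy sketch of the kind of inference the Siemens executive describes. All of the readings and the threshold below are invented for illustration; real analysis would be far more sophisticated.

```python
# Hypothetical illustration: inferring household activity from
# fine-grained smart-meter readings. The readings and threshold
# below are invented for demonstration, not real meter data.

# Per-minute power draw in watts (one reading per minute).
readings = [120, 130, 125, 2100, 2200, 2150, 300, 310, 140, 130]

BASELINE_WATTS = 200  # assumed always-on load (fridge, standby devices)

def detect_activity(readings, baseline=BASELINE_WATTS):
    """Flag minutes where draw exceeds the baseline, suggesting
    someone is actively using an appliance (a kettle, a shower heater)."""
    return [i for i, watts in enumerate(readings) if watts > baseline]

active_minutes = detect_activity(readings)
print(active_minutes)  # -> [3, 4, 5, 6, 7]
```

Even this crude threshold picks out when someone was up and doing something; with per-second data and appliance "signatures," the picture gets much sharper.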
Under EFF and CDT's proposed policies, California customers would be able to better understand what energy usage information is collected, how the information may be used and shared, and how to exercise meaningful control of the use of the data. Law enforcement would also be required to get a warrant before accessing energy usage information, and utilities would regularly report to the PUC on how often they received these requests. Our proposed rules aren't, of course, a complete answer; it would be better if energy management systems were designed to keep information about our household activities within the home in the first place.
Without strong protections, energy data can and will be used in ways that will hurt consumers. Marketing companies will desperately want to access this data to get intimate new insights into your family's day-to-day routine, and it's not hard to imagine an insurance company interpreting the data in a way that allows it to penalize you. Our privacy rights should be strongest in our home. The states -- and the federal government -- should ensure that energy customers get the protection they deserve.
UPDATE: EFF and CDT filed similar comments with the Department of Energy on November 1.
They can promise strong encryption. They just need to figure out how they can provide us plain text. - FBI General Counsel Valerie Caproni, September 27, 2010
[W]e're in favor of strong encryption, robust encryption. The country needs it, industry needs it. We just want to make sure we have a trap door and key under some judge's authority where we can get there if somebody is planning a crime. - FBI Director Louis Freeh, May 11, 1995
As noted in late September, the FBI is on a charm offensive, seeking to ease its ability to spy on Americans by expanding the reach of the Communications Assistance for Law Enforcement Act (CALEA). Among other things, the government appears to be seriously discussing a new requirement that all communications systems be easily wiretappable by mandating "back doors" into any encryption systems.
If this sounds familiar, it's because regulating encryption was a monstrous proposal officially declared dead in 2001 after threatening Americans' privacy, free speech rights, and innovation for nearly a decade. But like a zombie, it's now rising from the grave, bringing the same disastrous flaws with it.
For those who weren't following digital civil liberties issues in 1995, or for those who have forgotten, here's a refresher list of why forcing companies to break their own privacy and security measures by installing a back door was a bad idea 15 years ago. We'll be posting more analysis when more details on the "new" proposal emerge, but this list is a start:
It will create security risks. Don't take our word for it. Computer security expert Steven Bellovin has explained some of the problems. First, it's hard to secure communications properly even between two parties. Cryptography with a back door adds a third party, requiring a more complex protocol, and as Bellovin puts it: "Many previous attempts to add such features have resulted in new, easily exploited security flaws rather than better law enforcement access." It doesn't end there. Bellovin notes:
Complexity in the protocols isn't the only problem; protocols require computer programs to implement them, and more complex code generally creates more exploitable bugs. In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called 'lawful intercept' mechanisms in the switch — that is, the features designed to permit the police to wiretap calls easily — were abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister's. This attack would not have been possible if the vendor hadn't written the lawful intercept code.
More recently, as security researcher Susan Landau explains, "an IBM researcher found that a Cisco wiretapping architecture designed to accommodate law-enforcement requirements — a system already in use by major carriers — had numerous security holes in its design. This would have made it easy to break into the communications network and surreptitiously wiretap private communications."
This isn't just a problem for you and me and millions of companies that need secure communications. What will the government itself use for secure communications? The FBI and other government agencies currently use many commercial products — the same ones they want to force to have a back door. How will the FBI stop people from un-backdooring their deployments? Or does the government plan to stop using commercial communications technologies altogether?
It won't stop the bad guys. Users who want strong encryption will be able to get it — from Germany, Finland, Israel, and many other places in the world where it's offered for sale and for free. In 1996, the National Research Council did a study called "Cryptography's Role in Securing the Information Society," nicknamed CRISIS. Here's what they said:
Products using unescrowed encryption are in use today by millions of users, and such products are available from many difficult-to-censor Internet sites abroad. Users could pre-encrypt their data, using whatever means were available, before their data were accepted by an escrowed encryption device or system. Users could store their data on remote computers, accessible through the click of a mouse but otherwise unknown to anyone but the data owner, such practices could occur quite legally even with a ban on the use of unescrowed encryption. Knowledge of strong encryption techniques is available from official U.S. government publications and other sources worldwide, and experts understanding how to use such knowledge might well be in high demand from criminal elements. — CRISIS Report at 303
None of that has changed. And of course, more encryption technology is more readily available today than it was in 1996.
It will harm innovation. In order to ensure that no "untappable" technology exists, we'll likely see a technology mandate and a draconian regulatory framework. The implications of this for America's leadership in innovation are dire. Could Mark Zuckerberg have built Facebook in his dorm room if he'd had to build in surveillance capabilities before launch in order to avoid government fines? Would Skype have ever happened if it had been forced to include an artificial bottleneck to allow government easy access to all of your peer-to-peer communications? This has especially serious implications for the open source community and small innovators. Some open source developers have already taken a stand against building back doors into software.
It will harm US business. If, thanks to this proposal, US businesses cannot innovate and cannot offer truly secure products, we're just handing business over to foreign companies who don't have such limitations. Nokia, Siemens, and Ericsson would all be happy to take a heaping share of the communications technology business from US companies. And it's not just telecom carriers and VOIP providers at risk. Many game consoles that people can use to play over the Internet, such as the Xbox, allow gamers to chat with each other while they play. They'd have to be tappable, too.
It will cost consumers. Any additional mandates on service providers will require them to spend millions of dollars making their technologies compliant with the new rules. And there's no real question about who will foot the bill: the providers will pass those costs on to their customers. (And of course, if the government were to pay for it, they would be using taxpayer dollars.)
It will be unconstitutional. Of course, we wouldn't be EFF if we didn't point out the myriad constitutional problems. The details of how a cryptography regulation or mandate will be unconstitutional may vary, but there are serious problems with nearly every iteration of a "no encryption allowed" proposal that we've seen so far. Some likely problems:
The First Amendment would likely be violated by a ban on all fully encrypted speech.
The First Amendment would likely not allow a ban of any software that can allow untappable secrecy. Software is speech, after all, and this is one of the key ways we defeated this bad idea last time.
The Fourth Amendment would not allow requiring disclosure of a key to the backdoor into our houses so the government can read our "papers" in advance of a showing of probable cause, and our digital communications shouldn't be treated any differently.
The Fifth Amendment would be implicated by required disclosure of private papers and the forced utterance of incriminating testimony.
Right to privacy. Both the right to be left alone and informational privacy rights would be implicated.
It will be a huge outlay of tax dollars. As noted below, wiretapping is still a relatively rare tool of government. Yet the tax dollars needed to create a huge regulatory infrastructure staffed with government bureaucrats who can enforce the mandates will be very high. So, the taxpayers would end up paying for more expensive technology, higher taxes, and lost privacy, all for the relatively rare chance that motivated criminals will act "in the clear" by not using encryption readily available from a German or Israeli company or for free online.
The government hasn't shown that encryption is a problem. How many investigations have been thwarted or significantly harmed by encryption that could not be broken? In 2009, the government reported only one instance of encryption that they needed to break out of 2,376 court-approved wiretaps, and it ultimately didn't prevent investigators from obtaining the communications they were after. The New York Times reports that the government officials pushing for this have only come up with a few examples (and it's not clear that all of the examples actually involve encryption) and no real facts that would allow independent investigation or confirmation. More examples will undoubtedly surface in the FBI's PR campaign, but we'll be watching closely to see if underneath all the scary hype there's actually a real problem demanding this expensive, intrusive solution.
The real issue with encryption may simply be that the FBI has to use more resources when they encounter it than when they don't. Indeed, Bellovin argues: "Time has also shown that the government has almost always managed to go around encryption." (One circumvention that's worked before: keyloggers.) But if the FBI's burden is the real issue here, then the words of the CRISIS Report are even truer today than they were in 1996:
It is true that the spread of encryption technologies will add to the burden of those in government who are charged with carrying out certain law enforcement and intelligence activities. But the many benefits to society of widespread commercial and private use of cryptography outweigh the disadvantages.
The Wall Street Journal is reporting today on yet another major Facebook privacy blunder. Despite Facebook's various policies and promises about users' privacy when using apps, apps have been feeding Facebook users' information to advertisers and Internet tracking companies regardless of the individual user's Facebook privacy settings.
Internet advertising networks claim to track users "anonymously," but the Facebook leak allows these web marketing snoops to associate Facebook users with the supposedly anonymous browsing-history cookies that trackers use to see a user's movements across the web. Based on the WSJ's reporting, the leak has the potential to affect tens of millions of Facebook users, as all of the top ten Facebook apps — like Farmville and Mafia Wars — were found to be violating the Facebook app developer agreement and users' privacy by handing their personal data over to advertising and data aggregation companies.
If this outrageous episode sounds familiar, it's because earlier this year, Facebook was caught leaking the exact same data to advertisers. At the time, Facebook promised to fix the problem, but it's clear that their so-called fixes failed to apply to the more than half a million apps available on the site. EFF and other privacy advocates have long warned Facebook that apps are the weakest link in the Facebook privacy ecosystem, and this report from the Wall Street Journal overwhelmingly validates that concern.
Facebook reassures privacy-conscious users by pointing to the developer agreement that requires app providers to take strong steps to protect privacy. But given that Facebook apps have been found to be leaking data that Facebook promised to protect five months ago, it's obvious that Facebook has no way of effectively enforcing those rules for the countless apps on the Facebook Platform.
Facebook simply can't claim that apps are safe to use when serious privacy issues around apps — like this referrer security breach — are abundant and endemic.
If you're a Facebook user concerned about apps leaking your data, the most straightforward fix at the moment is to turn off apps completely. To do this, log in to Facebook, open up the "Account" menu in the upper-right corner, choose "Privacy Settings," choose to edit your settings for "Applications and Websites" in the lower-left corner, and click on the option to "Turn off all platform applications." Also, check out EFF's earlier blog post on How to Get More Privacy From Facebook's New Privacy Controls to minimize the information on your Facebook account that's accessible to others.
While the overall picture of what is happening is clear, the as-yet murky details will have a serious impact on understanding the breadth and depth of the breach, the roster of companies involved, and the list of the best solutions. EFF is looking into these factors and will follow up with our findings here on Deeplinks. In the meantime, Harlan Yu has posted a clear explanation of the suspected technical details of the leak on the Freedom to Tinker blog. Facebook application developers should consider catching up on a best-practices paper by Justine Osborne describing Secure Application Development on Facebook.
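For readers wondering what a "referrer leak" looks like in practice, here is a hypothetical sketch. The URL format is a simplified stand-in invented for illustration, not Facebook's actual app URLs; the point is only that when a page embeds an ad, the browser sends the page's own URL as the Referer header, including any identifiers in its query string.

```python
# Hypothetical sketch of a referrer leak: the URL format and the
# "fb_user_id" parameter are invented stand-ins for illustration.
from urllib.parse import urlparse, parse_qs

def extract_user_id(referer):
    """Pull a user identifier out of a leaked Referer header, the way
    an ad network embedded in an app page could."""
    query = parse_qs(urlparse(referer).query)
    return query.get("fb_user_id", [None])[0]

# When an app page requests an ad, the browser sends the page's URL
# as the Referer header -- including any identifiers in the query string.
leaked = "http://apps.example.com/game/?fb_user_id=100000123&session=abc"
print(extract_user_id(leaked))  # -> 100000123
```

Once the ad network has that identifier, it can tie its "anonymous" tracking cookie to a real account.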
Last Friday, in a brief filed with the Ninth Circuit Court of Appeals, the Obama Administration continued the government's half-decade-long battle to ensure that no judge ever rules on the legality of the National Security Agency's warrantless dragnet surveillance program, a program first revealed in 2005 by the New York Times and detailed by technical documents provided by former AT&T technician Mark Klein.
The brief filed Friday is the government's response to EFF's appeal of the Jewel v. NSA case, a lawsuit brought against the government and government officials on behalf of AT&T phone and internet customers whose communications have been swept up in the mass surveillance program along with those of millions of other Americans.
In January, the district court issued an order dismissing the case based on the incorrect argument that, because so many Americans have had their communications and communications records illegally obtained by the government, no single person has legal "standing" to challenge the ongoing program of government surveillance. In other words: if everyone is being spied on, no one can sue. As EFF argued to the Ninth Circuit in its opening appellate brief, that ruling "risks creating a perverse incentive for the government to violate the privacy rights of as many citizens as possible in order to avoid judicial review of its actions."
The government in Friday's brief gives some lip service to the district court's conclusion that the plaintiffs' claims were merely a "generalized grievance" that cannot be litigated. However, emboldened by the Ninth Circuit's recent and dangerously misguided decision in Mohamed v. Jeppesen Dataplan, the government dedicates most of its brief to arguing the same thing it has been arguing for the past five years in every other warrantless wiretapping case: that any attempt by the courts to judge the legality of the alleged surveillance would violate the state secrets privilege and harm national security.
We've heard the government sing this tune before, most notably in our other warrantless wiretapping case Hepting v. AT&T where we withstood a motion to dismiss based on the state secrets privilege before Congress stepped in to try and shut down the case with a new immunity law for the telecoms that helped the NSA break the law. Just as we're vigorously appealing the dismissal of Hepting based on that unconstitutional law, we also look forward to responding to the government's latest attempt to sweep the NSA program under the rug, both in our reply brief due to be filed with the Ninth Circuit next month and eventually at oral argument before the court.
EFF has long pointed out that technology companies are complicit in human rights violations when they knowingly sell customized human surveillance technologies to repressive regimes that are then used to target people for arrest, torture, and disappearance. Now a lawsuit filed recently against Nokia Siemens in Virginia by Isa Saharkhiz, an imprisoned Iranian dissident, and his son Mehdi Saharkhiz, brings this issue to the fore. The lawsuit accuses the Nokia Siemens Network of:
"knowingly, negligently and willfully provid[ing] the infamous, abusive and oppressive Iranian government with sophisticated devices for monitoring, eavesdropping, filtering, and tracking mobile phones."
This case brings home the human costs of the corporate sale of surveillance technologies to repressive regimes. The European Parliament declared in its Resolution on Iran that the customized sale of this technology was "instrumental in the persecution and arrest of Iranian dissidents." Even Nokia agrees, noting, "There have been credible reports from Iran that telecommunications monitoring has been used as a tool to suppress dissent and freedom of speech."
The facts of the case are troubling. Isa Saharkhiz, a distinguished Iranian journalist and a key political reformer behind the 1999 Tehran Spring of press freedom, was arrested on June 20, 2009 in the small village of Tirkadeh in northern Iran where he had been hiding. The intelligence agents found him by tracking his mobile phone using the powerful surveillance capabilities of Nokia's Intelligence Solutions tools, a mass surveillance product it sold to the state-owned telecommunications provider allegedly controlled by the Iranian Revolutionary Guard and freely accessible to notoriously brutal Iranian intelligence agencies.
Nokia's comprehensive human surveillance system, which it sold and tailored to the Iranian government’s needs in 2008, has two main parts: the Monitoring Center which enables centralized deep packet inspection of voice and data communications; and the Intelligence Platform, which provides real-time data mining intelligence. These services, in whole or in part, are what enabled the Iranian authorities to find and arrest Mr. Saharkhiz.
To its credit, Nokia now states in its "pressroom" and has told the European Parliament that it "exited the monitoring center business" in March 2009 due to concerns about human rights issues. Others dispute how true this is, but at least that's a start. Nokia's public statements, however, are sharply different from those it has made in court, where it has boldly claimed that because it is a corporation, it is categorically immune from responsibility for its role in aiding and abetting torture and illegal arrest.
Even if it has now "exited the business," at least in part, Nokia's decision to maximize profit by selling customized tools of repression deserves close scrutiny, and those hurt by its decision deserve their day in court. Nokia claims that it is "in the process of assessing our policies and processes," and if this is genuine, we have some suggestions, which we will be blogging about in the days and weeks to come, and we welcome direct discussion on these issues. We also invite Nokia to join the Global Network Initiative and to seriously consider its core commitments to human rights as part of its assessment process.
In the meantime, Mr. Saharkhiz rots in jail and his family suffers, in part due to Nokia's desire to make a quick buck. As Nokia itself admits:
"misuse of communication technologies to infringe human rights is wrong and, ultimately, that those who do so must be accountable for their actions."
Again, if this is more than just public relations spin, the time is now for Nokia to "be accountable" for its role in the repression of Mr. Saharkhiz and likely thousands of others. And it must do so not just in the press room, but in the court case, dropping its cynical claims that corporations should never be held accountable for their role in human rights violations.
Access, a new organization devoted to global Internet Freedom, has launched a campaign today to support the Saharkhiz case and to hold Nokia accountable. We urge EFFers who are concerned about misuse of technologies to aid repression to join their fight against selling surveillance technologies to repressive countries by signing the No to Nokia petition. Let Nokia know that it should do its part to help free political dissidents, and that it should stop making broad claims in court that corporations bear no responsibility.
As noted in our first post, EFF recently received new documents via our FOIA lawsuit on social network surveillance, filed with the help of UC Berkeley’s Samuelson Clinic, that reveal two ways the government has been tracking people online: Citizenship and Immigration’s surveillance of social networks to investigate citizenship petitions and the DHS’s use of a “Social Networking Monitoring Center” to collect and analyze online public communication during President Obama’s inauguration. This is the second of two posts describing these documents and some of their implications.
In addition to learning about surveillance of citizenship petitioners, EFF also learned that leading up to President Obama’s January 2009 inauguration, DHS established a Social Networking Monitoring Center (SNMC) to monitor social networking sites for “items of interest.” In a set of slides [PDF] outlining the effort, DHS discusses both the massive collection and use of social network information as well as the privacy principles it sought to employ when doing so.
While it is laudable to see DHS discussing the Fair Information Practice Principles [PDF] as part of the design for such a project, the breadth of sites targeted is concerning. For example, among the key "Candidates for Analysis" were general social networking sites like Facebook, MySpace, Twitter, and Flickr, as well as sites that focus specifically on certain demographic groups such as MiGente and BlackPlanet, news sites such as NPR, and political commentary sites such as DailyKos. According to the slides, SNMC looks for "'items of interest' in the routine of social networking posts on the events, organizations, activities, and environment" of important events. While the slides indicate that DHS scrutinized the information and emphasized the need to look at credible sources, evidence, and corroboration, they also suggest the DHS collected a massive amount of data on individuals and organizations explicitly tied to a political event.
In addition, while the slides do emphasize the minimization and elimination of “Personally Identifiable Information” (PII) from the public data, the slides note that “[o]penly divulged information excluding PII will be used for future corroboration purposes and trend analysis during the Inauguration period.” Thus, it is unclear whether or not the information was deleted permanently after the inauguration proceedings were complete. Moreover, there have been several recent studies and papers showing how, even without PII, comments and information about people online can be “re-identified” through the use of sophisticated computational techniques and thus create privacy concerns.
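As a toy illustration of how re-identification can work even without PII, consider two datasets that each look "anonymous" on their own but can be linked on shared quasi-identifiers. All of the records below are fabricated for demonstration.

```python
# Toy re-identification sketch: joining "anonymous" data to a public
# dataset on quasi-identifiers. All records here are fabricated.

# "Anonymized" comments: no names, but ZIP code and birth year retained.
comments = [
    {"zip": "94110", "birth_year": 1970, "comment": "attending the rally"},
    {"zip": "20001", "birth_year": 1985, "comment": "watching from home"},
]

# A separate public dataset (think voter rolls) that includes names.
public_records = [
    {"name": "Alice Example", "zip": "94110", "birth_year": 1970},
    {"name": "Bob Example", "zip": "20001", "birth_year": 1985},
]

def reidentify(comments, records):
    """Link 'anonymous' comments to names via shared quasi-identifiers."""
    matches = []
    for c in comments:
        for r in records:
            if (c["zip"], c["birth_year"]) == (r["zip"], r["birth_year"]):
                matches.append((r["name"], c["comment"]))
    return matches

print(reidentify(comments, public_records))
```

Real re-identification research uses far richer signals (writing style, location traces, social graphs), but the underlying linkage idea is the same.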
Finally, while there have been some reports in the past year of similar social network monitoring for large-scale public events, to date the public has not seen such detailed information about the government’s approach to monitoring, especially on its data preservation practices. As our FOIA lawsuit continues, we hope to learn more about such activities and help bring further transparency and accountability to the ways in which government agencies and law enforcement officials collect and analyze information about us online.
One great trend for Internet users' privacy and security has been that search engines — among other popular sites — are making their services available in a secure HTTPS form.
But users can still run into a privacy problem when they click on search results: the destination page could be unencrypted, potentially revealing lots of information to eavesdroppers about a user's interests and activities. For instance, suppose you search for [coronary artery disease] on a search engine, and you click on the search engine's outbound result link to Wikipedia's page at http://en.wikipedia.org/wiki/Coronary_artery_disease. Even if your connection to the search engine was protected by HTTPS, your connection to Wikipedia won't be!
This week the developer of the search engine Duck Duck Go let us know that Duck Duck Go is tackling exactly this problem, using EFF's HTTPS Everywhere rules to automatically generate secure outbound links where possible. (For example, Duck Duck Go is rewriting not only links to Wikipedia but also links to sites like Twitter and Facebook into HTTPS.)
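As a rough sketch of what such link rewriting involves: the snippet below uses a simple allowlist of HTTPS-capable hosts, which is a deliberate simplification. It is not Duck Duck Go's actual implementation, and real HTTPS Everywhere rulesets use per-site regular-expression rules rather than a plain hostname list.

```python
# Minimal sketch of HTTPS-Everywhere-style link rewriting, assuming a
# simple allowlist of hosts known to support HTTPS (an invented subset).
from urllib.parse import urlparse, urlunparse

HTTPS_CAPABLE = {"en.wikipedia.org", "twitter.com", "www.facebook.com"}

def secure_link(url):
    """Rewrite an http:// result link to https:// when the host is
    known to support it; otherwise leave the link unchanged."""
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_CAPABLE:
        return urlunparse(parts._replace(scheme="https"))
    return url

print(secure_link("http://en.wikipedia.org/wiki/Coronary_artery_disease"))
# -> https://en.wikipedia.org/wiki/Coronary_artery_disease
```

The hard part in practice is knowing which hosts (and which paths on those hosts) actually work over HTTPS, which is precisely the knowledge the HTTPS Everywhere rulesets encode.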
This is a great step toward making HTTPS use much more routine and ubiquitous. We were also thrilled to discover that StartPage, a pioneer in search privacy, is also generating secure outbound Wikipedia links. Hopefully more search engines will adopt this practice soon!