Chris Riley, Policy Counsel for Free Press (and former EFF legal intern), has worked up an illuminating multi-part series of blog posts explaining some of the key issues that have been raised in the FCC's net neutrality proceedings (EFF's comments to the FCC echo many of the points discussed).
If you don't have time to dig through the huge volume of submissions piling up on the FCC's servers, his blog posts are a good place to start:
The next round of negotiations on ACTA starts today in Guadalajara, Mexico. This week’s negotiations will apparently focus on civil enforcement, border measures, enforcement procedures in the digital environment and, briefly, transparency.
One of the main goals of ACTA is creating new harmonized international IP enforcement standards above those in the 1994 TRIPs agreement. Thirty-seven countries with 37 different national laws are negotiating ACTA, so reaching agreement on new substantive IP enforcement standards will inevitably involve compromises. Some countries will be required to change their national laws to bring them closer to other countries' approaches to IP regulation. Since two of the major powers negotiating ACTA are the US and the European Union (and its 27 Member States), there is much scope for different approaches and disagreements to arise. This is particularly true for Internet intermediary liability — where laws in the US and the various EU Member States take quite different approaches.
Which country prevails in this battle of legal wills will have tremendous consequences for citizens' access to knowledge and the future of the Internet as a powerful tool for communication, cross-border collaboration and a platform for innovation.
The EU has indicated that it is unwilling to agree to anything that requires changes to European Community law. EU negotiators would probably not be able to do so under their (still secret) negotiation mandate. On January 14, Neelie Kroes, the EU Commissioner-designate for the Digital Agenda, stated that "The objective of ACTA negotiations is to provide the same safeguards as the EU did in the telecoms package... So we stick to our line and that's it."
For its part, the USTR has repeatedly said that ACTA will only "color within the lines of existing US law". Indeed, this is the justification for negotiating ACTA as a sole Executive Agreement, thereby bypassing the checks and balances of the usual Congressional oversight process applied to other recent free trade agreements, such as the US-South Korea FTA.
Given this, it is interesting to reflect on the leaked European Commission’s analysis of the US's Internet Chapter. Although draft text of the Internet chapter has not yet surfaced, the EU analysis discloses what the chapter covers: increased Internet intermediary liability, three strikes Internet disconnection obligations for ISPs, and civil and criminal technological protection measure laws modeled on the US DMCA.
As the EU analysis notes, the goal of ACTA is to create an internationally harmonized intermediary liability regime based on US standards: the contributory copyright liability doctrine, and liability for inducing others to engage in copyright infringement as enunciated by the US Supreme Court in the 2005 MGM et al. v. Grokster et al. decision. While many ACTA negotiating countries have safe harbor or limitation of liability regimes for Internet intermediaries, national standards for Internet intermediary liability vary greatly across those countries. The inducement standard is unique to US law. As the EU analysis notes: "This concept does not exist in the current Acquis communautaire and in the law of several Member States." In other Commonwealth countries (Great Britain, Australia and New Zealand), intermediary liability exists only where intermediaries are found to have authorized specific infringing activity.
Requiring other countries to harmonize with the US secondary liability standards via ACTA is dangerous for several reasons. First, it would change the existing relationships and balance of power between content providers, intermediaries and their users, with unpredictable consequences for citizens' access to knowledge and innovation policy. Second, it overrides other countries’ national sovereignty and the public policies reflected in their national liability standards.
Third, it will reduce flexibility and harm the ongoing development of these concepts both in the US and in other countries. The US secondary liability doctrines are not found in its copyright statutes. Courts have evolved them over time and in response to technological developments. US legislators have resisted introducing "inducement" liability concepts into US legislation on several occasions. Including a particular formulation of these concepts in ACTA risks entrenching a 2005 standard that will not serve the interests of US innovation policy. As Ville Oksanen of EFFi notes, this is also a danger for citizens in the EU.
In other countries, ACTA could effectively create an end-run around issues that are currently before national courts or the subject of national policy-making consultations. As Nic Suzor from Electronic Frontiers Australia states in his analysis of the impact of the leaked ACTA Internet chapter on Australian law, ACTA could overturn the Australian Federal Court’s consideration of secondary liability standards in the currently pending iiNet case:
The iiNet litigation is currently before the courts, and there is a chance that the Federal Court will find that iiNet had no obligation to pass on infringement notices or to terminate repeat infringers. It is conceivable that the ACTA will require higher standards of ISP liability, which could potentially see Australia introduce legislation to essentially overturn any verdict in the iiNet case.
As Jonathan Penney notes, New Zealand already has well-developed secondary liability law, and is in the midst of a public consultation about a Cabinet discussion paper looking at whether ISPs should be required to adopt a controversial three strikes policy under section 92A of the New Zealand law, something that was previously put on hold indefinitely because of the widespread public outcry. Now ACTA could force the government's hand.
Internet intermediary standards are different in different countries because they take account of differing national priorities and legal traditions. It is for this reason that intermediary liability standards are not readily susceptible to international harmonization. ACTA could completely override current national developments in other countries and require countries to adopt US standards that may suit US copyright owners, but would be ill-fitted to other countries’ national systems and priorities.
As David Fewer of CIPPIC notes in his guest post today, Canada has its own unique liability regime in "notice and notice", and challenges to this established system have failed to pass the Canadian parliamentary process not once but twice.
What's at stake here is the future of the Internet as we know it now. If national policy makers get this wrong, they could reduce incentives for Internet intermediaries to do the innovative sorts of things they’ve done in the past two decades, chill citizens' free expression and limit user generated content online, in the name of building a "more secure" environment for movie companies to sell us movies wrapped in technological protection measures.
All of this underlines the importance of transparency in the ACTA negotiations. As citizens around the world are proclaiming in unison today, balanced policy-making requires transparency and the participation of all affected stakeholders. This is particularly true because ACTA threatens to have such a significant impact on the global Internet and on citizens' rights in countries across the world.
ACTA negotiators have refused to make the text available for public comment, despite the fact that negotiations have been under way since 2008, and notwithstanding the USTR’s willingness to show the text to a cherry-picked handful of lobbyists. Citizens have had to rely on leaked texts to try to understand how ACTA will affect the Internet and impact their lives. This is not the way to create sound international IP enforcement rules, nor to ensure the legitimacy of this process.
The next round of negotiations on the Anti-Counterfeiting Trade Agreement (ACTA) — the secret copyright treaty that targets the Internet — starts tomorrow in Guadalajara, Mexico. From January 26-29, negotiators from Australia, Canada, the European Union, Japan, Jordan, Mexico, Morocco, New Zealand, the Republic of Korea, Singapore, Switzerland, and the United States will discuss civil enforcement, border measures, enforcement procedures in the digital environment (a.k.a. "the Internet chapter" of ACTA) and transparency.
It's been over two years since the ACTA negotiations were first announced in October 2007, and yet no one outside of these negotiators and a cherry-picked handful of U.S. lobbyists has seen the draft ACTA text. However, leaked information shows that ACTA raises significant concerns for citizens' rights and the future of the open Internet.
Because ACTA is intended to create new global IP enforcement norms above those in the 1994 agreement on Trade Related Aspects of IP, it threatens citizens' access to knowledge across the world. With that in mind, this week we are inviting expert commentators from other countries to share their perspective on how ACTA is likely to affect their national law and policy, and their citizens' rights. We will also be highlighting commentary and analysis from others following ACTA in negotiating countries.
ACTA has been on the radar of CIPPIC since the negotiations for the nebulous trade agreement were first announced. CIPPIC’s very first submission to the Canadian government on the topic (in April of 2008) identified three concerns: (1) venue, (2) process, and (3) substance.
On venue and process, we had concerns about any process outside of traditional international multilateral intellectual property forums, and particularly about a process whose content was to be deliberately hidden from public view (although not, even in ACTA’s earliest days, from the view of senior rightsholders). ACTA’s lack of transparency was troubling from the start. And where was the articulation of any justification for the agreement in the first place? No one ever pointed to a remedial failing of participants’ intellectual property statutes that required a new international trade agreement to address.
On substance, we expressed a shared concern over genuine counterfeiting, but saw very few opportunities to improve existing intellectual property laws to address it. Simply, counterfeiting is already illegal, and most counterfeiters already know that. Saying so, again, in an international agreement would likely have no impact on such activity. Similarly, Canadian IP laws enjoy a robust array of remedies, including statutory damages for copyright infringement, sufficient to offer whatever deterrent effect such remedies might have. We saw no evidence that counterfeiters saw intellectual property as rights without remedies.

Accordingly, we suspected that counterfeiting was always a stalking horse, an excuse for a bit of policy laundering that could radically overhaul Canadian intellectual property laws in ways that domestic political parties wouldn’t, and couldn’t, dare. We argued that international intellectual property agreements should not be "norm-setting exercises" that create new rights, undermine or eliminate defences, exceptions and limitations to rights, or otherwise circumvent domestic legislatures. Such exercises are profoundly undemocratic. Canada’s last such venture into international IP norm-setting — signing the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty — had resulted in a decade of debate and policy angst as domestic policy makers struggled to find a way to implement Canada’s obligations under the treaties in a manner consistent with Canadian values — values that demonstrate a commitment to taking user rights seriously.

Finally, the nebulous "Internet Chapter" just made us nervous. We felt negotiators could pack a great deal of privacy-invading and speech-suppressing mischief into such a chapter.
Two years on, those concerns have been largely borne out. Venue and process concerns — particularly around the lack of transparency characterizing all things ACTA — continue to occupy us. The steady drip of leaks has, ironically, at least partially addressed transparency concerns, but what we have learned through those leaks substantiates our concerns with the content of the Agreement.
The leaks have also substantiated our concerns with ACTA’s potential to undermine Canadian sovereignty over domestic intellectual property policy and the Canadian values that policy expresses. Interestingly, secret ACTA negotiations proceeded alongside public consultations about Canadian copyright policy. "Copyright Consultations" held by Canadian Industry Minister Tony Clement and Minister of Canadian Heritage James Moore provoked over 8000 submissions from the public, a simply overwhelming response. Those submissions have been posted online and are available for review. Simply, the content of many of those submissions is at odds with the content of the leaked ACTA Internet Chapter. Policy decisions Canadians presumed were under the microscope at home were at the same time being hammered out on the anvil of ACTA.
Consider Internet Service Provider (ISP) liability. In Canada, the debate over the liability of ISPs has been settled in a unique manner: the creation of a "notice and notice" system whereby ISPs are not liable for the infringing activity of their customers provided certain conditions are in place, including that they pass on to the customer any allegation of infringement. There is neither a takedown requirement nor a termination obligation. Two bills — C-60 and C-61, neither of which became law due to changes in government — have sought to codify this system, indicating a political consensus rejecting more extreme systems such as that of the American DMCA or the various "three strikes" or "graduated response" proposals seen elsewhere. While some rightsholder groups continue to advocate for a more aggressive approach to ISP liability, we see no evidence that such approaches have any traction in Canada. And why would they? There is no evidence that Canada’s notice and notice system is any less effective than other systems in addressing infringement. Yet ACTA would apparently sweep away this political consensus in favour of a much more radical system for addressing ISP liability. The European Commission analysis of ACTA's Internet chapter, leaked in November 2009, discusses a very different system, involving both mandatory notice and takedown and, as a condition of eligibility for the immunity, a requirement that the ISP employ a policy for addressing "unauthorized storage or transmission of materials protected by copyright or related rights". This sort of requirement is not a part of any law, and looks to be a mechanism for imposing — as a matter of the policy of private actors, rather than as a statutory requirement — filtering, termination, and other controversial policies.
Returning to the question of implementation of the requirements of the WIPO Copyright and Performances and Phonograms Treaties, the leaked EU analysis describes a global DMCA — anti-circumvention laws that prohibit circumventing a technical measure to access a protected work, even if for a lawful purpose. This again contradicts the state of debate over Canadian policy: two different copyright bills have proposed two different approaches to circumvention liability, with Bill C-60 imposing liability for circumvention for an infringing purpose, and C-61 favouring the DMCA’s "access" model of liability. The touchstone of all Canadian governments has been to find an approach that reflects Canadian values and is "made in Canada". ACTA puts the lie to such aspirations.
The ACTA negotiations have made a mockery of Canada’s copyright consultations. And ACTA is not alone in this respect: Canada is currently a party to international trade agreement negotiations with the European Union. The end product of these discussions also threatens to gut domestic IP policy: the EU’s proposed IP Chapter leaked online in December, and it doesn’t even vaguely resemble Canadian law. It apparently even goes so far as to require copyright term extension to life of the author plus 70 (from Canada’s current life +50 default) — a proposal that no Canadian politician would dare float.
Both ACTA and the Canada-EU trade discussions threaten to displace domestic control over IP policy. While it might be said that it is better for Canadian negotiators to be at the table influencing developments than left on the outside looking in (like the rest of us), it might be better for Canada to walk away from the entire process. Participation merely threatens to lend the process a legitimacy that, from Canadian eyes, it currently lacks.
Masnick writes that the mainstream entertainment industry's formula for contending with the Internet — desperately trying to invent "new copyright laws or new licensing schemes or new DRM or new lawsuits or new ways to shut down file sharing" — is counterproductive.
However, there is another solution. Stop worrying and learn to embrace the business models that are already helping musicians make plenty of money and use file sharing to their advantage, even in the absence of licensing or copyright enforcement.
In simplest terms, the model can be defined as:
Connect with Fans (CwF) + Reason to Buy (RtB) = The Business Model
He lists a dozen artists who've done well for themselves through various permutations of this model. Everyone knows about the efforts of big names like Trent Reznor and Radiohead, but Mike also draws attention to less-famous success stories like Josh Freese, Jill Sobule, Corey Smith, Jonathan Coulton, Moto Boy, Amanda Palmer, Matthew Ebel, Moldover and K-Os.
As you look through all of these, some patterns emerge. They're not about getting a fee on every transaction or every listen or every stream. They're not about licensing. They're not about DRM or lawsuits or copyright. They're about better connecting with the fans and then offering them a real, scarce, unique reason to buy -- such that in the end, everyone is happy. Fans get what they want at a price they want, and the musicians and labels make money as well.
These stories stand in stark contrast to the problems that major labels' copyright-enforcement efforts can cause for their artists. Just last week, we noted how even talented and popular bands like OK Go and Death Cab For Cutie have seen their promotion actively undercut by their own labels' copywars.
The memo is a great roundup of clever new business models that music fans and artists alike will find worth reading.
Secretary Clinton's speech last week on Internet Freedom was an important step in bringing online free expression and privacy to the forefront of the United States' foreign policy agenda.
But for all the strong language, it was also a speech of caveats: powerful statements like "we stand for a single internet where all of humanity has equal access to knowledge and ideas" sat close to hedges about the dangers of anonymous speech, and how it might be used to distribute "stolen intellectual property". Clinton expressed concern at those who "violate the privacy of citizens who engage in non-violent political speech", but she also spoke of "redoubl[ing] efforts" similar to the Convention on Cybercrime, a document which provides scant protections for the privacy of anyone being investigated by a foreign government.
Enacting policy has a way of clarifying these ambiguities, for good or ill. Many of the projects that the State Department says it will encourage and fund, including systems to allow whistleblowers to expose corruption, and to permit citizens fighting drug-related violence in Mexico to "make untracked reports to reliable sources to avoid having retribution visited against them", require a strong anonymous infrastructure. The State Department's work will depend on anonymity, so we hope it will defend it. The US government could also take a diplomatic lead in requiring high standards for evidence and due process in international cybersecurity treaties, and we hope it will.
Perhaps the most significant shift in the State Department's position, however, concerned its attitude toward domestic companies and their relationship with repressive architecture abroad.
Censorship should not be in any way accepted by any company from anywhere. And in America, American companies need to make a principled stand. This needs to be part of our national brand. I'm confident that consumers worldwide will reward companies that follow those principles.
Now, we are reinvigorating the Global Internet Freedom Task Force as a forum for addressing threats to internet freedom around the world, and we are urging U.S. media companies to take a proactive role in challenging foreign governments' demands for censorship and surveillance. The private sector has a shared responsibility to help safeguard free expression. And when their business dealings threaten to undermine this freedom, they need to consider what's right, not simply what's a quick profit.
Secretary Clinton has put her stamp of approval on voluntary initiatives to help global companies create global standards of privacy and free expression that are consistent with international human rights documents. She cites the work of the Global Network Initiative, an organization that includes companies like Google, Microsoft and Yahoo! and human rights groups like EFF, Human Rights Watch, and the Committee to Protect Journalists.
But Secretary Clinton went even further here. She specifically included a proactive role in challenging illegitimate government demands for "surveillance". This is important and should be the start of a broader conversation inside and outside the government. Because in the case of both filtering and spying on the Net, the risk is not only from companies that comply with illegitimate requests: it is from companies that actively profit from those authoritarian governments' demands.
Secretary Clinton's call for a public-private partnership in building tools to support Internet freedom should include a call for attention to, and action to stop, another set of troubling public-private partnerships: those between authoritarian governments and private companies willing to build their Great Firewalls and dragnet surveillance systems for them. If Internet freedom really is part of America's national brand, we could start by alerting consumers not only to the victims of Internet censorship and control, but also to those who help build the technology that enables it.
Earlier this week, the DOJ’s Inspector General issued a heavily redacted report about the FBI’s Communications Analysis Unit (CAU), which found "shocking" violations, including embedded telecom employees providing customer phone records in response to post-it notes.
While the underlying violations are egregious enough, the report itself is problematic because it redacts huge swaths of information that is already publicly known.
As we discussed in our last blog post, the report cryptically refers to AT&T, Verizon and MCI as Company A, B and C. Yet, the source that identified the telecoms embedded with the CAU was none other than FBI General Counsel Valerie Caproni, in sworn testimony before Congress. Moreover, information in the IG report combined with letters to Congress from the telecoms themselves shows that Company A is AT&T.
The IG report also redacts the amount paid to the telecoms when we already know they were paid $1.8 million a year, and that, in 2008, the FBI asked Congress for $5.3 million for further "funding for the telecommunications industry participation in the Telecommunications Data Collection Center (TDCC)."
The IG Report discusses the unlawful use of exigent letters to obtain phone records of Washington Post and New York Times reporters, but redacts the reporters’ names, the year it took place, and the location where the reporters operated. Yet, in 2008, FBI Director Robert Mueller apologized to the newspapers for the incident, and confirmed that the reporters were the Post staff writer Ellen Nakashima and Indonesian researcher Natasha Tampubolon, and New York Times reporters Raymond Bonner and Jane Perlez. These reporters were in the Jakarta bureau, and the incident took place in 2004.
The report reveals that AT&T routinely provided the FBI with the "community of interest" profiles of its customers without any legal process. However, the DOJ redacted a large section of the report that discusses what a "community of interest" is, including an explanatory diagram. Yet, AT&T itself has published several research papers extensively discussing communities of interest. Basically, your community of interest includes the people you call and who call you, and the people with whom this group communicates. It is sometimes refined by frequency or by time period. AT&T even published the Hancock programming language, which AT&T designed to analyze communities of interest, and "sift calling card records, long distance calls, IP addresses and internet traffic dumps, and even track the physical movements of mobile phone customers as their signal moves from cell site to cell site." AT&T published this graphic, which illustrates AT&T using what they call "guilt by association" to determine fraud within a community of interest (the shaded boxes).
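To make the idea concrete, here is a minimal sketch of how a two-hop community of interest can be computed from call records. This is purely illustrative (the function names and the toy call log are made up, and it bears no relation to AT&T's actual Hancock code), but it shows how little data is needed to map out who associates with whom:

```python
from collections import defaultdict

def community_of_interest(call_log, target, depth=2):
    """Return the numbers within `depth` hops of `target` in an
    undirected call graph built from (caller, callee) pairs."""
    # Build an adjacency map: a call in either direction links two numbers.
    graph = defaultdict(set)
    for caller, callee in call_log:
        graph[caller].add(callee)
        graph[callee].add(caller)

    # Breadth-first expansion: hop 1 is the people you call and who call
    # you; hop 2 is everyone that first ring communicates with; and so on.
    community = {target}
    frontier = {target}
    for _ in range(depth):
        frontier = {n for node in frontier for n in graph[node]} - community
        community |= frontier
    return community - {target}

# A toy call log with invented names standing in for phone numbers.
calls = [("alice", "bob"), ("bob", "carol"),
         ("carol", "dave"), ("eve", "frank")]
print(sorted(community_of_interest(calls, "alice")))
```

A real analysis would refine this by call frequency or time window, as the AT&T papers describe, but the core operation is just this kind of graph traversal.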
Finally, the IG report redacts a new legal theory proffered by the FBI, which purports to allow telecoms to disclose your phone records without legal process or any emergency. While this theory has not been published elsewhere (the Office of Legal Counsel opinion was written earlier this month), controversial legal theories should not be kept secret. Senators Feingold, Durbin and Wyden have asked Attorney General Eric Holder to publish the OLC memo.
We urge the Obama Administration to follow through on its commitment to openness and transparency, and release an unredacted version of the IG report.
Every year we put together a birthday fund-raiser to commemorate another 365 days of fighting for your digital civil liberties. This year, we're celebrating two decades of determined advocacy for freedom wherever bits are found, and the revelry will be unmatched by celebrations past!
So on February 10, 2010, come join the celebration of EFF's 20th year defending your digital rights! The fundraiser will be hosted by beloved TV geek Adam Savage at the DNA Lounge in San Francisco, where he will celebrate EFF's two decades as only he can, with the help of many EFF legends and luminaries.
Adrian & the Mysterious D (A+D), the DJ duo that founded the seminal, globe-trotting mashup party "Bootie," will get people moving with their genre-mashing blend of tracks, with guest DJs dropping sets throughout the evening.
Doors open at 8 p.m. We'll be asking for a $30 donation at the door to fund our work defending your digital freedom. Please RSVP to firstname.lastname@example.org. This is an all ages event. Advance reservations available.
We'll also be holding a special VIP event before the party with Adam Savage, Steve Wozniak, John Perry Barlow, John Gilmore, Mark Klein and other EFF friends and Internet luminaries. Support EFF and come meet many of the amazing people who helped us reach this historic milestone. Make your donation in advance to secure your spot.
Over the weekend, there was an odd story about people using AT&T's wireless network trying to log in to Facebook, and suddenly finding themselves logged in to somebody else's Facebook account. What could have caused such a strange phenomenon to occur? What does it tell us about the innards of the mobile web, and what lessons might it convey for network and application design?
Ars Technica had a good post documenting some of the possibilities, and AT&T has now made some public statements containing a few key clues about the problem. We have a few things to add.
[Warning - this post gets fairly technical]
1. Facebook. Facebook needs to start using HTTPS for everything! Without HTTPS and secure cookies, the private and sensitive information in their users' accounts is vulnerable to being mixed up by ISPs' proxy servers, logged, eavesdropped on, or pilfered by hackers.1 Google now uses HTTPS by default for every interaction with Gmail, and there's no excuse for Facebook not to do the same.
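For illustration, here is roughly what the "secure cookie" half of that fix looks like, using only Python's standard library. The cookie name and token value are invented for this sketch; any real site would generate its own:

```python
from http.cookies import SimpleCookie

# Build the session cookie a site would send after login.
cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204"  # made-up token for illustration
cookie["session_id"]["secure"] = True      # browser sends it over HTTPS only
cookie["session_id"]["httponly"] = True    # page scripts cannot read it

# The resulting Set-Cookie header. Without the Secure flag, this token
# travels in cleartext and can be logged, cached, or replayed by any
# proxy sitting between the phone and the site.
print(cookie.output())
```

The Secure flag is only meaningful if the whole session runs over HTTPS, which is why half-measures (HTTPS for the login page only) still leave the session token exposed.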
2. AT&T. Here, the story is more complicated, but the short summary is that AT&T (and all other ISPs) really need to migrate away from using proxy and gateway servers to perform complicated software tasks.
The problem at the ISP's end appears to have been a manifestation of an engineering hangover from WAP 1.0, which was the first attempt to bring the Web to mobile phones. WAP made a number of design decisions intended to work around the limitations of 1990s-era cell phones, including tiny storage space, limited bandwidth, and small keypads. In retrospect, some of those design decisions appear to have been unwise. A relevant example was the decision to involve the wireless carrier in website authentication. Where the normal HTTP Web stores authentication cookies on users' computers, early versions of WAP specified that cookies should be stored on proxy servers called WAP gateways, operated by wireless carriers.2 Another practice was to try to avoid ever having to make the user type a username and password with only a numeric keypad, by circulating URLs that contained automatic authentication parameters.
It was this WAP tradition of getting ISPs intimately involved in authentication that led to a situation today where a malfunction on AT&T's proxies could let one user log in to another's Facebook account. This situation is bad for the privacy and security of mobile web users, and it carries some important lessons about the division of responsibility between ISPs and web and application providers.
Wherever possible, ISPs should try to avoid solving complicated problems — like web authentication — by using proxy and gateway servers on their network. Inevitably, having an extra machine in the loop raises the complexity of the solution and increases the number of possible points of failure. If this had been a problem with a website smaller than Facebook, the chances are that it would have remained undiagnosed and unfixed for much longer.
There is a lot of engineering controversy about whether it's ever appropriate for complex application functions to be performed by proxies, gateways or transcoders operated by ISPs. One key argument is that if the ISPs pick a poor solution, or don't all implement exactly the same thing, then developers and users will be worse off than if the ISP had done nothing at all.
Whether or not this is true in all cases, it's clear, at the very least, that ISPs need to be extremely cautious in this space. They need to only deploy a proxy-type solution when it is certain that clients and servers can't solve the problem for themselves. They need to be transparent: follow well-established standards, clearly document their practices, and answer technical questions promptly. Lastly, they should offer users and application providers a standardised way to opt-out of the proxies if they might cause technical or security problems.
Even as mobile phones and mobile browsers are approaching the sophistication of desktop PCs, many mobile carriers are continuing to play strange and undocumented tricks with subscribers' data communications.
And AT&T in particular still has a way to go with respect to transparency. Their public statements indicated that they had deployed some new security measures in the wake of the Facebook affair. When we asked them what those measures were, their spokesperson's response was:
In terms of the new security measures AT&T has put into place, due to security sensitivity, we aren't providing specifics.
AT&T's disappointing response is to retreat to security through obscurity. But long experience teaches that security through obscurity is usually no security at all.
2. In practice, this made cookie authentication unusable in WAP, because the way that WAP gateways were implemented and configured was insufficiently standardized, and because many developers realised that it was unacceptable to trust carriers' gateway servers with so much of their authentication housekeeping. This meant that websites had to fall back to a practice known as "URL rewriting" or "URL decoration", which meant adding an authentication token to every URL. In practice, this is frequently equivalent to putting the user's password in the URL.
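A small, hypothetical sketch of URL decoration makes the problem easy to see. The `sid` parameter name, the domain, and the token are all invented for illustration; real WAP-era sites used a variety of parameter schemes:

```python
from urllib.parse import urlencode, urlsplit

def decorate_url(url, session_token):
    """Append an authentication token to a URL, WAP-era style.
    The 'sid' parameter name is hypothetical."""
    sep = "&" if urlsplit(url).query else "?"
    return url + sep + urlencode({"sid": session_token})

link = decorate_url("http://m.example.com/inbox?page=2", "s3cr3t-token")
print(link)
# Every link on the page now carries the credential in plain view: it
# ends up in proxy logs, browser history, and Referer headers, which is
# why this is roughly equivalent to putting a password in the URL.
```

Cookie-based authentication avoids this because the browser sends the token in a header rather than embedding it in every link — but, as described above, WAP gateways made cookies unreliable, pushing developers toward exactly this workaround.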