A new project aimed at “countering illegal use of the Internet” is making headlines this week. The project, dubbed CleanIT, is funded by the European Commission (EC) to the tune of more than $400,000 and, it would appear, aims to rid the Internet of terrorism.
European Digital Rights, a Brussels-based organization consisting of 32 NGOs throughout Europe (and of which EFF is a member), has recently published a leaked draft document from CleanIT.
On the project’s website, its stated goal is to reduce the impact of the use of the Internet for “terrorist purposes” but “without affecting our online freedom.” While the goal may seem noble enough, the project actually contains a number of controversial proposals that will compel Internet intermediaries to police the Internet and most certainly will affect our online freedom. Let’s take a look at a few of the most controversial elements of the project.
Privatization of Law Enforcement
Under the guise of fighting “terrorist use of the Internet,” the CleanIT project, led by the Dutch police, has developed a set of “detailed recommendations” that would compel Internet companies to act as arbiters of what constitutes “illegal” or “terrorist” use of the Internet.
Specifically, the proposal suggests that “legislation must make clear Internet companies are obliged to try and detect to a reasonable degree … terrorist use of the infrastructure” and, even more troubling, “can be held responsible for not removing (user generated) content they host/have users posted on their platforms if they do not make reasonable effort in detection.”
EFF has always expressed concerns about relying upon intermediaries to police the Internet. As an organization, we believe in strong legal protections for intermediaries and as such, have often held up the United States’ Communications Decency Act, Section 230 (CDA 230) as a positive example of intermediary protection. While even CDA 230’s protections do not extend to truly criminal activities, the definition of “terrorist” is, in this context, vague enough to raise alarm (see conclusion for more details).
Erosion of Legal Safeguards
The recommendations call for the easy removal of content from the Internet without following “more labour intensive and formal” procedures. They suggest new obligations that would compel Internet companies to hand over all necessary customer information for investigation of “terrorist use of the Internet.” This amounts to a serious erosion of legal safeguards: under this regime, once some vague notion of “terrorist use of the Internet” is asserted, there is carte blanche to bypass hard-won civil liberties protections.
The recommendations also suggest that knowingly providing hyperlinks to a site that hosts “terrorist content” will be defined as illegal. This would negatively impact a number of different actors, from academic researchers to journalists, and is a slap in the face to the principles of free expression and the free flow of knowledge.
Internet companies under the CleanIT regime would not only be allowed, but in fact obligated to store communications containing “terrorist content,” even when it has been removed from their platform, in order to supply the information to law enforcement agencies.
Material Support and Sanctions
The project also offers guidelines to governments, including the recommendation that governments start a “full review of existing national legislation” on reducing terrorist use of the Internet. This includes a reminder of Council Regulation (EC) No. 881/2002 (art. 1.2), which prohibits Internet services from being provided to designated terrorist entities such as Al Qaeda. It is worth noting that similar legislation exists in the US (see: 18 U.S.C. § 2339B) and has been widely criticized as criminalizing speech in the form of political advocacy.
The guidelines spell out how governments should implement filtering systems to block civil servants from any “illegal use of the Internet.”
Furthermore, governments’ criteria for purchasing policies and public grants will be tied to Internet companies’ track record for reducing the “terrorist use of the Internet.”
Notice and Take Action
Notice-and-take-action policies allow law enforcement agencies (LEAs) to notify Internet companies of “offending” content, which the companies must then remove as fast as possible. This obligates LEAs to determine the extent to which content can be considered “offensive”: an LEA must “contextualize content and describe how it breaches national law.”
The leaked document contains recommendations that would require LEAs to, in some cases, send notice that access to content must be blocked, followed by notice that the domain registration must be ended. In other cases, sites' security certificates would be downgraded.
Real Identity Policies
Under the CleanIT provisions, all network users, whether in social or professional networks, would be obligated to supply their real identities to service providers (including social networks). This would effectively destroy online anonymity, which EFF believes is crucial for protecting the safety and well-being of activists, whistle-blowers, victims of domestic violence, and many others (for more on that, see this excellent article from Geek Feminism). Notably, the Constitutional Court of South Korea found an Internet "real name" policy to be unconstitutional.
Under the provisions, companies could even require users to provide proof of their identity, and could store users' contact information in order to provide it to LEAs in the case of an investigation into potential terrorist use of the Internet. The provisions would even require individuals to use a real image of themselves, destroying decades of Internet culture (in addition, of course, to infringing on user privacy).
The plan also calls for semi-automated detection of “terrorist content.” While content would not automatically be removed, any searches for known terrorist organizations’ names, logos or other related content will be automatically detected. This will certainly inhibit research into anything remotely associated with what law enforcement might deem “terrorist content,” and would seriously hinder normal student inquiry into current events and history! In effect, all searches about terrorism might end up falling into an LEA’s view of terrorist propaganda.
LEA Access to User Content
The document recommends that, at the European level, browsers or operating systems should include a button for reporting terrorist use of the Internet, and suggests that governments draft legislation making such a reporting button compulsory for browsers or operating systems.
Furthermore, the document recommends that judges, public prosecutors and (specialized) police officers be able to temporarily remove content that is being investigated.
Frighteningly, one matter up for discussion within the CleanIT provisions is the banning of languages that have not been mastered by “abuse specialists or abuse systems.” The current recommendation contained in the document would make the use of such languages “unacceptable and preferably technically impossible.”
With more than 200 commonly used languages and more than 6,000 languages spoken globally, it seems highly unlikely that the abuse specialists or systems will expand beyond a select few. For the sake of comparison, Google Translate only works with 65 languages.
While the document states that the first reference for determining terrorist content will be UN/EU/national terrorist sanctions list, it seems that the provisions allow for a broader interpretation of “terrorism.” This is incredibly problematic in a multicultural environment; as the old adage goes, “one man’s terrorist is another man’s freedom fighter.” Even a comparison of the US and EU lists of designated terrorist entities shows discrepancies, and the recent controversy in the US around the de-listing of an Iranian group shows how political such decisions can be.
Overall, we see the CleanIT project as a misguided effort to introduce potentially endless censorship and surveillance that would effectively turn Internet companies into Internet cops. We are also disappointed in the European Commission for funding the project: Given the strong legal protections for free expression and privacy contained in the Charter of Fundamental Rights of the European Union [PDF], it’s imperative that any efforts to track down and prosecute terrorism must also protect fundamental rights. The CleanIT regime, on the other hand, clearly erodes these rights.
Update: Fabio José Silva Coelho, President of Google Brazil, has now been arrested by federal police in São Paulo. The federal police say that he will be released on his own recognizance if he agrees in writing to appear in court to face the charges.
"Judge orders arrest of president of Google's operation in Brazil"
The headline was bizarre enough, but what followed seemed to come straight out of The Onion: "... for failure to remove YouTube videos that attacked a mayoral candidate." And the cherry on top: the judge also ordered "a statewide, 24-hour suspension of Google and YouTube."
In other words, after someone had posted questionable videos, the law came down on none other than the middleman. It seems absurd for a judge to order Google shut down for a day, and for an executive to be arrested on top of that, simply for hosting an allegedly defamatory video. Brazil proves once again why countries around the world need strong intermediary safe harbor laws like those in the United States.
Judge Flavio Peren of the Brazilian state of Mato Grosso do Sul ordered the arrest of Google Brazil's president after the company refused to take down two videos targeting a mayoral candidate. According to the judge, the videos were insulting and defamatory, and therefore violated Brazil's severe election laws. The judge ruled that Google had committed the crime of "disobedience" (see Art. 330) by not taking down the videos. Google appealed the ruling, which had also ordered the company to shut down its site in the state for twenty-four hours. This isn't the first time Google has been hit with such charges; earlier this month, a court in the state of Paraná issued a similar arrest order against another Google executive for not taking down a YouTube clip aimed at a different mayoral candidate. That ruling, however, was overturned.
Brazil has a notably strict election code. The videos in question were found to violate the law's Article 326, which criminalizes the violation of one's "dignity" during an election. (Despite the stringent law's focus on "dignity," Brazilian elections are currently host to dozens of people running under assumed superhero and celebrity names.) Though Google served as merely a platform for the videos in question, it was nonetheless the target of the election court's orders.
Google's recent Transparency Report, which exposes the number of government takedown requests, ranks Brazil at the top of the list, with 194 content removal requests in the second half of 2011. As Google's notes show, a number of those requests were defamation charges around election season.
In the United States, we have a crucial law that protects online services that host speech: Section 230 of the Communications Decency Act (CDA 230). Though most of the CDA harmed speech and was found to be unconstitutional, Section 230 survived, relieving sites of liability for their users' content. In other words, under CDA 230, only users have legal responsibility over what they post.1 This not only allows sites to innovate and scale without spending huge sums on vetting all of their content (or, worse, automatically censoring it), but it also prevents critics from sending improper notices to the content host in an attempt to censor.
Brazil has been trying to address this issue through Marco Civil, a sweeping law that aims to promote Internet freedom, protect speech and privacy, and establish much-needed safe-harbor protections for intermediaries. Both Google and Facebook spoke out in support of the bill, though a vote on the bill was recently postponed. While there are a number of ways that Marco Civil could be improved to be more in alignment with international standards for protecting freedom of expression, it would at least provide some legal shield for Internet intermediaries that host controversial content.
1. CDA 230, however, does not apply to intellectual property claims and federal criminal law.
We’ve been seeing a range of reports about Facebook partnering up with marketing company Datalogix to assess whether users go to stores in the physical world and buy the products they saw in Facebook advertisements. A lot of the reports aren’t getting into the nitty gritty of what data is actually shared between Facebook and Datalogix, so the goal of this blog post is to dive into the details. We’re glad to see that Facebook is taking a number of steps to avoid sharing sensitive data with Datalogix, but users who are uncomfortable with the program should opt out (directions below). Hopefully, reporting on this issue will make more people aware of how our shopping data is being used for a lot more than offering us discounts on tomato soup.
Datalogix is an advertising metrics company that describes its data set as including “almost every U.S. household and more than $1 trillion in consumer transactions.” It specifically relies on loyalty card data – cards anyone can get by filling out a form at a participating grocery store.
These loyalty card programs have long been criticized by consumer advocates, who point out that they create a long data trail of our everyday purchases. Concern over these cards spurred the creation of advocacy group Consumers Against Supermarket Privacy Invasion and Numbering (C.A.S.P.I.A.N.), which argues that grocery stores falsely inflate prices for those not participating in the programs and that the programs themselves are expensive to run. Concern over these programs also prompted the state of California to enact a law preventing supermarkets from (1) requiring drivers’ licenses or social security numbers as a condition of issuing loyalty cards, and (2) sharing or selling cardholders’ personal information, with a few limited exceptions. (This blog post doesn’t attempt to compare Datalogix’s practices with the California law.)
Data from such loyalty programs is the backbone of Datalogix’s advertising metrics business.
What data is actually exchanged?
In order to assess the impact of Facebook advertisements on shopping in the physical world, Datalogix begins by providing Facebook with a (presumably enormous) dataset that includes hashed email addresses, hashed phone numbers, and Datalogix ID numbers for everyone they’re tracking. Using the information Facebook already has about its own users, Facebook then tests various email addresses and phone numbers against this dataset until it has a long list of the Datalogix ID numbers associated with different Facebook users.
Facebook then creates groups of users based on their online activity. For example, all users who saw a particular advertisement might be Group A, and all users who didn’t see that ad might be Group B. Then Facebook will give Datalogix a list of the Datalogix ID numbers associated with everyone in Groups A and B and ask Datalogix specific questions – for example, how many people in each group bought Ocean Spray cranberry juice? Datalogix then generates a report about how many people in Group A bought cranberry juice and how many people in Group B bought cranberry juice. This will provide Facebook with data about how well an ad is performing, but because the results are aggregated by groups, Facebook shouldn’t have details on whether a specific user bought a specific product. And Datalogix won’t know anything new about the users other than the fact that Facebook was interested in knowing whether they bought cranberry juice.
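As a rough sketch of the matching-and-aggregation scheme described above (the hash function, ID format, and all data here are hypothetical; the companies haven't disclosed these details publicly), the exchange might look something like this:

```python
import hashlib

def h(value: str) -> str:
    """Hash an identifier after normalizing it (SHA-256 is an assumption)."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Hypothetical Datalogix-side table: hashed email -> Datalogix ID.
datalogix_table = {
    h("alice@example.com"): "DLX-001",
    h("bob@example.com"): "DLX-002",
    h("carol@example.com"): "DLX-003",
}

# Facebook hashes the emails it already holds and looks them up,
# learning which of its users have a Datalogix ID.
facebook_emails = ["alice@example.com", "bob@example.com", "dave@example.com"]
matched_ids = {email: datalogix_table[h(email)]
               for email in facebook_emails
               if h(email) in datalogix_table}

# Facebook then sends only lists of Datalogix IDs, grouped by ad
# exposure, and receives back aggregate purchase counts per group.
group_a = ["DLX-001"]  # saw the ad
group_b = ["DLX-002"]  # did not see the ad
purchases = {"DLX-001": True, "DLX-002": False, "DLX-003": True}
report = {
    "A": sum(purchases[i] for i in group_a),
    "B": sum(purchases[i] for i in group_b),
}
```

Note that in this sketch, the per-user purchase data never leaves the Datalogix side; Facebook sees only the group totals in `report`.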
In addition to technical privacy protections, Facebook has a contractual relationship with Datalogix to try to make sure that user privacy isn’t violated. Through this relationship, Datalogix promises to keep all the data processing they do for Facebook separate from the rest of their data. (This means you couldn’t approach Datalogix and ask them to, say, give you a list of all the profiles queried by Facebook.) And Facebook promises to discard any hashed data it receives that isn’t about Facebook users1.
We were also initially concerned that Facebook could test a number of small, overlapping data sets to home in on individual user behaviors. We raised this concern with Facebook, and Facebook responded that, due to the large sample sizes that were being tested, it would be impossible to figure out whether a specific individual bought a specific item. Apparently Facebook also sent in a privacy and security auditor to assess this issue, and was satisfied with the results. We’ve also reached out to Datalogix to talk to them about what formal rules they have regarding small, overlapping data sets. Given the large amount of sensitive data Datalogix maintains, we’re hoping they’ve got appropriate rules in place to prevent people from testing small, similar groups to figure out a particular individual’s actions.
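To see why overlapping query groups are a concern even when results are aggregated, consider this toy illustration (the data and IDs are invented): two queries whose groups differ by exactly one person reveal that person's individual behavior by simple subtraction.

```python
# Illustrative per-person purchase data held only by the metrics provider.
purchases = {"DLX-001": 1, "DLX-002": 0, "DLX-003": 1}

def aggregate_count(group):
    """What an aggregated report returns: a group total only."""
    return sum(purchases[i] for i in group)

group_with = ["DLX-001", "DLX-002", "DLX-003"]
group_without = ["DLX-002", "DLX-003"]  # same group minus one person

# The difference of the two aggregates is DLX-001's individual purchase,
# even though neither query returned anything about an individual.
target_purchase = aggregate_count(group_with) - aggregate_count(group_without)
```

This is why minimum-group-size rules (and limits on how much submitted groups may overlap) matter: aggregation alone is not enough.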
But even with these technical and legal safeguards, many people may be concerned because the shopping data compiled by loyalty programs can be quite sensitive. A New York Times article earlier this year showed how Target was able to identify and target an expectant mother long before she started showing visible signs of pregnancy (and, in at least one case, before her father realized she was expecting). Loyalty card programs have been used by the CDC to track down cases of salmonella, and data collected through these programs has even been sought by law enforcement. In one unfortunate incident, a man was wrongfully charged with arson in part because he had used his loyalty club card to buy fire starters (thankfully, the charges were eventually dropped).
Many people who sign up for loyalty programs may not realize the data amassed on them will be shared with entities outside of the store. And if they do realize it, they might not be comfortable with it. A 2009 academic study found that 86% of those surveyed did not want websites to show them advertisements tailored to them based on their offline activities; perhaps more studies are necessary to see whether users are similarly uncomfortable with data shared from offline retailers to online entities, regardless of whether the advertisements are individually targeted.
All Facebook users are automatically opted in to this program. So if you’re uncomfortable with it, you need to opt out.
How to Opt Out
To opt out of this program, visit the Datalogix.com privacy page and scroll down to the word “Choice.” The last sentence of the first paragraph reads:
If you wish to opt out of all Datalogix-enabled advertising & analytic products, click here.
Click there and a little form will pop up that asks for your name, address, and email address. Datalogix promises that the opt-out will take effect within 30 days. Once you’ve been opted out, Datalogix will no longer include your information in the hashed data they provide to Facebook. (NB: There are a few different options under the “Choice” subheading. You want the one that says “opt out of all Datalogix-enabled advertising & analytic products” and then gives you a form to fill out.)
In addition to opting out via the Datalogix page, many people may want to consider how comfortable they are with loyalty card programs at all. Before you hand these programs your real name, phone number, and email address, consider whether you want every bag of Doritos, over-the-counter medication, and box of tampons you buy associated with your identity in a marketing database for years to come.
1. This is important because hashing data values that come from a relatively small data set, like phone numbers, isn't an effective way of hiding the original values. For example, a computer program could check every possible phone number's hash in just a few seconds to see which phone number matches a particular hash. E-mail addresses may be hidden better, but it would still be possible for Facebook to guess the original values of a substantial fraction of e-mail address hashes in a short time (for example, trying all 1-8 letter addresses at gmail.com). That’s why additional protections, like the contractual relationship Facebook has with Datalogix, are important.
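The footnote's point about small input spaces is easy to demonstrate. In this sketch (using a made-up phone number and a tiny candidate range for speed), an attacker who sees only a hash can recover the original number by hashing every candidate:

```python
import hashlib

def hash_phone(number: str) -> str:
    """Hash a phone number the way a naive anonymization scheme might."""
    return hashlib.sha256(number.encode()).hexdigest()

# Suppose we only ever see the hash of someone's phone number...
leaked_hash = hash_phone("5550042")  # the attacker does not know this input

# ...the space of phone numbers is small enough to simply enumerate.
# (A tiny 7-digit range here; a real attack scales to full numbers.)
recovered = None
for candidate in range(5550000, 5560000):
    if hash_phone(str(candidate)) == leaked_hash:
        recovered = str(candidate)
        break
```

Because every possible input can be tried in seconds, hashing here is a matching convenience, not a privacy protection; that burden falls on the contractual and procedural safeguards described above.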
The news that Iran might be seeking to create a 'halal Internet' isn't new. But while speculation about Iran's withdrawal from the online world abounds, the country's recent move to block Gmail and—though inconsistently—Google Search, is one of the first concrete measures to indicate just how serious the plans may be.
As Reuters reported on Sunday, a deputy government minister, Abdolsamad Khoramabadi, announced that both services would be blocked "within a few hours." That news was followed by reports on social media that Gmail was in fact blocked, and Google Search appeared only to be blocked in some areas, or on some ISPs. A report on Monday from Global Voices Online confirmed that information.
At least one report (in Farsi) suggests that the block was due to requests "by the public" to oppose Google for its refusal to take down the film 'Innocence of Muslims' from YouTube, but considering YouTube has been blocked in the country for several years, that seems highly unlikely.
These new filtering measures coincide with what Global Voices' Fred Petrossian calls a "new wave of repression" against Iranian bloggers, citing several recent convictions of bloggers as well as the beating of one blogger's wife after she allegedly complained about the behavior of security forces.
While these new measures indicate an unprecedented level of control over the country's more than 36 million Internet users, some analysts have suggested that such heavy restrictions will force a larger swath of the population to seek out means of circumventing the controls and, potentially, politicize a greater portion of the population. Though this may be true, we have grave concerns about the severity of the restrictions, which hamper Iranians' ability to connect with each other and the outside world, and stand against the Iranian regime's attempts to sequester their population.
This November, voters across the United States have an important chance to improve how elected officials approach legislating the Internet. Today, a coalition of Internet rights groups is launching a voter registration drive all over the country with the hope of making the voice of the Internet—heard so loudly during the SOPA debate—a permanent stakeholder in the halls of Congress.
For too long, Congress has ignored the basic digital rights of Internet users in favor of deep-pocketed special interests. But on January 18th, Congress got a wakeup call in the form of tens of millions of citizens protesting the dangerous Stop Online Piracy Act (SOPA), a bill that would have allowed corporations and the government to censor large swaths of the Internet with little or no oversight.
It was a great victory for the Internet, and we killed a bill that DC insiders thought could not be stopped. But it’s important to remember that Congress spent much of 2012 attempting to pass misguided and rights-restricting bills affecting the Internet, despite the protests. Without your voice to counter the special interests, they’ll continue to do so.
In June, the House of Representatives passed CISPA, a bill intended to address cybersecurity concerns, but which was written so broadly it would have carved out a massive exception to all existing privacy laws. We stopped this bill in the Senate, though, in part thanks to the outcry of Internet users who didn’t want their online privacy sacrificed.
And just a couple weeks ago, the House also voted for a five-year extension for the dangerous FISA Amendments Act, which was used to sweep the NSA warrantless wiretapping program—that collects and stores copies of Internet traffic—under the rug. Despite extensive and incontrovertible evidence that the law allows the warrantless wiretapping of American citizens, members of both parties refused to add common sense privacy safeguards to stop Americans’ emails and other Internet activities from being collected and reviewed by the millions.
And Congress isn’t finished trying to mess with the Net. Just this weekend, CNET reported the FBI is renewing its push for a broad Internet surveillance bill that would force companies like Facebook, Google, and Skype to install backdoors into all their software to allow law enforcement real-time access to communications. We also know the content industry wants Congress to take another crack at a SOPA-like bill, as the MPAA has recently been handing out talking points to representatives extolling copyright maximalism, perhaps paving the way for SOPA 2.0.
But what could a more Internet-friendly Congress do? Well, there is a ton of common sense legislation waiting in the wings that could prevent censorship, improve user privacy online and spark innovation. By voting, you could help make these bills law next session. Take a look:
Patent reform: A new bill sponsored by Rep. DeFazio would fix much of the broken patent system that is engulfing giant tech companies in billion dollar patent suits and paralyzing up-and-coming companies with legal costs. The only parties benefiting from software patent wars seem to be big law firms and patent trolls. Visit EFF’s site defendinnovation.org for more.
Email privacy: Both the House and Senate have ECPA reform bills that would finally bring warrant protection to emails. We saw just last week that the Senate delayed this bill yet again after law enforcement expressed concerns it would hinder their investigations. Of course, this bill wouldn’t create any new rights; it would just bring the protections for our email into alignment with our rights with physical mail and phone calls.
Cell phone privacy: The GPS Act in the Senate and a corresponding bill in the House would force law enforcement to get a warrant for our cell phone location data as well. Your cell phone, which pings a cell phone tower every seven seconds, is one of the most privacy invasive tools out there; it can give your precise location to authorities twenty-four hours a day. And law enforcement made a staggering 1.3 million requests for such data last year—a vast majority of the time without a warrant.
Congress needs to know that the Internet is watching and that users won’t sit on the sidelines as technology intended to connect us and bring knowledge to people worldwide is turned against us for the purposes of censorship and surveillance. A movement of informed, passionate Internet users exists, and we want Congress to hear loud and clear that we’re willing to cast our votes in defense of Internet freedom. And it starts today, with a few clicks of a button. Visit InternetVotes.org and register to vote.
Throughout Latin America, new surveillance practices threaten to erode individuals' privacy, yet there is limited public awareness about the civil liberties implications of these rapid changes. Some countries are pursuing cybercrime policies that seek to increase law enforcement power without strong legal safeguards. In other nations, government-run biometric identification systems are on the rise, while certain governments are even turning to drones to aid in their surveillance activities. A culture of secrecy surrounds these surveillance practices, and citizens remain largely unaware of what types of information are being collected and how it is being used against them.
For Latin American privacy advocates, all of this makes for an uphill battle. There are relatively few NGOs working in the region specifically on privacy and surveillance, and the lack of specialization is further complicated by a pervasive societal attitude that security trumps privacy. Despite the inherent difficulties, the fledgling privacy movement has been working tirelessly to shed light on overarching surveillance practices and to preserve civil liberties in the face of these changes. Social media and blogs have made a huge impact in activism work in several countries throughout the region.
Below, we present a quick snapshot of some privacy groups, academic institutions, and dedicated individuals working in the field.
Advocacy by specialized NGOs
Let’s begin with Via Libre Foundation. An Argentinian digital rights advocacy group founded in 2000, Via Libre has advocated against mandatory biometric identification systems and data retention mandates. Via Libre has challenged Argentina's "electronic crime" bill, fighting draconian provisions that would limit coders' rights. Via Libre has also trained activists and journalists on secure communications, such as mastering the use of encryption and anonymity tools.
In Brazil, Movimiento Mega Nao is a grassroots movement responding to threats to Internet rights. Mega Nao recently fought an invasive cybercrime bill by advocating a civil rights framework for the Internet that includes safeguards for free expression and privacy. The Brazilian Institute of Consumer Protection (IDEC) has also launched a similar campaign. IDEC, which was founded in 1987, specializes in consumer privacy and other Internet-related issues. Another important Brazilian NGO, Instituto NUPEF, educates policymakers and civil society on Internet rights, including privacy. NUPEF also publishes a specialized Internet policy magazine.
There are also longstanding human rights NGOs that are beginning to focus more on Internet policy (including privacy). For instance, the civil rights advocates at Asociación por los Derechos Civiles (ADC, or Civil Rights Association in English) in Argentina have begun turning their attention to Internet freedom. This group of Argentinean lawyers defends free expression and access to information at the national level and within the Inter-American Human Rights System. Instituto Prensa y Sociedad (IPYS), an NGO working on investigative journalism, freedom of expression, and access to public information in Latin America, has long fought government surveillance and protected journalists' free expression rights. Like IPYS, Asociación Pro Derechos Humanos (Aprodeh) challenged illegal government surveillance in Peru during the presidency of Alberto Fujimori. Fujimori has since been jailed for human rights violations, having been tried for violating the secrecy of communications and for other abuses during his presidency. It marked the first time a democratically elected former president was prosecuted at home for serious human rights violations, including the violation of privacy.
In 2007, ARTICLE 19 regionalized, moving from a single office to a growing number of regional offices supported by an international office in London. ARTICLE 19's Latin America office litigates precedent-setting cases defending free speech and makes recommendations for improving draft laws. The organization has also called attention to the civil liberties implications of cybercrime proposals under discussion in the region. In Venezuela, a human rights organization called Espacio Publico is working to protect freedom of expression and access to information, while also offering trainings in privacy and security.
There is also a group of dedicated individuals, academics and bloggers with technical and legal backgrounds in the region who've dedicated time and effort to exploring the topics and increasing awareness on Internet policy.
Global Voices Advocacy also reports regularly about privacy topics in Latin America, both in regular articles and in its Latin American Netizen report.
Privacy activism in Latin America is on the rise, yet several countries still lack strong civil society groups working in this area. In Central America and the Caribbean, online privacy and surveillance remain largely unexplored topics, disconnected from the larger human rights agenda. Human rights NGOs in the region tend to prioritize traditional human rights causes such as health, education, citizen security and ongoing battles surrounding forced disappearances and torture. While privately funded organizations work passionately on privacy-related topics, privacy is not their sole priority. Unpaid volunteers are driving much of this activism, and the organizations struggle with limited resources.
Despite these challenges and limited coverage of their efforts in the mainstream media, support for their campaigns has continued to grow. EFF will continue to work alongside civil society groups in Latin America, and to help their efforts by sharing knowledge on core Internet rights issues with policymakers throughout the region.
Open access to scientific literature plays a crucial role in the development of a digital knowledge commons, benefiting scholars, patients, researchers, and therefore the public at large. We owe many thanks to the global open access movement, which has been working hard to improve access to knowledge for over a decade. EFF welcomes the new recommendations launched last week in celebration of the movement's tenth anniversary. The recommendations have already been translated into several languages, with more to follow.
The BOAI@10 is a set of recommendations that builds upon the first ten years of the OA movement, expanding its original ideas to cover not just open access itself but the means to make it possible. It covers metrics and professional incentive systems, repository infrastructure and functioning, open licensing (with a clear recommendation for the most permissive Creative Commons license, CC-BY), and more. The recommendations also urge governments and funders to require that the results of research they fund be published by the copyright holders as open access. They go further, urging the OA movement to cooperate and reach out to other movements, such as those addressing digital freedoms, open educational resources, and open government.
The open access (OA) movement is a reaction to the persistence of traditional control-based models of scientific knowledge distribution, characterized by high-cost subscription journals. The reaction of these scholarly publishers to the Internet has been characterized by capture and lockdown through digital rights management (DRM), rather than by maximizing the distribution of knowledge, which is their supposed reason for existence. The problem is that publishers, concerned with revenue rather than scholarly progress, have hampered the distribution of vital research to institutions and libraries, not to mention individual consumers and patients around the world.
OA is part of a larger revolution in knowledge generation and distribution, but it is specific to peer-reviewed scholarly literature. The goal of the OA movement is a knowledge distribution model where scholarly, peer-reviewed journal articles are made freely available to anyone, anywhere over the Internet, with no copyright constraints beyond attribution and no costs beyond those involved in connecting to the Internet. Its ultimate aim is to empower individuals, researchers, communities, and institutions to share and participate in the knowledge society. Most universities today maintain online repositories, in addition to the thousands of subject-specific archives in the sciences and social sciences. Universities, research institutions, and foundations have also developed policies recommending or mandating open access to publications, data, and software. (Check out the Open Access Map and the Registry of Open Access Repositories Mandatory Archiving Policies to see where OA materials are available.)
In the era of print, open access was economically and physically impossible. The lack of physical access and the lack of knowledge access were directly correlated: without physical access to a well-stocked library, knowledge access was impossible. Information communication technologies (ICTs) like the Internet have changed that. ICTs now facilitate information sharing in a way that was once impossible, breaking down physical barriers to access to information in an unprecedented way. The increasing standardization of open copyright licenses has also provided the legal infrastructure to make OA possible.
Melissa Hagmann of the Open Society Foundation affirms that OA is core to the exercise of digital freedoms: "In the Internet era, people demand and need access to knowledge to make informed decisions. OA to publications, law and data are crucial to empower people and our youth." Peter Suber, one of the founders of the movement, also adds: "When researchers discover relevant new work online, OA frees them to retrieve and read it. In addition, open access frees them to use and reuse that research without slowing down to ask for permission, taking the risk of proceeding without it, or erring on the side of non-use."
2012 has already seen two important milestones for the OA movement, in addition to the anniversary of the main declarations that define it. Thousands of researchers signed a petition earlier this year pledging to boycott journals published by Elsevier, the world’s largest journal publisher, unless it dropped support for US legislation aimed at curbing government-mandated open access. Elsevier quickly withdrew its support for the bill in the face of researcher opposition. And last June, an Access2Research petition supporting open access—specifically, free access over the Internet to academic articles arising from taxpayer-funded research—crossed its target of 25,000 signatures two weeks ahead of schedule.
OA has helped to create a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. In the OA world, all creations of the human mind may be reproduced and distributed infinitely at no cost. OA—which has spread across the world through individual action, as well as governmental and institutional policies—has created an unprecedented public good of peer-reviewed, trustworthy research.
There can be little doubt that the promotion of digital liberty is deeply connected to this public good. EFF is therefore committed to being a part of the OA movement and to helping it achieve its goals by fostering access to knowledge and empowering users’ freedoms. For too long, the movements for digital liberty and the movements for knowledge access have run on parallel tracks. Let the world know that now, we are on the same track.
A new study from Australia presents the latest evidence that loosening copyright restrictions not only enables free speech, but can improve an economy as well. The study, published by the Australian Digital Alliance, found that if Australia expanded copyright exceptions like fair use, along with strengthening safe harbor provisions, the country could potentially add an extra $600 million to its economy.
In addition, the report details how vital copyright exceptions are to the Australian economy as a whole. As ADA’s executive officer and copyright advisor Ellen Broad told EFF, "Australia's sectors relying on copyright exceptions currently contribute 14% of our GDP, around $182 billion and they're growing rapidly. It's essential that Australia's copyright policy framework adequately support innovation and growth of these sectors in the digital environment.”
Given how much Australia’s burdensome and confusing copyright law has held up innovation, EFF is encouraged by the fact that copyright reform is being considered and debated in the public sphere.
But more broadly, this is just the latest evidence disproving a major talking point used by the MPAA and RIAA anytime copyright laws come up for a vote: that tough copyright laws are good for the economy. During the SOPA debate, organizations such as the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA) claimed over and over again that restrictive laws are needed to save and create jobs. Yet the Australian study confirms similar research done by CCIA in the US, showing how important fair use exceptions are to the economy. In fact, fair use accounted “for more than $4.5 trillion in annual revenue” in the US, exceeding the economic benefits of copyright laws themselves.
Unfortunately, this new evidence probably won’t stop the MPAA and RIAA from continuing to peddle misinformation about the economics of copyright law in Australia, the US, or elsewhere. Currently, the MPAA is distributing materials to members of the US Congress—perhaps in another attempt to gin up support for SOPA 2.0—extolling how new, restrictive laws will allegedly help them create jobs.
Since the economic numbers don’t add up, advocates for draconian copyright laws have resorted to other misleading arguments. For example, this week a Fox News editorial erroneously argued that intellectual property protection is a “forgotten” constitutional right and that “it is the obligation” of Congress to pass laws like SOPA to protect rightsholders. Of course, the problem with SOPA was that it was written so broadly it would’ve ended up censoring millions of Americans who never even thought about copyright, but that’s beside the point. The US Constitution does mention intellectual property, but not in the context of an individual right or a mandate to Congress. Specifically, it says:
Congress shall have power . . . To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.
A plain reading of the clause indicates Congress has the authority to use copyright law to promote creativity—if it so chooses. There’s no mandate for Congress to pass any copyright law that comes its way, and there’s no clause guaranteeing the rights of movie studios and record labels to maximize their profits. Meanwhile, creativity—far from being stifled without more copyright laws on the books—is currently thriving. There has been a marked increase in the number of movies, music, and books produced over the last decade, as a comprehensive study done by CCIA and Techdirt’s Mike Masnick shows.
So while huge legacy corporations may find it harder to keep a grip on their market share, it’s not because people have stopped creating and selling art. It’s quite the opposite: they’re creating more by incorporating fair use, cutting out the middlemen, and bringing their art directly to their fans through the Internet.
Unfortunately, all too often copyright maximalists, like the author of the Fox News editorial, put forth the idea that “lawlessness” prevails on the Internet, even though in the US and abroad there are many copyright laws already on the books. In the US alone, Congress has passed fifteen separate laws in the last thirty years strengthening the powers of rightsholders.
Most notably, the US DMCA gives copyright holders the power to force websites to take down allegedly infringing material. In fact, the DMCA gives disproportionate power to rightsholders, often leading to abuse and, in turn, to the censorship of material that is clearly protected free speech. As Techdirt noted, Australia’s outdated and burdensome copyright system “is ill-equipped to cope with key Internet activities like search and indexing, caching and hosting, since they all involve incidental copying.”
Both countries would be better served by evidence-based policy that promoted the intended balance of copyright. After decades of unbalanced legislation, the evidence is clear, and points to relaxing copyright restrictions, not strengthening them.
For more on the debate over the economics of copyright see here and here.