While copyright owners claim that they need anti-circumvention laws to address copyright infringement, twelve years’ experience with the U.S. DMCA provisions demonstrates that overbroad digital locks laws can wreak havoc on lawful, non-infringing activities, stifle free speech and scientific research, and harm innovation and competition. The problem is that overbroad anti-circumvention bans can override the exceptions and limitations in national copyright laws, restricting or eliminating perfectly lawful, non-infringing uses of copyrighted works.
Small wonder that a broad range of Canadian groups, including librarians, content creators, and rights advocates, have come out against the unforgiving nature of these provisions in Canada’s Bill C-11. We’ve been documenting the collateral damage caused by the U.S.’s overbroad DMCA provisions for more than ten years in our Unintended Consequences report [pdf]. Now Canada has a chance to avoid repeating the U.S.’s mistakes.
Here are some of the areas in which Bill C-11’s digital locks provisions are likely to be problematic:
Like the U.S. DMCA, C-11 prohibits the act of circumventing a TPM and bans the tools, devices, and services that citizens would need in order to circumvent for non-infringing purposes. As a result, the digital locks provisions in C-11 will supersede all user rights found in the Copyright Act except in a few narrowly defined cases. This lets rights holders turn legal acts into illegal ones with the simple addition of a lock. In the U.S., for instance, many consumers want to copy DVDs they’ve purchased onto their mobile devices for personal use, but the DMCA outlaws the tools needed to break DVDs’ DRM. Major movie studios have used the DMCA to sue numerous DVD-ripping software makers and remove those tools from the marketplace.
Though C-11 has some exemptions for circumvention for certain purposes, they do not go far enough to accommodate all the legitimate and non-infringing reasons for breaking these locks. The existing exceptions are far too narrow and had become outdated even before the legislation could be passed. An exception exists, for example, permitting users to bypass locks on mobile devices (e.g. cell phones) in order to change service providers. Bypassing those same locks in order to install an unapproved application (jailbreaking), however, might be illegal. A general process exists for adding new exceptions, but relying on enumerated examples means, at best, that the exceptions will always be ten steps behind legitimate uses. This lag in recognizing new services and uses could well stifle innovative products out of existence.
Innovation and Creativity
Digital locks provisions can make it illegal for innovators, creators, and artists to take apart a product to understand how it works or to transform content. People learn to create new products and works of art by taking things apart. With the growing importance of user-generated content, we are becoming a “society of remixers and tinkerers”. Even though remixing and reusing are often 'fair use/dealing' [pdf] and not a violation of copyright, C-11 would prevent individuals from circumventing a digital lock in order to create a remix. Individuals must be free to use what they have to experiment with their ideas and engage their curiosity.
C-11 includes an exception [pdf] that allows circumvention for the purposes of protecting personal information, but most of the technologies that would allow people to do so remain illegal. Someone can develop a method or device to break a digital lock and reach the data, but that method cannot be shared. Such an exception therefore does little to give users and consumers the ability to protect themselves from possible privacy violations by the companies deploying the locks.
Anti-circumvention measures directly impact [pdf] scholarly research and independent analysis. When applying for grants, for example, researchers may lose funding, or never receive it, if legal review of their project finds that it necessitates the circumvention of digital locks. Additionally, while C-11 contains exemptions for security researchers, it still requires them to inform the maker of the target product of their plans or obtain prior consent from the service provider. This consent requirement can force researchers to sacrifice discretion or deter their efforts altogether.
Education and Access to Knowledge
Libraries and schools will be limited in their ability to provide content to their members and students. Libraries will be prevented from backing up resources and creating digital archives, while educational institutions will be barred by the presence of a lock from using many education-specific copyright exceptions. Working around these restrictions drains institutional resources and harms access to crucial learning materials.
Visual and Hearing Impaired
Unlike the U.S. DMCA, which has no permanent exception for circumvention to provide access to the visually impaired, Canada’s C-11 includes a limited exception permitting circumvention for those with perceptual disabilities. But just as the privacy-protection exception essentially renders itself moot, the law could legalize circumvention while keeping the distribution or sharing of most circumvention tools illegal. Those with perceptual disabilities who want access to their content would need the know-how to build circumvention tools themselves, or else face charges for using or distributing tools that allow other impaired individuals to do the same.
It doesn’t have to be this way
There are several changes to C-11’s digital locks provisions that would go a long way toward avoiding the pitfalls of the U.S. experience while still allowing Canada to implement a law that complies with the WIPO treaties. First, C-11 should make it clear that it’s lawful to circumvent TPMs to make non-infringing uses of works. In other words, legal protection for copyright holders’ TPMs should follow the scope of national copyright laws but go no further. Second, if it regulates the manufacture and distribution of circumvention tools, C-11 should permit trusted third-party intermediaries, such as educational institutions and libraries, to be authorized to circumvent in order to give effect to existing copyright exceptions, as New Zealand’s TPM law does. Third, C-11 should include provisions to prevent TPMs from being used for non-copyright-related geographic market segmentation and to address possible anti-competitive misuse of TPMs, as Australia’s revised TPM law does.
If you live in Canada or are a Canadian citizen, you can take the following action:
In countries across the world, copyright content industries have been lobbying for laws that would break the Internet in the name of copyright enforcement. Such regulations could terminate user access to the Internet on a mere allegation of copyright infringement, enact website-blocking powers that would make parts of the global Internet disappear from view, and impose digital locks laws that stifle online innovation and restrict the ability to use lawfully acquired digital content. Canada is the latest target. With Canada’s Copyright Modernization Act (Bill C-11) returning to committee in the Canadian Parliament, now is the time for Canadian netizens to take action to protect the free and open Internet by signing the petition jointly supported by OpenMedia.ca and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC).
Misleadingly called the “Canadian SOPA” in many online circles, this most recent iteration of the bill in fact contains positive provisions that protect consumer rights, place limits on penalties for less serious infringements, and include a more balanced approach to Internet intermediary liability. However, U.S. content industry lobbyists are reportedly pushing for SOPA-like changes to the bill, which would be dangerous for online free speech and innovation. These would include new powers to force intermediaries to block websites and increased liability for online intermediaries such as Google. Even without the SOPA-like amendments, there is much cause for concern in C-11. The major problem lies in its digital locks provisions, which would undermine the other more positive aspects of the bill by strengthening the power of rightsholders to limit user rights over content.
Digital locks (sometimes called Digital Rights Management [DRM] or technological protection measures [TPMs]) are measures added to copyrighted products to restrict copying and other uses of works and to control who can access the work. These include the Content Scramble System on DVDs, copy protection on CDs and some digital music downloads, and digital controls on e-books, videogames and software. Some countries’ laws prohibit the breaking or bypassing of these digital locks, and may also restrict or ban the creation and distribution of the circumvention tools, devices and services that would be needed to make lawful uses of technologically protected copyrighted works.
WIPO’s 1996 Copyright Treaty and Performances and Phonograms Treaty require signatories to provide “adequate legal protection and effective legal remedies against the circumvention of effective technological measures” used to protect copyright and related rights in works. While Canada has not yet ratified these treaties and is not presently required to implement these obligations, it would be bound to do so under the terms of the Anti-Counterfeiting Trade Agreement (ACTA). It has, however, been looking at ways to do so that protect citizens’ rights of free expression and innovation.
The WIPO treaties leave member countries much discretion in how to implement these obligations. The U.S. content lobby, however, continues to pressure countries around the world to implement the same type of digital locks provisions contained in the Digital Millennium Copyright Act (DMCA). This is despite the fact that U.S. government officials have acknowledged the DMCA goes beyond what was required to comply with the treaties, and even while other countries have chosen different approaches to implementation. Over the years, Canada has tried to implement the TPM obligations differently from the U.S. through successive copyright reform bills (C-60, C-61, C-32, and now C-11), but U.S. copyright holders have not been satisfied with anything short of a DMCA-style implementation. As a result, the U.S. Trade Representative (USTR) has listed Canada, along with other noncompliant nations, on the Priority Watch List in the USTR’s last three annual Special 301 reports, a process that could open the door to trade sanctions.
Tens of thousands of Canadians and dozens of interest groups have come out against any bill that would put such overbroad controls on the use of content. Only the content industries, many of which are U.S.-based, benefit from these laws. As we will outline in a separate post, digital locks provisions affect everyone. As content industry lobbyists continue to try to export these harmful policies and shove them down the throats of governments internationally, it is crucial for concerned citizens to be vocal in their opposition to such overreaching legislative proposals.
If you live in Canada or are a Canadian citizen, you can take the following action:
For the hundreds of thousands of users searching for that special someone through one of the largest free online dating sites, the love fest may be coming to an end. OkCupid is putting users’ privacy in danger by failing to support secure access to its entire website through HTTPS. Every OkCupid email, chat session, search, clicked link, page viewed, and username is transmitted over the Internet in unencrypted plaintext, where it can be intercepted and read by anyone on the network.
Screen shot from OkCupid Help Forum. While passwords after initial signup aren’t sent in the clear, there are other severe security problems with OkCupid.com.
“HTTPS” is standard web encryption that ensures information sent and received online is encrypted rather than transmitted as plaintext. OkCupid does not enable HTTPS across the site, which means that while OkCupid doesn’t leak passwords entered during login, it does leak a lot of other sensitive data. OkCupid’s failure to offer HTTPS support potentially exposes:
Email content from within OkCupid
Content of online chats on OkCupid
Searches conducted on the site
Every unique page viewed, and thus all profiles looked at
Content of “hidden” questions: questions a user responds to in order to improve match results but then marks as “private” so others cannot see the response
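The exposure described above is easy to picture: a plain-HTTP request is just readable bytes on the wire. Here is a minimal Python sketch of what an eavesdropper captures; the hostname, path, and cookie value are invented for illustration, not OkCupid’s actual endpoints:

```python
# Sketch of a plain-HTTP request as it crosses the network.
# Hostname, path, and cookie are hypothetical placeholders.
raw_request = (
    "GET /match?keywords=atheist+vegan HTTP/1.1\r\n"
    "Host: www.example-dating-site.com\r\n"
    "Cookie: session=4f3c2a91\r\n"
    "\r\n"
).encode("ascii")

# Without HTTPS, anyone on the network path sees these exact bytes:
# the search terms and the session cookie are readable with no decryption.
print(raw_request.decode("ascii"))
```

With HTTPS, the same bytes would be wrapped inside a TLS connection, so an observer would learn only which host was contacted, not the search terms or the cookie.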
Failing to offer HTTPS is particularly unfortunate because OkCupid offers a variety of privacy-enhancing ways of limiting who can access your profile. For example, users who mark their sexual orientation as gay or bisexual may opt not to allow their profile to be seen by straight individuals. This feature might be useful for someone who is looking to date a same-sex partner but is not openly queer among others in their community. Unfortunately, your profile data, including the fact that you identify as gay and don’t wish to be seen by straight people, is transmitted over plaintext.
OkCupid provides privacy controls to limit who sees your profile, including limiting whether heterosexual users can see your profile.
Other privacy-enhancing features such as limiting who can view your profile (to everyone, members of OkCupid, your favorites, or no one at all) can be circumvented easily by someone monitoring your plaintext communication with OkCupid.
It’s even worse than you imagined.
The failure to encrypt your communications exposes sensitive data in online profiles to eavesdroppers, who could snoop on the content of your profile to learn about sensitive topics like religious and political beliefs, drug use, and sexual practices. The failure to encrypt also exposes the HTTP cookie that’s used to authenticate you to the site, which means that the eavesdropper can actually take over your account and impersonate you, even without knowing your password.
OkCupid lets users answer questions to help them improve their matches. Users are given privacy controls to answer questions "privately"—though the data is still transmitted in plaintext.
Although security experts have warned about this problem for over a decade, this attack was sometimes dismissed as theoretical or difficult to pull off. But all that changed with the release of Firesheep, a simple tool that can be used on shared wifi networks to take over web-based accounts on non-HTTPS sites. This type of eavesdropping is trivial for someone with even basic skills.
Firesheep lets an attacker take over an account by stealing a cookie without actually knowing the account password. For example, when you sit in a coffee shop using a shared network and log into a site that does not have HTTPS enabled, someone using the same network could monitor what you are doing and even impersonate you.
Because OkCupid’s login form is also delivered over insecure HTTP, a more sophisticated attacker could also tamper with the login form itself, replacing it with a version that disables HTTPS entirely in order to learn the user’s password.
Major sites like Facebook and Twitter have come to appreciate these threats and offered meaningful, comprehensive HTTPS support to protect their users. These actions are in alignment with former Federal Trade Commissioner Pamela Jones Harbour’s call for websites to adopt HTTPS. Unfortunately, dating sites like OkCupid are lagging behind—way behind.
Tell OkCupid to protect your privacy
Many avid fans of OkCupid want to let the service know that they shouldn’t cut corners when it comes to security. Send OkCupid a message here.
Individual emails from users can be effective in prompting a website to improve their security practices, as shown by @MayMaym's successful campaign to get HTTPS support on Fetlife.com.
[Chart: dating sites, including Plenty of Fish and Adult Friend Finder, rated on four criteria: HTTPS by default; free of mixed content; uses secure cookies or HSTS; deletes data after closing an account.]
Please read below for more details about the sites' policies on deleting data after an account is closed.
HTTPS by default
HTTPS is standard web encryption–often signified by a closed lock in one corner of your browser and ubiquitous on sites that allow financial transactions. As you can see, most of the dating sites we examined fail to properly secure their site using HTTPS by default. Some sites protect login credentials using HTTPS, but that’s generally where the protection ends. This means individuals who use these sites can be vulnerable to eavesdroppers when they use shared networks, as is typical in a coffee shop or library. Using free software such as Wireshark, an eavesdropper can see what data is being transmitted in plaintext. This is particularly egregious due to the sensitive nature of information posted on an online dating site–from sexual orientation to political affiliation to what items are searched for and what profiles are viewed.
In our chart, we gave a heart to the companies that employ HTTPS by default and an X to the companies that don’t. We were shocked to find that only one site in our study, Zoosk, uses HTTPS by default.
Free of mixed content
An HTTPS page that loads some of its resources (such as images or scripts) over insecure HTTP has “mixed content,” which can undermine the protection HTTPS would otherwise provide. We gave a heart to the websites that keep their HTTPS pages free of mixed content and an X to the websites that don’t.
Uses secure cookies or HSTS
For sites that require users to log in, the site may set a cookie in your browser containing authentication information that helps the site recognize that requests from your browser are allowed to access information in your account. That’s why when you return to a site like OkCupid, you might find yourself logged in without having to provide your password again.
If the site uses HTTPS, the correct security practice is to mark these cookies "secure," which prevents them from being sent to a non-HTTPS page, even at the same URL. If the cookies are not "secure," an attacker can trick your browser into going to a fake non-HTTPS page (or just wait for you to go to a real non-HTTPS part of the site, like its homepage). Then when your browser sends the cookies, the eavesdropper can record and then use them to take over your session with the site.
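As a sketch of the practice described above, Python’s standard http.cookies module can mark a session cookie Secure (and HttpOnly); the cookie name and value below are made up:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "4f3c2a91"
cookie["session"]["secure"] = True    # only ever sent over HTTPS
cookie["session"]["httponly"] = True  # not readable by page JavaScript

# The string a server would emit after "Set-Cookie: " in its response.
header_value = cookie["session"].OutputString()
print(header_value)
```

A cookie set this way is withheld by the browser on any non-HTTPS request, which is exactly what defeats the downgrade trick described above.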
Session hijacking was once (wrongly) dismissed as a sophisticated attack; however, Firesheep, a straightforward and freely available tool, makes this type of attack simple even for individuals with only basic skills. Any site that provides insecure cookies at login could be vulnerable to session hijacking.
HSTS (HTTP Strict Transport Security) is a new standard by which a website can request that browsers always use HTTPS when communicating with it. The user's browser will remember this request and automatically use HTTPS when connecting to the site in the future, even if the user didn't specifically ask for it.
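Concretely, HSTS is just a response header sent over HTTPS. A small sketch of building one follows; the six-month max-age is an arbitrary example, not a value recommended by any site discussed here:

```python
def hsts_header(max_age_days: int, include_subdomains: bool = True) -> tuple:
    """Build an HTTP Strict Transport Security response header pair.

    max-age tells the browser how long (in seconds) to force HTTPS
    for this host before the promise expires.
    """
    value = f"max-age={max_age_days * 86400}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

# A site promising six months of HTTPS-only access for itself and subdomains:
print(hsts_header(180))
```

Once a browser has seen this header, even typing a plain http:// URL for the site is silently upgraded to https://, closing the window in which cookies could leak.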
We gave a heart to the websites that use secure cookies or HSTS, and an X to the websites that don’t.
Delete data after closing account
Here are the details you need to know about each dating service's policies. We have individually contacted each of the companies listed below to ask them to clarify their policies on deleting data after an account is closed; we’ll update this chart if we learn more from the companies.
Note that this text is taken from their policies as of the publication of this post, and these policies can change at any time!
Withdrawing Your Consent. You may notify us at any time that you wish to withdraw or change your consent to our use and disclosure of your information. We will accommodate your request subject to legal and contractual restrictions.
Click on the “unsubscribe” link on the bottom of the e-mail;
Send mail to the following postal address letting us know which promotional e-mails you wish to opt-out of: eHarmony, Inc. P.O. Box 3640 Santa Monica, CA 90408 USA
For the eHarmony Singles service, select our Help link from your account home page and search our FAQ's to find the answer you are looking for, or send us an e-mail and our Customer Care agents will be happy to assist you; or
For any services that allow you to control which e-mails you receive, go to the e-mail settings page from your account home page, and un-check the undesired promotions.
Millions of people are using online dating sites to search for love or connection, but users should beware: many online dating sites are taking short cuts in safeguarding the privacy and security of users. Whether it’s due to counter-intuitive privacy settings or serious security flaws, users of online dating profiles risk their privacy and security every day. Here are six sobering facts about online dating services and a few suggestions for routing around the privacy pitfalls.
1. Your dating profile—including your photos—can hang around long after you’ve moved on. Whether you signed up on a lark or maintained an active profile for several years, your online dating profile can lurk around long after you’ve cancelled the account. In fact, dating sites have an incentive to maintain your information—what if things don’t work out and you want to reactivate your profile in a few months? But having your data hanging around on a company’s servers, even if it isn’t being actively served to the web at large, raises a host of privacy issues. The most pressing concern is that information about you may be exposed to future legal requests, whether in a criminal investigation, a divorce case, or even a legal tussle with an insurance company.
Photos in particular can linger long after you’ve deleted them or closed your account, because many large websites host user-uploaded photos on Content Delivery Networks; in short, the photos sit on an outside company’s servers. As Joseph Bonneau explained, the main website provides an obfuscated URL for the photo to anyone it deems has permission to view it. But in Bonneau’s experiment with 16 popular websites, removing the photo from the main website didn't always remove it from the Content Delivery Network; in those cases, anyone who still had the destination URL could view the photo. This means that Content Delivery Networks can maintain caches of sensitive photos even after users “delete” them, leaving photos vulnerable to being rediscovered or even hacked in the future.
2. Gaping security holes still riddle popular mobile dating sites. In January, an Australian hacker exploited a security flaw in Grindr, the mobile app that allows gay and questioning men to find sexual partners nearby through the use of GPS technology. The vulnerability allows an attacker to impersonate another user, send messages on his behalf, access sensitive data like photos and messages, and even view passwords. Grindr acknowledged the vulnerability on January 20th and promised a mandatory update to its software “over the next few days.” To date, Grindr's blog and Twitter profile do not mention a security fix for the flaw. While there haven’t been reports of a hack of the straight-themed sister app, Blendr, security experts speculate that it suffers from a similar vulnerability.
What you can do about it: For right now, we have to agree with Sophos security: if you’ve got a Grindr or Blendr account, you should close it at least until the security vulnerability is addressed; then keep an eye on the Grindr blog for news of a security update.
3. Your profile is indexed by Google. While this isn’t the case for every online dating site, OkCupid profiles are public by default and indexed by Google. It’s a simple privacy setting, but it can trip up even advanced users, as Wikileaks' Editor-in-Chief Julian Assange learned last year when his publicly-accessible OkCupid profile was discovered. Even something as small as a unique turn of phrase could show up in search results and bring casual visitors to your page.
What you can do about it: Some people don’t mind having an online dating site publicly indexed and searchable, but if you find the thought disquieting, then dig into your privacy settings and make sure that your profile is only viewable to other logged-in users on the site. It’s good to familiarize yourself with the other available privacy settings regardless of which site you are using.
4. Your pictures can identify you. Photo identification services like TinEye and Google Image Search make it a trivial matter to re-identify photos that you’ve posted online. Users hoping to create a barrier between their real identities and their online dating profiles might use strategies such as pseudonyms and misleading information in a profile to obfuscate their identity. However, just changing your name and a few facts about your life may not be enough. If you use a photo on your dating site that can be associated with one of your other online accounts—for example, if it had previously been shared on your Facebook profile or LinkedIn profile – then your real identity could be easily discovered.
What you can do about it: Face it (no pun intended): there are a number of ways your online dating profile can be connected to your real identity, especially if you have a robust online life. Photos are a particular vulnerability. Before uploading a photo, consider whether you’ve used it in other contexts. Try searching for the image using TinEye and Google Image Search before uploading it. And be aware that search technology and facial recognition technology is rapidly evolving. At least one study suggests that it’s possible that even photos you have never uploaded before could be used to figure out your identity. So think hard about how you’d feel if a potential employer or acquaintance found personal data about you on a dating site. This might be a particular concern for individuals who use niche dating sites, such as HIV-positive or queer dating sites.
5. Your data is helping online marketers sell you stuff. The cynics among us might think this is the primary purpose of an online dating site. The operators of these sites cull vast amounts of data from users (age, interests, ethnicity, religion, etc.), then package it up and lend or sell the data to online marketers or affiliates. Often, this transaction is gift-wrapped with the promise that your individual data is “anonymized” or sold in aggregate form, yet users should be wary of such promises. Using data from social networking sites sold to advertisers, Stanford researcher Arvind Narayanan demonstrated that it’s hard to truly anonymize data before it’s packaged and sold. In addition, last October researcher Jonathan Mayer discovered that OkCupid was actually leaking1 personal data to some of its marketing partners. Information such as age, drug use, drinking frequency, ethnicity, gender, income, relationship status, religion and more was leaked to online advertiser Lotame.
What you can do about it: You should consider contacting the sites you use to clarify their practices and letting them know your concerns. If you are dissatisfied with a company's practices with sharing data, you might also consider filing a complaint with the Privacy Rights Clearinghouse's Online Complaint Center. Remember, part of what helps companies change practices is public interest in an issue, so blog posts and public discussion can help push companies to adopt better practices.
6. HTTPS support is a wreck on many of the popular online dating sites, meaning you risk exposing your browsing history, messages, and much more when you use them. Unfortunately, our recent survey of major online dating sites found that most of them were not properly implementing HTTPS. Some online dating sites offer partial support for HTTPS, and some offer none at all. This leaves user data exposed. For example, when a user is on a shared network such as a library or coffee shop, she may be exposing sensitive data such as her username, chat messages, the pages she views (and thus the profiles she looks at), her answers to questions, and more to an eavesdropper monitoring the wireless connection. Even worse, poor security practices leave her vulnerable to having her entire account taken over by an attacker. What’s more, since the advent of Firesheep, an attacker doesn’t need any particular skill to perpetrate such attacks. See our in-depth post on OkCupid to learn more.
What you can do about it: Start protecting yourself immediately by installing HTTPS Everywhere, a Firefox addon created and maintained jointly by EFF and the Tor Project. When you use Firefox, HTTPS Everywhere will automatically change URLs from HTTP to HTTPS on over a thousand sites. As more dating sites begin to provide support for HTTPS, we’ll expand the ruleset for HTTPS Everywhere to include those sites so you’ll be better protected.
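Under the hood, HTTPS Everywhere’s rulesets are XML files of regex from/to rewrite pairs applied to each URL before the request leaves the browser. The effect of one such rule can be sketched in a few lines of Python; this pattern is a simplified illustration, not the extension’s actual OkCupid ruleset:

```python
import re

# Simplified stand-in for one HTTPS Everywhere rewrite rule.
RULE_FROM = r"^http://(www\.)?okcupid\.com/"
RULE_TO = "https://www.okcupid.com/"

def rewrite(url: str) -> str:
    """Rewrite a matching HTTP URL to its HTTPS form; leave others untouched."""
    return re.sub(RULE_FROM, RULE_TO, url)

print(rewrite("http://okcupid.com/profile/example"))  # upgraded to HTTPS
print(rewrite("https://www.okcupid.com/home"))        # already secure, unchanged
```

Because the rewrite happens before the request is sent, the plaintext HTTP request that an eavesdropper could read never leaves the machine at all.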
EFF is individually contacting online dating sites to get them to step up their security practices, but we could use your help. Please send an email to OkCupid to tell them to safeguard user privacy and security.
1. Mayer clarified: "Leakage, in common parlance, implies unintentionality. In computer security, leakage is a term of art for an information flow—some instances of leakage are entirely intentional." Learn more: http://cyberlaw.stanford.edu/node/6740
Earlier this week, a Singapore-based iOS software developer made a startling discovery while working with the popular social-networking app Path: in the course of every new account creation, Path uploads the new user’s entire iPhone address book to their servers. To its credit, Path responded quickly, with its CEO and co-founder Dave Morin explaining that they use the address book data for “friend-finding” and “nothing more.” He also asserted that this technique was an industry standard for social iOS apps.
That response wasn’t enough to contain the firestorm of angry user reactions. Within a day, news of the address book upload had spread, and researchers discovered evidence of similar behavior by other apps, like the photo-sharing service Hipster. Path publicly apologized and promised to delete the address book data stored on their servers, and to begin using an opt-in system immediately. Hipster has also apologized, and plans to host an “Application Privacy Summit” at their office this month.
In their apology, Path acknowledged that the way they designed the “Add Friends” feature was wrong. As they noted, they could have generated a “hash” of the e-mail addresses to provide a unique identifier, allowing the matching necessary for friend finding without the identifier being convertible back into the original address. Hopefully they will adopt this protection soon.
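A minimal sketch of that hashing approach with Python’s standard hashlib follows. The lowercasing step is our assumption about how matching would work, and note that because e-mail addresses are guessable, a bare hash raises an attacker’s cost rather than making reversal truly impossible:

```python
import hashlib

def contact_token(email: str) -> str:
    """Derive a matching token from an address instead of uploading it raw."""
    normalized = email.strip().lower()  # assumed normalization for matching
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Equal addresses yield equal tokens, so a server can find mutual contacts
# without ever receiving the addresses themselves.
assert contact_token("Alice@Example.com") == contact_token("alice@example.com")
assert contact_token("alice@example.com") != contact_token("bob@example.com")
```

Salting or keying the hash on the server side would further harden the scheme against dictionary attacks on guessable addresses.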
They also could have provided reasonable disclosure of the information they were collecting, but even that is not enough — applications on Android OS allow granular permission control, for example, but many users simply click through the installation process. Users need information presented in a clear and understandable manner that allows them to make intelligent choices.
Setting aside the question of whether Apple should even allow applications free access to sensitive user data like contact information, the route Path has now chosen — an affirmative opt-in process that explains what Path will collect — is certainly a good start.
Regardless of whether practices like checking addresses for friend-finding are “industry standard” in social apps, users expect and deserve respect from the providers of the services they use, and that means protecting the personal data needed to use the service. Hiding behind the rationale that a certain functionality is commonplace among similar apps is not sufficient; the process must be proper, whether it’s the uploading of data in the first place or its long-term storage.
Path is taking the right steps to recover from a public relations disaster, but providers of social services should take note: these problems are avoidable. Innovative products and rapid development are great, but service providers need to respect their users or be prepared to face the fallout.
How India is losing its footing on free expression.
The world’s biggest democracy is a formidable power in the IT sector. With software exports comprising approximately ten percent of India’s total GDP and a technology sector that employs more than 2.5 million people, India is poised to become a global industry leader. Over the past ten years, India has also experienced a rapid increase in Internet penetration, growing from 5.5 million users in 2000 to 61.3 million in 2009, and government initiatives have brought the Internet to rural areas by setting up cybercafés, in hopes of closing the country’s digital divide.
Despite such growth, or perhaps because of it, India has struggled to strike a balance between its security concerns and online freedom. As we’ve previously noted, India has been known to censor online content, typically under the guise of national security or obscenity. Though the country’s constitution guarantees the right to freedom of expression, the State is given the right to impose "reasonable restrictions ... in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence."1
As such, the 2000 Information Technology Act allows for the blocking of certain content online. In 2003, the Indian government created the Indian Computer Emergency Response Team (CERT-IN), which can issue orders to block websites. Another provision, Section 144 of the Code of Criminal Procedure, allows police commissioners to identify and order the blocking of material that poses a threat or nuisance to society.
In recent years, online censorship has become part of national discourse in India. In particular, a set of regulations that went into effect in April 2011--the 'Intermediary Guidelines' Rules [pdf] and Cyber Cafe Rules [pdf]--have inspired new dialogue in India around limits to speech. The broad Intermediary Guidelines give power to citizens to submit complaints, upon which intermediaries are required to take down offensive content within thirty-six hours. With no transparency requirement, says Pranesh Prakash of the Centre for Internet & Society--which tested the regulations by submitting "frivolous" requests--"If we hadn't kept track [of their fulfilled takedown requests], it would be as though that content never existed."
Google India Removes Content
On Monday, reports emerged that Google India had removed web pages deemed offensive to Indian political and religious leaders in order to comply with a court case filed by journalist Vinay Rai, who demands regulation of "offensive and objectionable" material. Rai's case followed a widely-publicized December meeting in which Indian Telecommunications Minister Kapil Sibal met with top executives of Internet companies and social media sites in an attempt to compel them to pro-actively filter certain content. Though the companies stated at the time that such a move would be impossible, a January Delhi High Court decision issued by Justice Suresh Kait has apparently forced their hands. Issuing his decision, Justice Kait told lawyers for several of the companies that, unless they develop the capability to regulate "offensive and objectionable" material on their sites, the Indian government would block their websites "like China [does]."
The Delhi court gave Google--as well as 21 other websites--two weeks to present further plans for policing their networks, according to an AP report. Facebook, Yahoo, and Microsoft have reportedly questioned their inclusion in the case on the basis that no specific complaints have been presented against them.
In response to the case, Communications Minister Sachin Pilot claimed that "there is no question of any censorship," arguing that foreign companies must be responsible and "operate within the laws of the country."
A Shattered Web
As we have written before, when a company has employees in a given country it has little choice when faced with a legal order. Apart from leaving the country altogether, the company can refuse to comply and put its employees at risk of arrest (or worse), or it can comply with the order and risk backlash from users. Censorship therefore becomes a necessary tradeoff a company must make in order to continue its operations, a chilling effect of choosing to operate somewhere where freedom of expression is under threat.
Many companies, including Google and Twitter, have developed mechanisms by which they can locally censor content. This means that when companies comply with legal orders, content is removed on a country-by-country basis, as opposed to being taken down across the entire site. EFF views this as a good thing in that it minimizes censorship, though with the caveat that transparency in such decisions is vital.
Google, for its part, publishes a transparency report, in which the company shares information about requests for user data and content removals. With respect to India, the company reports that, from January to June 2011, it declined the majority of YouTube takedown requests, but "locally restricted videos that appeared to violate local laws prohibiting speech that could incite enmity between communities." The report shows that Google complied with 51% of the 68 requests it received during that period.2 Twitter has also vowed to be transparent in its per-country takedowns, reporting requests to the Chilling Effects Clearinghouse. Other companies, such as Facebook, have not offered transparency reports to the public.
These mechanisms for transparency are vital to all citizens' ability to seek, receive, and impart information and ideas, regardless of borders. Despite the transparency, EFF has concerns that these localized content removals are leading to a fractured Web, in which different countries have different views of the Internet. To that end, we encourage companies considering opening foreign offices to think carefully about a given country's track record on freedom of expression.
As for India, we believe that by placing such pervasive restrictions on free expression, the Indian government is losing an opportunity to be an important part of the digital revolution. The inhibition of free speech to such a degree poses a real threat to India's once-thriving democracy. As UN Special Rapporteur on freedom of expression Frank LaRue stated last year in his widely-cited report,3 "By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an 'enabler' of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole."
EFF calls upon the government of India to respect the principles of free expression laid out in Article 19 of the Universal Declaration of Human Rights and halt further regression of rights and freedoms.
1. Article 19 of The Constitution of India, http://lawmin.nic.in/coi.htm [PDF]
2. The Centre for Internet & Society did an analysis comparing the Google Transparency report and reports from the Indian Department of information on website blocking, which demonstrated a lack of transparency on the part of the Indian authorities: http://www.cis-india.org/internet-governance/blog/analysis-dit-response-2nd-rti-blocking
3. Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, www2.ohchr.org/english/bodies/hrcouncil/.../A.HRC.17.27_en.pdf [PDF]
We really have to wonder when the message is going to sink in. On January 18, millions of Internet users spoke out together in one of the most profound and effective uses of technology to organize political opposition in U.S. history, sending a clear message to Congress that voters will not tolerate crippling of the Internet. But big content remains tone deaf to this chorus of Internet users.
This morning, the New York Times published a lengthy screed from Cary Sherman, president of the Recording Industry Association of America, complaining about how “Google and Wikipedia” got in the way of efforts to ram through the Internet blacklist bills, never mind the massive collateral damage to Internet security, expression, and innovation those bills would have caused. Techdirt's Mike Masnick has a great point-by-point response (noting, among other things, the profound hypocrisy of SOPA/PIPA proponents claiming the tide of opposition to the bills was based solely on “misinformation,” given that they have been feeding Congress and the public overblown statistics for years).
But it seems to us that the op-ed's really unfortunate message is that Hollywood still thinks the way forward is for a few executives to sit down together and make a deal. He calls on “the companies” that opposed the bills to come up with “constructive alternatives” and then have a "fact-based conversation" with the entertainment industries. MPAA chair Chris Dodd made a similar call a few weeks ago. Even New York Times op-ed columnist Bill Keller seems to think this comes down to a few "players": in his own piece on the battle against the bills, he seemed to assume that Wikipedia's Jimmy Wales is the only person who matters on the other side of this debate.
That’s precisely the wrong approach. It was great to see technology companies and platform hosts like Wikipedia stand up against SOPA and PIPA. But the people Hollywood most needs to consult now are the users of the Internet: the millions of people who have found their voice due, in part, to the emergence of technologies and platforms that allow them to speak to a bigger audience than ever before.
The truth is that a broad swath of public interest, consumer rights, and human rights groups were fighting these bills from the get-go, because we saw how they would harm users, not just technology companies and platforms. Due in part to the hard work of this coalition in raising public awareness, millions of those users saw that, too, and that’s why they contacted their Congressional representatives. We weren’t scared by rhetoric, we were scared by what the bills actually proposed, and we were really scared that the proponents didn’t seem to understand their own legislation.
Having succeeded in halting the runaway SOPA/PIPA train, Internet users don’t intend to just stand down and let a few tech companies, who need to worry about their bottom line along with the needs of users, or even crucial nonprofit organizations like Wikipedia, speak for everyone. Indeed, it’s pretty ironic, and telling, that Sherman’s piece points to the “six-strikes” deal big content made with ISPs last year as a model for “voluntary cooperation.” Users weren’t at the table when that deal was struck either, even though they’ll be stuck with much of the bill. If they had been, that deal could have been very different, and a lot more fair.
So, Cary and Chris and even Bill, tell you what: when you are ready to have a “fact-based conversation” with the folks who opposed the bill, let’s do it. But let’s include the users who are going to feel the real effects of attacks on the platforms and services that they rely on to create, innovate, and communicate.
Oh, and one more thing: if we’re really going to have a fact-based conversation, let's include the technologists who actually understand the collateral damage that can result when you interfere with Internet architecture, and the economic analysts who are developing real numbers based on hard data, not spin. Thanks.