Long before Cameroon or Ethiopia cut off their citizens from the internet, a small South Asian country was pioneering the practice. Back in 2004, then-President Maumoon Abdul Gayoom shut down internet access in the Maldives in the wake of protests against his government. The incident was a mere blip in the news, occurring just a few years after the country first came online.
Since then, the island country, which has fewer than 400,000 residents, has had a contentious relationship with the online world. In 2012, a popular blogger was stabbed in the throat and forced to leave the country; the response from government officials was tepid at best. Despite its sophisticated communications infrastructure and a high rate of internet penetration, the political environment in the Maldives has led many users to self-censor. Others have left the country, seeking greater freedom.
Four such activists living overseas found themselves under threat last week, after Maldivian police issued warrants for their arrest. The warrants, naming the bloggers and online writers Muzaffar ‘Muju’ Naeem, Hani Amir, Dr. Azra Naseem, and Aishath Velezinee, were announced in separate press releases and in public tweets from the police’s Twitter account. According to Global Voices, the arrest warrants “appear intended to silence voices that are critical of the government or deemed ‘irreligious.’”
I spoke to Muju Naeem, one of the threatened activists, via email. Naeem writes about politics, science and technology on his blog, but says the warrant for his arrest came after he spoke on a podcast about his secularism. He explained to me why he chooses to raise his voice despite the threats:
By all accounts I am definitely not the first Maldivian ex-Muslim in the country. And I am certainly not the first ex-Muslim to declare their disbelief publicly. But I maybe the first ex-Muslim who has chosen to be openly engaged in the Maldivian public discourse about civil and minority rights. I have chosen to do so knowing fully well the implications of my doing so. I hope to embolden others who are also in a similar situation to do the same. That's why I am speaking out. And when the police threaten to prosecute me, it is an attempt to silence the rights of minorities in the country in the name of religious harmony. There isn't going to be any harmony and social cohesion as long as minorities are oppressed.
Asked about the threats faced by Maldivian activists, Naeem wrote:
Writers in the Maldives always have self-censored themselves if they are based in the country. This is not to say that older generation of writers didn't express themselves freely. Former President Nasheed comes to mind as such a rebellious writer of the older generation. But since the initial democracy movement toppled the dictatorship of Gayoom,[1] the topics of discussions had evolved over the years moving on to much more complicated issues such as universal human rights, minority rights including LGBT rights, anti-radicalization, even criticism of Islam and the mullah class. Very few brave people like Hilath, Rilwan, and Yameen, dared to break away from the culture self-censorship to write about these important social issues while remaining in the country. And they all ended up paying hefty prices for the risks they took. Myself, Dr. Azra Naseem, Hani Amir - all three of us were issued summons recently, and threats of prosecution by the police - and others had chosen a safer distance to do our writing unlike those that I previously mentioned for reasons of safety. Maybe we didn't have as much courage as the others. But the threats have always been there and real. Now the problem has been compounded further by belligerent government impunity and the rise of violent religious extremists. The Maldives is a very dangerous place indeed for those who want to express themselves freely.
For countries like the Maldives, the Internet offers the possibility of free and uncensored communication, but that freedom can only work if speakers are safe from harm and from the threat of arbitrary, unjust legal prosecution. For more information on our support of jailed and threatened voices, see Offline.
[1] The corrupt government of autocrat Maumoon Abdul Gayoom ended in 2008 with free and fair elections, following a de-legitimization campaign by civic activists.
Today the Supreme Court announced it will review Carpenter v. United States, a case involving long-term, retrospective tracking of a person’s movements using information generated by his cell phone. This is very exciting news in the world of digital privacy. With Carpenter, the Court has an opportunity to continue its recent pattern of applying Fourth Amendment protections to sensitive digital data. It may also limit or even reevaluate the so-called “Third Party Doctrine,” which the government relies on to justify warrantless tracking and surveillance in a variety of contexts. EFF filed an amicus brief urging the Supreme Court to take Carpenter and a related case, so we’re hopeful the Court will rule in favor of strong constitutional protections.
The Fourth Amendment in an Age of Ubiquitous Connected Devices
The cell site cases are important because where we travel can reveal very sensitive details about our lives. As Justice Sotomayor noted in her concurring opinion in United States v. Jones, location information can provide the government with a “precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.”
The sheer volume of data the government had access to in Carpenter and Graham (a related Fourth Circuit case, United States v. Graham)—three to four months in Carpenter and more than seven months in Graham—far eclipsed the 28 days of surveillance at issue in Jones. And while CSLI—records of cell phone towers your phone connects to at a given time and date—is not currently as precise as information generated by the GPS tracking device in Jones, it is nevertheless revealing.
In Carpenter, the three to four months of CSLI data collected was precise enough for the government to convince a jury that the defendants were at each of the specific robbery locations. And, as we also noted in an amicus brief, the data was precise enough to place one of the defendants at church every Sunday. And in Graham, the 221 days’ worth of data officers obtained on the two defendants contained nearly 30,000 data points for each defendant—data that the ACLU discovered could reveal when the defendants were home and when they left home, when their travel patterns changed from the norm, and even that Mr. Graham’s wife was pregnant.
Despite the sensitive nature and sheer volume of this information, the appellate courts held that it wasn’t protected by the Fourth Amendment. The courts relied on a legal principle from two 1970s Supreme Court cases called the “Third Party Doctrine.” This principle holds that information you voluntarily share with someone else—whether that “someone else” is your bank (such as deposit and withdrawal information) or the phone company (the numbers you dial on your phone)—isn’t protected by the Fourth Amendment because you can’t expect that third party to keep the information secret.
We are not alone in believing the Third Party Doctrine is outdated. Justice Sotomayor has said the Third Party Doctrine “is ill suited to the digital age.” This is because we live in an era “in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” We use cell phones to stay in touch with friends and family on the go, rely on GPS mapping technologies to find our way about town, and wear Fitbits to try to improve our health. It’s impossible to use any of these technologies without sharing data with third parties, but choosing to rely on 21st-century technology shouldn’t mean we have to relinquish our constitutional rights.
These cases have important ramifications for the future, especially as our phones generate more—and more precise—location information every year, which is shared with third parties. Although Graham and Carpenter involve only data generated when a phone makes or receives a call, future cases will also rely on location data generated every time our phones connect with cell towers to send and receive any kind of data. As more Americans have switched to smartphones, the amount of data transferred over wireless networks has increased significantly—2,400% between 2010 and 2015 alone. This has led to an increase in the number of cell towers—especially in cities—and will only ensure that CSLI becomes more and more precise.
Other increasingly popular technologies will force courts to consider these issues as well. For example, as we adopt and rely on “Internet of Things” technologies like smart lightbulbs and clothing that tracks our emotions and communicates directly with retail stores, these sensors and devices may constantly generate and share data about us with little to no volition on our part, other than, perhaps, the initial decision to purchase or use the device.
We think it’s more than time the Supreme Court stepped in to clarify that the Fourth Amendment applies to all of this data, and we hope Carpenter is the case it chooses to do so.
In the early legal battles over network neutrality, Comcast challenged a Republican FCC's ability to enforce open Internet principles. In repeated legal filings, the company made clear that it did not believe the FCC could prevent providers from data discrimination unless it reclassified them as common carriers. After all, Comcast itself said in court that "nondiscrimination obligations are the hallmark of common carrier regulation" (page 12). In other words, Comcast was saying that the FCC couldn't impose nondiscrimination rules unless it reclassified Comcast as a common carrier - which is exactly what the FCC did in 2015 and exactly what Comcast is fighting now. "Common carrier regulation" is code for Title II of the Communications Act.

"Nondiscrimination obligations are the hallmark of common carrier regulation." -Comcast's 2009 court filing in Comcast v. FCC
The Comcast Plan If Network Neutrality Is Repealed
At the FCC, Comcast doubled down. In 2010, Comcast told the agency that one of the "benefits" that would be lost under an Open Internet Order would be the ability for cable and telephone companies to strike exclusive deals with Internet companies - in other words, paid prioritization, or "fast lanes" for those who can afford them.
"The proposed rule could prohibit Internet content, application, and service providers from improving their existing offerings with the assistance of a broadband ISP, regardless of whether doing so would be pro-competitive and beneficial to consumers." -Comcast FCC filing, Jan 14, 2010 (page 40).
While Comcast attempted to make paid prioritization sound like something that would be good for online service competition, it is pretty obvious how these types of exclusives and priority access deals will play out in reality. In practice, what we will see is the biggest Internet companies getting premium access to bandwidth while every mom-and-pop business and tech startup will get relegated to inferior infrastructure because they do not have the excess capital to pay for access. For example, even as the FCC was actively pushing a new Open Internet Order in 2014, Comcast started rerouting and degrading Netflix traffic despite the demand coming from Comcast's customers. Today, Netflix says it can pay for fast lanes - but the next Netflix won't be able to survive in that world.
Setting up the FCC to Fail
In its PR campaign, Comcast claims that its decision not to challenge the 2010 Open Internet Order is evidence of its support for network neutrality. In reality, it's likely the company stayed quiet because, shortly after the Open Internet Order was approved, Comcast was required to operate neutrally as a condition of its merger with NBC Universal. The company had little to gain from publicly opposing the 2010 Order: even a court victory would not have lifted the network neutrality obligations imposed by its merger conditions. Those Comcast-NBCU merger conditions will expire in 2018. Here is what Comcast said after the FCC's second court defeat, in Verizon v. FCC, while seeking approval of yet another merger (this time with Time Warner Cable).
"Comcast agreed to be bound by the FCC's Open Internet rules until 2018. These protections will now extend to the acquired TWC systems, giving the FCC ample time to adopt (and, if necessary, to defend) legally enforceable Open Internet rules applicable to the entire industry." -Joint statement by David L. Cohen (Comcast) and Arthur T. Minson (Time Warner Cable) to the Senate Judiciary Committee regarding the Comcast-Time Warner Cable merger
Translation: Don't worry about our merger because we are bound to respect the Open Internet rules for now, and by the time the agreement expires, the FCC will have found a legally enforceable basis for net neutrality protections. As Comcast indicated way back in 2009, that path required the FCC to do exactly what it did in 2015: reclassify broadband as a common carrier service. So Comcast's record is pretty clear: the cable behemoth has known for years what the FCC had to do to get legally sound neutrality rules. Now that the FCC has done it, Comcast is fighting tooth and nail to reverse it.
If we want to stop the Comcast plan to repeal network neutrality and convert the Internet into a pay-to-win system where only the largest players can compete for access to subscribers, squeezing out innovative and competing services (not to mention libraries, hospitals, schools, and political organizations), then we must act now.
Update [6/8/2017]: This post was updated to include a quote from a local organizer and the names of several supporting local organizations.
On Thursday night, the capital of the smallest state in the union adopted a wide-ranging police reform measure with national and historic implications. The Providence City Council voted 13-1 to adopt the Providence Community-Police Relations Act, which had generated controversy for the very same reason that it was ultimately adopted: it protects a sweeping array of civil rights and civil liberties (including digital rights championed by EFF) from various kinds of violations by police officers, all in a single measure.
Included within the Act are protections to prevent police from arbitrarily adding young people to gang databases, providing notice to youth under 18 if they are so designated, and allowing adults an opportunity to learn whether they have been included. It also forces police to justify any use of targeted electronic surveillance by imposing a requirement that officers first establish reasonable suspicion of criminal activity. Last but far from least, the Act protects the civilian right to observe and record police activities, which—combined with technology such as cell phones, video, and social media—has recently proven crucial in inspiring a multi-racial social movement responding to long festering abuses.
Providence may now plausibly claim to lead the nation in embracing policy reforms responding to the contemporary movement for civil rights.
Beyond those concerns shared by EFF, the Act also includes a range of further elements protecting civil rights. Visionary measures to address discriminatory profiling prohibit police from considering racial, religious, and gender characteristics when assessing suspects unless “the officer’s decision is based on a specific and reliable suspect description as well.” The Act also prohibits police from inquiring about immigration status, preserving community trust and protecting both families from being torn apart and police departments from being commandeered to do the federal government’s work enforcing non-criminal civil code violations.
Earlier this spring, the Council unanimously approved a slightly different version of the measure, then known as the Community Safety Act. Only a week later, it delayed its prior decision, deferring until June 1 a final vote on proposed recommendations from a working group it established to bring together stakeholders including community advocates and police officials.
After meeting five times over the course of the past month, the working group issued its recommendations, with the support of Police Chief Hugh Clements and other officers included in the working group. Yet the Fraternal Order of Police remained intransigent in its opposition, issuing formal condemnations of the policy process at the eleventh hour for the second time in only a few weeks.
In the wake of the Council's approval, Mayor Jorge Elorza pledged to sign the bill into law. But long before it gained the approval of policymakers, a proposal for intersectional policing reforms united community organizations in and around Providence, including Rhode Island Rights, a member of the Electronic Frontier Alliance. Organizations coordinating the campaign, including the Providence Youth Student Movement (PrYSM), Direct Action for Rights and Equality (DARE), American Friends Service Committee (AFSC), and Olneyville Neighborhood Association (ONA), together formed the STEP UP Network.
Groups of residents promoted both formal and informal discussions of the issues. They educated their neighbors, drew together a remarkably broad coalition of local groups, and even hosted a street festival “to use music, dance and art to bring attention to injustices and inequalities in our city and encourage people from across Providence to stand behind this legislation so that we can ban racial profiling and build a safer city, specifically for youth, immigrants and people of color.”
Reflecting on the Council's vote, the STEP UP Network's Campaign Coordinator, Vanessa Flores-Maldonado, noted that the coalition's campaign “was crafted by the people, and the people have continued to fight and overcome countless obstacles in the past four years that sought to silence us. Today is proof that the people will be heard and that communities can succeed on our own terms.”
The movement for police accountability has drawn viral participation in some cities driving national news cycles, including St. Louis, Baltimore, and Charlotte. But Providence may now plausibly claim to lead the nation in embracing policy reforms responding to those social movements.
By working to secure the near-unanimous support of their elected municipal representatives, grassroots groups who championed the new Act have conclusively demonstrated the viability of expansive local reforms combining measures to limit police profiling, surveillance, and retaliation all at once. Where concerned residents in other parts of the country learn from their examples, they might create new policy opportunities for civil rights and civil liberties, and together, even shift the national landscape.
NLPC’s report is false. Not one name, email address, or email domain cited in the report matches to any of the comments that came through EFF's comment tool.
Unfortunately, NLPC didn’t reach out to us before publishing its report. If they had, we would have been able to share our evidence with them and they could have avoided publishing a flawed report.
Before we explain how we know that NLPC’s accusations are false, we want to say a few words about our DearFCC tool. The FCC proposal to toss net neutrality guidelines would open the door to ISPs creating fast lanes for some content and slow lanes for others. It would leave consumers at the mercy of throttling and rob them of the meaningful access guarantees and privacy protections we fought for and won two years ago. DearFCC was created to provide consumers like you a tool for making your voices heard. Your Internet rights are at stake, and you deserve to be heard. We take very seriously claims of fake comments, and our analysis shows that none of the supposedly fake comments in the NLPC report were submitted through DearFCC.
So how do we know NLPC’s report is wrong? For one thing, we counted the number of comments people have submitted to the FCC through our system. That number is nowhere near the 100,000 comments NLPC said we filed.
Further, just before the sunshine period—when the FCC stopped accepting comments—we started storing copies of comments submitted through our system, because we weren’t sure how the FCC would treat comments submitted during that period. This week, we searched through all of the stored comments for the names and email address domains listed in NLPC’s report, and didn’t find a single match.
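As an illustration only, a check of that kind can be sketched in a few lines. The names, emails, and domains below are hypothetical stand-ins, not data from the NLPC report or from our comment archive:

```python
# Hypothetical sketch of checking stored comments against a report's
# names and email domains. None of this data is real.
stored_comments = [
    {"name": "Alice Example", "email": "alice@example.org"},
    {"name": "Bob Sample", "email": "bob@sample.net"},
]
report_names = {"Carol Flagged", "Dan Suspect"}
report_domains = {"flagged-domain.test"}

# A comment "matches" if its author name or email domain appears in the report.
matches = [
    c for c in stored_comments
    if c["name"] in report_names
    or c["email"].rsplit("@", 1)[-1].lower() in report_domains
]
print(len(matches))  # 0: no overlap between the report and the archive
```

An empty result, as here, is exactly the outcome our search produced: not a single stored comment matched the report's names or domains.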
Finally, the text that NLPC found in the 100,000 comments in question isn’t identical to the text our system uses. It’s close, but there’s a subtle difference.
One of the sentences our comment system generates is:
It’s subtle, but in the word “I’m” in EFF’s text, the apostrophe is actually a “right single quotation mark.” In the text from the 100,000 comments mentioned in NLPC’s report, the apostrophe is a neutral “typewriter apostrophe.” For more information on the difference, see this Wikipedia article.
Why does the difference matter? Because it shows that whoever submitted the 100,000 identical comments the NLPC report mentions copied and pasted the text to make the comments look like they came through EFF’s DearFCC.org site, when they did not. If NLPC had looked closely at the comments they would have noticed the difference, and realized that the comments weren’t generated by EFF’s website. Apparently, they did not.
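Readers who want to verify the character distinction for themselves can do so in a couple of lines. This small Python sketch (our illustration, not part of either analysis) compares the two code points:

```python
import unicodedata

eff_text = "I\u2019m"   # curly apostrophe, as DearFCC generates (U+2019)
fake_text = "I'm"       # straight typewriter apostrophe (U+0027)

print(unicodedata.name(eff_text[1]))   # RIGHT SINGLE QUOTATION MARK
print(unicodedata.name(fake_text[1]))  # APOSTROPHE
print(eff_text == fake_text)           # False: visually similar, not equal
```

The two strings render almost identically on screen, but to software comparing text byte for byte they are entirely different.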
As we said last month, “Digital democracy is not easy. The FCC can’t just count comments for and against net neutrality as though they were ballots in a ballot box. But neither can Chairman Pai ignore the opinions of Internet users in the U.S., the majority of whom want to [continue] being protected against data discrimination by ISPs like Comcast, AT&T, and Verizon.” If the FCC and opponents of net neutrality respond to attacks on the public comment system “not by defending the system, but by discounting and ignoring public opinion expressed through that system, then the agency is answerable to no one.”
Help us hold the FCC accountable. Submit your comments to the FCC at https://www.dearfcc.org, and we’ll work as hard as we can to make sure the FCC listens. And you can be sure that we won’t let anyone drown out your voice.
In May, Montana adopted a new statute that limits government access to the contents of electronic communications stored by service providers. EFF applauds this new privacy safeguard. We thank the Governor for cutting two flawed terms concerning the level of judicial review and records stored abroad, as we requested in a letter with the Center for Democracy & Technology.
Unfortunately, the new Montana law also allows the government to prevent providers from notifying their customers of records demands. Much like federal law that authorizes gags on service providers, this provision violates the First Amendment on its face.
What Montana Got Right
Service providers maintain an ever-growing volume of our highly private digital content. If government wants to seize this content from the providers, it should first get a search warrant from a court based on probable cause of crime. In the watershed Warshak decision in 2010, the U.S. Court of Appeals for the Sixth Circuit held that the Fourth Amendment requires a warrant in these circumstances. For many years, EFF and other privacy advocates have asked legislators to codify this critical standard. We succeeded in California. Congress remains a work in progress.
The bill that the Montana Legislature initially sent its Governor was significantly less protective than Warshak. Specifically, it would have allowed government to seize digital content from service providers based either on a court warrant, or on “an investigative subpoena,” which in Montana can be issued on a lesser standard than probable cause.
EFF and CDT sent the Governor a letter seeking an amendatory veto of this provision. The Governor issued such a veto, requiring a court finding of probable cause for either a warrant or an investigative subpoena. The Legislature then enacted this change.
The Governor also fixed a second problem that we raised. The original bill would have required service providers to turn over digital content “regardless of where the information is held.” In other words, a Montana warrant would purportedly empower police to seize digital content stored outside the United States, including the content of foreign persons living in foreign countries.
EFF and CDT objected that a comparable rule by a foreign country would intrude on the digital liberties of U.S. persons. The Governor vetoed this problematic extraterritoriality clause, and the Legislature enacted the statute without it. We hope other states and Congress will follow this lead.
Montana, Meet Microsoft v. DOJ
Unfortunately, the Montana law also replicates a serious constitutional flaw in federal law and promises to violate service providers’ First Amendment rights. The law allows Montana authorities to ask a court to gag providers and prevent them from informing users that their data has been turned over. The wording of this provision is quite similar to a gag order provision in the federal Stored Communications Act, 18 U.S.C. § 2705; both laws allow the government to get a gag based on the assertion that notice to the user might interfere with an investigation or cause other harms. However, a federal court recently allowed a First Amendment challenge to Section 2705 by Microsoft to proceed. The court found that Microsoft had plausibly argued that Section 2705 violates the First Amendment because it does not require the government to demonstrate that a gag is truly necessary. If anything, the Montana law is even weaker on this ground because it requires only that the government claim that notification of a user “may” cause a specific harm, where Section 2705 requires that harm “will” result.
In light of the Microsoft case, the Montana gag order provision may be vulnerable to a constitutional challenge, and EFF will be keeping a close eye on it. As always, we’re happy to hear from anyone who might need our assistance in this regard.
This week, EFF joined Creative Commons, Wikimedia, Mozilla, EDRi, Open Rights Group, and sixty other organizations in signing an open letter [PDF] addressed to Members of the European Parliament expressing our concerns about two key proposals for a new European "Digital Single Market" Directive on copyright.
These are the "value gap" proposal to require Internet platforms to put in place automatic filters to prevent copyright-infringing content from being uploaded by users (Article 13) and the equally misguided "link tax" proposal that would give news publishers a right to compensation when snippets of the text of news articles are used to link to the original source (Article 11).
The joint letter addresses these two muddle-headed proposals by stating:
The provision on the so-called "value gap" is designed to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they want to have any chance of staying in business. ...
More and more voices have joined the protest by academics and a variety of stakeholders (including some news publishers) against this [link tax] provision. The Council cannot remain deaf to these voices and must remove any creation of additional rights such as the press publishers’ right.
IMCO Proposal Lowers the Bar for Awful Copyright Policy
Incredibly, since the letter was drafted, the proposals have gotten even worse. In our last post on this topic, we highlighted some of the atrocious amendments to the original text being pushed by the CULT (Committee on Culture and Education) of the European Parliament, notably to give producers and performers an unwaiveable copyright-like power to demand additional payments for the use of their work by online streaming services. But the CULT committee doesn't have a monopoly on bad ideas for European copyright.
One of the other committees, the Internal Market and Consumer Protection Committee (IMCO), will be finalizing its own recommendations for the amendment of the Digital Single Market Directive in a vote on 8 June. On Wednesday Member of the European Parliament (MEP) Julia Reda sounded the alarm about a sly move by EPP Group Shadow Rapporteur to IMCO, MEP Pascal Arimont, to propose that the committee accept an alternative "compromise" text, that far from being a compromise, checks off every item on the copyright maximalist wish-list.
As regards the upload filtering mandate, the "compromise" would extend this mandate to cover not only content hosts, but also "any service facilitating the availability of such content," apparently including search engines and link directories. Only small startups would be exempt from this filtering requirement, and only for a maximum period of five years.
The safe harbor that protects Internet platforms from copyright liability for users' content would also be abolished for any Internet platform that uses an algorithm to improve the presentation of such content—and it's hard to imagine any platform that doesn't do that. As such, it is no exaggeration to say that if this became law, it would no longer be safe to operate a user-generated content website in Europe.
If this wasn't bad enough, the Shadow Rapporteur's proposal goes all out in favor of the Commission's unpopular link tax proposal, extending it even further so that it would cover all uses of news snippets both online and offline, and last for 50 rather than 20 years. The new monopoly right would also be extended to scientific and academic journals, allowing them to demand fees for the use of abstracts of articles. Only the use of single words and bare hyperlinks would be exempted from the link tax, meaning that as few as two words quoted from a news article would raise liability for copyright-like fees payable to publishing organizations.
A Ban on Image Search
Another of the senseless proposals that has been published this week, this time not by the Shadow Rapporteur of IMCO but by the CULT committee again, is a proposed amendment to extend the "value gap" proposal to include a new tax on search engines that index images, as for example Google and Bing do. This proposed amendment [PDF] provides:
Information society services that automatically reproduce or refer to significant amounts of visual works of art for the purpose of indexing and referencing shall conclude licensing agreements with right holders in order to ensure the fair remuneration of visual artists.
It's unclear how much support this particular proposed amendment may have, but should it find its way into the final CULT committee report, we can well imagine that just as Google News shut down in Spain following Spain's implementation of a link tax for news publishers, the closure of image search services won't be far behind. It's hard to see that outcome as being any less detrimental to artists than it would be to users.
European lawmakers need to draw a line in the sand, and stop giving oxygen to copyright holders' most fanciful demands. If you are European or have friends in Europe, you can help deliver this message by contacting members of the IMCO committee and of the CULT committee to urge them to oppose such extremist rent-seeking proposals.
Since last year, Indian citizens have been required to submit their photograph and iris and fingerprint scans in order to access legal entitlements, benefits, compensation, scholarships, and even nutrition programs. Submitting biometric information is required for the rehabilitation of manual scavengers, the training and aid of disabled people, and anti-retroviral therapy for HIV/AIDS patients. Soon police in the Alwar district of Rajasthan will be able to register criminals and track missing persons through an app that integrates biometric information with the Crime and Criminal Tracking Network Systems (CCTNS).
These instances demonstrate how intrusive India’s controversial national biometric identity scheme, better known as Aadhaar, has grown. Aadhaar is a 12-digit unique identity number (UID) issued by the government after verifying a person’s biometric and demographic information. As of April 2017, the Unique Identification Authority of India (UIDAI) had issued 1.14 billion UIDs covering nearly 87% of the population, making Aadhaar the largest biometric database in the world. The government asserts that enrollment reduces fraud in welfare schemes and brings greater social inclusion. Welfare schemes that provide access to basic services for marginalized and vulnerable groups are essential. However, unlike in countries where similar schemes have been implemented, invasive biometric collection is being imposed in India as a condition for basic entitlements. The privacy and surveillance risks associated with the scheme have caused much dissension in India.
Identity and Privacy in India
The critical problem with Aadhaar is that, although it was initiated as an identity authentication tool, it is being pushed as a unique identifier for accessing a range of services. The government continues to maintain that the scheme is voluntary, yet it has galvanized enrollment by linking Aadhaar to over 50 schemes. Aadhaar has become the de facto identity document accepted at banks, schools, hospitals, and other private institutions. Since Aadhaar is linked to the delivery of essential services, authentication errors or deactivation have serious consequences, including exclusion and the denial of statutory rights. More importantly, using a unique identifier across a range of schemes and services enables the seamless combination and comparison of databases. By using Aadhaar, the government can match existing records, such as driving licenses, ration cards, and financial histories, against the primary identifier to create detailed profiles. Aadhaar may not be the only such mechanism, but it is essentially a surveillance tool that the Indian government can use to surreptitiously identify and track citizens.
This is worrying, particularly in the context of the ambiguity surrounding privacy in India. The right to privacy is not enshrined in the Indian Constitution, although the Supreme Court has located it as implicit in the concept of “ordered liberty” and held that it is necessary for citizens to effectively enjoy all other fundamental rights. There is also no comprehensive national framework regulating the collection and use of personal information. In 2012, Justice K.S. Puttaswamy challenged Aadhaar in the Supreme Court of India on the grounds that it violates the right to privacy. The Court passed an interim order restricting the compulsory linking of Aadhaar for benefits delivery, and referred the question of whether privacy is a constitutional right to a larger bench. More than a year later, that constitutional bench is yet to be constituted.
The delay in sorting out the nature and scope of privacy as a right in India has allowed the government to continue linking Aadhaar to as many schemes as possible, perhaps with the intention of ensuring the scheme becomes too big to be rolled back. In 2016, the government enacted the Aadhaar Act, passing the legislation without any debate, discussion, or even the approval of both houses of Parliament. In April this year, Aadhaar was made compulsory for filing income tax returns and for PAN number applications, a decision that is being challenged in the Supreme Court. Defending the State, the Attorney-General of India claimed that the arguments about so-called privacy and bodily intrusion are bogus, and that citizens cannot have an absolute right over their bodies. The State’s articulation is chilling, especially in light of the Human DNA Profiling Bill, which seeks the right to collect biological samples and DNA indices of citizens. Such anti-rights arguments are worth noting because biometric tracking of citizens isn't just government policy; it is also becoming big business.
Role of Private Companies
Private companies supply the hardware, software, programs, and biometric registration services for rolling out Aadhaar to India’s large population. UIDAI’s Committee on Biometrics acknowledges that biometric data are national assets, yet American biometric technology provider L-1 Identity Solutions and the consulting firms Accenture and Ernst and Young can access and retain citizens' data. The Aadhaar Act introduces electronic Know-Your-Customer (eKYC), which allows government agencies and private companies to download data such as name, gender, and date of birth from the Aadhaar database at the time of authentication. Banks and telecom companies use the authentication process to download data, auto-fill KYC forms, and profile users. Over the last few years, the number of companies and applications built around profiling citizens’ personally sensitive data has grown exponentially.
A number of people involved in creating the UIDAI infrastructure have founded iSPIRT, an organisation that is pushing for commercial uses of Aadhaar. Private companies are using Aadhaar for authentication purposes and background checks. Microsoft has announced Skype Lite integration with Aadhaar to verify users. Others, such as TrustId and Eko, are integrating rating systems into their authentication services and tracking users through the platforms they create. In essence, such companies are creating their own private databases of authenticated Aadhaar users, which they may sell to other companies. The growth of companies that share and combine databases to profile users is an indication of the value of personal data and its centrality for both large and small companies in India.
Linking large biometric collections to each other, and then to the traditional data points that private companies hold, such as geolocation or phone numbers, enables constant surveillance. So far, there has been no parliamentary discussion of the role of private companies. UIDAI remains the ultimate authority in deciding the nature, level, and cost of access granted to private companies. For example, there is nothing in the Aadhaar Act that prevents Facebook from entering into an agreement with the Indian government to make Aadhaar mandatory for accessing WhatsApp or any of its other services. Facebook could also pay data brokers and aggregators to create customer profiles, adding to its ever-growing set of data points for tracking and profiling its users.
Security Risks and Liability
A series of data leaks has raised concerns about which private entities are involved and how they handle personal and sensitive data. In February, UIDAI registered a complaint against three companies for storing and using biometric data across multiple transactions. The Aadhaar numbers of over 130 million people and the bank account details of about 100 million people have been publicly displayed through government portals owing to poor security practices. A recent report from the Centre for Internet and Society (CIS) showed that simply tweaking the URL query parameters of the National Social Assistance Programme (NSAP) website could unmask and display the private information of a fifth of India's population.
Such data leaks pose a huge risk because compromised biometrics can never be recovered. The Aadhaar Act establishes UIDAI as the primary custodian of identity information, but is silent on liability in the case of data breaches. The Act is also unclear about notice and remedies for victims of identity theft and financial fraud, and for citizens whose data has been compromised. UIDAI has continued to fix breaches upon being notified, but maintains that storage in federated databases ensures that no agency can track or profile individuals.
After almost a decade of pushing a framework for the mass collection of data, the Indian government has issued guidelines to secure identity and sensitive personal data in India. The guidelines could have come earlier, and given the large data leaks of the past, may also be redundant. Nevertheless, it is reassuring to see practices for keeping information safe, and the idea of positive informed consent, being reinforced for government departments. To be clear, the guidelines apply only to government departments; private companies using Aadhaar for authentication, profiling, and building databases fall outside their scope. With political attitudes toward corporations exploiting personal information changing the world over, the stakes for establishing a framework that limits private companies from commercializing personal data and tracking Indian citizens are as high as they have ever been.