Please read below for more details about the sites' policies on deleting data after an account is closed.
HTTPS by default
HTTPS is standard web encryption, often signified by a closed lock icon in one corner of your browser and ubiquitous on sites that allow financial transactions. As you can see, most of the dating sites we examined fail to properly secure their sites using HTTPS by default. Some sites protect login credentials using HTTPS, but that’s generally where the protection ends. This means individuals who use these sites can be vulnerable to eavesdroppers when they use shared networks, as is typical in a coffee shop or library. Using free software such as Wireshark, an eavesdropper can see any data that is transmitted in plaintext. This is particularly egregious given the sensitive nature of the information posted on an online dating site, from sexual orientation to political affiliation to what items are searched for and what profiles are viewed.
In our chart, we gave a heart to the companies that employ HTTPS by default and an X to the companies that don’t. We were shocked to find that only one site in our study, Zoosk, uses HTTPS by default.
Free of mixed content
A site that serves its pages over HTTPS can still undermine that protection by loading images, scripts, or other resources over insecure HTTP, a problem known as mixed content. An eavesdropper can read or tamper with those insecurely loaded resources even though the page itself is encrypted. We gave a heart to the websites that keep their HTTPS websites free of mixed content and an X to the websites that don’t.
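As an illustration only, here is a minimal Python sketch of the kind of check a mixed-content scan performs; the regular expression and URLs are simplified stand-ins, since real scanners parse the full DOM rather than pattern-matching raw HTML:

```python
import re

def find_mixed_content(page_url, html):
    """Return sub-resource URLs loaded over plain HTTP on an HTTPS page."""
    if not page_url.startswith("https://"):
        return []  # mixed content is only an issue on HTTPS pages
    # Naive scan for src/href attributes pointing at plain-HTTP resources.
    return re.findall(r'(?:src|href)="(http://[^"]+)"', html)

html = ('<img src="http://cdn.example.com/photo.jpg">'
        '<script src="https://example.com/app.js"></script>')
print(find_mixed_content("https://example.com/profile", html))
```

Here the image fetched over plain HTTP is flagged, while the HTTPS-loaded script is not.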
Uses secure cookies or HSTS
For sites that require users to log in, the site may set a cookie in your browser containing authentication information that helps the site recognize that requests from your browser are allowed to access information in your account. That’s why when you return to a site like OkCupid, you might find yourself logged in without having to provide your password again.
If the site uses HTTPS, the correct security practice is to mark these cookies "secure," which prevents them from being sent to a non-HTTPS page, even at the same URL. If the cookies are not "secure," an attacker can trick your browser into going to a fake non-HTTPS page (or just wait for you to go to a real non-HTTPS part of the site, like its homepage). Then when your browser sends the cookies, the eavesdropper can record and then use them to take over your session with the site.
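As a sketch of what marking a cookie "secure" looks like on the server side, here is an example using Python's standard-library http.cookies module; the cookie name and value are made up for illustration:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # hypothetical session token
cookie["session_id"]["secure"] = True    # browser sends it over HTTPS only
cookie["session_id"]["httponly"] = True  # page scripts cannot read it

# The Set-Cookie header value the server would emit:
print(cookie["session_id"].OutputString())
```

With the Secure attribute set, the browser will refuse to attach this cookie to any plain-HTTP request, which is exactly what defeats the trick described above.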
Session hijacking was once (wrongly) dismissed as a sophisticated attack; however, Firesheep, a straightforward and freely available Firefox extension, makes this type of attack simple even for individuals with minimal technical skill. Any site that sets insecure cookies at login could be vulnerable to session hijacking.
HSTS (HTTP Strict Transport Security) is a new standard by which a website can request that browsers always use HTTPS when communicating with it. The user's browser will remember this request and automatically switch to HTTPS when connecting to the site in the future, even if the user didn't specifically ask for it.
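Concretely, HSTS is just a response header that a site sends over HTTPS. A minimal sketch of building one (the one-year max-age is illustrative; sites choose their own lifetime):

```python
def hsts_header(max_age_seconds, include_subdomains=True):
    """Build a Strict-Transport-Security header value."""
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return value

# A site asking browsers to use HTTPS for one year, on all subdomains:
print("Strict-Transport-Security: " + hsts_header(365 * 24 * 3600))
```

Once a browser has seen this header, it rewrites future plain-HTTP requests to that site to HTTPS before they ever leave the machine.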
We gave a heart to the websites that use secure cookies or HSTS, and an X to the websites that don’t.
Delete data after closing account
Here are the details you need to know about each dating service's policies. We have individually contacted each of the companies listed below to ask them to clarify their policies on deleting data after an account is closed; we’ll update this chart if we learn more from the companies.
Note that this text is taken from their policies as of the publication of this post, and these policies can change at any time!
Withdrawing Your Consent. You may notify us at any time that you wish to withdraw or change your consent to our use and disclosure of your information. We will accommodate your request subject to legal and contractual restrictions.
Click on the “unsubscribe” link on the bottom of the e-mail;
Send mail to the following postal address letting us know which promotional e-mails you wish to opt-out of: eHarmony, Inc. P.O. Box 3640 Santa Monica, CA 90408 USA
For the eHarmony Singles service, select our Help link from your account home page and search our FAQ's to find the answer you are looking for, or send us an e-mail and our Customer Care agents will be happy to assist you; or
For any services that allow you to control which e-mails you receive, go to the e-mail settings page from your account home page, and un-check the undesired promotions.
Millions of people are using online dating sites to search for love or connection, but users should beware: many online dating sites are taking short cuts in safeguarding the privacy and security of users. Whether it’s due to counter-intuitive privacy settings or serious security flaws, users of online dating profiles risk their privacy and security every day. Here are six sobering facts about online dating services and a few suggestions for routing around the privacy pitfalls.
1. Your dating profile—including your photos—can hang around long after you’ve moved on. Whether you signed up on a lark or maintained an active profile for several years, your online dating profile can lurk around long after you’ve cancelled the account. In fact, dating sites have an incentive to maintain your information—what if things don’t work out and you want to reactivate your profile in a few months? But having your data hanging around on a company’s servers, even if it isn’t being actively served to the web at large, raises a host of privacy issues. The most pressing concern is that information about you may be exposed to future legal requests, which might involve a criminal investigation, a divorce case, or even a legal tussle with an insurance company.
Photos in particular can linger long after you’ve deleted them or closed your account, because many large websites host user-uploaded photos on Content Delivery Networks (CDNs). In short, photos are stored on an outside company’s servers. As Joseph Bonneau explained, the main website provides an obfuscated URL for each photo to anyone it deems has permission to view it. But in Bonneau’s experiment with 16 popular websites, removing a photo from the main website didn't always remove it from the CDN; in those cases, anyone who still had the destination URL could view the photo. This means that CDNs can maintain caches of sensitive photos even after users “delete” them, leaving the photos vulnerable to being rediscovered or even hacked in the future.
2. Gaping security holes still riddle popular mobile dating apps. In January, an Australian hacker exploited a security flaw in Grindr, the mobile app that allows gay and questioning men to find sexual partners nearby through the use of GPS technology. The vulnerability allows an attacker to impersonate another user, send messages on his behalf, access sensitive data like photos and messages, and even view passwords. Grindr acknowledged the vulnerability on January 20th and promised a mandatory update to its software “over the next few days.” To date, Grindr's blog and Twitter profile do not mention a security fix for the flaw. While there haven’t been reports of a hack of the straight-themed sister app, Blendr, security experts speculate that it suffers from a similar vulnerability.
What you can do about it: For right now, we have to agree with Sophos security: if you’ve got a Grindr or Blendr account, you should close it at least until the security vulnerability is addressed; then keep an eye on the Grindr blog for news of a security update.
3. Your profile is indexed by Google. While this isn’t the case for every online dating site, OkCupid profiles are public by default and indexed by Google. It’s a simple privacy setting, but it can trip up even advanced users, as Wikileaks' Editor-in-Chief Julian Assange learned last year when his publicly-accessible OkCupid profile was discovered. Even something as small as a unique turn of phrase could show up in search results and bring casual visitors to your page.
What you can do about it: Some people don’t mind having an online dating site publicly indexed and searchable, but if you find the thought disquieting, then dig into your privacy settings and make sure that your profile is only viewable to other logged-in users on the site. It’s good to familiarize yourself with the other available privacy settings regardless of which site you are using.
4. Your pictures can identify you. Photo identification services like TinEye and Google Image Search make it a trivial matter to re-identify photos that you’ve posted online. Users hoping to create a barrier between their real identities and their online dating profiles might use strategies such as pseudonyms and misleading information in a profile to obfuscate their identity. However, just changing your name and a few facts about your life may not be enough. If you use a photo on your dating site that can be associated with one of your other online accounts—for example, if it had previously been shared on your Facebook or LinkedIn profile—then your real identity could be easily discovered.
What you can do about it: Face it (no pun intended): there are a number of ways your online dating profile can be connected to your real identity, especially if you have a robust online life. Photos are a particular vulnerability. Before uploading a photo, consider whether you’ve used it in other contexts. Try searching for the image using TinEye and Google Image Search before uploading it. And be aware that search technology and facial recognition technology are rapidly evolving. At least one study suggests that even photos you have never uploaded before could be used to figure out your identity. So think hard about how you’d feel if a potential employer or acquaintance found personal data about you on a dating site. This might be a particular concern for individuals who use niche dating sites, such as HIV-positive or queer dating sites.
5. Your data is helping online marketers sell you stuff. The cynics among us might think this is the primary purpose of an online dating site. The operators of these sites cull vast amounts of data from users (age, interests, ethnicity, religion, etc.), then package it up and lend or sell the data to online marketers or affiliates. Often, this transaction is gift-wrapped with the promise that your individual data is “anonymized” or sold in aggregate form, yet users should be wary of such promises. Using data from social networking sites sold to advertisers, Stanford researcher Arvind Narayanan demonstrated that it’s hard to truly anonymize data before it’s packaged and sold. In addition, last October researcher Jonathan Mayer discovered that OkCupid was actually leaking1 personal data to some of its marketing partners. Information such as age, drug use, drinking frequency, ethnicity, gender, income, relationship status, religion and more was leaked to online advertiser Lotame.
What you can do about it: You should consider contacting the sites you use to clarify their practices and letting them know your concerns. If you are dissatisfied with a company's practices around sharing data, you might also consider filing a complaint with the Privacy Rights Clearinghouse's Online Complaint Center. Remember, part of what helps companies change practices is public interest in an issue, so blog posts and public discussion can help push companies to adopt better practices.
6. HTTPS support is a wreck on many of the popular online dating sites, meaning you risk exposing your browsing history, messages, and much more when you use them. Unfortunately, our recent survey of major online dating sites found that most of them were not properly implementing HTTPS. Some online dating sites offer partial support for HTTPS, and some offer none at all. This leaves user data exposed. For example, when a user is on a shared network such as a library or coffee shop, she may be exposing sensitive data such as a username, chat messages, what pages she views (and thus what profiles she is viewing), how she responds to questions, and more to an eavesdropper monitoring the wireless connection. Even worse, poor security practices leave her vulnerable to having her entire account taken over by an attacker. What's more, since the advent of Firesheep, an attacker doesn’t need any particular skill to perpetrate such attacks. See our in-depth post on OkCupid to learn more.
What you can do about it: Start protecting yourself immediately by installing HTTPS Everywhere, a Firefox add-on created and maintained jointly by EFF and the Tor Project. When you use Firefox, HTTPS Everywhere will automatically change URLs from HTTP to HTTPS on over a thousand sites. As more dating sites begin to provide support for HTTPS, we’ll expand the ruleset for HTTPS Everywhere to include those sites so you’ll be better protected.
EFF is individually contacting online dating sites to get them to step up their security practices, but we could use your help. Please send an email to OkCupid to tell them to safeguard user privacy and security.
1. Mayer clarified: "Leakage, in common parlance, implies unintentionality. In computer security, leakage is a term of art for an information flow—some instances of leakage are entirely intentional." Learn more: http://cyberlaw.stanford.edu/node/6740
Earlier this week, a Singapore-based iOS software developer made a startling discovery while working with the popular social-networking app Path: in the course of every new account creation, Path uploads the new user’s entire iPhone address book to their servers. To its credit, Path responded quickly, with its CEO and co-founder Dave Morin explaining that they use the address book data for “friend-finding” and “nothing more.” He also asserted that this technique was an industry standard for social iOS apps.
That response wasn’t enough to contain the firestorm of angry user reactions. Within a day, news of the address book upload had spread, and researchers discovered evidence of similar behavior by other apps, like the photo-sharing service Hipster. Path publicly apologized and promised to delete the address book data stored on their servers, and to begin using an opt-in system immediately. Hipster has also apologized, and plans to host an “Application Privacy Summit” at their office this month.
In their apology, Path acknowledged that the way they designed the “Add Friends” feature was wrong, which is true. As they acknowledged, they could have uploaded a cryptographic “hash” of each e-mail address instead: a unique identifier that permits the matching necessary for friend finding but is difficult to convert back into the original address. Hopefully they will adopt this protection soon.
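A minimal Python sketch of that hashing approach (with one caveat the post doesn't mention: because the space of plausible e-mail addresses is relatively small, unsalted hashes can still be brute-forced by dictionary attack, so hashing raises the bar for an attacker rather than guaranteeing irreversibility):

```python
import hashlib

def contact_id(email):
    """One-way identifier for an e-mail address: normalize, then hash."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two users who both know alice@example.com produce the same identifier,
# so the server can match contacts without ever receiving the address
# in the clear.
print(contact_id("Alice@Example.com") == contact_id("alice@example.com"))
```

The server only ever stores the hex digests, and matching friends is a simple comparison of identifiers.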
They also could have provided reasonable disclosure of the information they were collecting, but even that is not enough — applications on Android OS allow granular permission control, for example, but many users simply click through the installation process. Users need information present in a clear and understandable manner that allows them to make intelligent choices.
Setting aside the question of whether Apple should even allow applications free access to sensitive user data like contact information, the route Path has now chosen — an affirmative opt-in process that explains what Path will collect — is certainly a good start.
Regardless of whether practices like checking addresses for friend-finding are “industry standard” in social apps, users expect and deserve respect from the providers of the services they use, and that means protecting the personal data needed to use the service. Hiding behind the rationale that a certain functionality is commonplace among similar apps is not sufficient; the process must be proper, whether it’s the uploading of data in the first place or its long-term storage.
Path is taking the right steps to recover from a public relations disaster, but providers of social services should take note: these problems are avoidable. Innovative products and rapid development are great, but service providers need to respect their users or be prepared to face the fallout.
How India is losing its footing on free expression.
The world’s biggest democracy is a formidable power in the IT sector. With software exports comprising approximately ten percent of India’s total GDP and a technology sector that employs more than 2.5 million people, India is poised to become a global industry leader. Over the past ten years, India has also experienced a rapid increase in Internet penetration, growing from 5.5 million users in 2000 to 61.3 million in 2009, and government initiatives have brought the Internet to rural areas by way of setting up cybercafés, in the hopes of closing the country’s digital divide.
Despite such growth, or perhaps because of it, India has struggled to strike a balance between its security concerns and online freedom. As we’ve previously noted, India has been known to censor online content, typically under the guise of national security or obscenity. Though the country’s constitution guarantees the right to freedom of expression, the State is given the right to impose "reasonable restrictions ... in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence."1
As such, the 2000 Information Technology Act allows for the blocking of certain content online. In 2003, the Indian government created the Indian Computer Emergency Response Team (CERT-IN) to issue blocking orders of websites. Another provision, Section 144 of the Code of Criminal Procedure, allows police commissioners to identify and order the blocking of material that contains a threat or nuisance to society.
In recent years, online censorship has become part of national discourse in India. In particular, a set of regulations that went into effect in April 2011--the 'Intermediary Guidelines' Rules [pdf] and Cyber Cafe Rules [pdf]--have inspired new dialogue in India around limits to speech. The broad Intermediary Guidelines give power to citizens to submit complaints, upon which intermediaries are required to take down offensive content within thirty-six hours. With no transparency requirement, says Pranesh Prakash of the Centre for Internet & Society--which tested the regulations by submitting "frivolous" requests--"If we hadn't kept track [of their fulfilled takedown requests], it would be as though that content never existed."
Google India Removes Content
On Monday, reports emerged that Google India had removed web pages deemed offensive to Indian political and religious leaders in order to comply with a court order in a case filed by journalist Vinay Rai, who demands regulation of "offensive and objectionable" material. Rai's case followed a widely-publicized December meeting in which Indian Telecommunications Minister Kapil Sibal met with top executives of Internet companies and social media sites in an attempt to compel them to pro-actively filter certain content. Though the companies stated at the time that such a move would be impossible, a January Delhi High Court decision issued by Justice Suresh Kait has apparently forced their hands. Issuing his decision, Justice Kait told lawyers for several of the companies that, unless they develop the capability to regulate "offensive and objectionable" material on their sites, the Indian government would block their websites "like China [does]."
The Delhi court gave Google--as well as 21 other websites--two weeks to present further plans for policing their networks, according to an AP report. Facebook, Yahoo, and Microsoft have reportedly questioned their inclusion in the case on the basis that no specific complaints have been presented against them.
In response to the case, Communications Minister Sachin Pilot claimed that "there is no question of any censorship," arguing that foreign companies must be responsible and "operate within the laws of the country."
A Shattered Web
As we have written before, when a company has employees in a given country it has little choice when faced with a legal order. Apart from leaving the country altogether, the company can refuse to comply and put its employees at risk of arrest (or worse), or it can comply with the order and risk backlash from users. Censorship therefore becomes a necessary tradeoff a company must make in order to continue its operations, a chilling effect of choosing to operate somewhere where freedom of expression is under threat.
Many companies, including Google and Twitter, have developed mechanisms by which they can locally censor content. This means that when companies comply with legal orders, content is removed on a country-per-country basis, as opposed to being taken down across the entire site. EFF views this as a good thing in that it minimizes censorship, however with the caveat that transparency in such decisions is vital.
Google, for its part, publishes a transparency report, in which the company shares information about requests for user data and content removals. With respect to India, the company reports that, from January to June 2011, it declined the majority of YouTube takedown requests, but "locally restricted videos that appeared to violate local laws prohibiting speech that could incite enmity between communities." The report shows that Google complied with 51% of the 68 requests it received during that period.2 Twitter has also vowed to be transparent in its per-country takedowns, reporting requests to the Chilling Effects Clearinghouse. Other companies, such as Facebook, have not offered transparency reports to the public.
These mechanisms for transparency are vital to all citizens' ability to seek, receive, and impart information and ideas, regardless of borders. Despite the transparency, EFF has concerns that these localized content removals are leading to a fractured Web, in which different countries have different views of the Internet. To that end, we encourage companies considering opening foreign offices to think carefully about a given country's track record on freedom of expression.
As for India, we believe that by placing such pervasive restrictions on free expression, the Indian government is losing an opportunity to be an important part of the digital revolution. The inhibition of free speech to such a degree poses a real threat to India's once-thriving democracy. As UN Special Rapporteur on freedom of expression Frank LaRue stated last year in his widely-cited report,3 "By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an 'enabler' of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole."
EFF calls upon the government of India to respect the principles of free expression laid out in Article 19 of the Universal Declaration of Human Rights and halt further regression of rights and freedoms.
1. Article 19 of The Constitution of India, http://lawmin.nic.in/coi.htm [PDF]
2. The Centre for Internet & Society did an analysis comparing the Google Transparency report and reports from the Indian Department of information on website blocking, which demonstrated a lack of transparency on the part of the Indian authorities: http://www.cis-india.org/internet-governance/blog/analysis-dit-response-2nd-rti-blocking
3. Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, www2.ohchr.org/english/bodies/hrcouncil/.../A.HRC.17.27_en.pdf [PDF]
We really have to wonder when the message is going to sink in. On January 18, millions of Internet users spoke out together in one of the most profound and effective uses of technology to organize political opposition in U.S. history, sending a clear message to Congress that voters will not tolerate crippling of the Internet. But big content remains tone deaf to this chorus of Internet users.
This morning, the New York Times published a lengthy screed from Cary Sherman, president of the Recording Industry Association of America, complaining about how “Google and Wikipedia” got in the way of efforts to ram through the Internet blacklist bills, never mind the massive collateral damage to Internet security, expression, and innovation those bills would have caused. Techdirt's Mike Masnick has a great point-by-point response (noting, among other things, the profound hypocrisy of SOPA/PIPA proponents claiming the tide of opposition to the bills was based solely on “misinformation,” given that they have been feeding Congress and the public overblown statistics for years).
But it seems to us that the op-ed's really unfortunate message is that Hollywood still thinks the way forward is for a few executives to sit down together and make a deal. Sherman calls on “the companies” that opposed the bills to come up with “constructive alternatives” and then have a "fact-based conversation" with the entertainment industries. MPAA chair Chris Dodd made a similar call a few weeks ago. Even New York Times op-ed columnist Bill Keller seems to think this comes down to a few "players": in his own piece on the battle against the bills, he seemed to assume that Wikipedia's Jimmy Wales is the only person who matters on the other side of this debate.
That’s precisely the wrong approach. It was great to see technology companies and platform hosts like Wikipedia stand up against SOPA and PIPA. But the people Hollywood most needs to consult now are the users of the Internet: the millions of people who have found their voice due, in part, to the emergence of technologies and platforms that allow them to speak to a bigger audience than ever before.
The truth is that a broad swath of public interest, consumer rights, and human rights groups were fighting these bills from the get-go, because we saw how they would harm users, not just technology companies and platforms. Due in part to the hard work of this coalition in raising public awareness, millions of those users saw that, too, and that’s why they contacted their Congressional representatives. We weren’t scared by rhetoric, we were scared by what the bills actually proposed, and we were really scared that the proponents didn’t seem to understand their own legislation.
Having succeeded in halting the runaway SOPA/PIPA train, Internet users don’t intend to just stand down and let a few tech companies, who need to worry about their bottom line along with the needs of users, or even crucial nonprofit organizations like Wikipedia, speak for everyone. Indeed, it’s pretty ironic, and telling, that Sherman’s piece points to the “six-strikes” deal big content made with ISPs last year as a model for “voluntary cooperation.” Users weren’t at the table when that deal was struck either, even though they’ll be stuck with much of the bill. If they had been, that deal could have been very different, and a lot more fair.
So, Cary and Chris and even Bill, tell you what: when you are ready to have a “fact-based conversation” with the folks who opposed the bill, let’s do it. But let’s include the users who are going to feel the real effects of attacks on the platforms and services that they rely on to create, innovate, and communicate.
Oh, and one more thing: if we’re really going to have a fact-based conversation, let's include the technologists who actually understand the collateral damage that can result when you interfere with Internet architecture, and the economic analysts who are developing real numbers based on hard data, not spin. Thanks.
Kevin McLeod is a deaf man who uses his Android phone — a Samsung Epic 4G — to assist him with communication, record-keeping, and time management. Like many deaf people, he uses video relay service (VRS) software on his phone to “work on a level playing field with hearing peers and have productive and meaningful careers.” He had these comments for the Copyright Office:
I need a phone that can run VRS software through the day without having to recharge every other hour. The stock phone I received can't do that. I had to upgrade to a more powerful battery. Then I installed an alternative version of the Android operating system called CleanGB that removes most of the carrier-installed software. This freed up memory and battery resources I need to stay connected.
We need the ability to modify our devices because manufacturers and carriers can't possibly anticipate all the needs of their customers. We need flexibility to make the most of the terrific tools they build for us. I love the power and connectivity my phone gives me. I love that I can customize it to meet my unique needs.
And Tom Van Nostrand sent these comments from Kuwait:
I work on an Army base in the middle east and at night it is very dark. Often times for my job I have to walk outside the trailer, and there's rocks, scorpions, Spiny-Tailed Lizards, wild dogs etc to look out for outside.
I jailbreak my phone specifically so that I can set a button to immediately turn on the "flashlight" on my camera when I need it. Please do NOT make it against the law for me to be safe while supporting the U.S. Army's troops.
Stephanie Hughes had this to say:
I am a nurse and the customizations I can make to my devices after jailbreaking increase my productivity and success in my job every day. I can track my performance, treatments used on patients, and the effects of those treatments, much faster with customizations that are not available on a device that is not jailbroken.
Reasons for jailbreaking personal devices are as varied as the people who use them, but they share two common themes: one, the law shouldn't interfere with people's use of their own devices, and two, personal devices can't reach their full potential when manufacturers artificially limit their uses. If you have a compelling story for the Copyright Office, submit your comments today and sign on to the Jailbreaking Is Not a Crime and Rip Mix Make letters.
On Wednesday, EFF will give recommendations to the European Parliament for how to combat one of the most troubling problems facing democracy activists around the world: the fact that European and American companies are providing key surveillance technology to authoritarian governments that is then being used to aid repression.
Recent reports by the Wall Street Journal and Bloomberg News, as well as a subsequent release by WikiLeaks, have exposed the shadowy but growing industry that sells electronic spy gear to governments known for violating human rights. The technology’s reach is very broad: governments can listen in on cell phone calls, use voice recognition to scan mobile networks, read emails and text messages, censor web pages, track a person's every movement using GPS, and even change email contents while en route to a recipient. Some tools are installed using the same type of malware and spyware used by online criminals to steal credit card and banking information. They can secretly turn on webcams built into personal laptops and microphones in cell phones that are not in use. And all of this information is filtered and organized on such a massive scale that it can be used to spy on every person in an entire country.
Ordinary citizens, journalists, human rights campaigners, and democracy advocates have all been targeted, eviscerating privacy rights and chilling free speech. Ample evidence suggests that information acquired through this spy gear has played a role in the harassment, threats, and even torture of journalists, human rights campaigners, and democracy activists. Yet dozens of companies from the U.S. and E.U. continue to sell this technology, including to authoritarian regimes. The market for surveillance equipment has grown to a staggering $5 billion a year.
Dutch member of the EU Parliament Marietje Schaake has been spearheading an effort to curb sales of this type of technology to repressive regimes. In September, the EU Parliament passed a resolution proposed by Ms. Schaake which called on European countries to regulate sales of these dangerous surveillance tools when they can be used in human rights violations. She has also asked the European Commission to investigate sales by these companies to the governments of Bahrain, Yemen, Syria, Tunisia, and Egypt. On Wednesday, EFF will be testifying at a workshop for the Committee on International Trade and the Committee on Foreign Affairs, co-chaired by Ms. Schaake. Here is part of what we will say:
First, transparency is key. The mass surveillance industry as a whole has been notoriously secretive, and that has, in turn, allowed it to proliferate without meaningful safeguards. But we know that just having this information in the public eye can, by itself, force change. Companies have pulled out of countries and created official human rights policies thanks to news reports. Tatiana Lucas, world program director of ISS, even complained that shining a spotlight on these practices “makes U.S. manufacturers gun shy about developing, and eventually exporting, anything that can remotely be used to support government surveillance.” We want to turn up the heat on these companies even more and hold them accountable for selling to authoritarian regimes.
We encourage the European Commission to act on Ms. Schaake’s request for an investigation into these companies and have them answer questions on the record. The EU Parliament should also consider disclosure requirements, requiring companies to publicize which governments they are selling to (either a full list or a limited list based on regimes, or portions of regimes, that raise concern) and descriptions of the capabilities of their technologies, so an investigative body could follow the money trail to find out exactly whose equipment ends up where and how it is being used.
“Know Your Customer”
But beyond transparency, there is also the question of limiting sales to certain governments or parts of governments. Many have called for direct regulation of surveillance tools, but EFF has not joined that chorus, in part because we recognize how difficult it will be to create rules that both reach the problem and avoid creating collateral harms.
First and foremost, we want to make sure we do not leave activists with fewer tools than they already have. Parliament must be wary of legislation based solely on types of technology, because broadly written regulations could have a net negative effect on the availability of many general-purpose technologies and could easily harm the very people the regulations are trying to protect. As EFF has highlighted before, legal terms used to define harmful technology can often encompass basic technology like web browsers and email servers. We can see this problem in the U.S., where overbroad regulations keep Syrian activists from accessing Google Chrome, Google Earth, and Java, or hosting services like Rackspace or SuperGreenHosting. It can also harm network security efforts.
So instead of focusing on the technology being sold, we recommend that any formal or informal effort to address the problem of misuse of surveillance technologies look at the government customers as the ultimate chokepoint. To that end, EFF has proposed a “know your customer” framework, based on already existing legal frameworks in the U.S. that can be implemented without significant overhead cost to government or businesses.
Simply put, companies selling surveillance technologies to governments or government providers must affirmatively investigate and "know their customer" before and during a sale. EFF has already detailed an extensive framework for such regulations, including questions, definitions, and procedures for how to accomplish it.
It would require companies to comprehensively review everything about a sale of surveillance technology: the negotiations and discussions, the background of the buyer, contractual specifications, technical support requests, State Department and U.N. human rights reports, and the potential for abuse. Companies would refrain from participating in transactions where their investigations reveal either objective evidence or credible concerns that the technologies they provide will be used to facilitate human rights violations. You can read EFF’s full, detailed “know your customer” framework here.
This approach does three things. First, it avoids the many problems with pre-defining technologies, and instead focuses on the uses of the technologies to facilitate human rights abuses. Second, it encompasses both government-like entities and sales to third parties when the technology is likely to pass to repressive governments. The difficulty of knowing where equipment ultimately ends up has been a frequent excuse from companies engaged in this business and their apologists. Yet in the context of tracking bribes under the Foreign Corrupt Practices Act and other export regulations, the U.S. government, like other governments around the world, has developed tools to help discover these sorts of transactions. Third, because it is based on current regulations that many of the companies selling surveillance equipment to government end users already must comply with, this approach should not add a heavy regulatory burden.
We hope the EU moves quickly on this problem, as recent reports show it is only getting worse. We also hope the U.S. Congress is listening, because when U.S. companies sell the same equipment, they are not only undermining our own foreign policy in these countries but also destroying the human rights the State Department claims to support around the world.
When asked by the Guardian if he would be comfortable knowing that regimes in North Korea and Zimbabwe were purchasing this technology from the companies he does business with, Jerry Lucas, president of Telestrategies Inc., said, “That’s just not my job to determine who’s a bad country and who’s a good country. That’s not our business.”
By instituting EFF’s "know your customer" standards, we can make it their business.
The Sultanate of Oman has received little attention throughout the so-called Arab Spring, despite unprecedented protests last February. Although there is no reported online political censorship, reports that the government monitors private communications, as well as the country's recently amended penal code (which suggests punishment for those charged with weakening the "prestige of the state"), suggest that the Omani blogosphere likely engages in self-censorship. Despite that, no blogger has ever been reported arrested in the Gulf country...until now.
According to a report from Global Voices Advocacy, Muawiya Alrawahi was detained for a blog post and a series of tweets in which he criticized the government. The report states that Alrawahi wrote, in a now-deleted Arabic-language post on his blog, about "suffering sexual abuse as a young teenager, his earlier involvement with Oman's Internal Security Service (ISS), his admiration for and connections to ex-ISS Brigadier-General Khamis Al Ghraibi (now imprisoned under charges of spying for the UAE), his lack of religious belief, his disillusionment with Oman, and his loss of faith in the ruler Sultan Qaboos." Alrawahi's arrest comes shortly after the arrests of two journalists in the country on charges of "insulting" the country's Minister of Justice.
Alrawahi's arrest signals a downward turn for Oman. EFF urges Omani authorities to protect the right to free expression online by releasing Alrawahi unconditionally and reconsidering elements of the penal code that restrict the universal right to free speech for Oman's citizens.
South Korean Indicted for Tweets
South Korea is one of a handful of democracies that justifies online censorship on the basis of "national security." The country's National Security Law allows harsh punishments to be meted out to those who "praise, encourage, disseminate or cooperate with antistate groups, members or those under their control." The law covers, unsurprisingly, affiliation with or support for North Korea, and allows the government to block websites related to North Korea and communism.
As the New York Times reports, that law was recently used to detain Park Jung-geun, a 23-year-old photographer, for re-posting content from the North Korean government site Uriminzokkiri.com to his Twitter account. As it happens, South Korean media regularly cite the government-run website in news reports.
Park claims that his Twitter posts were intended sarcastically, but prosecutors have countered that the Twitter account "served as a tool to spread North Korean propaganda." Park could face up to seven years in jail if convicted.
EFF urges South Korean authorities to immediately drop the charges against Park Jung-geun.
China Cracks Down Over Tibet Unrest
Following news last week that China had shut down Tibetan blogs amid heightened tensions between Tibetans and the central government, new reports claim that the government also shut down Internet and mobile access during the protests. China's "kill switch" was previously used in 2009 to cut off access in the western Xinjiang province following ethnic riots in Urumqi.
EFF reiterates its condemnation of China's heavy-handed censorship policies and once again calls upon the Chinese government to stop silencing Tibetan voices.