PayPal has instituted a new policy aimed at censoring what digital denizens can and can’t read, and it has done so in a way that leaves us with little recourse to challenge its policies in court. Indie publisher Smashwords has notified contributing authors, publishers, and literary agents that it will no longer provide a platform for certain forms of sexually explicit fiction. This comes in response to an initiative by online payment processor PayPal to deny service to online merchants selling what it deems to be obscene written content. PayPal is demonstrating, again and to our great disappointment, the dire consequences for online speech when service providers start acting like content police.
Mark Coker, founder of Smashwords, described the new policy in a recent blog post. The policy would ban the selling of ebooks that contain “bestiality, rape-for-titillation, incest and underage erotica.” Trying to apply these definitions to all forms of literary expression raises questions that can only have subjective answers. Would Nabokov’s Lolita be removed from online stores, as it explores issues of pedophilia and consent in soaring, oft-romantic language? Will the Bible be banned for its description of incestuous relationships?
This isn’t the first time PayPal has tried its hand at censorship. In 2010, they cut off services to the whistleblower WikiLeaks, helping to create the financial blockade that has hamstrung the whistleblower organization. And as we explained when WikiLeaks was facing censorship from service providers: the First Amendment to the Constitution guarantees freedom of expression against government encroachment—but that doesn't help if the censorship doesn't come from the government. Free speech online is only as strong as private intermediaries are willing to let it be.
Frankly, we don’t think that PayPal should be using its influence to make moral judgments about what ebooks are appropriate for Smashwords readers. As Wendy Kaminer wrote in a foreword to Nadine Strossen’s Defending Pornography: “Speech shouldn’t have to justify itself as nice, socially constructive, or inoffensive in order to be protected. Civil liberty is shaped, in part, by the belief that free expression has normative or inherent value, which means that you have a right to speak regardless of the merits of what you say.”
But having a right to speak is not the same as having a right to be serviced by a popular online payment provider. Just as a bookseller can choose to carry or not carry particular books, PayPal can choose to cut off services to ebook publishers that don’t meet its “moral” (if arbitrary and misguided) standards.
Online payment providers like PayPal help many websites fund their very existence. As we explained in our interactive graphic Free Speech is Only as Strong as the Weakest Link, a payment provider can shut down controversial online speech by cutting off its means of financial support. And PayPal, the behemoth of online payment providers, has little incentive to compromise with the small businesses punished by these arbitrary policies.
Unfortunately, Congress knows just how vulnerable online speech can be to the vagaries of payment providers. The Stop Online Piracy Act, defeated earlier this year after Internet-wide protests, contained language that would have allowed individuals and companies to cut off financial support for a website simply by sending an infringement notice to its payment providers or ad networks. No judge or jury would have been required.
The censorship of Smashwords is a blow to free speech and adds to the ever-growing list of examples of payment providers turned into content police.
Earlier this month, EFF called for the protection of Saudi blogger and journalist Hamza Kashgari, who had fled Saudi Arabia after tweets he wrote about the Prophet Mohammed provoked clerics to demand that he be tried for apostasy, and members of the public to call for his murder. Kashgari had been a columnist for the Jeddah-based newspaper Al Bilad until outrage over the tweets led Saudi Minister of Culture and Information Abdul Aziz Khoja to order Kashgari “not to write in any Saudi paper or magazine,” an order which Kashgari also posted to his Twitter account. As the furor grew, Kashgari retracted his statements, deleted his Twitter account, apologized for the comments, and finally fled the country in response to mounting threats on his life.
Upon arriving at the airport in Kuala Lumpur, Malaysia, on his way to seek refuge in New Zealand, Kashgari was arrested by security officials at the request of the Saudi government. Malaysia and Saudi Arabia do not have an extradition treaty, but they do maintain good relations. EFF was among the many organizations that called on Malaysian Prime Minister Najib Tun Razak to release Kashgari from detention and halt the extradition proceedings, reminding the Prime Minister that, as a member of the UN Human Rights Council, Malaysia is committed to upholding the highest human rights standards, a commitment inconsistent with allowing Kashgari to be extradited back to a country where he faces serious threats to his life.
Mohammed Noor, Kashgari’s lawyer in Malaysia, was able to obtain a court order to prevent the deportation, but he was not allowed to see his client before he was put on a plane and repatriated to Saudi Arabia. Noor told the Associated Press:
“We are concerned that he would not face a fair trial back home and that he could face the death penalty if he is charged with apostasy.”
Kashgari is now in detention in Saudi Arabia. Several sites and petitions have been set up to support him and call for his release. Kashgari is being represented by prominent human rights lawyer Abdul-Rahman al-Lahem, who has stated that he will push for this case to be argued before a committee in the information ministry instead of a Sharia court. Even if Kashgari is not charged with apostasy, a crime which carries the death penalty, the blogger and journalist continues to face threats to his life from Saudi militants. A Facebook page titled “The Saudi people want the execution of Hamza Kashgari” has over 26,000 members. It is not enough for the Saudi government to release Kashgari—they must allow him to leave the country for his own safety.
The Electronic Frontier Foundation will continue to keep a close eye on developments in Saudi Arabia. Freedom of expression is a fundamental human right. No one deserves to be killed, whether by his or her government or by fellow citizens, for something they write in a 140-character tweet.
The world’s attention has recently turned to the question of how to hold companies accountable for knowingly marketing, selling and adapting the tools of surveillance to repressive regimes. U.S. and E.U. companies’ equipment has been linked to torture and other human rights violations in many Middle East and North African countries, along with longstanding cases involving similar allegations in China. Most recently, evidence suggests prominent American journalist Marie Colvin may have been tracked via her satellite phone before being killed by government forces in Syria. Public pressure on companies to “Know Your Customer” and take other actions to avoid having their tools used as part of human rights violations is intensifying. The European Parliament has begun the first steps in banning sales of this technology to authoritarian governments, and U.S. Congressman Chris Smith (R-NJ) has introduced a bill, the Global Online Freedom Act, which is in part aimed at this problem.
But there is another avenue for justice: the U.S. courts.
Aiding and abetting, and conspiracy to commit crimes, have long been illegal under U.S. law, and it’s not difficult to see how surveillance tools used to commit human rights violations — especially ones specifically and knowingly modified or supported by a company — could qualify under these or other longstanding laws. In fact, there are two pending cases in the U.S. right now raising those claims against Cisco based on evidence that the company knowingly marketed, sold, and specially adapted tools that the Chinese government uses to target Chinese democracy activists and members of the Falun Gong religious minority.
That’s right. Two years after holding that corporations must be allowed to fully participate in funding candidates in U.S. elections, the Supreme Court will consider whether corporations are nonetheless completely immune from claims alleging that they helped commit gross human rights abuses.1
The two cases concern two different laws: the Alien Tort Statute (ATS) in Kiobel and the Torture Victim Protection Act (TVPA) in Mohamad. While the constitutional analysis under the First Amendment in Citizens United and the statutory interpretation of the TVPA and ATS in these cases are not exactly the same, the public’s concern that the Supreme Court may embrace a world in which corporations have all the rights, but none of the responsibilities, of ordinary people is very real.
How did we get here? In the United States, people have long been held liable for knowingly assisting in human rights abuses even when they are committed overseas. Under case law going back to Filártiga v. Peña-Irala in 1979, people who helped foreign governments engage in torture, summary execution or slavery have been held responsible in both civil and criminal courts. Recently these same claims, on the same standard, have been applied to companies, ranging from one using slave labor to build a pipeline in Burma, to one who helped in the wrongful hanging of Nigerian human rights hero Ken Saro-Wiwa. The cases are not easy, and only apply to a set of extreme human rights violations like torture and execution, but they provide a measure of justice to those who have faced horrific human rights abuses, and hopefully, a strong disincentive for corporations to get involved in the dirty business of assisting in human rights abuses abroad in the first place.2
This is where mass surveillance companies selling technology to authoritarian regimes come in. For months now, we have seen increasing evidence that U.S. and E.U.-based companies have been selling spying technology that has led to the torture and summary execution of journalists, human rights advocates, and democratic activists.
In Bahrain, dozens of recent political prisoners have testified that government officials tortured them before reading back transcripts of text messages and emails likely obtained through these technologies. In Syria, just as the government was ramping up its deadly crackdown on democratic protests, the Italian company Area SpA rushed to complete a “monitoring center” that could not only read every email in the country, but track citizens’ locations via GPS in virtual real-time. Technology from U.S.-based companies Hewlett-Packard and NetApp has also been linked to Syria, according to Bloomberg. And in Libya, the Wall Street Journal reported that, “a surveillance center in Tripoli provides clear new evidence of foreign companies' cooperation in the repression of Libyans under Col. Gadhafi's rule.” Similar reports have emanated from Iran.
Despite these damning investigations from Bloomberg and the Wall Street Journal, dozens of companies are still operating with little oversight or accountability, even when they knowingly sell products used to commit these human rights abuses. On the contrary, business appears to be booming; the market for these products has grown to $5 billion a year.
Those looking for tools to help hold companies accountable for selling the surveillance state to foreign despots should be watching the Supreme Court closely. Kiobel and Mohamad will be argued February 28, and should be decided by late June. More information about the cases is available at corporateaccountabilitynow.org. While some judicial avenues will still exist even if these cases fail, if the Court does require the same responsibilities of corporations not to torture that it already requires of humans, it may help hold these surveillance companies accountable in the courts when they are responsible for assisting in human rights atrocities around the world, and more importantly, it may hopefully help dissuade companies from getting into bed with these repressive governments in the first place.
1. There’s nothing particularly novel about corporate liability for facilitating the bad acts of others. While a corporation cannot go to jail, corporations are regularly held civilly and even criminally liable for involvement in the offenses done by others. Thus, a company that facilitates money laundering can be held liable, and, as EFF members well know, a company can also be secondarily liable for the copyright infringements of others.
2. Note that EFF is counsel in one of the cases, Bowoto v. Chevron, involving Chevron’s helicoptering in, overseeing, and paying of the Nigerian forces who opened fire on protesters in Nigeria, and in that capacity we also signed on to an amicus brief urging the Supreme Court to find that corporations can be liable under the TVPA.
As the European Parliament considers passing a directive that would target hacking, EFF has submitted comments urging the legislators not to create legal woes for researchers who expose security flaws.
In the United States, laws such as the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act have created a murky legal landscape for researchers who conduct independent analysis of technology for security threats. Throughout the world, the Convention on Cybercrime has caused similar problems. Now, vague and sweeping new computer crime legislation is back on the European Union's agenda, threatening coders' rights: the European Commission’s proposal on a draft Directive on Attacks Against Information Systems [pdf].
All told, the European Commission needs to make a stronger case for why this directive is needed at all. We believe it is largely duplicative of the Convention on Cybercrime, which itself is riddled with problems. Should the proposed directive move forward, however, we urge the Parliament to improve several aspects.
No criminalization of tools
The main so-called “novelty” of the draft directive is the criminalization of the use, production, sale, or distribution of tools to commit attacks against information systems. In our submission to the European Parliament, we opposed the wholesale criminalization of these tools: while they can be used for malicious purposes, they are also crucial for research and testing, including for "defensive" security efforts to make systems stronger and to prevent and deter attacks.
We urge the Parliament to focus on the intent behind using the tool, rather than mere possession, use, production, or distribution of such tools per se. The latter approach threatens valuable security testing that makes technology more robust and benefits us all.
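To make the dual-use point concrete, consider a port checker, one of the most common building blocks of security testing. This is an illustrative sketch, not a reference to any specific tool named in the directive: the same few lines serve a defender auditing her own network and an attacker probing someone else's, which is why intent, not possession, is the sensible place to draw the legal line.

```python
# A minimal TCP port checker: the kind of dual-use "tool" that a
# blanket criminalization of production or possession would sweep up.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: treat all as "not open".
        return False

if __name__ == "__main__":
    # A defensive use: check which common services your own machine exposes.
    for p in (22, 80, 443):
        print(p, "open" if port_open("127.0.0.1", p) else "closed")
```

Nothing in the code encodes malice or benevolence; only the operator's intent and authorization distinguish a security audit from an attack.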
Protect coders’ rights to unauthorized access to computers for security testing
We asked the European Parliament to protect researchers who access a computer system without explicit permission when the perpetrator does not have a criminal intent, or mens rea. This protection is needed to safeguard security researchers’ rights to free expression and innovation. Examining computers without the explicit permission of the owner is necessary for a vast amount of useful research, which might never be done if obtaining prior permission were a legal requirement.
The language of the draft Directive resembles language in the Computer Fraud and Abuse Act (CFAA), which provides, among other things, that it is illegal to ‘intentionally access a computer without authorization or exceed authorized access, and thereby obtain . . . information from any protected computer.’
The US experience can serve as a warning to European legislators that vague ill-defined terms can have deleterious effects on free expression, innovation, and competition, especially with respect to the meaning of "authorized" computer access.
Protect coders' rights to free expression and innovation
Finally, we asked the European Parliament to protect security researchers’ right to free expression. Their ability to freely report security flaws is crucial and highly beneficial for the global online community. Public disclosure of security information enables informed consumer choice and encourages vendors to be truthful about flaws, repair vulnerabilities, and improve upon products.
For example, in early February, two German security researchers reported a vulnerability in two encryption systems that could allow eavesdropping on hundreds of thousands of satellite phone calls. Public disclosure of this kind of research allows consumers to be better informed and aware that their communications are not actually protected, which in turn lets them make thoughtful choices about the technology they use. Hopefully it could even inspire the European Telecommunications Standards Institute to formulate a stronger security algorithm that protects users’ privacy.
In our submission, we asked the Parliament to protect the rights of those researchers and whistleblowers. In the course of fixing a problem, they could inadvertently violate laws—even if they never intend to steal information, invade people’s privacy, or otherwise cause harm. By reporting the vulnerability, researchers could risk exposing themselves to a lawsuit or criminal investigation. On the other hand, potentially serious security flaws will go unaddressed if security researchers are forced to withhold information to protect themselves from possible legal liability.
All told, the European Commission hasn’t demonstrated that this proposed directive is necessary, and we don’t think it is. If this proposal moves forward, though, the European Parliament needs to narrowly define and clarify it. The goal should be to leave breathing room for legitimate security research and testing, allowing security researchers to flourish and do what they do best.
The Pakistani government is looking for new ways to censor the Internet.
This week, the Pakistani Telecommunication Authority (PTA) released a Request for Proposals (RFP) for the development, deployment and operation of a “National Level URL Filtering and Blocking System,” calling on institutions to submit by March 2nd a feasible proposal that would allow the government to institute a large-scale filtering system. Shockingly, the RFP requires: “Each [filtering] box should be able to handle a block list of up to 50 million URLs (concurrent unidirectional filtering capacity) with processing delay of not more than 1 milliseconds.” While content filtering and blocking has existed in Pakistan for the past few years, it has been executed manually and has thus been inconsistent and intermittent.1 The state’s latest effort to subsidize a comprehensive, automated censorship regime is deeply troubling.
The RFP, posted on the National ICT R&D Fund website, details various requirements for the system, as well as details for applying for the grant. Its terms of reference describe how this system would address the supposed “problem” that Pakistan does not currently have a sufficient mechanism to filter and block content:
Many countries have deployed web filtering and blocking systems at the Internet backbones within their countries. However, Pakistani ISPs and backbone providers have expressed their inability to block millions of undesirable web sites using current manual blocking systems.
It goes on to describe how the blocking and filtering would be carried out:
This system would be indigenously developed within Pakistan and deployed at IP backbones in major cities, i.e., Karachi, Lahore and Islamabad. Any other city/POP could be added in future. The system is proposed to be centrally managed by a small and efficient team stationed at POPs of backbone providers.
The system would have a central database of undesirable URLs that would be loaded on the distributed hardware boxes at each POP and updated on daily basis. The database would be regularly updated through subscription to an international reputed company maintaining and updating such databases.
The RFP ends with 35 system requirements that detail all aspects of the project and what would be required of the system. Among other specifications, the system must be able to block both individual IP addresses and ranges of addresses, support multiple languages, and run as stand-alone hardware that can easily be integrated into any network.
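It is worth noting that the RFP's headline numbers, a 50-million-URL block list checked with under a millisecond of processing delay, are not technically ambitious: an ordinary in-memory hash lookup meets that latency budget by orders of magnitude, which is part of why off-the-shelf commercial filtering products exist at all. A minimal sketch (illustrative only, with a made-up stand-in block list; a real deployment intercepts traffic at the network layer rather than in Python):

```python
# Illustrative sketch: membership checks against a large URL block list.
# A plain in-memory set gives O(1) average-case lookups measured in
# microseconds, far under the RFP's 1 ms per-URL budget. The hard (and
# troubling) part of such a system is wholesale traffic interception,
# not the lookup itself.
import time

# Stand-in block list; a real deployment would load ~50 million entries.
blocklist = {f"http://example{i}.com/page" for i in range(100_000)}

def is_blocked(url: str) -> bool:
    return url in blocklist

# Rough timing of the lookup path alone.
start = time.perf_counter()
for i in range(100_000):
    is_blocked(f"http://example{i}.com/page")
elapsed_us = (time.perf_counter() - start) / 100_000 * 1e6
print(f"average lookup: {elapsed_us:.2f} microseconds")
```

The point is that the engineering barrier to censorship at this scale is low; the barriers that matter are legal and political.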
The entity funding this initiative is an arm of the Pakistani Ministry of Information Technology called the National ICT R&D Fund. The Ministry created the fund in 2007 to take a certain percentage of revenue from telecommunications companies and allocate it for scholarships in IT education and research and development of information and communication technologies. Therefore, all grant funding for this national censorship project comes from domestic ISPs, mobile carriers, and telephone companies. The decision-making process by which it chooses projects and beneficiaries for grants, however, is not described anywhere on their website.
Censorship and content filtering are part of a broader trend toward moral policing in Pakistan. Ever since the Pakistan Telecommunication Act, passed in 1996, enacted a prohibition on people from transmitting messages that are “false‚ fabricated‚ indecent or obscene,” the PTA has steadily intensified its efforts to censor content online. The PTA blocked thousands of sites in 2007—not just those containing pornographic material or content offensive to Islam, but numerous vital websites and services—in response to a Supreme Court ruling that ordered the blocking of “blasphemous” websites. In 2008, they briefly blocked YouTube because the site hosted Geert Wilders’ film “Fitna.” They blocked it again in 2010, over a hosted clip of Pakistani President Asif Ali Zardari telling an unruly audience member to “shut up.” In May of 2010, the PTA blocked Facebook in response to a controversy over a competition to draw the Prophet Mohammed.
Most recently, in November of last year, the PTA sent a notice to Pakistani mobile carriers to ban 1,600 terms and phrases from SMS texts within seven days or face legal penalties. It was soon revealed that the list originated from an American National Football League “naughty words” list of terms banned from being printed on football jerseys.
This new proposal is fundamentally different from Pakistan’s prior censorship efforts. First, it aims to find a non-governmental third party to design and implement a censorship mechanism. Second, this new system would, for the first time, automate the blocking and filtering process to facilitate comprehensive censorship of webpages. Previously, authorities had to censor and block content manually, so the process has been less than consistent.
A range of local Pakistani digital civil liberty organizations have come out against the PTA’s initiative. Bytes for All, a human rights organization based in Pakistan focused on digital security, online safety and privacy, responded to the announcement with a press release, which strongly criticized the government:
Bytes for All, Pakistan (B4A), strongly condemns this move of the Government and holds it akin to infringing citizens’ fundamental constitutional rights. For a democratically elected civilian government, implementing such a system is highly dictatorial in nature and will directly affect the freedoms and socio-economic well-being of the citizens, reflecting the tyrannical actions of repeated oppression by past military governments.
The statement goes on to call the attention of the UN Expert Panel on Human Rights on the Internet to the current situation. Another organization, Bolo Bhi, has sent a letter to the Ministry of Information Technology to demand transparency into the proceedings of this alarming initiative:
We feel that for successful implementation of a policy at all levels, transparency is crucial. We are a functioning democracy and therefore it is important to have stakeholders on board that could guide and assist on a policy before such a decision is made.
Both organizations call for international companies and institutions to refrain from applying for this proposal in the name of upholding the right to free expression. The RFP itself does not even attempt to explain or justify the need for the censorship system. However, the terms of reference briefly mention that such a system is needed “in order to block the specific URLs containing undesirable content as notified by PTA from time to time.”
The website for the National ICT R&D Fund states that its mission is “To transform Pakistan’s economy into a knowledge based economy by promoting efficient, sustainable and effective ICT initiatives through synergic development of industrial and academic resources.” For the past five years, the fund has backed domestic IT projects in education, health, and technology development, including some dubious projects in biometrics and other supposed security measures.
It is deeply ironic that the National ICT R&D Fund’s purported purpose is “to transform Pakistan’s economy into a knowledge based economy,” yet it calls for proposals for a project that is itself inherently backward and draconian. A national blocking and filtering system would thrust the entire society into a tailspin of repression that would do immeasurable damage to the economy. More importantly, this automated censorship regime would violate the human right to free expression and access to knowledge.
It’s clear that the authorities behind these institutions simply do not comprehend the massive socio-economic costs this would have on Pakistan. As Bolo Bhi wrote in their press statement: “At a time when we as a country are struggling to counter a popular narrative about us, further limiting the sphere would portray us as a grim totalitarian state, which is simply untrue.” If the government of Pakistan ever hopes to catch up as a hub of innovation and re-emerge into the international realm as a modern democratic nation, a repressive censorship program restraining Pakistani expression would not be the place to begin.
Ahead of the Academy Awards this weekend, Chris Dodd, head of the Motion Picture Association of America, would like to assure you that "Hollywood is pro-technology and pro-Internet." But what does that mean? The comments filed at the Copyright Office this month by MPAA and RIAA, together with the Business Software Alliance, the Entertainment Software Association, and other copyright owners' groups, paint a clear picture of these groups' vision for the future of the Internet and digital technologies.
EFF is asking the Copyright Office for legal exemptions to the Digital Millennium Copyright Act to allow jailbreaking (or "rooting") of smartphones, tablets, and game consoles, so that people can run their software of choice on the devices they own. EFF is also asking for exemptions that will allow noncommercial video remixers to use video clips from DVDs and online video services. Other organizations are asking for exemptions for various forms of digital video, accessibility for the disabled, and other important projects. Under the DMCA, exemptions expire every three years, and have to be justified all over again. Many of you sent comments and signed petitions in support of EFF's exemption requests, and the Copyright Office received almost 700 comments.
MPAA and friends don't approve of a single one of the exemption requests. "The risk associated with encouraging people to circumvent and test the limits of fair use is too high," they say, and the makers of computing devices should be able to stop "unintended uses" of their products. In fact, say the entertainment lobbies, giving you the ability to modify your own devices for your own use will "wreak havoc" on "markets for consumer access to works."
Let's unpack this. Almost everything we do on the Internet or with digital media makes a copy—even viewing a webpage. In many cases, the fair use rule of copyright law is what keeps these everyday activities from being copyright violations. But proving definitively that a use is fair often requires a courageous artist or entrepreneur to go to court and risk massive penalties for the chance of having a judge say that what they're doing is legal. According to the entertainment lobbies, the U.S. government should not encourage people to do this.
Ironically, most of the devices that let us create and experience movies, music, software, and so on "test the limits of fair use"—and many have wound up in court. If this were discouraged, we might never have had the VCR, the MP3 player, the digital video recorder, image-searching websites, or social networks—at least not without asking the entertainment industries' permission first.
And speaking of permission, MPAA regrets that "the Copyright Office missed an opportunity to endorse" the custom of "asking permission" before innovating.
So what should the Copyright Office be doing? MPAA et al. humbly suggest that the Office should be protecting the "ongoing viability of business models" that create "predictability with respect to how works will be accessed and how copyrighted software and technologies used to facilitate such access will be used and manipulated." You won't find that in any law, although it sounds a lot like the goals of the now-defunct SOPA and PIPA bills. Again, let's look behind the euphemisms: the entertainment lobbies want the U.S. government to protect their members' bottom lines by regulating how digital technologies can be used. Only uses that receive Hollywood's permission, and are "predictable," should pass muster.
Apparently this is what Mr. Dodd means when he says "Hollywood is pro-technology and pro-Internet": technology that blocks "unintended uses" and an Internet subject to Hollywood's veto power. SOPA and PIPA may be dead, but the agenda behind them seems alive and well.
Note that disabling Viewing and Search History in your YouTube account will not prevent Google from gathering and storing this information and using it for internal purposes. It also does not change the fact that any information gathered and stored by Google could be sought by law enforcement.
With Viewing and Search History enabled, Google will keep these records indefinitely; with it disabled, they will be partially anonymized after 18 months, and certain kinds of uses, including sending you customized search results, will be prevented. An individual concerned about privacy may also want to set up a secondary Google account for browsing and sharing YouTube videos. She could then download all of her existing YouTube videos to her computer, delete them from her primary Google profile, and then use a separate browser to upload them to a new secondary Google account. If you want to do more to reduce the records Google keeps, the advice in EFF's Six Tips to Protect Your Search Privacy white paper remains relevant.
The following steps will delete your viewing and search history on YouTube. If you have multiple YouTube accounts, you will have to complete these steps for each account.
1. Log in to your Google account.
2. Go to https://www.youtube.com
3. Click on your icon.
4. Click “Video Manager.”
5. Click “History.”
6. Click “Clear all viewing history.”
7. Click “Pause viewing history.”
8. Click “Search History.”
9. Click “Clear all search history.”
10. Click “Pause search history.”
California State Attorney General Kamala Harris announced an agreement yesterday with six mobile app platform providers aimed at encouraging app developers to provide more accessible privacy policies. The announcement comes at an auspicious moment -- consumer outrage at the recently-discovered address book practices that Path and other app developers claim are "industry standard" shows that there's a serious disconnect between industry practices and user privacy expectations. But we should be wary about solutions that depend on walled gardens. App developers need to start baking privacy protection into their designs, and though this agreement may help encourage that, it's not clear that it's the best tool to give consumers meaningful choices when it comes to controlling what data mobile apps access and share.
The good news about yesterday’s agreement is that it may encourage app developers to start thinking through the privacy ramifications of the technology they create. And this month’s address book uploading issues show that these companies need the external motivation. When Hipster, a photo sharing social network app, was found to be surreptitiously uploading contact lists to its servers, its CEO announced an “Application Privacy Summit” to suss out the privacy issues around mobile apps. But the promised summit was scheduled for earlier this month and still hasn’t taken place.
The AG's agreement may be one way to address these issues, but this particular program -- relying on walled gardens and closed door negotiations with the gardens' gatekeepers -- isn’t necessarily the ideal resolution for the privacy problems afflicting mobile app users. Users need to have a voice when it comes to controlling their data, and software developers need to respect their choices or be held accountable.