The PROTECT IP Act, known as PIPA, yesterday passed through the Senate Judiciary Committee with only minimal changes. The current draft bill is here. The good news is that the approval was quickly met by a welcome hold on the legislation from Senator Wyden of Oregon.
Wyden sums up our concerns with the bill very nicely:
At the expense of legitimate commerce, PIPA’s prescription takes an overreaching approach to policing the Internet when a more balanced and targeted approach would be more effective. The collateral damage of this approach is speech, innovation and the very integrity of the Internet.
On the more granular side, we have a few additions to the issues raised in our earlier post about PIPA:
The amended bill versus the bill as introduced
The current amendment includes an especially unfortunate edit that the Senate Judiciary Committee failed to highlight in a summary of changes. PIPA enables both the Attorney General and private parties to bring cases against websites “dedicated to infringing activities.” Under the first version of the bill, if a plaintiff “through due diligence” couldn’t find someone within the United States to sue, the Attorney General but not a private litigant was allowed to pursue a claim directly against the domain name of the site. This kind of action is called in rem and refers to a court’s power to issue orders against property without involvement of the owner or other person related to the property. After yesterday's amendments, PIPA allows private litigants to sue in rem as well. As a general matter, the ability to get court orders against an entire website without the site owner’s prior knowledge, much less ability to protest, in and of itself raises concerns about due process. It also raises First Amendment concerns given that the actions target entire websites, including lawful speech on those sites. Extending this power to private parties increases the likelihood that it will be abused.
When COICA was introduced in the Senate last fall, EFF wrote about its dangerous implications for the Internet’s domain name system (DNS). These concerns remain true for PIPA, despite the removal of a provision that would have required registrars and registries to block domain names pointing to sites “dedicated to infringing activities.” Because blocking via registries and registrars underlies Immigration and Customs Enforcement’s ongoing practice of seizing domain names, removing this device from PIPA offers little comfort. The bill would still require targeted DNS server operators like ISPs to prevent an identified domain name from resolving to the domain's IP address, thereby preventing their users from accessing those sites. As a result, the warnings that we and others gave last year about serious security vulnerabilities and a fractured Internet are unchanged.
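To make concrete what this requirement entails, here is a minimal sketch of what a PIPA-compliant resolver operator would have to do: refuse to answer for names on a blocklist rather than return the real DNS record. The blocklist and domain name below are invented for illustration.

```python
import socket

# Hypothetical blocklist of the kind PIPA would oblige resolver operators
# (such as ISPs) to maintain; the domain name here is invented.
BLOCKED = {"example-infringing-site.com"}

def resolve(hostname: str) -> str:
    """Resolve a hostname, enforcing a PIPA-style blocking order."""
    if hostname in BLOCKED:
        # Instead of returning the site's real IP address, the operator
        # must make the name fail to resolve for its users.
        raise socket.gaierror(f"{hostname}: blocked by court order")
    return socket.gethostbyname(hostname)
```

Because the site's servers remain reachable by IP address or through any non-complying resolver, blocking at this layer is trivially circumvented, which is one reason security researchers warned it would push users toward untrusted alternative DNS services and fracture the network.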
But the new bill goes even further. Where COICA didn't bother to define “domain name system server,” PIPA says this:
the term “domain name system server” means a server or other mechanism used to provide the Internet protocol address associated with a domain name
The inclusion of the words “or other mechanism” vastly increases the potential scope of the definition, at the risk of extreme and unintended consequences. The term could sweep in, for example, operating systems, email clients, web clients, routers, and a host of other technology. This may be a simple blunder due to technical ignorance on the part of the drafters, defining “server” so broadly as to mean effectively “client.” If so, that’s troubling enough. If not, this bill has even more grave implications for the health of the network than we thought.
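To see how loose the definition is, consider that almost any networked program contains some “mechanism used to provide the Internet protocol address associated with a domain name.” A brief illustration (the function names and the table entries below are invented):

```python
import socket

# Under PIPA's definition, a "domain name system server" is any "server or
# other mechanism used to provide the Internet protocol address associated
# with a domain name." Each function below is such a mechanism, yet neither
# is a DNS server in any ordinary sense.

HOSTS_TABLE = {"intranet.example": "10.0.0.5"}  # a hosts-file-style mapping

def via_stub_resolver(name: str) -> str:
    # The operating system's stub resolver: the client-side "mechanism"
    # that every browser, mail client, and router relies on.
    return socket.gethostbyname(name)

def via_static_table(name: str) -> str:
    # Even a static lookup table "provides the Internet protocol address
    # associated with a domain name."
    return HOSTS_TABLE[name]
```

On a literal reading, both functions fall under the statute's definition, which is exactly the problem with defining a “server” so broadly that it covers clients.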
This blog post was also published on the Index on Censorship blog.
Despite a super injunction in place to keep his name and the story of his extra-marital affair out of the tabloids, a British footballer has found that where there’s the Internet, there’s a way...for the story to get out, that is.
Partially in response to the draconian nature of the super injunction the footballer obtained, tens of thousands of Twitter users published his name, briefly turning it—along with the name of his alleged mistress—into a Twitter trending topic, with purportedly as many as 75,000 individuals tweeting the name. The athlete—who has now been named in British media as well as in Parliament as Ryan Giggs—reportedly obtained a court order in British High Court to demand Twitter reveal the identities of users who had posted the tweets. We call this public backlash to overbroad censorship attempts the Streisand effect.
Publishing truthful information about a matter of public concern is and should be protected expression. Yet these injunctions prevent the press and the public from reporting on details of a court case, and can even include preventing a mention of the fact that an injunction has been taken out.
The controversial super injunction procedure was created by the 1998 Human Rights Act and aimed, nobly, at protecting individuals' privacy while also protecting their right to freedom of expression. However, the balance here is plainly off. The international freedom of expression organization Article 19 has noted that super injunctions are a form of prior censorship that is not permitted under international human rights law—including under the permitted limits to Article 19 of the Universal Declaration of Human Rights and Article 10 of the European Convention on Human Rights.
It's easy to see why. In this case, as in reportedly many others, super injunctions have become a tool of powerful public figures to try to stop embarrassing facts from being discussed, and in this instance the injunction process is ironically being used to require Twitter to pierce the anonymity of its customers based on the content of their speech. Particularly in this situation—where very public figures who actively seek public attention much of the time are trying to ensure that the public only learns the heroic, and not the embarrassing, facts about them—these broad super injunctions raise deep concerns.
While the situation raises many questions, three issues jump out at us:
Blaming the Platform - UK needs Intermediary Protection
In the United States, intermediaries like Twitter are protected by Section 230 of the Communications Decency Act of 1996. CDA 230 shields online intermediaries that host speech from a range of laws that might otherwise hold them legally responsible for what their users say and do. In essence, CDA 230 places the responsibility for speech on the individual speaker rather than on the platform.
As Eric Goldman noted in a position paper for an OECD experts workshop on Internet intermediaries on the benefits of immunity regimes for Internet publishers:
“The United States has seen an explosion of entrepreneurial activity from Internet publishers of reputational information—a process fostered by 47 U.S.C. § 230, which Congress enacted in 1996 as part of the Communications Decency Act. Content originators remain liable for their content, but 230 provides Internet publishers with a powerful immunization for content originated by third parties. With 230’s protection, Internet publishers are developing innovative ways to supply consumers with helpful reputational information, freed from concerns that innovation will increase their liability for user content..."
CDA 230, along with the First Amendment, would protect Twitter (and likely most U.S. Twitter customers) should the footballer attempt to enforce a U.K. judgment here in the U.S., assuming Twitter is not subject to jurisdiction of the U.K. courts.
That's good news, but the U.K.'s failure to adequately protect intermediary platforms under its own law raises deep concerns.
It is now painfully clear that the judicial ruling is not stopping the facts about this matter from being spoken and that there is a strong public interest in this gossipy news about very public celebrities. As the British Courts themselves recently observed in a similar case:
"The Court should guard against slipping into playing the role of King Canute. Even though an order may be desirable for the protection of privacy, and may be made in accordance with the principles currently being applied by the courts, there may come a point where it would simply serve no useful purpose and would merely be characterised, in the traditional terminology, as a brutum fulmen. It is inappropriate for the Court to make vain gestures."
Continued insistence on this injunction, and continued efforts to impose liability, run the risk of creating an atmosphere where British court rulings have reduced authority because they are viewed as unrealistic and out of touch with modern technology.
The British courts deserve better and it may fall to the British Parliament to change the ‘super injunction’ law in order to fix this problem.
Once Again Twitter's Policy of Notifying Users is Key
In January, Twitter rightfully received the world's praise for insisting on notifying its users when the U.S. government demanded information about several Twitter users. Now Twitter's policy of notifying users may be triggered again, in the event that they receive appropriate legal process requiring them to identify users who republished the information. EFF has called on other service providers to make the same promise to notify users that Twitter has made, so that if a "super injunction" hits any other service providers, users can take steps to protect themselves.
We’ve previously written about the Kerry-McCain "Commercial Privacy Bill of Rights," which tries to create a general federal privacy framework rooted in the Fair Information Practices (although we’re not sure how well it succeeds). Currently, federal privacy law is sector-specific, often applying only to certain types of information or certain categories of "covered entities," and thus leaving gaps in privacy protection. A good comprehensive federal privacy law could fill those gaps.
At the same time, privacy advocates are also fans of state privacy laws. States are often privacy innovators. A classic example is California’s pioneering data breach notification law, which helped shed light on just how often (and how badly) holders of our personal data mess up—and has since been copied by many states. There’s still no federal breach notification law.
More generally, many states have laws that authorize state officials (and in more limited circumstances, consumers) to bring consumer protection lawsuits against unfair or deceptive trade practices. In California, Business & Professions Code § 17200 can be enforced not only by the state attorney general but also by: 58 county district attorneys; 5 city attorneys (for each of the cities with populations over 750,000); and full-time city attorneys for any of the other 400+ smaller cities (with the consent of the county district attorney). District attorneys across California—Alameda, Los Angeles, Sacramento, San Diego, San Francisco, San Mateo, and Sonoma (to name a few)—have actively used § 17200.
But these powerful state-level laws for protecting consumer privacy might be endangered. Under the U.S. Constitution’s Supremacy Clause, both the Constitution and federal law “shall be the supreme Law of the Land; ... any Thing in the Constitution or Laws of any state to the Contrary notwithstanding.” (Article VI, clause 2) Lawyers call this “preemption”: it means that federal law trumps state law. Congress can expressly preempt state law, but even if Congress doesn’t say so outright, courts may find that a state law is preempted because it conflicts with federal law or because Congress intended to “occupy the field.”
On the other hand, Congress can also expressly set a federal “floor” but allow the states to impose stricter rules. For example, as the legislative history of the Wiretap Act states, “The proposed provision envisions that States would be free to adopt more restrictive legislation, or no legislation at all, but not less restrictive legislation.” S. Rep. No. 1097, at 98 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2187.
So an obvious question is how the Kerry-McCain bill addresses state privacy laws. Our main conclusion: Kerry-McCain would preempt many state privacy laws, because § 405(a) of the bill expressly preempts all state laws “relating to” covered entities “to the extent that such provisions relate to the collection, use, or disclosure of” either “covered information” as defined in the bill or “personally identifiable information or personal identification information addressed in provisions of the law of a State.” (There are some carve-outs for state laws concerning the collection, use, or disclosure of health or financial information, required notifications pursuant to a data breach, and state laws that “relate to acts of fraud.” § 405(b)(2).)
The broad scope of preemption results from three factors. First, a comprehensive privacy law—regulating offline as well as online activity—by definition runs into the many state laws that currently protect information privacy. Second, Kerry-McCain isn’t a federal “floor” law like the Wiretap Act. It’s the opposite, setting a federal “ceiling.” So if it were enacted, states would be hampered from passing stronger protections for consumer privacy. Third, Kerry-McCain reaches entities like common carriers and non-profit organizations that the Federal Trade Commission (which under the bill would develop regulations) normally can’t regulate.
Thus, for example, Kerry-McCain likely preempts all state laws that protect the privacy of your phone records. Current California law protects telephone subscribers’ personal calling patterns, including numbers called, from being made available without first obtaining the residential subscriber’s written consent. Cal. Pub. Util. Code § 2891(a), et seq.; Cal. Penal Code § 638(a) (prohibiting any person from purchasing, selling, or offering or conspiring to purchase or sell “any telephone calling pattern records or list, without written consent of the subscriber”).
Such preemption might not be so bad if Kerry-McCain replaced the lost state protection with equivalent federal protection—but it doesn’t. California law provides a private right of action (to sue the telephone company and its employees) under § 2891(e); there’s no private right of action under Kerry-McCain.
The preemptive effect of Kerry-McCain would also affect enforcement of California law more broadly. Recall the earlier discussion of Business & Professions Code § 17200; it may be preempted as well. But even if it’s not, § 405(b) of the bill radically changes the enforcement picture, because of all state officials, only state attorneys general may bring actions that sound “in whole or in part” upon violations of Kerry-McCain—county district attorneys, city attorneys, etc., cannot. Remedies are restricted as well. Actions are authorized only in cases of economic or physical harm. § 403(a).
In short, we think that Kerry-McCain would preempt many state laws and weaken enforcement of those laws that it doesn’t preempt. We think that strips away the hard-won consumer protections many states have enacted, and could prevent new state-level protections from being passed in the future. We hope that the bill can be amended to eliminate these problems.
Update II: Apparently not frightened off by Apple's letter defending its developers, Lodsys went ahead and sued at least seven developers in the Eastern District of Texas for patent infringement. In its original cease-and-desist letters, Lodsys gave developers 21 days to respond. But – apparently in response to Apple's letter – Lodsys went ahead and filed suit sooner, claiming that it needed to "preserve its legal options." We continue to monitor the situation and follow developments in the litigation.
Update: We were pleased to learn that Apple has decided to stand up for its developers. Its detailed letter to Lodsys, sent yesterday, explains in no uncertain terms why the patent infringement allegations are baseless and improper. Let's hope this ends the matter.
We've been waiting expectantly for Apple to step up and protect the app developers accused of patent infringement solely for using a technology that Apple required they use in order to sell their apps in Apple's App Store. Apple's failure to defend these developers is troubling and highlights at least two larger problems: patent trolls and developers' vulnerability when harassing and counter-productive patent litigation comes around.
In case you missed it, Lodsys – a troll whose sole business model is owning and suing on patents – has sent letters to many of Apple's app developers accusing them of infringing a patent that covers the in-app purchasing functionality that Apple provides as part of its operating system. In addition to these accusations, Lodsys' letters demanded payment. Unfortunately, suing app developers – who often lack the resources required to defend a lawsuit – is a trend we’re seeing more and more often.
What’s different here, however, is that Apple provides this functionality to its developers and requires that they use it. Apple itself is protected from liability – Apple took a license from Lodsys' predecessor to use this very patent (which was likely part of a larger blanket license). And the apparently one-sided Apple-developer agreement does not require that Apple indemnify developers from suits based on technology that Apple provides.
This is a problem that lawyers call a misallocation of burden. The law generally works to ensure that the party in the best position to address an issue bears the responsibility of handling that issue. In the copyright context, for example, the default assumption is that the copyright owners are best positioned to identify potential infringement. This is because, among other reasons, copyright owners know what content they own and which of their works have been licensed. Here, absent protection from Apple, developers hoping to avoid a legal dispute must investigate each of the technologies that Apple provides to make sure none of them is patent-infringing. For many small developers, this requirement, combined with a 30 percent fee to Apple, is an unacceptable cost. Even careful developers who hire lawyers to do full-scale patent searches on potential apps surely would not expect to investigate the technology that Apple provides. Instead, they would expect (with good reason) that Apple wouldn't provide technologies in its App Store that open its developers up to liability – and/or would at least agree to defend them when a troll like Lodsys comes along.
By putting the burden on those least able to shoulder it, both Apple and Lodsys are harming not just developers but also the consumers who will see fewer apps and less innovation. We hope that going forward companies like Apple will do what's right and stand up for their developers and help teach the patent trolls a lesson.
EFF is proud to support SB 914, a bill that requires the police to obtain a warrant before searching a recent arrestee’s cell phone.
SB 914 is a response to a January decision of the California Supreme Court in People v. Diaz. In that case, the court authorized police officers to search any person’s cell phone after they had been arrested under a narrow exception to the Fourth Amendment’s warrant requirement that permits law enforcement officers to search the area immediately around a person “incident to arrest.” This exception has two traditional rationales: ensuring officer safety by allowing a search for weapons, and protecting evidence from immediate destruction. By permitting the warrantless search of a cell phone under this exception, the court gave officers carte blanche to rummage through all the private data and information people keep on their cell phones – emails, text messages, call history, websites they’ve visited, and their calendars, to name just a few examples – regardless of whether the police believed there was evidence of the crime on the cell phone and without any judicial oversight.
Courts throughout the country have been grappling with this issue and have reached conflicting results, with some courts authorizing warrantless searches of cell phones and others not. In an amicus brief (pdf) recently filed before the Oregon Supreme Court, EFF argued that warrantless searches of cell phones incident to arrest violate the Constitution’s right to privacy. This is all the more troubling because cell phones pose no danger to the police, the threat of destruction of evidence can be easily remedied through simple preservation methods, and many arrests do not result in criminal prosecution at all.
SB 914 is a proactive attempt to legislate Constitutional protection and reverse Diaz’s dangerous course. Introduced by California Senator Mark Leno and sponsored by the Northern California ACLU, the bill reasonably balances law enforcement needs with people’s privacy rights by allowing the police to look through cell phones only when they have convinced a magistrate judge there is likely evidence of the crime on the phone.
The bill is expected to be on the Senate floor soon. All Californians should ask their state lawmakers to support SB 914 and tell law enforcement that if they want access to the personal and private data stored on cell phones, they need to come back with a warrant.
Freedom House released Leaping Over the Firewall last month, a report covering two angles: details about Internet censorship in Azerbaijan, Burma, China, and Iran; and the use of circumvention software in those countries to bypass Internet censorship. As government censorship of the Internet spreads worldwide, research about the technology, norms and policies determining the flow of information is going to be increasingly vital.
Leaping Over the Firewall blends a non-technical survey method with some lightweight lab testing of circumvention software. This approach is unique, but it has limitations that readers should bear in mind to avoid confusion.
What are the report's goals?
The report is about circumvention tools—software used by Internet users to get around blocking and filtering technologies set up by governments. More specifically, the report is a vehicle for two very different sets of information:
the results of non-probability sampled surveys about users of circumvention software, distributed to and collected from users in Azerbaijan, Burma, China, and Iran; and
the results of lab-based testing of circumvention software.
What does Freedom House intend to achieve by releasing this report? In their own words:
Freedom House conducted this product review to help internet users in selected internet-restricted environments assess a range of circumvention tools and choose the tools that are best suited to their needs. [...] By providing this assessment, Freedom House seeks to make circumvention tools more accessible in countries where they are needed, thereby countering internet censorship. The evaluation is also useful for tools developers to learn how their tools are perceived by the users in these countries, and what enhancement would be beneficial.
What are the report's limitations?
The survey results are not representative. Note that the survey was only issued to users in four countries, and that those users were not randomly sampled. Understandably, safety and operational considerations limit the ability of researchers to conduct a more robust survey—but it also means that the findings in this report should not be treated as generally applicable. Internet censorship takes place in many countries apart from Iran, China, Azerbaijan, and Burma. China and Iran in particular are understood to have more sophisticated, aggressive Internet censorship operations than other countries. Readers must be careful to avoid over-generalizing the report's results to other countries that practice Internet censorship but differ in userbase, politics, technical sophistication, and more.
The report isn't about how to communicate securely and safely. Internet filtering and blocking is increasingly combined with Internet surveillance, partly because tools capable of surveilling Internet traffic can help better identify what and how to block and filter. A software tool can provide circumvention but still fall well short of providing any kind of meaningful security against a government. The report does surface the complexity of making security decisions around using the Internet, but it also makes notable misuses of "security" throughout.
Here is an example of "security" being used irresponsibly:
Tor is software that a user can run to give themselves a relatively strong guarantee of anonymity online.1 Tor's design allows it to work as circumvention software, but its value extends beyond getting access to blocked or filtered information—Tor's design is intended to give its users anonymity by taking measures to defeat network surveillance and traffic analysis.
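Tor's layered ("onion") design can be sketched conceptually: the client wraps each message in one layer of encryption per relay on its circuit, so that no single relay ever sees both who is talking and what is being said. The toy code below illustrates only the layering idea; it substitutes an insecure hash-derived XOR keystream for the real cryptography, and the relay keys are invented (in the actual protocol, each key is negotiated with the client via a key exchange, and each relay knows only its immediate neighbors on the circuit).

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable pseudo-random keystream from a key.
    # Illustration only -- NOT cryptographically secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream both adds and removes a layer.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# A circuit of three relays, each sharing a distinct key with the client.
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]

message = b"GET / HTTP/1.1"

# The client wraps the message in one layer per relay.
cell = message
for key in reversed(relay_keys):
    cell = xor_layer(cell, key)

# Each relay peels off exactly one layer; only the exit sees the plaintext,
# and only the entry knows the client's address.
for key in relay_keys:
    cell = xor_layer(cell, key)

assert cell == message
```

The point of the layering is that circumvention falls out as a side effect: because traffic exits the network from a relay elsewhere, blocked content becomes reachable, but the primary design goal is defeating surveillance and traffic analysis.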
A reader glossing over the report might see that Tor received 2 stars in security and make the unfortunate judgment that Tor shouldn't be used on that basis. What this star rating actually means is that Freedom House's survey respondents—on a purely anecdotal, non-randomly sampled basis—evaluated Tor as presenting "operational problems" while having fewer technical support resources available.2
Everywhere else in the report, this category of poll question is called "security and support," but for some reason, in the box summary, it's inappropriately reduced to just "security." Looking at the questions, the category referred to as "security" in the boxes actually represents survey respondents' views on usability and support—essentially whether or not users had trouble using or understanding the software, and whether or not there were resources to help them understand what was wrong. Usability and support are certainly important characteristics, but describing them as "security" is a gross misnomer.
The security ratings for the circumvention tools don't appear to heavily weight crucial elements of the design of the circumvention software system as a whole—in particular, whether or not the operators of the circumvention software have tracking or data collection capabilities over users of the software, and whether or not the source code of the tool has been made available for analysis.3 One baseline factor in evaluating the security of a piece of software is whether or not the widest possible community of knowledgeable technologists has had the opportunity to identify defects, from design and architecture, down to the code itself. From a computer security perspective, most (if not all) software has exploitable flaws—taking advantage of those flaws to disrupt or control a piece of technology is more or less a matter of time and resources. And so there's a general understanding that any tool whose source code hasn't been made widely available doesn't have the benefit of having allowed broad research into the ways that it could be exploited, making claims of security essentially impossible to validate.
The report features a decision-making flowchart, where the resulting recommendation of software rests upon whether or not users are seeking to receive information or upload information; and also whether or not they're interested in speed or security. But without sufficient context—details buried in the written descriptions about software—users are being encouraged to conduct a risk assessment without the broad range of knowledge that may be necessary to make a truly improved decision about which circumvention software to use, and how to use it.
What are the essential take-aways?
Does the report deliver on its stated goals? As far as helping users assess a range of circumvention tools, the report's writeups about circumvention software projects do include valuable contextual details—such as that a software project is not openly documented or described, or that the operator of the software is able to log what its users are accessing. But the reductive star ratings and the conflation of survey content with lab research content throughout cast doubt on how beneficial this report will be to end users who don't read the report in its entirety, with an eye for the few caveats that establish the report's limitations.
A second goal is to help "tools developers to learn how their tools are perceived by the users in these countries, and what enhancement would be beneficial." In this regard, the report could be beneficial if tools developers take the anecdotal survey findings and seek additional evidence to see if there are patterns that need addressing.
Ultimately, the Leaping Over the Firewall report seems to face a difficult internal contradiction: it approaches circumvention tools from a largely non-technical perspective. The blocking of Internet content by governments and the circumvention of those blocks is a deeply technical topic where the adage that "code is law and architecture is policy" is powerfully validated.
However, there is value in attempting to identify and quantify what end users of circumvention software experience, and Freedom House's general finding that users will trade security for operational speed raises a number of vital questions about exactly why that choice is being made. Under what conditions does a user switch from a slow, highly secure channel to a faster, less secure channel? And for activists, when is an appropriate time to make that decision, and when should speed be sacrificed for security? The answer to these questions will help tool developers, activists, and users understand how to continue to have free expression on the Internet even and especially when faced with censorship.
1. EFF sponsored Tor early in its development because of its explicit, sophisticated focus on Internet anonymity, which EFF considers to be central to free expression.
2. The poll questions chosen by Freedom House leading to the "Security" star rating were:
Problems: How often have you encountered operational problems using the abovementioned tools?
Solutions: When you have encountered a problem, how easy was it for you to obtain help?
Support Validity: How frequently does the help you find come directly from the tool’s developers or the tool’s network?
3. The technical testing methodology has a section for "logging practises," but it is not clear how this detail was represented in the relatively non-granular star rating.
Today, Senator Patrick Leahy introduced much-needed legislation to update the Electronic Communications Privacy Act of 1986, a critically important but woefully outdated federal privacy law in desperate need of a 21st century upgrade. This ECPA Amendments Act of 2011 (S. 1011) would implement several of the reform principles advocated by EFF as part of the Digital Due Process (DDP) coalition, and is a welcome first step in the process of providing stronger and clearer privacy protections for our Internet communications and location data. Here is the bill text, along with a summary of the bill.
The upshot? If the government wants to track your cell phone or seize your email or read your private IMs or social network messages, the bill would require that it first go to court and get a search warrant based on probable cause. This is consistent with DDP's principles, builds on EFF's hard-won court victories on how the Fourth Amendment applies to your email and your cell phone location data, and would represent a great step forward for online and mobile privacy protections.
The bill isn't absolutely free of problems: although it clearly would require a warrant for ongoing tracking of your cell phone, it would also and unfortunately preserve the current statutory rule allowing the government to get historical records of your location without probable cause. It also expands the government's authority to use National Security Letters to obtain rich transactional data about who you communicate with online and when, without probable cause or court oversight. You can count on EFF to press for these problems to be fixed, and for all of the DDP principles to be addressed, as the bill proceeds through Congress.
However, as the start of the process of updating ECPA for the always-on, location-enabled technology of the 21st century, Senator Leahy's bill represents an incredibly important step in the right direction, and we at EFF look forward to working with Senator Leahy and others in Congress as they work to create new laws to better protect your online and mobile privacy. In the meantime, stay tuned for more commentary and analysis from EFF as the ECPA reform process moves forward.
Join EFF on Friday to learn the inside story on social media and the Arab Spring!
How important are social networking tools like Twitter and Facebook for international activists fighting for their liberty? Are these networks forging a new international power structure? Tunisian activist Sami Ben Gharbia will join us for a special Geek Gathering to recount the role of social media in the Tunisian revolution. Sami's presentation and community dialogue will be hosted by Jillian York, EFF's new Director for International Freedom of Expression, with special guest Sachin Agarwal, founder of Posterous, the information sharing website Sami used extensively during the revolution.
This all-ages event will be held at EFF's future headquarters in the heart of San Francisco's Mission District nightlife. Come see our new home for online rights!
Friday, May 20, 2011, 7-9 PM
EFF's Future Headquarters
2567 Mission Street
San Francisco, CA 94110
We ask for a $25 contribution, but no one will be turned away for lack of funds. EFF member admission is $20 online or at the door with your Member Card. Contact email@example.com for the Members-Only link!
About Sami Ben Gharbia
Sami is a Tunisian anti-censorship activist and blogger based in the Netherlands. He is the co-founder of nawaat.org, an award-winning Tunisian collective blog about news and politics. Sami serves as Advocacy Director for Global Voices, where he also works on Threatened Voices, a recently developed initiative of the Global Voices Advocacy project. He is the author of a French-language book titled Journey in a Hostile World, which documents his escape from Tunisia.