The COVID-19 public health crisis has no precedent in living memory, but government demands for new high-tech surveillance powers are all too familiar. They include well-meaning proposals to track disease transmission with various forms of personal data. Even in the midst of a crisis, the public must carefully evaluate such demands, because surveillance invades privacy, deters free speech, and unfairly burdens vulnerable groups. It also metastasizes behind closed doors. And new surveillance powers tend to stick around: nearly two decades after the 9/11 attacks, the NSA is still conducting dragnet Internet surveillance. So when governments demand new surveillance powers, we ask three questions:
- First, has the government shown its surveillance would be effective at solving the problem?
- Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
- Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance?
Would It Work?
The threshold question is whether the government has shown that its surveillance plan would be effective at solving the problem at hand. This must include published details about what the government plans, why this would help, and what rules would apply. Absent efficacy, there is no reason to advance to the next questions. Surveillance technology is always a threat to our freedoms, so it is only justified where (among other things) it would actually do its job.
Sometimes, we simply can’t tell whether the plan would hit its target. For example, governments around the world are conducting location surveillance with phone records, or making plans to do so, in order to contain COVID-19. As we recently wrote, governments so far haven’t shown this surveillance works.
Would It Do Too Much Harm?
Even if the government shows that a surveillance power would be effective, EFF still opposes its use if it would too greatly burden our freedoms. High-tech surveillance can turn our lives into open books. It can chill and deter our participation in protests, advocacy groups, and online forums. Its burdens fall all too often on people of color, immigrants, and other vulnerable groups. Breaches of government data systems can expose intimate details about our lives to scrutiny by adversaries including identity thieves, foreign governments, and stalkers. In short, even if surveillance would be effective at solving a problem, it must also be necessary and proportionate to that problem, and not have an outsized impact on vulnerable groups.
Thus, for example, EFF opposes NSA dragnet Internet surveillance, even if it can theoretically provide leads to uncovering terrorists, such as the proverbial needle in the haystack. We believe this sort of mass, suspicionless surveillance is simply incompatible with universal human rights. Similarly, we oppose face surveillance, even if this technology sometimes contributes to solving crime. The price to our freedoms is simply too great.
On the other hand, the CDC’s proposed program for contact tracing of international flights might be necessary and proportionate. It would require airlines to maintain the names and contact information of passengers and crews arriving from abroad. If a person on a flight turned out to be infected, the program would then require the airline to send the CDC the names and contact information of the other people on the flight. This program would apply to a discrete set of information about a discrete set of people, would only occasionally lead to disclosure of that information to the government, and is tailored to a heightened transmission risk: people returning from a foreign country, densely packed for many hours in a sealed cabin. However, as we recently wrote, we don’t know whether this program has sufficient safeguards.
Are the Safeguards Sufficient?
Even if the government shows a form of high-tech surveillance is effective, and even if such surveillance would not intolerably burden our freedoms, EFF still seeks guardrails to limit whether and how the government may conduct this surveillance. These include, in the context of surveillance for public health purposes:
1. Consent. For reasons of both personal autonomy and effective public health response, people should have the power to decide whether or not to participate in surveillance systems, such as an app built for virus-related location tracking. Such consent must be informed, voluntary, specific, and opt-in.
2. Minimization. Surveillance programs must collect, retain, use, and disclose the least possible amount of personal information needed to solve the problem at hand. For example, information collected for one purpose must not be used for another purpose, and must be deleted as soon as it is no longer useful to the original purpose. In the public health context, it may often be possible to engineer systems that do not share personal information with the government. When the government has access to public health information, it must not use it for other purposes, such as enforcement of criminal or immigration laws.
3. Information security. Surveillance programs must process personal information in a secure manner, and thereby minimize risk of abuse or breach. Robust security programs must include encryption, third-party audits, and penetration tests. And there must be transparency about security practices.
4. Privacy by design. Governments that undertake surveillance programs, and any corporate vendors that help build them, must employ privacy officers, who are knowledgeable about technology and privacy, and who ensure privacy safeguards are designed into the program.
5. Transparency. The government must publish its policies and training materials, and regularly publish statistics and other information about its use of each surveillance program in the greatest detail possible. Also, it must regularly conduct and publish the results of audits by independent experts about the effectiveness and any misuse of each program. Further, it must fully respond to public records requests about its programs, taking into account the privacy interests of people whose personal information has been collected.
6. Anti-bias. Surveillance must not intentionally or disparately burden people on the basis of categories such as race, ethnicity, religion, nationality, immigration status, LGBTQ status, or disability.
7. Expression. Surveillance must not target, or document information about, people’s political or religious speech, association, or practices.
8. Enforcement. Members of the community must have the power to go to court to enforce these safeguards, and evidence collected in violation of these safeguards must be excluded from court proceedings.
9. Expiration. If the government acquires a new surveillance power to address a crisis, that power must expire when the crisis ends. Likewise, personal data that is collected during the crisis, and used to help mitigate the crisis, must be deleted or minimized when the crisis is over. And crises cannot be defined to last in perpetuity.
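To make the minimization principle above concrete: purpose limitation and prompt deletion can be enforced in code, not just in policy. Here is a minimal sketch in Python of a hypothetical purpose-bound record store; the class and field names are illustrative assumptions, not any agency's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of data minimization: each record is bound to the one
# purpose it was collected for and to a retention deadline, and it is deleted
# as soon as that deadline passes.

@dataclass
class Record:
    subject: str            # e.g., a passenger's contact information
    purpose: str            # the single purpose the data was collected for
    expires_at: datetime    # when the data stops being useful to that purpose

class MinimalStore:
    def __init__(self) -> None:
        self._records: list[Record] = []

    def collect(self, subject: str, purpose: str, retention: timedelta) -> None:
        # Every record carries its own expiration from the moment of collection.
        self._records.append(
            Record(subject, purpose, datetime.now(timezone.utc) + retention))

    def query(self, purpose: str) -> list[str]:
        # Disclose records only for the purpose they were collected for.
        self.purge()
        return [r.subject for r in self._records if r.purpose == purpose]

    def purge(self) -> None:
        # Delete records as soon as they are no longer useful.
        now = datetime.now(timezone.utc)
        self._records = [r for r in self._records if r.expires_at > now]
```

In this sketch, a query for any other purpose (say, immigration enforcement) returns nothing, and expired records vanish on the next access, so the store structurally cannot serve secondary uses of the data.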
Outside the context of public health, surveillance systems need additional safeguards. For example, before using a surveillance tool to enforce criminal laws, the government must first obtain a warrant from a judge, based on probable cause that evidence of a crime or contraband would be found, and particularly describing who and what may be surveilled. Targets of such surveillance must be promptly notified, whether or not they are ever prosecuted. Additional limits are needed for more intrusive forms of surveillance: use must be limited to investigation of serious violent crimes, and only after exhaustion of less intrusive investigative methods.
Once the genie is out of the bottle, it is hard to put back. That’s why we ask these questions about government demands for new high-tech surveillance powers, especially in the midst of a crisis. Has the government shown it would be effective? Would it do too much harm to our freedoms? Are there sufficient guardrails?