In a highly anticipated decision, a federal judge in Washington, D.C., has ruled that violating a website’s terms of service does not violate the Computer Fraud and Abuse Act (CFAA), the notoriously vague federal computer crime law. This decision solidifies the trend in courts of rejecting overbroad and constitutionally troublesome interpretations of the CFAA that have threatened to criminalize innocuous online activity, such as beneficial research and journalism.

The case, Sandvig v. Barr, is a lawsuit by university professors, computer scientists, and journalists who want to research how algorithms unlawfully discriminate based on characteristics like race or gender. To determine whether algorithms produce discriminatory results, the researchers want to create multiple “tester” accounts. For example, to study the impact of gender on an employment website's algorithms, a researcher may create tester accounts that are identical except as to their listed gender, to isolate the effect that users’ gender has on the job postings that an algorithm provides them.

This type of audit testing is commonly used to identify discrimination in the offline world, including by the federal government itself. But when these kinds of studies move online, researchers have been concerned—and in some cases chilled—by the prospect of federal criminal liability. This is because online discrimination research sometimes involves violating a website’s terms of service, such as a prohibition against providing inaccurate account information. And the government interprets the CFAA, a law meant to target serious computer break-ins, so broadly that it would turn a violation of a website’s terms of service into a federal crime.

If you’re not familiar with the CFAA, you may be wondering how it is possible that a law meant to target serious computer break-ins is being abused to target violations of websites’ terms of service. The CFAA makes it a crime to access a computer “without authorization” or in a manner that “exceeds authorized access.” But the law doesn’t make clear what these phrases mean. Though the law was intended to address malicious break-ins of private computer systems, in the absence of clear definitions, the government has argued that it goes much further.

Under such sweeping interpretations of the CFAA’s vague language, a researcher who tests a job website for discrimination by using false first names when registering a user account would be committing a federal crime if the website has a “real name” policy. The Sandvig v. Barr plaintiffs, represented by the ACLU, have argued that this expansive interpretation of the CFAA violates their First Amendment right to engage in harmless false speech.

In a resounding victory for free speech and anti-discrimination testing, the D.C. District Court made clear that the CFAA does not sweep so far. For several reasons, the court ruled, violations of a website’s terms of service cannot be grounds for criminal liability under the CFAA.

First, the public is entitled to notice of criminal laws. As the court recognized, websites’ terms of service can’t provide sufficient notice to users because the terms of service are “often long, dense, and subject to change,” and may be hidden away in fine print or at the bottom of a webpage.

Next, criminal law must be enacted through Congress, and Congress cannot delegate that power to private entities. If the CFAA were interpreted to criminalize violations of websites’ terms of service, the court explained, it would “turn[] each website into its own criminal jurisdiction and each webmaster into his own legislature”—and each website’s terms of service into “a law unto itself.”

Finally, the court determined that interpreting the CFAA to criminalize constitutionally protected speech that happens to violate a website’s terms of service would present a serious threat to the First Amendment. By rejecting such a broad interpretation of the CFAA, the court ensured that the Sandvig v. Barr plaintiffs and others can continue engaging in protected speech without fear of criminal liability.

As access to critical resources such as housing opportunities and job prospects is increasingly governed by opaque algorithms, it is essential that researchers, computer scientists, and journalists be able to test those algorithms for intentional or unintentional discrimination. This decision is therefore not only critical for protecting beneficial research and journalism, but it is also an important victory in the ongoing fight for algorithmic transparency and accountability.

Naomi Gilens previously worked on Sandvig v. Barr at the ACLU.