EFF is fighting another attempt by a giant corporation to exploit our poorly drafted federal computer crime statute for commercial advantage—without any regard for the impact on the rest of us. This time the culprit is LinkedIn. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information.

EFF, together with our friends DuckDuckGo and the Internet Archive, has urged the Ninth Circuit Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target “hacking” into a tool for enforcing its computer use policies. Using automated scripts to access publicly available data is not “hacking,” and neither is violating a website’s terms of use. LinkedIn would have the court believe that all “bots” are bad, but they’re actually a common and necessary part of the Internet. “Good bots” were responsible for 23 percent of Web traffic in 2016. Using them to access publicly available information on the open Internet should not be punishable by years in federal prison.

LinkedIn’s position would undermine open access to information online, a hallmark of today’s Internet, and threaten socially valuable bots that journalists, researchers, and Internet users around the world rely on every day—all in the name of preserving LinkedIn’s advantage over a competing service. The Ninth Circuit should make sure that doesn’t happen.

Background: Bad Court Decisions Open Door to Abuse

The CFAA makes it illegal to engage in “unauthorized access” to a computer connected to the Internet, but the statute doesn’t tell us what “authorization” or “without authorization” means. This vague language might have seemed innocuous to some back in 1986 when the statute was passed, but in today’s networked world, where we all regularly connect to and use computers owned by others, this pre-Web law is causing serious problems.

In some jurisdictions, the CFAA has metastasized into a tool for companies and websites to enforce their computer use policies, like terms of service (which no one reads) or corporate computer policies. But other courts—including the Ninth Circuit back in 2012—have rejected turning the CFAA “into a sweeping Internet-policing mandate.” The Ninth Circuit instead chose to “maintain[] the CFAA’s focus on hacking,” holding that violating a company’s or website’s terms of use cannot give rise to liability. The court recognized that basing criminal liability on violations of computer use policies would turn innocuous activities like checking the score of a baseball game at work or fudging your age on your social media profile into a felony offense—and make criminals out of all of us.

Then in 2016, the Ninth Circuit reversed course and delivered two dangerously expansive interpretations of the CFAA in cases involving password sharing. Despite our warnings that the decisions would be easily misused, the court refused to reconsider either case, stressing that the decisions would be limited to their “stark” facts.

Within weeks of those decisions, LinkedIn began invoking them in an attempt to get around the Ninth Circuit’s 2012 ruling—and to use the CFAA to enforce its terms of service prohibition on scraping, thereby blocking competing services from perfectly legal uses of publicly available data on its website.

One company targeted by LinkedIn was hiQ Labs, which provides analysis of data on LinkedIn users’ publicly available profiles. LinkedIn sent hiQ cease and desist letters warning that any future access of its website, even the public portions, would be “without permission and without authorization” and thus a violation of the CFAA. hiQ went to court to challenge LinkedIn’s attempt to use the CFAA as a tool to enforce its terms of use. hiQ won a preliminary injunction against LinkedIn in district court, and LinkedIn appealed.

The Problems with LinkedIn’s Position

As we told the court in our amicus brief, LinkedIn’s interpretation of the CFAA is problematic for a number of reasons.

First, allowing a website to use the CFAA as a terms of service enforcement mechanism would do precisely what the Ninth Circuit in 2012 sought to avoid: it would “transform the CFAA from an anti-hacking statute into an expansive misappropriation statute” for enforcing the use of publicly available information across the Web. Accessing public information on the open Internet cannot—and should not—give rise to liability under a law meant to target breaking into private computers to access non-public information.

Second, imposing CFAA liability for accessing publicly available information via automated scripts would potentially criminalize all automated “scraping” tools—including a wide range of valuable tools and services that Internet users, journalists, and researchers around the world rely on every day. Automated scraping is the process of using Internet “bots”—software applications that run automated tasks over the Internet—to extract content and data from a website. LinkedIn tried to paint all bots as bad, but as we explained to the Ninth Circuit, bots are an essential and socially valuable component of the Internet. The Web crawlers that power tools we all rely on every day, including Google Search and amici DuckDuckGo and the Internet Archive, are Internet bots. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2017 wildfires, are Internet bots. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price and uncovered that Amazon’s pricing algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.
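To make concrete what this kind of scraping involves, here is a minimal, hypothetical sketch in Python, using only the standard library. It is not any party’s actual tool: the site, page, and bot name are all placeholder assumptions, and, like the “good bots” described above, it checks the site’s robots.txt before fetching a publicly available page and extracting a small piece of its content.

```python
# A minimal sketch of automated scraping, using only Python's standard
# library. The site, page, and bot name below are hypothetical placeholders.
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser

BASE = "https://example.com"         # hypothetical site with public pages
PAGE = BASE + "/public-profile"      # hypothetical publicly available page
BOT_NAME = "example-research-bot"    # hypothetical user agent for this bot

class TitleCollector(HTMLParser):
    """Collects the text of <h1> headings from fetched HTML."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1 and data.strip():
            self.titles.append(data.strip())

# "Good bots" conventionally consult robots.txt before crawling.
robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

if robots.can_fetch(BOT_NAME, PAGE):
    # Fetch the public page and pull out its headings.
    with urllib.request.urlopen(PAGE) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    collector = TitleCollector()
    collector.feed(html)
    print(collector.titles)
```

The point of the sketch is that a bot like this does nothing a person with a browser could not do by hand; it simply automates requests for pages the site has already chosen to make public.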

Third, by potentially criminalizing what are in fact everyday online tools, LinkedIn’s position violates the long-held “Rule of Lenity,” which requires that criminal statutes be interpreted to give clear notice of what conduct is criminal.

Old Laws Can’t Do New Tricks

The CFAA is an old, blunt instrument, and trying to use it to solve a modern, complicated dispute between two companies will undermine open access to information on the Internet for everyone. As we said in our amicus brief:

The power to limit access to publicly available information on the Internet under color of the law should be dictated by carefully considered rules that balance the various competing policy interests. These rules should not allow the handful of companies that collect massive amounts of user data to reap the benefits of making that information publicly available online—i.e., more Internet traffic and thus more data and more eyes for advertisers—while at the same time limiting use of that public information via the force of criminal law.

LinkedIn’s Position Won’t Actually Protect Privacy

LinkedIn argues that imposing criminal liability for automated access to publicly available LinkedIn data would protect the privacy interests of LinkedIn users who decide to publish their information publicly, but that’s just not true. LinkedIn still wouldn’t have any meaningful control over who accesses the data and how they use it, because the data would still be freely available on the open Internet for malicious actors and anyone outside the jurisdiction of the United States to access and use however they wish. LinkedIn’s contractual use restrictions on automated access may provide an illusion of privacy—and deter law-abiding individuals and U.S.-based companies from using automated tools to access that data—but nothing more.

LinkedIn knows this. Its privacy policy acknowledges the inherent lack of privacy in data posted publicly and makes no promises to users about LinkedIn’s ability to protect it: “Please do not post or add personal data to your profile that you would not want to be publicly available.” LinkedIn shouldn’t be spreading misconceptions about the “privacy” of publicly posted data in court pleadings to advance its corporate interests.

LinkedIn Can’t Have Its Cake and Eat It, Too

The only way for LinkedIn to truly protect its users’ privacy is to make their profiles non-public—i.e., to put their information behind a username and password barrier. But instead its profiles are public by default. As LinkedIn itself admits, it benefits from that data remaining public and freely accessible on the Internet: open access on its platform means more Internet traffic (and thus more data and more eyes for advertisers). As we told the court, “LinkedIn wants to ‘participate in the open Web’ but at the same time abuse the CFAA to avoid ‘accept[ing] the open trespass norms of the Web.’” We hope the court does not allow it.

Correction: An earlier version of this post stated that LinkedIn and the Electronic Privacy Information Center both argued that imposing criminal liability for automated access would protect the privacy interests of LinkedIn users. The post was updated to clarify that EPIC filed an amicus brief in the case in support of neither party. EPIC’s brief raised the privacy implications of the scope of the lower court’s preliminary injunction order, which enjoined LinkedIn from blocking hiQ’s access to publicly available user profiles; it did not argue in favor of criminal liability.