In a 5-4 decision, the Supreme Court late last week barred the courthouse door to thousands of people who were wrongly marked as “potential terrorists” by credit giant TransUnion. The Court’s analysis of their “standing”—whether they were sufficiently injured to file a lawsuit—reflects a naïve view of the increasingly powerful role that personal data, and the private corporations that harvest and monetize it, play in everyday life. It also threatens Congressional efforts to protect our privacy and other intangible rights from predation by Facebook, Google, and other tech giants.

Earlier this year, we filed an amicus brief, with our co-counsel at Hausfeld LLP, asking the Court to let all of the victims of corporate data abuses have their day in court.

What Did the Court Do?

TransUnion wrongly and negligently labeled approximately 8,000 people as potential terrorists in its databases. It also made that dangerous information available to businesses across the nation for purposes of making credit, employment, and other decisions. TransUnion then failed to provide the required statutory notice of the mistake. The Supreme Court held this was not a sufficiently “concrete” injury to allow these people to sue TransUnion in federal court for violating their privacy rights under the Fair Credit Reporting Act. Instead, the Court granted standing only to the approximately 1,800 of these people whose information was actually transmitted to third parties.

The majority opinion, written by Justice Kavanaugh, fails to grapple with how consumer data is collected, analyzed, and used in modern society. It likened the gross negligence that resulted in a database marking these people as terrorists to “a letter in a drawer that is never sent.” But the ongoing technological revolution is not at all like a single letter. It involves large and often interconnected sets of corporate databases that collect and hold a huge amount of our personal information—created both by us and about us. Those information stores are then used to generate inferences and analyses that carry tremendous and often new risks, risks that can be difficult even to understand, much less trace. For example, consumers who are denied a mortgage, a job, or another life-altering opportunity based upon bad records in a database, or inferences drawn from those records, will often be unable to track the harm back to the wrongdoing data broker. In fact, figuring out how decisions were made, much less finding the wrongdoer, has become increasingly difficult as an opaque archipelago of databases is linked and used to build and deploy machine learning systems that judge us and limit our opportunities.

This decision is especially disappointing after the Court’s recent decisions, such as Riley and Carpenter, that demonstrated a deep understanding that new technology requires new approaches to privacy law. In those cases, the Court concluded that when police collect and use more and more of our data, the Fourth Amendment privacy inquiry fundamentally changes, and courts cannot rigidly follow pre-digital precedents. The same should be true when new technologies are used by private entities in ways that threaten our privacy.

The majority’s dismissal of Congressional decision-making is also extremely troubling. In 1970, at the dawn of the database era, Congress decided that consumers should have a cause of action when a credit reporting agency fails to take reasonable steps to ensure that the data it holds is correct. Here, TransUnion broke this rule in an especially reckless way: it marked people as potential terrorists simply because they shared a name with people on a terrorist watch list, without checking middle names, birth dates, addresses, or other information that TransUnion itself undoubtedly already had. The potential harms this could cause are particularly obvious and frightening. Yet the Court decided that, despite Congress’ clear determination to grant us the right to a remedy, it could still bar the courthouse doors.

Justice Thomas wrote the principal dissent, joined by Justices Breyer, Sotomayor, and Kagan. As Justice Kagan explained in an additional dissent, the ruling “transforms standing law from a doctrine of judicial modesty into a tool of judicial aggrandizement.” Indeed, Congress specifically recognized new harms and provided a new cause of action to enforce them, yet the Court nullified these democratically enacted rights and remedies based on its crabbed view that the harms are not sufficiently “concrete.”

What Comes Next?

This could pose problems for a future Congress that wants to get serious about recognizing, and empowering us to seek accountability for, the unique new harms caused by modern data misuse practices, potentially including harms arising from decision-making based upon machine learning and artificial intelligence. Congress will need to make a record of the grievous injuries caused by out-of-control data processing by corporations that care more for their profits than our privacy, expressly tie whatever consumer protections it creates to those harms, and be crystal clear about how those harms justify a private right of action.

The Court’s opinion does provide some paths forward, however. Most importantly, the Court expressly confirmed that intangible harms can be sufficiently concrete to support a lawsuit. In doing so, the Court rejected the cynical invitation from Facebook, Google, and tech industry trade groups to deny standing to all but those who suffered a physical or economic injury. Nonetheless, we anticipate that companies will try to use this new decision to block further privacy litigation. We will work to make sure that future courts don’t overread this case.

The Court also recognized that the risk of future harm can still be a basis for injunctive relief—so while you cannot seek damages based on that risk alone, you don’t have to wait until you are denied credit, a job, or a home before asking a court for protection from known bad data practices. Finally, as the dissent observed, the majority’s standing analysis applies only in federal court; state courts applying state laws can go much further in recognizing harms and adjudicating private causes of action, because the federal “standing” doctrine does not apply to them. The good work being done to protect privacy in states across the country is now all the more important.

But, overall, this is a bad day for privacy. We have been cheered by the Supreme Court’s increasing recognition, when ruling on law enforcement activity, of the perils of modern data collection practices and the vast difference between current and previous technologies. Yet now the Court has failed to recognize that Congress must have the power to proactively protect us from the risks created when private companies use modern databases to vacuum up our personal information, and use data-based decision-making to limit our access to life’s necessities. This decision is a big step backwards for empowering us to require accountability from today’s personal data-hungry tech giants. Let's hope that it is merely an anomaly. We need a Supreme Court that understands and takes seriously the technology-fueled issues facing us in the digital age.    
