When a criminal started lacing Tylenol capsules with cyanide in 1982, Johnson & Johnson quickly sprang into action to ensure consumer safety. It increased its internal production controls, recalled the capsules, offered an exchange for tablets, and within two months started using triple-seal tamper-resistant packaging. The company focused on fixing weak points in its supply chain so that users could be sure that no one had interfered with the product before they purchased it.

This story is taught in business schools as an example of how a company chose to be proactive to protect its users. The FDA also issued regulations requiring increased security, and Congress ultimately passed an anti-tampering law. But the focus of the response from both the private and the public sector was on ensuring that consumers remained safe and secure, rather than on catching the perpetrator. Indeed, the person who did the tampering was never caught.

This story springs to mind today as Congress considers the latest cybersecurity and data breach bills. To folks who understand computer security and networks, it's plain that the key problems are our vulnerable infrastructure and weak computer security, much like the vulnerabilities in Johnson & Johnson’s supply chain in the 1980s. As then, the failure to secure our networks, the services we rely upon, and our individual computers makes it easy for bad actors to step in and “poison” our information.

So if we approach this as a safety problem, the way forward is clear: We need better incentives for companies who store our data to keep it secure. In fact, there is broad agreement that we can easily raise the bar against cyberthieves and spies. Known vulnerabilities frequently go unpatched. For instance, The New York Times reported that the J.P. Morgan hack occurred because of a server that had not been updated. Information is too often stored in the clear rather than in encrypted form, and many of the devices that increasingly store our entire lives, like smartphones and tablets, don’t even allow for key security upgrades.

Yet none of the proposals now in Congress is aimed at actually increasing the safety of our data. Instead, the focus is on “information sharing,” a euphemism for more surveillance of users and networks. These bills are not only wrongheaded; they seem to be a cynical ploy to use the very real problems of cybersecurity to advance a surveillance agenda, rather than to actually take steps to make people safer. EFF has long opposed these bills, and we will continue to do so.

But that’s not all. Not only is Congress failing to address the need for increased computer and network security, but key parts of the government are actively working to undermine our safety. The FBI continues to demonize strong cryptography, trying instead to sell the public on a “technologically stupid” strategy that will make us all less safe. Equally outrageous, the recent Logjam vulnerabilities show that the NSA has been spending billions of our tax dollars to exploit weaknesses in our computer security—weaknesses caused by the government’s own ill-advised regulation of cryptography in the 1990s—rather than helping us strengthen our systems.

But how can we create stronger incentives for companies to protect our data?

If Congress wants to help, it has a big toolbox, starting with the government’s own purchasing power—after all, the government stores a lot of our data, and it can help spur stronger protections by choosing only secure tools for its own use. Congress can also endorse strong encryption and take steps, ranging from setting funding priorities to holding hearings to legislating directly, to counter the NSA and FBI’s efforts to keep us from upgrading to more secure tools and services.

Additionally, though, we need to ensure that companies to whom we entrust our data have clear, enforceable obligations to keep it safe from bad guys. This includes those who handle it directly and those who build the tools we use to store or otherwise handle it ourselves. In the case of Johnson & Johnson, products liability law makes the company responsible for harm that comes to us from the behavior of others if safer designs were available and the attack was foreseeable. Similarly, hotels and restaurants that open their doors to the public have obligations under the law of premises liability to take reasonable steps to keep us safe, even if the danger comes from others. People who hold your physical stuff for you—the law calls them bailees—also have a responsibility to take reasonable steps to protect it against external forces.

Online services do have some baseline responsibility under negligence standards, as well as a few other legal doctrines, and those were relied upon in the cases against Target, Home Depot, the Gap, and Zappos. Yet so far the courts have interpreted those standards to put a very low burden on companies and a very high burden on those harmed. On their own, companies have largely failed to develop shared, strong standards for what counts as “reasonable” security, and Congress hasn’t forced them to, leaving the courts with little to point to when trying to hold companies to account. The FTC has brought some actions based on the argument that poor data security is an unfair business practice, but it has had slow going, as its three-year fight with Wyndham Hotels demonstrates. Companies therefore have little incentive to invest in and adopt new, more secure products akin to Johnson & Johnson’s tamper-resistant packaging, and those who take the lead get little reward for doing so.

Another problem is that the law hasn’t figured out a good way to recognize the harms suffered from poor cybersecurity, which means that the threat of a lawsuit over a cybersecurity breach isn’t nearly as powerful as it would be in a situation involving, say, insecure cars or pain relievers. This is strange at a time when venture capital markets deem some of that same data to be worth billions, and a whole military cyber command has been built around it. Finally, the online agreements or EULAs we must click through to use services often limit or even fully block consumers from suing over insecure systems.

Congress (or state legislatures) could step in on any one of these topics to encourage real security for users: by creating incentives for greater security and real downsides for companies that fail to provide it, and by rewarding companies that make the effort to develop stronger security. It can also shine a light on security failures by requiring public reporting by big companies. By doing so, in careful measure, Congress could spur a race to the top on computer security and create real consequences for those who choose to languish at the bottom.

Yet none of these options is part of the legislative debate; they often aren't even mentioned. Instead the proposed laws go the other way—giving companies immunity if they create more risk with your data by “sharing” it with the government, where it could still be hacked. "Information sharing" is focused on forensics—finding who did it and how, after the fact—rather than on protecting computer users in the first place. And even then, there is widespread disagreement about whether this extra step is likely to make a meaningful difference in most investigations. After all, companies can already share technical information about attacks; it's only the content of our data itself that is protected. Meanwhile, on data breaches themselves, Congress is still monkeying about with notification laws even though almost every state already has one. If Congress wanted to lead on security, it might start a public debate about data breach liability laws.

Looking at the Congressional debate, it's as if the answer for Americans after the Tylenol incident was not to put on tamper-evident seals, or increase the security of the supply chain, but only to require Tylenol to “share” its customer lists with the government and with the folks over at Bayer aspirin. We wouldn’t have stood for such a wrongheaded response in 1982, and we shouldn’t do so now.
