The House of Representatives kicked off its “cybersecurity week” yesterday with a hearing titled “America Is Under Cyber Attack: Why Urgent Action is Needed.” Needless to say, the rhetoric of fear was in full force. Many topics were raised by members of Congress and panelists, but perhaps the most troublesome theme came from panelist Shawn Henry, former Executive Assistant Director of the FBI, who repeatedly urged that good cybersecurity means going on the offensive:

“the problem with existing [...] tactics is that they are too focused on adversary tools (malware and exploits) and not on who the adversary is and how they operate. Ultimately, until we focus on the enemy and take the fight to them […], we will fail.”

This offensively minded approach has major pitfalls, as it could lead to more government monitoring and control over our communications. While we think an increased focus on catching criminals using existing tools is a fine tactic for law enforcement, we fear the temptation for law enforcement to expand its surveillance capabilities in order to go on the offensive against computer crimes. This could mean things like breaking into people's computers without warrants, or disrupting privacy-enhancing tools like Tor. Needless to say, we think it would be a very bad idea to tie our safety to law enforcement's ability to monitor people effectively, and that is a danger of focusing solely on an offensive strategy. Instead, we would like to offer an alternative, defensively oriented point of view on security, an important view that we think was not adequately represented in yesterday's panel.

Securing U.S. critical infrastructure networks, corporate networks, and the Internet at large depends upon securing our computers and networked devices. Fundamentally, it's very simple: fewer software vulnerabilities mean more security. Every time a vulnerability is patched and an upgraded version of the software is available and in use, safety increases for all of us. Ensuring that the right mechanisms are in place to maximize this baseline security should be a major focus of any organized effort to secure our critical and other Internet infrastructure. This means encouraging the disclosure of vulnerabilities when they are found, so that they can be fixed and no longer exploited. This is what we mean when we talk about security for everyone. This defensive strategy also takes a view of vulnerabilities that includes engineering with security in mind: if software doesn't force good security practices on administrators and the other humans who have a role in keeping things secure, that itself should be considered a security vulnerability in the software.
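To make that last idea concrete, here is a minimal sketch in C of what “forcing good security” might look like at the code level. Everything here is hypothetical and for illustration only: the 12-character minimum and the tiny deny list are invented policy choices, not recommendations from this post.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: software that refuses weak passphrases outright,
 * rather than trusting every user to choose a good one. The 12-character
 * minimum and the deny list below are illustrative policy choices. */
static const char *known_bad[] = { "password1234", "letmein12345" };

bool passphrase_acceptable(const char *pass)
{
    if (strlen(pass) < 12)
        return false;                      /* too short to resist guessing */
    for (size_t i = 0; i < sizeof known_bad / sizeof known_bad[0]; i++)
        if (strcmp(pass, known_bad[i]) == 0)
            return false;                  /* a known common choice */
    return true;
}
```

The point of a check like this is that the software itself carries part of the security burden, instead of leaving it entirely to human diligence.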

In order to understand why vulnerabilities are the foundation of insecurity and ought to be the focus of defensive efforts, let's take a moment, for those new to the computer security world, to define bugs, vulnerabilities, exploits, and a particularly nasty class of exploits called “zero-day” exploits.

What are bugs, vulnerabilities, exploits, and “zero-day” exploits?

“Software bug” is a general term for an unintentional problem in a piece of software that causes it to work in an unexpected or unintended way. Bugs range from low-level issues (“we started counting from 0 over here, but from 1 over there, and now this array is messed up”) to high-level issues (“we didn't implement a feature allowing people to see their open orders on this website”).
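The first, low-level kind of mix-up might look like this minimal C sketch (a hypothetical program, written only to illustrate the 0-versus-1 indexing confusion described above):

```c
#include <stdio.h>

/* Hypothetical sketch of the low-level bug described above: one part of
 * the code assumes indices run from 1 to 3, but C arrays run from 0 to 2. */
int main(void)
{
    int scores[3] = {70, 80, 90};

    for (int i = 1; i <= 3; i++)                 /* written as if 1-based */
        printf("score %d: %d\n", i, scores[i]);  /* scores[3] is out of bounds */

    return 0;
}
```

The program reads one slot past the end of the array — undefined behavior in C, which may print garbage, crash, or appear to work until it doesn't.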

Security vulnerabilities are a class of bugs: the ones that allow an attacker to gain unauthorized access or do something she otherwise couldn't. This could mean gaining access to a remote computer, to a private network, or to other private information. Once again, these range from low-level vulnerabilities (“we weren't expecting the user to give a name that was 4 gigabytes long; our oversight allowed the user to crash the program and execute her malicious code on the victim's system”) to high-level ones (“since we didn't force a user to use a strong passphrase, his account could be compromised”).
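The low-level example above is a classic buffer overflow. A hypothetical sketch of it in C (the function name and buffer size are invented for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the "overly long name" vulnerability: the
 * programmer assumed names fit in 64 bytes, but strcpy() performs no
 * length check, so a longer name overwrites adjacent stack memory. */
void greet(const char *name)
{
    char buffer[64];
    strcpy(buffer, name);                  /* the vulnerability */
    printf("Hello, %s!\n", buffer);
}
```

The fix here is a one-line bounds check — for example, `snprintf(buffer, sizeof buffer, "%s", name);` — which is exactly the kind of patch that disclosure to the vendor makes possible.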

Exploits are pieces of software that actually take advantage of a security vulnerability, giving whoever runs them unauthorized access. A security vulnerability can lead to an exploit, although not all vulnerabilities lead to exploits.
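Continuing the hypothetical sketch above, a toy “exploit” for greet() just supplies a name longer than the 64-byte buffer. A real exploit would replace the filler bytes with carefully chosen values that redirect execution; this one merely crashes the program, demonstrating that the vulnerability is reachable:

```c
#include <string.h>

void greet(const char *name);   /* the vulnerable function sketched above */

int main(void)
{
    char long_name[256];
    memset(long_name, 'A', sizeof long_name - 1);
    long_name[sizeof long_name - 1] = '\0';

    greet(long_name);   /* 255 bytes into a 64-byte buffer: crash (or worse) */
    return 0;
}
```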

Zero-day exploits are exploits that take advantage of an undisclosed vulnerability. Suppose there is a publicly known vulnerability in the browser Internet Explorer 6. Any exploit based on that vulnerability is NOT considered a zero-day, and you can (often, theoretically) protect yourself from it — in this case, for example, by upgrading to Internet Explorer 9. However, if there is a “zero-day” in Internet Explorer 9, there's nothing you can knowingly do as a user to protect yourself. This makes this type of exploit especially scary, since it can be used not just against unwitting users who haven't upgraded their software, but against anyone.

OK, got it. To make us safer, we need to patch vulnerabilities and prevent exploits, especially zero-day exploits. Does CISPA encourage this?

Unfortunately, CISPA and the other “cybersecurity” legislation under debate do NOT focus on this baseline security. Instead of encouraging the patching of vulnerabilities as quickly as possible, or offering solutions to improve the general security of networked computers, the bill encourages broad surveillance of personal data by companies and the government. This type of information sharing is largely unrelated to the core issue of vulnerabilities that need to be patched at the software level. It's certainly possible that by mining that data one could come across an exploit or an unknown vulnerability and share it with the vendor, but the bill is NOT about sharing vulnerabilities so that they can be patched – it's about sharing raw data in a way that could legitimize a public-private surveillance partnership. And this data sharing between companies and the government in no way encourages security vulnerabilities themselves to be shared with the relevant software vendors and developers so that they can be patched. In other words, it just doesn't attack the root of the problem.

Why is fixing vulnerabilities at odds with taking an offensive approach to security?

If we take an offensive approach as Mr. Henry suggests, a “security for the 1%” situation seems likely to arise, in which vulnerabilities are sometimes kept secret, and mitigations or fixes for them are selectively doled out by the government or private security firms only to critical infrastructure or paying clients (the “1%” deemed worthy of protection). The government might even deploy to companies and infrastructure black-box systems designed to mitigate exploits based on secret vulnerabilities, while giving as little information as possible about those underlying vulnerabilities, even to the companies being protected. Either way, the vendor would not be told about the vulnerability, and anyone who wasn't a recipient of the “privileged” information would be hung out to dry.

What is a better approach to security?

Changing the incentives and culture to encourage the right sort of information sharing about vulnerabilities is a complex problem, and we do not purport to have a complete solution. There are many pieces to the puzzle: what should be done about vendors who don't care about security? What about users who don't upgrade their software, or who go out of their way to be vulnerable? What about security researchers who discover vulnerabilities and choose to sell that knowledge to the highest bidder, instead of ensuring that the vendor learns about the vulnerability and fixes it?

There are some common-sense steps the government can take to help solve these problems. For starters, the government can itself commit to disclosing any vulnerabilities it knows of to vendors so that they are promptly patched. Next, incentives could be put in place to encourage research that has broad beneficial effects for everyone's security. For example, suppose a researcher invents a new testing technique that reduces how many exploitable vulnerabilities there are in software in general. This is a win for everyone, and we think the government should strongly encourage such research.1
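As a rough illustration of the kind of testing research we mean, here is a minimal random fuzzer in C. It hammers the hypothetical greet() function from the sketches above with random inputs and reports any that crash it. Real fuzzing research — coverage guidance, input mutation, and so on — goes far beyond this, but the sketch shows the basic idea:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

void greet(const char *name);   /* hypothetical target under test */

int main(void)
{
    srand(1234);                           /* fixed seed: reproducible runs */
    for (int trial = 0; trial < 1000; trial++) {
        char input[512];
        int len = rand() % (int)(sizeof input - 1);
        for (int i = 0; i < len; i++)
            input[i] = (char)(33 + rand() % 94);   /* random printable byte */
        input[len] = '\0';

        pid_t pid = fork();
        if (pid == 0) {                    /* child process: run the target */
            greet(input);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))           /* crash = candidate vulnerability */
            fprintf(stderr, "trial %d: input of length %d crashed the target\n",
                    trial, len);
    }
    return 0;
}
```

Each crashing input points a developer directly at a bug to fix before an attacker finds it — defense that benefits every user of the software, not just a privileged few.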

But beyond these common-sense suggestions, the main point we want to raise in this post is not to offer a solution to these problems, but rather to suggest that anyone interested in security at the national and international level should be thinking hard about them. Taking an offensive approach has the potential to put our civil liberties in danger, and could create a situation in which our safety ebbs and flows with how well the intelligence community can spy on us. This precarious and undesirable situation can be avoided if we instead take a defensive approach, stopping the problem at its core and working to ensure that everyone is maximally protected. Mr. Henry suggests that “offense outpaces the defense.” That seems like an oversimplification, but even if one accepts it as true, we should not take it to be an immutable property of the world. Instead, we should work to change it by increasing our defensive efforts. Unfortunately, the “cybersecurity” debate does not seem to be addressing this point of view, but we hope that somebody brings it up during “cybersecurity week”.

In the meantime, please speak out against this misguided cybersecurity legislation by taking action against CISPA.

1. At EFF, we think of ourselves as tackling a small piece of this puzzle by encouraging the adoption of HTTPS. We strongly believe that this increases the general security of the web, and we are working towards a future in which HTTPS (and other encrypted protocols) becomes the standard way to access resources and communicate on the web.