Part two in a short series on EFF’s Open Source Security Audit

Our recent security audit of libpurple and related libraries got us thinking about the general problem of open source security auditing, and we wanted to share what we’ve learned. Community-supported free and open source software can be challenging from a security perspective. There is a fair amount of recent literature on this topic, and it is debatable whether openly readable source code helps defenders more than it helps attackers. The key issue, though, is not source code visibility but resources: community-based open source projects often lack the organized resources of their corporate cousins. If large corporate projects choose to prioritize security, they can usually afford to hire experts to do regular security reviews; community projects must instead find and coordinate volunteers with this specialized focus. In an environment where developers are stretched thin and juggle a wide array of responsibilities, the search for security bugs may be less organized and lag behind. How do we combat this problem? How can we ensure good security in a world where vulnerabilities in important open source software can have disastrous consequences for users everywhere?

These are hard questions without simple answers. Yet although community-supported open source has weaknesses, it also has strengths: such projects can take advantage of crowdsourcing and open discussion, and friendlier, saner licensing often lets them ship integrated updates with less hassle. To take advantage of the strengths while mitigating the weaknesses, we think there are some design choices these projects can make to drastically cut down on the effort required for security auditing. These suggestions are by no means original, but we think they are even more important to emphasize in the context of community-supported open source.

    • Make the code as simple, modular, and easy to understand as possible. To crowdsource security auditing from volunteers, the barrier to entry for understanding the code has to be quite low. Modularity improves security in its own right, but it also lets people examine one aspect of the code without having to digest the possibly complicated way it all hangs together (see the first sketch after this list).
    • Treat every bug as potentially guilty of being a security vulnerability until proven innocent. Miscategorizing a security vulnerability as benign, or publicizing the details of a security hole too widely, can have disastrous consequences. Though publishing bugs openly helps community development and we want to encourage that practice, we advise being cognizant of certain classes of bugs that should set off a security-risk flag (the second sketch after this list shows one such innocent-looking report):
      1. Memory bugs: wild or null pointer dereference, use after free, stack or heap corruption, etc.
      2. User input bugs: unvalidated user input, unconstrained memory controlled by the user.
      3. Exploit mitigation bugs: broken or missing mitigations such as ASLR, stack canaries, array bounds checking, ELF hardening, etc.
    • Avoid using native code (e.g., C/C++) if at all possible in situations where one needs to make security guarantees; instead opt for a Very High Level Language (VHLL) by default. Although the choice of language is a contentious issue, the question can be settled empirically: establish quantified performance requirements, try tuning the hot spots, and if native code is still needed, try writing only small sections of it with VHLL bindings (see the third sketch after this list). Native code is neither type-safe nor memory-safe, and it opens one up to an entire class of attack vectors based on vulnerabilities such as buffer overflows and double free bugs. By choosing a VHLL, one effectively eliminates the possibility of being attacked this way.
    • Avoid giving the user options that could compromise security, whether as modes, dialogs, preferences, or tweaks of any sort. As security expert Ian Grigg puts it, there is “only one Mode, and it is Secure.” Ask yourself whether that checkbox to toggle secure connections is really necessary: when would a user genuinely want to weaken security? To the extent that you must allow such user preferences, make sure that the default is always secure (the last sketch after this list shows one way to bake this in).
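
To make the modularity point concrete, here is a minimal sketch of what a low-barrier module boundary can look like. The module name and wire format are hypothetical, not taken from any of the audited libraries; the point is that a volunteer can audit this one file in isolation, with no globals, callbacks, or I/O to chase.

```python
# message_codec.py -- a hypothetical module with a deliberately tiny surface.
# An auditor can review this file alone: no global state, no I/O, and no
# calls back into the rest of the program.
import struct

MAX_PAYLOAD = 64 * 1024  # upper bound on any payload we will accept

def decode(frame: bytes) -> bytes:
    """Decode one length-prefixed frame; raise ValueError on bad input."""
    if len(frame) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", frame[:4])
    if length > MAX_PAYLOAD or len(frame) != 4 + length:
        raise ValueError("bad length field")
    return frame[4:]
```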
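
As an illustration of bug class 2, here is a hypothetical sketch (error handling elided, and not drawn from libpurple) of the kind of report that might be triaged as a mundane crash but deserves a security flag:

```python
import socket
import struct

def read_message(sock: socket.socket) -> bytes:
    header = sock.recv(4)  # error handling elided for brevity
    (length,) = struct.unpack(">I", header)
    # Filed as "client hangs on large messages", but 'length' is unvalidated,
    # attacker-controlled input: a peer that sends 0xFFFFFFFF forces a ~4 GB
    # allocation, i.e. a remote denial of service.
    return sock.recv(length)
```

The fix is a one-line cap on the length field, but only a security-minded triage will prioritize it as more than a cosmetic bug.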
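
To sketch the “small sections of native code with VHLL bindings” approach, here is what it might look like from Python with ctypes. The library libfasthash.so and the function fast_hash are hypothetical stand-ins for a profiled hot spot; everything attacker-facing stays in the memory-safe language.

```python
import ctypes

# Hypothetical: profiling showed one hash loop needed native speed, so only
# that loop lives in a small, separately built C library.
_lib = ctypes.CDLL("./libfasthash.so")
_lib.fast_hash.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
_lib.fast_hash.restype = ctypes.c_uint64

def fast_hash(data: bytes) -> int:
    # The binding passes an explicit length, so the C side never guesses
    # buffer sizes; this one call is the entire native attack surface.
    return _lib.fast_hash(data, len(data))
```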
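
Finally, a sketch of “the default is always secure” using nothing but Python’s standard library: ssl.create_default_context() turns on certificate and hostname verification by default, and the helper below deliberately offers no knob to turn them off.

```python
import socket
import ssl

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    # create_default_context() enables certificate verification and hostname
    # checking by default; there is intentionally no "insecure" parameter
    # here for a caller (or a preferences dialog) to flip.
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```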

In some respects, our review only scratched the surface of libpurple, GnuTLS, and libxml2. In addition to encouraging developers to follow the suggestions above, we would also like to encourage security experts who rely on open source software to get involved in the security auditing effort. Your expertise is invaluable, and writing security patches is just about the nicest thing you can do.