Next week, one of the most respected security research conferences in the world, the USENIX Security Symposium, will be held in Washington, D.C.  Thanks to a gag order from a British court, however, it won't go quite as planned.  The order forbids the authors of a paper describing fundamental flaws in car lock systems from discussing key aspects of their work, based on nothing more than speculation about a third party's alleged “misuse of confidential information.”

We’ve taken a closer look at the court’s ruling and it’s a doozy.  According to the court, the researchers (1) reverse engineered a software program called Tango Programmer that has been sold online since 2009; (2) in the process, identified an algorithm used in a popular car unlocking system; (3) identified fundamental security flaws in that algorithm; and (4) disclosed those flaws to the vendor of the system nine months before the conference.  One month before the deadline for final submission to the conference, Volkswagen, whose cars use the system, ran to court to stop the presentation.

The researchers acted responsibly and methodically.  They used the time-honored technique of reverse engineering publicly available software and disclosed their findings with plenty of time for the vendor to address the issue.  So, why can’t they advise car owners of the problem so that they can protect themselves?

Because, according to the court, Tango Programmer was of “clearly murky origin.”  While the software had been available online for years without any apparent problem, in the court’s view the researchers had an affirmative obligation to establish that the software did not contain stolen confidential business information.

It is all too clear that the court’s opinion is clouded by its view that the researchers – respected scholars at major universities – are irresponsible hackers:

The claimants do not have an overwhelming case on the merits, not even a very strong one, but the Tango Programmer has a clearly murky origin, and that is obvious to the defendants… In my judgment, the defendants have taken a reckless attitude to the probity of the source of the information they wish to publish.

To be clear, there’s no evidence in the record as to how Tango Programmer was developed, and the researchers stated that they assumed it was developed using a perfectly lawful technique, chip slicing.  The court dismissed that statement out of hand, and looked instead to the website on which the program was sold.  Based on language on the site, the court concluded that the sellers of Tango Programmer knew the software “is likely to facilitate crime.”  And the researchers themselves observed that Tango Programmer offers “functionality that goes beyond ‘legitimate’ usage.”

As an initial matter, this views security research presentations through the wrong lens.  Research on programs that could be misused enhances security by exposing flaws and encouraging fixes.  Computer security research would be a farce if it had to avoid all “murky” software.

But even accepting the court's framing, the possibility of misuse says nothing about whether the program was developed using stolen confidential information, much less whether the researchers acted recklessly in using the program for their legitimate purposes.

The court pays a fair amount of lip service to academic freedom, but it’s just that: lip service.  Even though it concedes that the case against the researchers is “not very strong,” even though there are many easier ways of stealing cars than the exploit that would be disclosed, even though Tango Programmer could have been developed without relying on stolen information, and even though car owners might be better off knowing about the flaws in the security systems on which they rely, the court nonetheless concludes that academic freedom has to give way to “the security of millions” of cars. 

Again, the court gets it exactly backwards.  The security of millions of cars depends on robust research into their flaws, and presentations of vulnerabilities and exploits at academic conferences ultimately enhance security.  Security through obscurity is widely and correctly rejected by the security community, and security through willful ignorance of a publicly available program is even worse.

Taken as a whole, the ruling sends a terrible message to researchers: if the flaws you expose are sufficiently consequential, you can be censored based on nothing more than sheer speculation about the activities of third parties.  The irony, of course, is that these researchers have been punished precisely because they acted responsibly and disclosed their research well in advance of publication.  Indeed, the whole situation could have been avoided if the vendor had done its part and addressed the flaw in the first place.  

This ruling was issued by a U.K. court.  If the case had been brought in the U.S., things might have been quite different.  Under U.S. law, the person who wishes to publish doesn’t bear the burden of proving there was no misappropriation just because the information is of “murky” origin.  More broadly, a U.S. court would not issue a preliminary injunction where the claimant’s case was “not even . . . very strong” – quite the contrary.  U.S. law has been used to thwart the publication of security research in a number of ways, but a bogus trade secret claim is the weakest tool in the kit.

EFF senior staff attorney Kurt Opsahl will be participating in a USENIX-sponsored workshop on academic freedom on the eve of the Security Symposium.  We hope the workshop will provide a much-needed opportunity for USENIX community members to share their perspectives on this censorship and consider ways to take action.