Several years ago, Dr. Bart Jacobs, a professor at Radboud University Nijmegen in the Netherlands, landed in legal trouble. He had attempted to publish an article exposing security flaws in the widely used MIFARE Classic wireless smart card chip, which is employed by transit systems around the world. Using an ordinary laptop, he was able to clone paying customers' cards and access transit systems for free. The point of his research was to demonstrate that the cards were vulnerable to attack.
The chip's owner, NXP Semiconductors, argued that it would have been irresponsible to make this information public. But a Dutch court ultimately ruled that clamping down on his research would have violated the scientist's rights to freedom of expression.
Scenarios like this remain highly relevant in the ongoing debate around coders’ rights that is unfolding in the European Parliament. On June 20, the Parliamentary Civil Liberties, Justice and Home Affairs Committee (LIBE) continued to debate a draft Directive on Attacks Against Information Systems.
As legislators mull over this computer crime legislation, questions about how security researchers should be treated under the law are key. Instead of being regarded as the equivalent of product researchers who gauge automobile safety using crash test dummies, to borrow an analogy from Germany’s Chaos Computer Club, technologists who are adept at discovering computer security flaws risk being defined as criminals under certain provisions of this draft Directive.
As we noted in an earlier post, a central issue is whether a security researcher must obtain explicit permission from information system operators before conducting his or her research. Article 3 of the draft Directive makes it a crime to intentionally access information systems without prior “authorization” where the actor infringes a security measure.
The wholesale banning of access without explicit permission, without building in a clear and thoughtful exception for legitimate research, is highly problematic. Unless it is improved during the legislative process, the Directive on Attacks Against Information Systems could have a chilling effect on Europe’s robust security research community, which frequently produces groundbreaking work.
In 2010, for example, security researcher Karsten Nohl demonstrated how easy it was to eavesdrop on GSM-based mobile phones. And this past February, Ruhr University researchers published a report titled “Don’t Trust Satellite Phones,” announcing that they had succeeded in cracking the satellite encryption that protects the phone signals of hundreds of thousands of subscribers. With equipment totaling about $2,000, they warned, practically anyone with the right expertise could spy on calls across the entire European continent.
EFF believes that it’s better to have flaws like these detected and addressed than to create a climate where honest and legitimate researchers are deterred from investigating such problems out of fear that they'll face lawsuits or prison time.
Legislative language that could curtail well-meaning researchers’ ability to access information systems must be crafted with surgical precision. The European Parliament isn’t there yet. Security researchers are a crucial part of any effective security strategy, and their skills should be recognized as a benefit to the public that can be used to enhance security for everyone.
As they hash out this Directive, members of the European Parliament should keep in mind that there is potential for improved security across the board when skillful coders are allowed to engage in technological discovery.