In part 1, we described the technical details of Logjam. Here we'll discuss some of the disturbing questions this vulnerability raises about secure communication on the Internet and the NSA's apparent failure, in its "information assurance" role, to keep the Internet safe from large-scale threats.

First, this vulnerability provides another great example of why intentionally weakened cryptography is an irredeemably bad idea. FREAK and Logjam both take advantage of the weakened "export grade" cryptography mandated by U.S. export controls through the 1990s, which lingered in deployed software long after those controls were lifted, and despite warnings from the security community that it made us all less safe. One part of that story ended on a positive note: EFF and others successfully established in court the principle that code is speech and should not be restricted like munitions, allowing more secure software to be distributed around the world.

But the consequences of those bad decisions to limit encryption still reverberate today. It's an important reminder, as law enforcement agencies in the U.S. and U.K. push for backdoors into our communications technology, that once these backdoors are built, they can't be limited to "just the good guys," and they can't be limited to "just for now." These principles may be obvious to people working in security, but the FREAK and Logjam vulnerabilities are incontrovertible real-world demonstrations of them.

Second, this is yet another case where the NSA appears to have long known about a serious vulnerability in the way cryptography is deployed across the Internet, but instead of working with affected companies and developers to fix it, the NSA kept quiet in order to continue exploiting it. While the NSA included only ECDH, and not traditional Diffie-Hellman, in its 2005 "Suite B" set of recommended algorithms, it never explicitly and publicly recommended moving away from an algorithm it apparently knew was breakable. Nor was it clear whether the exclusion of traditional Diffie-Hellman was a security recommendation or simply a reflection of ECDH's superior efficiency.

The NSA's apparent decision to keep a known vulnerability secret made us all, including the very systems it is responsible for protecting, less secure. The United States government says it has a policy and process for deciding whether to disclose and fix these sorts of vulnerabilities, and that in the vast majority of cases it chooses to disclose them. EFF is suing to get access to that policy, but we're skeptical that it actually results in significant disclosures. Part of the NSA's mission is supposed to be "information assurance," but if the Logjam authors are correct, this is yet another example of the agency prioritizing offensive intelligence collection over patching the systems that protect all of us.

Finally, Logjam demonstrates that we need a better mechanism for proactively retiring old cryptography. 1024-bit Diffie-Hellman should have been phased out years ago; Diffie-Hellman is, literally, the oldest published public-key algorithm, dating to 1976. Better replacements, like Elliptic-Curve Diffie-Hellman (ECDH), have been available for decades (although initially encumbered by patents).
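
To make the comparison concrete, here's a minimal sketch of what an ECDH key agreement looks like in Python, using the third-party `cryptography` package. The curve choice (P-256) and the labels here are illustrative assumptions; real TLS stacks perform this exchange internally during the handshake.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair on a modern curve (P-256 here).
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the peer's public key;
# both arrive at the same shared secret without it ever crossing the wire.
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared

# Never use the raw shared secret directly; derive a session key through a KDF.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example handshake",  # illustrative context label
).derive(alice_shared)
```

The security margin comes from the underlying problem: breaking a 256-bit elliptic-curve key is far harder than breaking 1024-bit classic Diffie-Hellman, and the keys and messages are smaller too.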

There are significant coordination problems, though. Replacing old cryptographic technology can break some clients or servers: sites that disable obsolete algorithms may no longer work in web browsers from five or ten years ago, which some visitors still rely on. As a result, it's very hard to make important security changes to TLS, and browser vendors and site administrators often both dither for years until a catastrophic break is found. Logjam makes it clear that we should assume the worst about the NSA's capabilities and its ability to keep a large break secret. Expensive attacks that seem far-fetched in the academic literature are likely being performed by the NSA, and potentially by other state-level agencies around the world. We need to ensure we upgrade the strength of our cryptography before news of these attacks leaks to the public.
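
In practice, "disabling obsolete algorithms" often comes down to a few lines of server configuration. As a rough sketch, using Python's standard `ssl` module (the certificate paths are hypothetical), a server can refuse pre-TLS 1.2 clients and non-forward-secret key exchange like so:

```python
import ssl

# A server-side TLS context that refuses legacy protocol versions and weak
# key exchange, at the cost of turning away very old clients.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Keep only forward-secret ECDHE suites with AES-GCM (OpenSSL cipher-string
# syntax). Export-grade suites are already absent from modern OpenSSL builds;
# this string excludes everything else that remains.
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")

# Hypothetical certificate paths, for illustration only.
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
```

The hard part isn't the configuration itself; it's deciding when the population of old clients is small enough that flipping the switch does more good than harm.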