While 2016 may not have been the banner year for cryptographic exploits that 2015 was, researchers around the world continued to advance the state of the art.

TLS 1.3 design finalized

The biggest practical development in crypto for 2016 is Transport Layer Security version 1.3. TLS is the most important and widely used cryptographic protocol and is the backbone of secure Internet communication; you're using it right now to read this blog! After years of work by hundreds of researchers and engineers, the new TLS design is now considered final from a cryptography standpoint, and the protocol is already available in Firefox, Chrome, and Opera. While it might seem like a minor version upgrade, TLS 1.3 is a major redesign of TLS 1.2 (which was finalized over eight years ago). In fact, one of the most contentious issues was whether the name should be something else to indicate how much of an improvement TLS 1.3 really is.

How might users notice TLS 1.3? Speed. TLS 1.3 is designed for speed, specifically by reducing the number of network round-trips required before data can be sent to one round-trip (1-RTT) or even zero round-trips (0-RTT) for repeat connections. These ideas have appeared before in experimental form through the QUIC protocol and False Start for earlier TLS versions, but as part of the default behavior of TLS 1.3 they will soon become much more widespread. This means latency will decrease and webpages will load faster.
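For a sense of what this looks like to application code, here is a minimal sketch in Go using the crypto/tls package (whose TLS 1.3 support arrived well after this post was written, so take it purely as an illustration; the host name is just an example). It pins the connection to TLS 1.3 and enables a client-side session cache so repeat connections can resume via TLS 1.3's pre-shared-key mechanism, the same mechanism 0-RTT builds on.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		// Require TLS 1.3 rather than silently falling back to 1.2.
		MinVersion: tls.VersionTLS13,
		// A session cache lets repeat connections resume the earlier
		// handshake instead of starting from scratch.
		ClientSessionCache: tls.NewLRUClientSessionCache(64),
	}

	for i := 0; i < 2; i++ {
		conn, err := tls.Dial("tcp", "example.com:443", cfg)
		if err != nil {
			panic(err)
		}
		state := conn.ConnectionState()
		fmt.Printf("connection %d: TLS version %#x, resumed: %v\n",
			i+1, state.Version, state.DidResume)
		conn.Close()
	}
}
```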

In addition, TLS 1.3 should be a big improvement security-wise. It absorbs two major lessons from decades of experience with TLS. First, the protocol is much simpler, removing support for a number of old protocol features and obsolete cryptographic algorithms. Second, TLS 1.3 was designed with the benefit of model checking (which has been used to find flaws in many older versions of TLS and SSL), and it was analyzed extensively by the cryptographic community during the standardization process rather than after the protocol was widely deployed and difficult to patch.

The quest for post-quantum cryptography continues

The cryptography community has been hard at work trying to transition away from today's algorithms (many of which are completely insecure if practical quantum computers are developed) to post-quantum cryptography.

This was nudged forward toward the end of last year when NIST announced a standardization project for post-quantum algorithms. NIST published its first report on this effort in February and a draft call for algorithm proposals in August. Researchers continue to debate what the goals for post-quantum algorithms should be (and whether NIST should take a leadership role in this process after its involvement in the backdoored Dual_EC standard).

Meanwhile, Google ran a practical experiment in which it used the New Hope post-quantum key exchange algorithm to protect real traffic between Google servers and the Chrome Web browser, one of the first real-world deployments of post-quantum cryptography. Results from the experiment suggested that computation costs were negligible although bandwidth consumption increased due to larger key sizes. Another team of researchers experimented with adding quantum-resistant key exchange to TLS using a different algorithm. 
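Google's experiment (known as CECPQ1) ran New Hope alongside a classical X25519 exchange and combined the two shared secrets, so a connection stays secure unless both algorithms are broken. Chrome's exact key derivation differs, but the Go sketch below illustrates the general hybrid idea by feeding both secrets into a single HKDF step; the placeholder byte strings stand in for real X25519 and New Hope outputs.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

// combineSecrets derives one session key from a classical shared secret and a
// post-quantum shared secret. The result stays unpredictable as long as at
// least one of the two inputs remains secret.
func combineSecrets(classical, postQuantum []byte) []byte {
	input := append(append([]byte{}, classical...), postQuantum...)
	kdf := hkdf.New(sha256.New, input, nil, []byte("hybrid key exchange sketch"))
	key := make([]byte, 32)
	if _, err := io.ReadFull(kdf, key); err != nil {
		panic(err)
	}
	return key
}

func main() {
	// Placeholders standing in for the secrets a real X25519 exchange and a
	// real New Hope exchange would produce.
	classical := []byte("x25519-shared-secret-placeholder")
	postQuantum := []byte("newhope-shared-secret-placeholder")
	fmt.Printf("combined session key: %x\n", combineSecrets(classical, postQuantum))
}
```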

There's a lot we still don't know about post-quantum cryptography but we're starting to learn about the practical engineering implications.

New thinking on how to backdoor cryptographic algorithms

The concept of designing cryptographic systems that appear secure but have subtle backdoors has been discussed for a long time. (The term kleptography was coined in 1996 to describe this idea.) But the Snowden revelations, in particular that the Dual_EC pseudorandom number generator was deliberately backdoored by the NSA, have inspired more research on how backdoors might be created. A very clever paper by a team of French and American researchers showed that it's possible to carefully choose a prime number such that computing discrete logarithms modulo that prime becomes easy, which is enough to make Diffie-Hellman exchanges using it insecure.

What's worse, such a backdoored prime would be indistinguishable from any other randomly-chosen prime.

RFC 5114: Another backdoored crypto standard from NIST?

Speaking of backdoors, another potentially compromised standard was identified this year: RFC 5114. This little-known standard, written back in 2008, is somewhat mysterious all the way around. It was written by defense contractor BBN to standardize some parameters previously published by NIST. It defines eight Diffie-Hellman groups "that can be used in conjunction with IETF protocols to provide security for Internet communications," which eventually made their way into some widely-used cryptographic libraries like OpenSSL and Bouncy Castle. However, some of the groups have been identified as suspicious: the standard provides no explanation of how they were generated (meaning they might be backdoored as described above), and they're vulnerable to small subgroup confinement attacks if implementations don't check parameters carefully. This has led to some discussion about whether the standard could have been intentionally backdoored, although there is no smoking gun. In response, one of the authors of the standard stated it was written in part to give an intern a "relatively easy" project to complete. A NIST cryptographer stated that it was written just to provide test data for people using the curves and "certainly not as a recommendation for people to use or adopt them operationally." It's certainly possible that this bad standard arose simply due to incompetence, but the suspicion around it highlights the ongoing lack of trust in NIST as a standardization body for cryptography.
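The parameter checking that defends against small subgroup confinement is straightforward, which makes its omission in deployed code all the more unfortunate. The sketch below (in Go, with toy numbers rather than the actual RFC 5114 groups) shows the two checks a careful implementation performs on a peer's Diffie-Hellman public value before using it.

```go
package main

import (
	"fmt"
	"math/big"
)

var one = big.NewInt(1)

// validatePeerValue rejects Diffie-Hellman public values that would confine
// the shared secret to a small subgroup: y must lie strictly between 1 and
// p-1, and when the order q of the intended subgroup is known, y^q mod p must
// equal 1 so that y actually lies in that subgroup.
func validatePeerValue(y, p, q *big.Int) error {
	pMinusOne := new(big.Int).Sub(p, one)
	if y.Cmp(one) <= 0 || y.Cmp(pMinusOne) >= 0 {
		return fmt.Errorf("public value out of range")
	}
	if q != nil && new(big.Int).Exp(y, q, p).Cmp(one) != 0 {
		return fmt.Errorf("public value not in the expected subgroup")
	}
	return nil
}

func main() {
	// Toy parameters: p = 23 with a subgroup of prime order q = 11.
	p, q := big.NewInt(23), big.NewInt(11)
	fmt.Println(validatePeerValue(big.NewInt(2), p, q))  // accepted: 2 has order 11
	fmt.Println(validatePeerValue(big.NewInt(22), p, q)) // rejected: 22 = p-1 has order 2
}
```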

Cryptographic deniability pops up in the US presidential election

Deniability and its antithesis, non-repudiation, are basic technical properties that cryptographic communications can have: should the system provide proof to outsiders that a message was sent by a specific sender (non-repudiation)? Or should it ensure that anyone could have forged or altered the transcript (deniability), so that leaked communications are not incriminating? The real-world desirability of these properties is an age-old controversy in the cryptographic community.

Mostly lost in the coverage of the 2016 election was that non-repudiation cropped up in a major way. Senior Democratic Party politicians, including vice-presidential nominee Tim Kaine and former DNC chair Donna Brazile, stated on the record that leaked DNC emails had been doctored and were not accurate. However, web sleuths quickly verified that the emails were correctly signed using the DKIM protocol with the correct keys for the hillaryclinton.com email server. There are important caveats to these signatures: some of the emails came from outside addresses that don't support DKIM and hence could have been modified; DKIM only asserts that a specific email server sent a message (not any individual user), so the hillaryclinton.com DKIM key could have been stolen or used by a malicious insider; and the leaked email caches could have been modified by omitting some emails (which DKIM evidence would not reveal). Still, it's perhaps the most high-profile data point we have on the value (or lack thereof) of non-repudiable cryptographic evidence.
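The technical distinction is easy to see in code. In the Go sketch below (Ed25519 is used for brevity, whereas DKIM itself uses RSA signatures over canonicalized email headers), the HMAC could have been produced by either party holding the shared key, so a leaked transcript proves nothing to outsiders; the signature could only have been produced by the private-key holder, which is exactly the property the DKIM evidence provided.

```go
package main

import (
	"crypto/ed25519"
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	msg := []byte("an example email body")

	// Deniable: an HMAC is keyed with a secret both parties share, so the
	// recipient could have forged it just as easily as the sender. A third
	// party shown this tag learns nothing about who wrote the message.
	sharedKey := make([]byte, 32)
	if _, err := rand.Read(sharedKey); err != nil {
		panic(err)
	}
	mac := hmac.New(sha256.New, sharedKey)
	mac.Write(msg)
	fmt.Printf("HMAC tag (deniable): %x\n", mac.Sum(nil))

	// Non-repudiable: only the holder of the private key can produce a valid
	// signature, and anyone with the public key can check it. This is the
	// role the sending server's DKIM key played for the leaked emails.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sig := ed25519.Sign(priv, msg)
	fmt.Println("signature verifies:", ed25519.Verify(pub, msg, sig))
}
```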

Attacks only get better

A number of new and improved attacks were discovered, building on prior work. Among the highlights:

  • The HEIST attack improves the versatility of previous compression-oracle attacks like BREACH and CRIME, potentially allowing malicious JavaScript to steal sensitive data across Web origins; the sketch after this list shows the basic compression leak these attacks rely on. While it was decided back in 2014 to drop support for compression altogether in TLS 1.3 due to the risk of these attacks, this vulnerability further shows how difficult it can be to add encryption to complicated protocols like HTTP.
  • The DROWN attack leverages weaknesses in the decades-old SSLv2 protocol to decrypt TLS connections to a Web server. Like many previous TLS/SSL attacks (POODLE, FREAK, etc.), this relies on an old protocol that no modern Web browser supports. Yet it is still a major flaw in practice because an attacker can abuse SSLv2 support on any server sharing the same RSA key to attack connections made by modern clients. This attack is another reminder of how much insecurity is caused by maintaining support for outdated (and in some cases deliberately weakened) cryptographic protocols.
  • The Sweet32 attack showed that old 64-bit block ciphers (notably Triple DES and Blowfish) can be vulnerable in practice to collision attacks when used in CBC mode. Due to the birthday bound, this requires observing about 2^(64/2) = 2^32 encrypted blocks, or about 32 GB of data. Again, these are legacy ciphers that should have been disabled years ago, but are still used in about 1% of encrypted Web traffic.
  • A bit further away from practical systems, new attacks were found on certain classes of pairing-friendly elliptic curves, including the popular Barreto-Naehrig curves. While pairing-friendly curves are not commonly used today for encryption on the internet, they are essential to a number of advanced cryptographic systems like efficient zero-knowledge arguments of knowledge used in Zcash or group signatures used in Pond.
  • Secure randomness continues to be a fragile point in cryptography: if you can't generate truly random numbers, you can't create truly unpredictable cryptographic keys. The GnuPG project (which maintains widely used PGP software) announced and fixed a bug in the way Libgcrypt generated random numbers, present from 1998 to 2016. While no easy way to exploit this in practice has been shown, the episode shows how subtle bugs in PRNG libraries can go unnoticed for decades because they never cause any visible loss of functionality.
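To make the compression-oracle bullet above concrete, here is a small Go sketch of the leak that CRIME, BREACH, and HEIST all build on. A secret and attacker-chosen text are compressed together; a guess that matches the secret compresses to fewer bytes than one that doesn't, so an attacker who can only observe sizes can still recover the secret. The page contents and token here are made up for illustration.

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

// compressedLen returns the DEFLATE-compressed size of a response body.
func compressedLen(body string) int {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.BestCompression)
	w.Write([]byte(body))
	w.Close()
	return buf.Len()
}

func main() {
	// A response that reflects both a secret token and attacker-supplied input.
	secret := "csrf_token=a3f9c2d81b"

	// When the guess matches the secret's prefix, DEFLATE replaces the repeat
	// with a short back-reference and the output shrinks. Observing response
	// sizes therefore leaks the token one character at a time; HEIST showed
	// the size can be measured by JavaScript running in the victim's browser.
	matching := compressedLen(secret + "&search=csrf_token=a3f9c2d8")
	wrong := compressedLen(secret + "&search=csrf_token=QRSTUVWX")
	fmt.Println("matching guess compresses to:", matching, "bytes")
	fmt.Println("wrong guess compresses to:   ", wrong, "bytes")
}
```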

Out with the old, in with the new: HTTPS still being slowly hardened

HTTPS is also slowly being made more secure:

  • The SHA-1 hash function turned 21 years old in 2016, but nobody's celebrating that birthday. Instead, we're nearing the end of a long process to retire the obsolescent algorithm. Somewhat surprisingly, no SHA-1 collision was found this year, which would have been an irrefutable public demonstration that the algorithm is cryptographically broken. Yet browser vendors aren't waiting for a collision: Microsoft, Google, and Mozilla have all announced that their browsers will no longer accept SHA-1 certificates after early 2017. While it took a while, we consider the coordinated deprecation of SHA-1 a big win for the community. It's been observed that the browser market incentivizes vendors not to unilaterally remove insecure old protocols, so it's a positive sign that the vendors were able to agree on a timeline to kill off SHA-1 before it's completely broken.
  • Support for Certificate Transparency, a protocol designed to provide public logging of which certificates have been issued for which Web domains, continues to grow. All Symantec certificates issued since June 1 are included in CT logs (and will be rejected by Chrome and Firefox otherwise). Domains can opt in to requiring CT using Chrome's HSTS preload list (also used in Firefox). Just this week Facebook released a preliminary Web-based tool to monitor Certificate Transparency logs.
  • RFC 7748, standardizing the elliptic curves Curve25519 and Curve448 ("Goldilocks"), was finalized. Both curves are already available in TLS 1.3, offering fast performance and an alternative to the classic set of NIST-supported curves such as P-256 (the sketch after this list shows the key exchange Curve25519 enables).
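As referenced in the last bullet, here is what a Curve25519 (X25519) key exchange looks like in code, sketched in Go with the golang.org/x/crypto/curve25519 package (whose X25519 helper postdates this post). Each side publishes one 32-byte value, and both arrive at the same shared secret; this is the exchange TLS 1.3 performs when the x25519 group is negotiated.

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/curve25519"
)

func main() {
	// Each side picks a random 32-byte private scalar.
	alicePriv := make([]byte, curve25519.ScalarSize)
	bobPriv := make([]byte, curve25519.ScalarSize)
	rand.Read(alicePriv)
	rand.Read(bobPriv)

	// Public keys: the private scalar times the curve's base point.
	alicePub, _ := curve25519.X25519(alicePriv, curve25519.Basepoint)
	bobPub, _ := curve25519.X25519(bobPriv, curve25519.Basepoint)

	// Each side combines its own private scalar with the other's public key;
	// both computations land on the same 32-byte shared secret.
	aliceShared, _ := curve25519.X25519(alicePriv, bobPub)
	bobShared, _ := curve25519.X25519(bobPriv, alicePub)
	fmt.Println("shared secrets match:", bytes.Equal(aliceShared, bobShared))
}
```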

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2016.
