Why do we care about encryption? Why was it a big deal, at least in theory, when Mark Zuckerberg announced earlier this year that Facebook would move to end-to-end encryption on all three of its messaging platforms? We don’t just support encryption for its own sake. We fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world.

And although encryption may be the backbone, protecting digital security and privacy encompasses much more: it also depends on the technical features and policy choices that support the privacy and security goals encryption enables.

But as we careen from one attack on encryption to the next, by governments from Australia to India to Singapore to Kazakhstan, we risk losing sight of this bigger picture. Even if encryption advocates could “win” this seemingly never-ending crypto war, it would be a hollow victory if it came at the expense of broader security. Some efforts (a recent proposal from Germany comes to mind) are as hamfisted as ever, attempting to give the government the power to demand the plaintext of any encrypted message. But others, like the GCHQ’s “Ghost” proposal, purport to give governments the ability to listen in on end-to-end encrypted communications without “weakening encryption or defeating the end-to-end nature of the service.” And, relevant to Facebook’s announcement, we’ve seen suggestions that providers could still find ways of filtering or blocking certain content, even when it is encrypted with a key the provider doesn’t hold.

So, as governments and others try to find ways to surveil and moderate private messages, we have to ask: What policy choices are incompatible with secure messaging? We know that the answer has to be more than “don’t break encryption,” because, well, GCHQ already has a comeback to that one. Even when a policy choice technically maintains the mathematical components of end-to-end encryption, it can still violate the expectations users associate with secure communication.

So our answer, in short, is: a secure messenger should guarantee that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about. Any time a messaging app has to add “unless...” to that guarantee, whether in response to legislation or internal policy decisions, it’s a sign that the messenger is delivering compromised security to its users.

EFF considers the following to be signs that a messenger is not delivering end-to-end encryption: client-side scanning, law enforcement “ghosts,” and unencrypted backups. In each of these cases, your messages remain between you and your intended recipient, unless...

Client-side scanning

Your messages stay between you and your recipient...unless you send something that matches an entry in a database of problematic content.

End-to-end encryption is meant to protect your messages from any outside party, including network eavesdroppers, law enforcement, and the messaging company itself. But the company could determine the contents of certain end-to-end encrypted messages if it implemented a technique called client-side scanning.

Sometimes called “endpoint filtering” or “local processing,” this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a database of “hashes,” or unique digital fingerprints, usually of images or videos. If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge.
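To make the mechanics concrete, here is a minimal sketch in Python of what that pre-send check could look like. It is not any real messenger’s implementation: deployed systems would use perceptual image hashes rather than the exact SHA-256 digests used here for simplicity, and the hash list, function names, and “report” behavior are purely illustrative.

```python
import hashlib

# Purely illustrative: a provider-supplied list of digests of "problematic"
# content. Real systems use perceptual image hashes, so resized or
# re-encoded copies of an image would also match.
BLOCKED_HASHES = {
    "0" * 64,  # placeholder digest
}

def report_to_provider(digest: str) -> None:
    # Placeholder: in a real deployment this could notify the provider or a
    # designated authority, entirely outside the encrypted channel.
    print(f"match reported: {digest}")

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be sent, False if it is blocked.

    The check runs on the user's own device, *before* the message is
    end-to-end encrypted and handed to the network.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in BLOCKED_HASHES:
        # Depending on the proposal, the client might silently drop the
        # message, warn the recipient, or forward it to a third party.
        report_to_provider(digest)
        return False
    return True
```

The crucial detail is where the check happens: on your own device, before encryption, against a list the provider can update at any time.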

Hash-matching is already a common practice among email services, hosting providers, social networks, and other large services that allow users to upload and share their own content. One widely used tool is PhotoDNA, created by Microsoft to detect child exploitation images. It allows providers to automatically detect and prevent this content from being uploaded to their networks and to report it to law enforcement. But because services like PhotoDNA run on company servers, they cannot be used with an end-to-end encrypted messaging service, leading to the proposal that providers of these services should do this scanning “client-side,” on the device itself.

The prevention of child exploitation imagery might seem to be a uniquely strong case for client-side scanning on end-to-end encrypted services. But it’s safe to predict that once messaging platforms introduce this capability, it will be used to filter a wide range of other content. Indeed, we’ve already seen a proposal that WhatsApp create “an updatable list of rumors and fact-checks” that would be downloaded to each phone and compared to messages to “warn users before they share known misinformation.” We can expect to see similar attempts to screen end-to-end encrypted messaging for “extremist” content and copyright infringement. There are good reasons to be wary of this sort of filtering of speech when it is done on public social media sites, but using it in the context of encrypted messaging is a much more extreme step, fully undermining users’ ability to carry out a private conversation.

Because all of the scanning and comparison takes place on your device, rather than in the cloud, advocates of this technique argue that it does not break end-to-end encryption: your message still travels between its two “ends”—you and your recipient—fully encrypted. But it’s simply not end-to-end encryption if a company’s software is sitting on one of the “ends” silently looking over your shoulder and pre-filtering all the messages you send.

Messengers can make the choice to implement client-side scanning. However, if they do, they violate the user expectations associated with end-to-end encryption, and cannot claim to be offering it.

Law enforcement “ghosts”

Your messages stay between you and your recipient...unless law enforcement compels a company to add a silent onlooker to your conversation.

Another proposed tweak to encrypted messaging is the GCHQ’s “Ghost” proposal, which its authors describe like this:

It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorize today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

But as EFF has written before, this requires the provider to lie to its customers, actively suppressing any notification or UX feature that would allow users to verify who is participating in a conversation. Encryption without this kind of notification simply does not meet the bar for security.
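To see why this is a matter of trust in the provider, it helps to sketch the structure of a “ghost.” The Python below is not GCHQ’s design or any real messenger’s protocol (real apps use ratcheting protocols such as Signal’s rather than per-message RSA key wrapping), and the roster, participant names, and key handling are simplified stand-ins. It only illustrates the structural point: when the provider’s identity service controls the list of public keys your client encrypts to, it can quietly add one more.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def make_member():
    """Generate a keypair for one participant (a stand-in for a real identity key)."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return key, key.public_key()

# The sender believes the chat has two members: Alice (herself) and Bob.
alice_priv, alice_pub = make_member()
bob_priv, bob_pub = make_member()

# The provider's identity service hands the client the roster of public keys.
# A "ghost" deployment silently appends an extra key and suppresses the UI
# change that would normally announce a new member.
ghost_priv, ghost_pub = make_member()
roster_from_provider = {"alice": alice_pub, "bob": bob_pub, "ghost": ghost_pub}

def send_group_message(plaintext: bytes, roster: dict) -> dict:
    """Encrypt once with a random message key, then wrap that key for every
    public key in the provider-supplied roster -- including any ghost."""
    message_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(message_key).encrypt(nonce, plaintext, None)
    wrapped_keys = {name: pub.encrypt(message_key, OAEP)
                    for name, pub in roster.items()}
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_keys": wrapped_keys}

bundle = send_group_message(b"meet at noon", roster_from_provider)

# The message is still encrypted from end to end -- but one of those ends
# belongs to the ghost, and the sender was never told.
ghost_key = ghost_priv.decrypt(bundle["wrapped_keys"]["ghost"], OAEP)
print(AESGCM(ghost_key).decrypt(bundle["nonce"], bundle["ciphertext"], None))
```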

Unencrypted backups by default

Your messages stay between you and your recipient...unless you back up your messages.

Messaging apps will often give users the option to back up their messages, so that conversations can be recovered if a phone is lost or destroyed. Mobile operating systems iOS and Android offer similar options to back up one’s entire phone. If conversation history from a “secure” messenger is backed up to the cloud unencrypted (or encrypted in a way that allows the company running the backup to access message contents), then the messenger might as well not have been end-to-end encrypted to begin with.

Instead, a messenger can choose to encrypt the backups under a key kept on the user’s device or a password that only the user knows, or it can choose not to encrypt the backups. If a messenger chooses not to encrypt backups, then they should be off by default and users should have an opportunity to understand the implications of turning them on.
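As a rough sketch of the first option, the Python below derives a key from a passphrase that never leaves the device and encrypts the backup before anything is uploaded; only the ciphertext and a random salt would ever reach the cloud. The function names, KDF parameters, and use of Fernet are illustrative assumptions, not any particular messenger’s backup scheme.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_backup_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a symmetric key from a passphrase only the user knows."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def encrypt_backup(message_log: bytes, passphrase: str) -> tuple[bytes, bytes]:
    """Encrypt the backup on the device; only ciphertext and salt leave it."""
    salt = os.urandom(16)
    ciphertext = Fernet(derive_backup_key(passphrase, salt)).encrypt(message_log)
    return salt, ciphertext

def restore_backup(salt: bytes, ciphertext: bytes, passphrase: str) -> bytes:
    """Restoring fails unless the user supplies the same passphrase."""
    return Fernet(derive_backup_key(passphrase, salt)).decrypt(ciphertext)

salt, blob = encrypt_backup(b"chat history...", "correct horse battery staple")
assert restore_backup(salt, blob, "correct horse battery staple") == b"chat history..."
```

The cloud provider holding such a backup sees only ciphertext; without the passphrase, it cannot read the message log it is storing.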

For example, WhatsApp provides a mechanism to back messages up to the cloud. In order to back messages up in a way that makes them restorable without a passphrase in the future, these backups need to be stored unencrypted at rest. Upon first install, WhatsApp prompts you to choose how often you wish to back up your messages: daily, weekly, monthly, or never. In EFF’s Surveillance Self-Defense, we advise users never to back up their WhatsApp messages to the cloud, since that would deliver unencrypted copies of their message logs to the cloud provider. For your communications to be truly secure, any contact you chat with must do the same.

Continuing the fight

In the 1990s, we had to fight hard in the courts, and in software, to defend the right to use encryption strong enough to protect online communications; in the 2000s, we watched mass government and corporate surveillance undermine everything online that was not defended by that encryption, deployed end-to-end. But there will always be attempts to find a weakness in those protections. And right now, that weakness lies in our acceptance of surveillance in our devices. We see that in attempts to implement client-side scanning, mandate deceptive user interfaces, or leak plaintext from our devices and apps. Keeping everyone’s communications safe means making sure we don’t hand over control of our devices to companies, governments, or other third parties.
