In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations in the UK Competition and Markets Authority’s report on online platforms and digital advertising.

We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services interoperate with existing incumbents without their permission, using reverse engineering, bots, scraping, and other improvised techniques common to unsanctioned innovation.

Our analysis concluded that while interoperability creates new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also brings new privacy benefits, both because it makes it easier to leave a service with unsuitable privacy policies, and because that exit creates real costs for dominant firms that do not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.

Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize that weakening a tech platform’s ability to control its users also weakens its power to defend those users.

They’re not wrong, but they’re not telling the whole story either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user privacy shouldn’t be set by corporate fiat in a remote boardroom; they should come from democratically accountable law and regulation.

The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.

That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law, came into effect in 2018.

The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:

  1. Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
  2. Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

We think the answers are “no” and “no,” respectively. Below, we explain why.

Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?

Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.

The European Data Protection Supervisor has weighed in on the relationship between the GDPR and the DMA, affirming that interoperability can advance the GDPR’s goals.

Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination: 

  1. The right to access your data, and
  2. The right to take your data somewhere else.

The GDPR’s data portability framework is an important start! Lawmakers correctly identified data portability’s potential to promote competition among platform services and to reduce the risk of user lock-in by lowering switching costs.

The law is clear: platforms have a duty to provide data in a structured, commonly used, and machine-readable format, and users have the right to transmit that data from one data controller to another without hindrance. Where technically feasible, users can also ask the data controller to transmit the data directly to another controller.

Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programming interfaces (APIs) available.
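To make that suggestion concrete, here is a minimal, purely hypothetical sketch of what such a portability API could look like, written in Python with Flask. The endpoint path, function names, and export fields are our own inventions, not anything the GDPR or the WP29 prescribes:

```python
# Hypothetical sketch only: a data-portability endpoint that returns a user's own
# content in a structured, machine-readable format (JSON). The endpoint path,
# function names, and export fields are invented for illustration and are not
# drawn from any real platform or mandated by the GDPR.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

def build_export(user_id: str) -> dict:
    # A real service would query its own data stores here; this fixed structure
    # just shows the shape a portable export might take.
    return {
        "user_id": user_id,
        "profile": {"display_name": "Example User"},
        "posts": [],      # the user's own posts and annotations
        "messages": [],   # the user's own messages (blended data is discussed below)
    }

@app.route("/export/<user_id>")
def export(user_id: str):
    # A real implementation must authenticate the requester and confirm that they
    # are the data subject (or otherwise have a valid legal basis).
    if request.headers.get("Authorization") is None:
        abort(401)
    return jsonify(build_export(user_id))
```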

However, the limits of the GDPR’s data portability right, and its interoperability shortcomings, have become more obvious since the law came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights alone are insufficient to deliver the technological self-determination the GDPR seeks to achieve.

The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few places to port that data to.

Missing from the GDPR is step three:

  3. The right to interoperate with the service you just left.

The DMA proposal is a legislative way of filling in that missing third step. It creates a “real time data portability” obligation, a step toward real interop of the sort that would let you leave a service but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data portability.

Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We saw that in 2019, when the European Parliament adopted the Copyright Directive without removing the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.

But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.

Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.

Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.

But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, the comments on your public posts, or the entries in your address book with you to a rival service? These cases are less clear-cut than other people’s private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform, which is also one of the most surveilled environments in the history of data collection.

There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you might leave you stuck on Facebook, without a ready means of addressing any privacy concerns you have about Facebook’s policies.

This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you, or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data transfers are legitimate, and when consent from other implicated users is required. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” of all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to ease a switch to another service (which Facebook continues to prevent).

The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.

In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.

Consent, Minimization and Security

We can divide the data that might be passed from a dominant platform to a new, interoperable rival into several categories:

  1. Data that should not be passed: for example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service.
  2. Data that should be passed after a simple request from the user: for example, your own photos that you uploaded, with your own annotations, and your own private and public messages.
  3. Data generated by others about you, such as ratings.
  4. Someone else’s personal information contained in a reply to a message you posted.
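As a rough illustration of that sorting (the category names, record fields, and logic below are hypothetical, not terminology drawn from the GDPR), an exporting or receiving service might tag each record before deciding whether it can travel:

```python
# Illustrative sketch only: a coarse classifier that tags each record in an export
# request with one of the categories described above. Category names, fields, and
# logic are our own, not terminology from the GDPR.
from dataclasses import dataclass
from enum import Enum, auto

class TransferCategory(Enum):
    DO_NOT_PASS = auto()          # e.g., private conversations among uninvolved users
    PASS_ON_REQUEST = auto()      # e.g., your own photos, annotations, and messages
    ABOUT_YOU_BY_OTHERS = auto()  # e.g., ratings others generated about you
    BLENDED = auto()              # e.g., others' personal data in replies to your posts

@dataclass
class Record:
    involves_requester: bool          # is the requesting user a party to this record?
    authored_by_requester: bool       # did the requesting user create it?
    contains_third_party_data: bool   # does it include other users' personal data?

def classify(record: Record) -> TransferCategory:
    if not record.involves_requester:
        return TransferCategory.DO_NOT_PASS
    if record.contains_third_party_data:
        return TransferCategory.BLENDED          # needs consent or another legal basis
    if record.authored_by_requester:
        return TransferCategory.PASS_ON_REQUEST
    return TransferCategory.ABOUT_YOU_BY_OTHERS  # e.g., a rating someone left about you
```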

The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data needs to respect the rights and freedoms of others. Thus, although there is no outright ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent or another GDPR legal basis, and not without further safeguards.

That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.

If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.

For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, that service doesn’t have to bother all your friends to get their consent again—under the WP29 guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.

But even though the new service doesn’t have to obtain consent from your friends again, it does have to notify them within 30 days, so your friends will always know where their data ended up.

And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.

None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.

One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.

Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or users’ legitimate privacy expectations. In those cases, Facebook can serve as a “consent conduit,” through which consent to let departing friends take blended data with them to a rival platform can be sought, obtained, or declined.
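A minimal sketch of how such a conduit might work, with invented names and a deliberately simplified flow (real consent records would need timestamps, purpose descriptions, and audit trails):

```python
# Sketch of a "consent conduit": the incumbent platform relays a consent request to
# the user who stayed behind, and blended data is released only if they agree. All
# names are hypothetical; nothing here is prescribed by the GDPR or the DMA.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConsentRequest:
    departing_user: str
    remaining_user: str
    data_description: str              # e.g., "replies you posted in threads started by X"
    decision: Optional[str] = None     # "granted", "declined", or None while pending

@dataclass
class ConsentConduit:
    pending: List[ConsentRequest] = field(default_factory=list)

    def ask(self, departing_user: str, remaining_user: str, description: str) -> ConsentRequest:
        # The platform would notify remaining_user in-app or by email at this point.
        req = ConsentRequest(departing_user, remaining_user, description)
        self.pending.append(req)
        return req

    def record_decision(self, req: ConsentRequest, granted: bool) -> None:
        req.decision = "granted" if granted else "declined"

    def may_export(self, req: ConsentRequest) -> bool:
        # Blended data travels only on an explicit grant (or another legal basis).
        return req.decision == "granted"
```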

Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hoped to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degrades the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.

For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for new purposes you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except under one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they nor the services they’ve fled to can process your data beyond the scope of the initial consent to move it to the new context. Once you and your data are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.

Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.

Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments on its new user’s posts, any irrelevant data that Facebook transmits along with them (say, your location when you left the comment, or which link brought you to the post) must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
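A minimal sketch of what that import-time minimization could look like; the field names are invented for illustration:

```python
# Sketch of data minimization at import time: the receiving service keeps only the
# fields it actually needs to display an imported comment and discards everything
# else. Field names are invented for illustration.
RETAINED_FIELDS = {"author_id", "text", "created_at", "in_reply_to"}

def minimize(imported_comment: dict) -> dict:
    # Location, referrer links, device identifiers, and anything else that is not
    # strictly necessary must not be retained or processed.
    return {k: v for k, v in imported_comment.items() if k in RETAINED_FIELDS}

raw = {
    "author_id": "friend-123",
    "text": "Nice photo!",
    "created_at": "2021-02-14T10:00:00Z",
    "location": "52.52,13.40",          # extraneous: discard
    "referrer": "https://example.com",  # extraneous: discard
}
assert set(minimize(raw)) == {"author_id", "text", "created_at"}
```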

The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means the data has to be encrypted in transit, and likewise at rest on the rival service’s servers. And even if the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if a material omission on its part leads to a breach.
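As an illustrative sketch of the “at rest” half of that obligation (using the widely used cryptography library; key management, key rotation, and TLS configuration for data in transit are real-world requirements this toy example leaves out):

```python
# Toy sketch only: encrypt imported records before storing them, using a vetted
# library (the cryptography package's Fernet recipe). Key management, rotation,
# and in-transit TLS are real-world requirements this example deliberately omits.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in production, kept in a key-management system
fernet = Fernet(key)

record = b'{"author_id": "friend-123", "text": "Nice photo!"}'
stored = fernet.encrypt(record)    # what lands on disk is ciphertext
assert fernet.decrypt(stored) == record
```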

Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy sturdy, long-term dominance. The collective action problem of getting everyone on Facebook whose company you enjoy to leave at the same time as you means that anyone who leaves Facebook incurs a high switching cost.

Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.

Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held belief in enrichment through nonconsensual surveillance—and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.

Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data protection no matter what measure is being contemplated. Having both components, an interoperability requirement and a comprehensive privacy regulation, is the best way to ensure that interoperability leads to competition in desirable activities, not privacy invasions.