The increasing risk that the Supreme Court will overturn federal constitutional abortion protections has refocused attention on the role digital service providers of all kinds play in facilitating access to health information, education, and care—and the data they collect in return.

In a post-Roe world, service providers can expect a raft of subpoenas and warrants seeking user data that could be employed to prosecute abortion seekers, providers, and helpers. They can also expect pressure to aggressively police the use of their services to provide information that may be classified in many states as facilitating a crime.

Whatever your position on reproductive rights, this is a frightening prospect for data privacy and online expression. That’s the bad news.

The good news is there is a lot companies—from ISPs to app developers to platforms and beyond—can do right now to prepare for that future, and those steps will benefit all users. If your product or service might be used to target people seeking, offering, or facilitating abortion access, now is the time to minimize the harm that can be done.

Here are some ideas to get you started.

If You Build it, They Will Come—So Don’t Build It, Don’t Keep It, Dismantle What You Can, and Keep It Secure

Many users don’t truly realize how much data is collected about them, by multiple entities, as they go about their daily business. Search engines, ISPs, apps, and social media platforms collect all kinds of data, including highly sensitive information. Sometimes they need that data to provide the service the user wants. Too often, however, they use it for other purposes, like ad sales, or sell it to third parties. Sometimes they’ll claim the data is anonymized, but often that’s not possible. For example, there’s no such thing as “anonymous” location data. Data points like where a person sleeps at night or spends their days are an easy way to find a person’s home address or job, and a malicious observer can readily connect those movements to identify a person and anticipate their routines. Another piece of the puzzle is the ad ID, another so-called “anonymous” label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.

Governments and private actors know that intermediaries and apps can be a treasure trove of information. Good data practices can help you stay on the right side of history, and avoid legal hassles to boot: after all, if you don’t have it, you can’t produce it.

1. Allow pseudonymous access

Give your users the freedom to access your service pseudonymously, that is, so that even you do not know their identities. As we've previously written, “real-name” policies and their ilk are especially harmful to vulnerable populations, including pro-democracy activists, the LGBT community—and people seeking or providing abortion access. Recognize that authentication or verification schemes that require users to submit identification may also put them at risk.

2. Stop behavioral tracking

Don’t do it. If you must, make sure users affirmatively opt in first. If that’s not possible, ensure users know about it and know they can opt out. This includes letting users modify data that's been collected about them so far, as well as giving them the option to not have your service collect this information about them at all. When users opt out, delete their data and stop collecting it moving forward. Offering an opt-out of targeting but not out of tracking is unacceptable.
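If you do collect usage data, make consent a hard gate in the code path itself, not just a settings page. Below is a minimal Python sketch of that idea; the ConsentStore class and the in-memory event list are hypothetical stand-ins for whatever storage your service actually uses:

```python
# Hypothetical sketch: make collection impossible without an affirmative opt-in,
# and honor opt-outs by deleting data already gathered. ConsentStore and the
# in-memory `events` list stand in for your real storage layer.
from datetime import datetime, timezone

class ConsentStore:
    """Tracks which users have affirmatively opted in to analytics."""
    def __init__(self):
        self._opted_in = set()

    def opt_in(self, user_id: str) -> None:
        self._opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:
        self._opted_in.discard(user_id)

    def has_consent(self, user_id: str) -> bool:
        return user_id in self._opted_in

consent = ConsentStore()
events: list[dict] = []  # stand-in for an analytics database

def record_event(user_id: str, name: str) -> None:
    # No affirmative opt-in means no collection, full stop.
    if not consent.has_consent(user_id):
        return
    events.append({"user": user_id, "event": name,
                   "at": datetime.now(timezone.utc).isoformat()})

def handle_opt_out(user_id: str) -> None:
    # Opting out stops future collection AND deletes what was already collected.
    consent.opt_out(user_id)
    events[:] = [e for e in events if e["user"] != user_id]
```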

3. Check your retention policy

Do you really need to keep all of that data you’ve been collecting? Now is the time to clean up the logs. If you need them to check for abuse or for debugging, think carefully about which precise pieces of data you really need. And then delete them regularly—say, every week for the most sensitive data. IP addresses are especially risky to keep. Avoid logging them, or if you must log them for anti-abuse or statistics, do so in separate files that you can aggregate and delete frequently. Reject user-hostile measures like browser fingerprinting.
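Retention policies work best when they’re automated rather than aspirational. Here is a rough Python sketch under a few assumptions: logs live as files in a single directory, a one-week window is acceptable for the most sensitive logs, and per-client identifiers kept for anti-abuse statistics can be salted hashes rather than raw IPs. Adjust paths and windows to your own setup:

```python
# Rough sketch: enforce a one-week retention window on the most sensitive logs,
# and avoid writing raw IP addresses in the first place. The directory path,
# file pattern, and window are assumptions to adapt to your own setup.
import hashlib
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp/sensitive")  # hypothetical location
MAX_AGE_SECONDS = 7 * 24 * 60 * 60          # one week

def purge_old_logs() -> None:
    """Run this regularly (e.g., daily from cron) so old logs can't pile up."""
    now = time.time()
    for path in LOG_DIR.glob("*.log"):
        if now - path.stat().st_mtime > MAX_AGE_SECONDS:
            path.unlink()  # if you don't have it, you can't be made to produce it

def pseudonymize_ip(ip: str, rotating_salt: str) -> str:
    """If you must keep a per-client token for anti-abuse statistics, log a
    salted hash instead of the raw IP, and rotate (and discard) the salt
    frequently so tokens can't be correlated across days or reversed later."""
    return hashlib.sha256((rotating_salt + ip).encode()).hexdigest()[:16]
```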

4. Encrypt data in transit

Seriously, encrypt data in transit. Why are you not already encrypting data in transit? Do the ISP and the entire internet really need to know about the information your users are reading, the things they’re buying, and the places they’re going?
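For a typical web service, that means HTTPS everywhere: redirect plaintext requests and send an HSTS header so browsers stop trying HTTP at all. Here’s a minimal sketch using Flask as an example framework; any stack has an equivalent, and the one-year max-age is just a common choice:

```python
# Minimal sketch: force HTTPS and set an HSTS header in a Flask app.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plaintext request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def set_hsts(response):
    # Tell browsers to use HTTPS for the next year, including subdomains.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response

@app.route("/")
def index():
    return "hello"
```

If TLS terminates at a load balancer or reverse proxy, you will also need to trust the forwarded headers (for example, with Werkzeug’s ProxyFix middleware) so the app can tell which requests actually arrived over HTTPS.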

5. Enable end-to-end encryption by default

If your service includes messages, enable end-to-end encryption by default. 
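The point of end-to-end encryption is that only the endpoints ever hold the keys; your servers relay ciphertext they cannot read. A real messaging product should use a vetted protocol (the Signal protocol, for instance) rather than hand-rolled crypto, but this toy PyNaCl sketch illustrates the principle:

```python
# Toy sketch of end-to-end encryption with PyNaCl: the "server" only ever
# sees ciphertext. Real messaging needs a vetted protocol with key
# verification, forward secrecy, etc.; this only illustrates the principle.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice, bob.public_key)
ciphertext = sending_box.encrypt(b"see you at 3pm")

# This is all your server should ever store or forward.
server_sees = bytes(ciphertext)

# Bob decrypts on his own device with his private key.
receiving_box = Box(bob, alice.public_key)
plaintext = receiving_box.decrypt(server_sees)
assert plaintext == b"see you at 3pm"
```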

6. Don’t allow your app to become a location mine

There is an entire industry devoted to collecting and selling location data—and it’s got a well-documented history of privacy violations. Some location data brokers collect that data by getting ordinary app developers to install tracking software into their apps. Don’t do that.

7. Don’t share the data you collect more than necessary, and only with trusted/vetted partners

This one is beyond obvious: don’t share the data you collect except as necessary to provide the service you are offering. Even then, make sure you vet those third parties’ own data practices. Of course, this requires actually knowing where your data is going. Finally, avoid unnecessary third-party connections in your product, which quietly send user data to other companies.

8. Where possible, make it interoperable

There may be a third party that can do a better job protecting your privacy-conscious users than you can alone. If so, allow them to interoperate with you so they can offer that service.

Push Back Against Improper Demands—and Be Transparent About Them

Data demands will come from many directions. Law enforcement may ask a search engine to provide information about all users who searched for a particular term, such as “abortion.” It may also seek unconstitutional “geofence warrants” demanding data on every device in a given geographic area: draw a line around an abortion clinic in a neighboring state, get a list of every phone that’s been there, and use that information to track people as they drive back home across state lines. Private parties, meanwhile, may leverage the power of the courts to issue subpoenas to try to unmask people who provide information online anonymously.

1. Stand up for your users

If a warrant or subpoena for user information is improper, push back and challenge it in court. Ask whether the demand is actually lawful and whether the court even has jurisdiction to require compliance: some federal courts, for example, have found geofence warrants unconstitutional, and there are strong protections in the U.S. for anonymous speech. Some companies have been willing to stand up for their users; join them. If your company can’t afford legal counsel, EFF may be able to help.

2. At minimum, provide notice to affected users 

Your users should never first learn that you disclosed their information when it’s already too late for them to do anything about it. If you get a data request and there is no legal restriction forbidding notice, notify the subject of the request as soon as possible.

3. Implement strong transparency practices

Issue transparency reports on a regular basis, including a state-by-state breakdown of data requests and of removal demands related to reproductive rights bans and restrictions. Facebook’s transparency report, for example, is only searchable by country, not by state. And while the report mentions removing information based on reports from state attorneys general, it does not name the states or the reasons for the requests. Endorse the Santa Clara Principles on Transparency and Accountability, and implement them.

If You Market Surveillance Technology to Governments, Know Your Customer

This should also be obvious.

Review and Revise Your Community Standards Policy to Discourage Abuse

Social media platforms regularly engage in “content moderation”: the depublication, downranking, and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform’s “community standards” policy. Such moderation, however well-intentioned, is often deeply flawed, confusing, and inconsistent, particularly when it comes to material related to sexuality and sexual health. Take, for example, companies’ attempts to eradicate homophobic and transphobic speech. While that sounds like a worthy goal, these policies have resulted in LGBTQ users being censored for engaging in counterspeech or for using reclaimed terms like “dyke.”

Facebook bans ads it deems “overly suggestive or sexually provocative,” a practice that has had a chilling effect on women’s health startups, bra companies, a book whose title contains the word “uterus,” and even the National Campaign to Prevent Teen and Unplanned Pregnancy.

In addition, government and private actors can weaponize community standards policies, flagging speech they don’t like as violating community standards. Too often, the speaker won’t fight back, either because they don’t know how, or because they are intimidated.

Platforms should take another look at their speech policies and consider carefully how they might be abused. For example, almost every major internet platform, including Facebook, Google (owner of Blogger and YouTube), Twitter, and reddit, has some prohibition on “illegal” material, but their policies offer little further explanation. Most also have some policy tied to “local laws,” by which they mean laws by country, not by state. This language leaves a large hole for individuals and governments to claim a user has violated the policy and get life-saving information removed.

Moreover, as noted, Facebook has a terrible track record with its policies on sex and sexual health. The company should review how its policy of labeling images associated with “birth-giving and after-birth giving moments, including both natural vaginal delivery and caesarean section,” might lead to confusion.


Many groups share information through Google Docs, posting links either within a network or even publicly. In a post-Roe world, that might include information about activities that are illegal in some states. However, while Google permits users to share educational information about illegal activities, it prohibits use of the service to engage in such activities or to promote them.

Blogger uses similar language, and adds that “we will take appropriate action if we are notified of unlawful activities, which may include reporting you to the relevant authorities.” This language may discourage many from using the service to share information that, again, might be legal in some states and illegal in others.

In general, many sites have language prohibiting material that may lead to “serious physical or emotional harm.” Depending on how “harm” is construed, and by whom, this language too could become an excuse to excise important tools and information.

Worse, companies have set some unfortunate recent precedent. Facebook’s transparency report, for example, notes that, citing COVID-related concerns, it blocked access to 27 items in response to reports from state attorneys general and the US Attorney General. All 27 were ultimately reinstated, as they did not actually violate Facebook’s “community standards or other applicable policies.” This shows a willingness on Facebook’s part to act first and ask questions later when contacted by state authorities. Even if content is eventually reinstated, the harm to people looking for information in a critical, time-sensitive situation could be incalculable.

Most of these ideas aren’t new; we’ve been calling on companies to take these steps for years. With a new threat model on the horizon, it’s past time for them to act. Our digital rights depend on it.

Correction: an earlier version of this report included a confusing example of a service that could be encrypted; that example has since been deleted.