The Senate Judiciary Committee recently held a hearing on “Protecting Digital Innocence.” The hearing covered a range of problems facing young people on the Internet today, with a focus on harmful content and privacy-invasive data practices by tech companies. While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids.

Don’t Censor Users; Empower Them to Choose

Though tech companies weren’t represented in the hearing, senators offered plenty of suggestions about how those companies ought to make their services safer for children. Sen. John Kennedy (R-LA) suggested that online platforms should protect children by scanning for “any pictures of human genitalia.”

Sen. Kennedy’s idea is a good example of how lawmakers sometimes misunderstand the complexity of modern-day platform moderation, and the extreme difficulty of getting it right at scale. Many online platforms do voluntarily use automated filters, human reviewers, or both to detect nudity, pornography, or other speech that companies deem inappropriate. But those measures often bring unintended consequences that reach far beyond whatever problems the rules were intended to address. Instagram took down one painter’s profile until the company realized the absurdity of this aggressive application of its ban on nudity. When Tumblr employed automated filters to censor nudity, it accidentally removed hundreds of completely “safe for work” images.
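To make the over-censorship dynamic concrete, here is a minimal, purely hypothetical sketch of how a filter built around a classifier confidence score behaves. It is not drawn from any platform’s actual systems; the post titles, scores, and thresholds are invented for illustration. The point is only that pushing the threshold low enough to catch every violation also sweeps in art, health materials, and other lawful speech.

```python
# Purely illustrative: a toy automated filter that removes any post whose
# "nudity score" from a hypothetical classifier meets a threshold. Lowering
# the threshold to catch more violations also removes more lawful content.

posts = [
    {"title": "Classical oil painting",             "score": 0.62, "violating": False},
    {"title": "Beach vacation photo",               "score": 0.35, "violating": False},
    {"title": "Actually prohibited image",          "score": 0.90, "violating": True},
    {"title": "Anatomy diagram for a health class", "score": 0.55, "violating": False},
]

def moderate(posts, threshold):
    """Remove everything at or above the threshold; report the side effects."""
    removed = [p for p in posts if p["score"] >= threshold]
    false_positives = [p for p in removed if not p["violating"]]
    missed = [p for p in posts if p["violating"] and p["score"] < threshold]
    return removed, false_positives, missed

# A liability regime that punishes any missed violation pushes the threshold down.
for threshold in (0.8, 0.5):
    removed, false_positives, missed = moderate(posts, threshold)
    print(f"threshold={threshold}: removed {len(removed)}, "
          f"lawful posts removed {len(false_positives)}, violations missed {len(missed)}")
```

In this toy example, the stricter threshold removes the painting and the anatomy diagram along with the one genuinely prohibited image, which is the same tradeoff the Instagram and Tumblr incidents illustrate at real-world scale.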

The problem gets worse when lawmakers attempt to legislate what they consider good content moderation. In the wake of last year’s Internet censorship law SESTA-FOSTA, online platforms faced an awful choice: err on the side of extreme prudishness in their moderation policies or risk overwhelming liability for their users’ speech. Facebook broadened its sexual solicitation policy to the point that it could plausibly justify removing discussion of sex altogether. Craigslist removed its personals section entirely.

Legislation to “protect” children from harmful material on the Internet will likely bring similar collateral damage for free speech: when lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more heavily filtered Internet, urging lawmakers to move the discussion “from protection to empowerment.” In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree.

It’s foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use. Furthermore, tech and media companies shouldn’t abuse copyright and other laws to prevent third parties from offering customization options to people who want them.
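As a rough sketch of what user-calibrated filtering could look like, the hypothetical example below keeps filter settings on each account and hides labeled content per user instead of removing it for everyone. The class, labels, and defaults are our own invention, not any company’s actual design.

```python
# Illustrative sketch only: instead of one platform-wide removal decision, each
# account keeps its own filter settings, and labeled content is hidden (not
# deleted) on a per-user basis. All names and labels here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FilterPreferences:
    hide_nudity: bool = True        # e.g., the default for a child's account
    hide_violence: bool = True
    muted_topics: set = field(default_factory=set)

def visible_to(post_labels: set, prefs: FilterPreferences) -> bool:
    """Return True if this account's own settings allow the post to be shown."""
    if prefs.hide_nudity and "nudity" in post_labels:
        return False
    if prefs.hide_violence and "violence" in post_labels:
        return False
    if prefs.muted_topics & post_labels:
        return False
    return True

# The same post stays on the platform; different accounts simply see different feeds.
post_labels = {"nudity", "art"}
child_account = FilterPreferences()                                   # defaults on
adult_account = FilterPreferences(hide_nudity=False, hide_violence=False)
print(visible_to(post_labels, child_account))   # False: hidden for this account
print(visible_to(post_labels, adult_account))   # True: shown for this account
```

The design choice that matters is where the decision lives: a painting of a nude can stay online for the adults and art students who want to see it, while a parent’s settings keep it out of a child’s feed, and third parties could offer their own filtering layers on top.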

Congress and Government Must Do More to Fight Unfair Data-Collection Practices

Like all Internet users, kids are often at the mercy of companies’ privacy-invasive data practices, and often have no reasonable opportunity to opt out of collection, use, and sharing of their data. Congress should closely examine companies whose business models rely on collecting, using, and selling children’s personal information.

Some of the proposals floated during the hearing for protecting young Internet users’ privacy were well-intentioned but difficult to implement. Georgetown Law professor Angela Campbell suggested that platforms move all “child-directed” material to a separate website without behavioral data collection and related targeted advertising. Platforms must put all users, including children, in charge of how their data is collected, used, and shared, but cleanly separating material directed at adults from material directed at children isn’t easy. It would be awful if a measure designed to protect young Internet users’ privacy made it harder for them to access materials on sensitive issues like sexual health and abuse. A two-tiered Internet would cut young people off from the very types of speech for which they most need privacy.

We do agree with Campbell that enforcement of existing children’s privacy laws must be a priority. As we’ve argued in the student privacy context, the Federal Trade Commission (FTC) should better enforce the Children’s Online Privacy Protection Act (COPPA), the law that requires websites and online services that are directed to children under 13 or have actual knowledge that a user is under 13 to obtain parental consent before collecting personal information from children for commercial purposes. The Department of Education should better enforce the Family Educational Rights and Privacy Act (FERPA), which generally prohibits schools that receive federal funding from sharing student information without parental consent.

EFF’s student privacy project catalogs the frustrations that students, parents, and other stakeholders face when it comes to student privacy. In particular, we’ve highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google’s student data practices, but the agency never responded. Sen. Marsha Blackburn (R-TN) cited our FTC complaint against Google as an example of the FTC’s failure to protect children’s privacy: “They go in, they scoop the data, they track, they follow, and they’ve got that virtual you of that child.”

While Google has made some progress since 2015, Congress should still investigate whether the relevant regulatory agencies are falling down on the job when it comes to protecting student privacy. Congress should also explore ways to ensure that users can make informed decisions about how their data is collected, used, and shared. Most importantly, Congress should pass comprehensive consumer privacy legislation that empowers users and families to bring their own lawsuits against the companies that violate their privacy rights.

Undermining Section 230 Won’t Improve Companies’ Practices

At the end of the hearing, Sen. Lindsey Graham (R-SC) turned the discussion to Section 230, the law that shields online platforms, services, and users from liability for most speech created by others. Sen. Graham called Section 230 the “elephant in the room,” suggesting that Congress use the law as leverage to force tech companies to change their practices: “We come up with best business practices, and if you meet those business practices you have a safe haven from liability, and if you don’t, you’re going to get sued.” He followed his comments with a Twitter thread claiming that kneecapping liability protections is “the best way to get social media companies to do better in this area.”

Sen. Graham didn’t go into detail about what “business practices” Congress should mandate, but regardless of the specifics, he ought to rethink the approach of threatening to weaken Section 230. Google and Facebook are more willing to bargain away the protections of Section 230 than their smaller competitors are. Nearly every major Internet company endorsed SESTA-FOSTA, a bill that made it far more difficult for small Internet startups to unseat the big players. Sen. Josh Hawley’s bill to address supposed political bias in content moderation makes the same mistake, giving more power to the large social media companies it’s intended to punish. Don’t be surprised if the big tech companies fail to put up a fight against these proposals: the day after the hearing, IBM announced support for further weakening Section 230, just as it did during the SESTA-FOSTA debate.

More erosion of Section 230 won’t necessarily hurt big Internet companies, but it will hurt users. Under a compromised Section 230, online platforms would be incentivized to over-censor users’ speech. When platforms choose to err on the side of censorship, marginalized voices are the first to disappear.

Congress Must Consider Unintended Consequences

The problems facing young people online are complicated, and it’s essential that lawmakers carefully consider the unintended consequences of any legislation in this area.

Companies ought to help users and families customize online services for their own needs. But congressional attempts to legislate solutions to harmful Internet content by forcing companies to patrol users’ speech are fraught with the potential for collateral damage (and would likely be unconstitutional). We understand Congress’ desire to hold large Internet companies accountable, but it shouldn’t pass laws to make the Internet a more restrictive place.

At the same time, Congress does have an historic opportunity to help protect children and adults from invasive, unfair data-collection and advertising practices, both by passing strong consumer privacy legislation and by demanding that the government do more to enforce existing privacy laws.