There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous, the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, the No AI FRAUD Act. The Senate’s version, the Nurture Originals, Foster Art and Keep Entertainment Safe, or NO FAKES Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned with protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. The bill proposes a variety of exclusions for news, satire, biopics, criticism, and so on to limit the impact on free expression, but interpreting and applying those exceptions is likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations – or for anyone to believe you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must take care to be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board.