Video-editing technology hit a milestone this month: the new tech is being used to make porn. With easy-to-use software, pretty much anyone can seamlessly take the face of one real person (like a celebrity) and splice it onto the body of another (like a porn star), creating videos made without the consent of the people depicted.

People have already picked up the technology, creating and uploading to the Internet dozens of videos that purport to show famous Hollywood actresses in pornographic films they had no part in whatsoever.

While many specific uses of the technology (like specific uses of any technology) may be illegal or create liability, there is nothing inherently illegal about the technology itself. And existing legal remedies should be enough to redress any injuries caused by malicious uses.

As Samantha Cole at Motherboard reported in December, a Reddit user named “deepfakes” began posting videos he created that replaced the faces of porn actors with the faces of well-known (non-pornographic) actors. According to Cole, the videos were “created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.”
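
For readers curious about the mechanics, the approach described in these reports is commonly built around an autoencoder that shares one encoder across two people but trains a separate decoder for each; feeding one person’s face through the other person’s decoder produces the swap. The following is a minimal conceptual sketch in PyTorch, with toy layer sizes and hypothetical names like FaceAutoencoder. It is not the original “deepfakes” code, only an illustration of the shared-encoder, two-decoder idea.

```python
# Conceptual sketch (hypothetical, not the actual "deepfakes" code): one shared
# encoder, one decoder per identity. Feeding person A's face through person B's
# decoder produces the face swap.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: compresses a 64x64 RGB face crop into a small latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # One decoder per identity; both reconstruct faces from the shared
        # latent space, which is what makes the spaces interchangeable.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            for name in ("person_a", "person_b")
        })

    def forward(self, face, identity):
        return self.decoders[identity](self.encoder(face))

model = FaceAutoencoder()
loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training step: each identity's faces are reconstructed through its own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of person A
loss = loss_fn(model(faces_a, "person_a"), faces_a)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# "Swap": after training, run person A's faces through person B's decoder.
with torch.no_grad():
    swapped = model(faces_a, "person_b")
```

The salient point for the discussion that follows is how accessible this is: the technique requires only publicly available images of the people involved and widely available open-source tools.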

Just over a month later, Cole reported that the creation of face-swapped porn, labeled “deepfakes” after the original Redditor, had “exploded” with increasingly convincing results. An increasingly easy-to-use app had launched with the aim of allowing those without technical skills to create convincing deepfakes. Soon, a marketplace for buying and selling deepfakes appeared in a subreddit, before being taken off the site. Other platforms, including Twitter, PornHub, Discord, and Gfycat, followed suit in banning deepfakes. In removing the content, each platform noted a concern that the people depicted in the deepfakes had not consented to their involvement in the videos.

We can quickly imagine many terrible uses for this face-swapping technology, both in creating nonconsensual pornography and false accounts of events, and in undermining the trust we currently place in video as a record of events.

But there can be beneficial and benign uses as well: political commentary, parody, anonymization of those needing identity protection, and even consensual vanity or novelty pornography. (A few others are hypothesized towards the end of this article.)

The knee-jerk reaction many people have towards any new technology that could be used for awful purposes is to try to criminalize or regulate the technology itself. But such a move would threaten the beneficial uses as well and raise unnecessary constitutional problems.

Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. In fact, this is not entirely new territory for our legal framework. The US legal system has been dealing with the harm caused by photo manipulation and false information in general for a long time, and the principles developed there should apply equally to deepfakes.

What Laws Apply

If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.

On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily.

To win a false light lawsuit, a plaintiff (the person harmed by the deepfake, for example) must typically prove that the defendant (the person who uploaded the deepfake, for example) published something that gives a false or misleading impression of the plaintiff, in a way that damages the plaintiff’s reputation or causes them great offense and that would be highly offensive to a reasonable person, and that the publication caused the plaintiff mental anguish or suffering. It seems that in many situations the placement of someone in a deepfake without their consent would be the type of “highly offensive” conduct that the false light tort covers.

The Supreme Court has further required that, in cases pertaining to matters of public interest, the plaintiff also prove an intent that the audience believe the impression to be true, an analog to the actual malice requirement found in defamation law.

False light is recognized as a legal action in about two-thirds of the states. It can be difficult to distinguish false light from defamation, and many courts treat the two identically. The courts that treat them differently focus on the injury: defamation compensates for damage to reputation, while false light compensates for being subjected to offensiveness. But of course, a plaintiff could also sue for defamation if a deepfake has a natural tendency to damage their reputation.

The tort of Intentional Infliction of Emotional Distress (IIED) will also be available in many situations. A plaintiff can win an IIED lawsuit if they prove that a defendant (again, for example, a deepfake creator and uploader) intended to cause the plaintiff severe emotional distress through extreme and outrageous conduct, and that the plaintiff actually suffered severe emotional distress as a result. The Supreme Court has found that where the extreme and outrageous conduct is the publication of a false statement about a public figure, the plaintiff must also prove an intent that the audience believe the statement to be true, an analog to defamation law’s actual malice requirement. The Court has since extended that requirement to all statements pertaining to matters of public interest.

And to the extent deepfakes are sold, or their creators receive some other benefit from them, they also raise the possibility of right of publicity claims by those whose images are used without their consent.

Lastly, someone whose copyrighted material (either the facial images or the source video into which the faces are embedded) is used to create a deepfake may have a claim for copyright infringement, subject of course to fair use and other defenses.

Yes, deepfakes present a real social problem around consent and trust in video, but EFF sees no reason why the legal remedies already available will not cover the injuries that deepfakes cause.