February 01, 2023

Unless you live in one of five states, you have limited legal recourse if someone makes a pornographic deepfake of you. NYU Law students are working with the Cyber Civil Rights Initiative (CCRI) to change that.


By Sophie Liao (’24) and Talya Whyte (’24)

In April 2018, a deepfake attack against Rana Ayyub, a journalist based in Mumbai, caused her so much emotional distress that she suffered heart palpitations and withdrew from online life. Ayyub had written a critical article about India’s ruling party and in retaliation, someone digitally superimposed her face onto the body of a woman in a pornographic video, which was then circulated on social media.

This is just one of countless stories involving deepfakes: images and videos that have been digitally manipulated to show someone doing or saying something they never actually did. An estimated 90–95% of deepfake videos are nonconsensual pornography (NCP), sexually graphic images of individuals made and/or distributed without their consent, and 90% of NCP deepfakes depict women.

Victims of deepfakes can suffer emotional, financial, and even social harms, and the technology for creating highly realistic deepfakes is improving rapidly. AI image-editing apps like Lensa can, with the user's consent, generate deepfake images from just a few selfies in about 10 minutes. Meanwhile, the technology for detecting deepfakes lags behind. This makes deterrence and victim protection a pressing issue, yet as of December 2022, only five states (Virginia, California, New York, Texas, and Georgia) had enacted statutes regulating deepfakes. The most salient obstacle to broader legislation may be concerns about censorship and free speech.

Since there is no federal deepfake statute, states have taken different approaches. California, New York, and Texas enable victims to sue the creator and distributor(s) of a deepfake directly to remove the harmful content and/or receive monetary damages, though Texas limits its statute to deepfake videos of political candidates released before an election. Virginia and Georgia instead rely on prosecutors to bring criminal charges against the creators and distributors of NCP deepfakes. All of these statutes have seen minimal use, so it is unclear which states' statutes will prove most resilient to legal challenges.

In the 2022-23 academic year, two students from NYU’s Technology Law and Policy Clinic, Sophie Liao and Talya Whyte, helped the Cyber Civil Rights Initiative (CCRI) develop and evaluate options for deepfake legislation. The clinic students analyzed deepfake legislation, legal challenges to NCP statutes (a close analogue for deepfake NCP), and the extent to which privacy torts, like false light and the right of publicity, could prohibit deepfakes. They also examined potential First Amendment challenges and weighed the efficacy of privacy and dignity arguments in combating them. To conclude their part of the CCRI’s deepfake statute initiative, the students produced a comprehensive memo outlining their findings and recommendations for legislators seeking to pass or strengthen deepfake legislation. This analysis will help the CCRI and legislators ensure that if someone makes a deepfake of you without your consent, you will be able to do something about it.