Introduction
With the rapid development of AI technologies in recent years, creating AI-generated content has become easier than ever. While some uses of these technologies are beneficial, giving individuals access to vast amounts of information in seconds and automating routine tasks, other uses are illicit and defamatory. From phone scams and online extortion to the creation and distribution of “deepfakes,” AI-generated content has begun to spin out of control, without any rules or regulations to curtail it.
What are deepfakes?
Deepfakes are a form of AI-generated media: highly realistic but fabricated videos, images, or audio recordings. Common uses of deepfake technology include impersonating public figures, forging videos for political manipulation, spreading false information, and creating fake sexual content of individuals. In the political sphere, deepfakes are being used by parties to impersonate political candidates and to advance partisan messages that harm opponents and spread false information. Deepfakes are also prominent on social media platforms, where users create and share AI-generated content using actors’ and actresses’ voices and likenesses without their permission. Some of this content takes the form of deepfake pornography: realistic AI-generated videos and images of public figures seemingly engaging in sexual acts, shared without any disclosure that the content is AI-manipulated. There is even concern that deepfakes could erode trust in surveillance videos, body-camera footage, and other evidence. Without legislation to deter and punish the misuse of deepfakes, our society may well end up unable to tell what is real from what is fake.
What is the DEEPFAKES Accountability Act?
Currently, there is no federal legislation addressing deepfakes. In 2019, Representative Yvette Clarke of New York introduced the DEEPFAKES Accountability Act for the first time. Roughly four years later, in September 2023, Clarke, hoping for better success, reintroduced the bill in the U.S. House of Representatives.
The Act is intended to protect individuals nationwide who have fallen victim to deepfake content. Clarke noted that the Act would “provide prosecutors, regulators, and particularly victims with resources, like detection technology, to stand up against the threat of nefarious deepfakes.” If the Act passes, it will require creators to label all deepfakes uploaded online, through a non-removable digital watermark and a text description, making clear that the video or image is not real and has been modified. Failing to do so will be a crime. At this time, however, it is unclear whether the Act will pass.
NY Passes New Deepfake Law
But there has been some progress. In early October 2023, Governor Hochul of New York signed Senator Michelle Hinchey’s bill into law, making it illegal to disseminate AI-generated explicit content, or deepfakes, of a person without that person’s consent. Those found guilty could face a year in jail and a $1,000 fine, and victims are also granted the right to pursue legal action against wrongdoers. Hinchey’s bill strives to send a strong message that New York will not tolerate this form of abuse and that victims will have recourse. The bill is the first in the state to establish specific protections against this type of deepfake content and will, hopefully, pave the way for other states to take similar action. Will New Jersey be next?
Gianna D’Onofrio is a third-year law student at Seton Hall University School of Law with a passion for Corporate Law. Upon graduating in the Spring and taking the Bar Exam, she will serve as a law clerk to Judge Cynthia Santomauro in Essex County, Civil Division.