
Congress and States Act to Combat AI-Generated Sexual Deepfakes

Editorial


In a significant move to address the rising threat of AI-generated sexual deepfakes, Congress has enacted the Take It Down Act, signed into law by President Donald Trump on May 19, 2025. The legislation is the first federal law to tackle the non-consensual posting and distribution of intimate images, including those generated by artificial intelligence. It requires covered platforms to remove flagged intimate images within 48 hours of receiving a valid notice and grants the Federal Trade Commission (FTC) authority to enforce the takedown requirement.

The new law introduces criminal penalties for individuals who publish intimate depictions without consent. While the criminal provisions are already in effect, platforms have until May 19, 2026, to implement the required reporting and removal systems. The bill, championed by Senator Ted Cruz, aims to empower victims of revenge pornography and deepfake pornography, particularly young women. “The Take It Down Act gives victims… the ability to fight back,” Cruz said at the signing ceremony, which was attended by Elliston Berry, a Texas student whose experience with deepfake imagery brought national attention to the issue.

Berry shared her emotional journey, emphasizing the urgency of the legislation. “I had PSAT testing the next day… the last thing I need was to wake up and find out that someone made fake nudes of me,” she told CBS News. Her case highlights the real-world impact of deepfake technology and the importance of legal protections for vulnerable individuals.

States Respond with Their Own Legislation

Following the federal law, several states have moved to strengthen their own legal frameworks against deepfakes. Maryland updated its “revenge porn” statute, effective July 1, 2025, to cover computer-generated depictions, expanding civil remedies and imposing criminal penalties of up to two years in prison and a $5,000 fine. In Texas, the Stopping AI-Generated Child Pornography Act, effective September 1, 2025, created new offenses for “obscene visual material” that appears to depict minors, including AI-generated content.

California has also expanded its laws governing digital likenesses, reinforcing protections against fabricated intimate images. State attorneys general are coordinating efforts to pressure technology companies into limiting access to tools that facilitate the creation of sexual deepfakes. In late August 2025, a bipartisan coalition of 47 attorneys general, led by California Attorney General Rob Bonta and Massachusetts Attorney General Andrea Joy Campbell, urged major search engines and payment platforms to act against the proliferation of non-consensual imagery.

The coalition’s letters detailed the companies’ failure to adequately restrict deepfake creation and called for stronger safeguards to protect the public from harmful content.

Political Deepfakes Remain Largely Unregulated at the Federal Level

While action against sexual deepfakes has advanced, the response to political deepfakes remains fragmented. Proposals such as the REAL Political Advertisements Act and the Protect Elections from Deceptive AI Act have been introduced in Congress, but neither has become law, leaving federal action largely to the agencies. In February 2024, the Federal Communications Commission (FCC) clarified that AI-generated voice calls fall under existing rules restricting artificial or prerecorded voices, following a controversial incident involving robocalls that mimicked then-President Joe Biden.

In August 2024, the FCC opened a rulemaking that would require disclosure of AI-generated content in political advertisements; the outcome remains pending as states grapple with the First Amendment implications of regulating election speech. California’s attempts to legislate against political deepfakes have already faced legal challenges, with a federal judge blocking a law that allowed private lawsuits over election-related deepfakes, citing free-speech concerns.

As of mid-2025, a total of 28 states had enacted laws addressing political deepfakes, but these laws remain inconsistent, varying widely in scope and enforcement.

The landscape of deepfake legislation reflects a broader trend of states moving rapidly to address the misuse of technology. Since 2019, 47 states have enacted laws targeting deepfakes in various contexts, primarily focusing on issues surrounding intimate imagery. The Take It Down Act establishes a national baseline for addressing non-consensual intimate depictions, while political deepfakes continue to be regulated through a patchwork of state laws and platform policies.

Advocates for the Take It Down Act view it as a critical step in ensuring protections for individuals affected by deepfake technology. They emphasize the need for careful consideration in any future legislation regarding political content to avoid infringing upon protected speech. As the 2026 election cycle approaches, states are likely to continue experimenting with their own regulations, leading to further legal challenges and increasing pressure on platforms to define their policies regarding deepfakes.

