Minnesota Women Lead Fight Against AI-Generated Deepfake Pornography

Mark Eisenberg


In June 2024, Jessica Guistolise, a technology consultant, was alerted to a disturbing discovery: photos of herself and dozens of other women had been manipulated using artificial intelligence to create explicit deepfake pornography. These images, sourced from social media profiles, were found on the computer of a mutual acquaintance, Ben, who used an AI-powered “nudify” platform called DeepSwap to generate the content. This revelation marked the beginning of a collective effort by Guistolise and her friends to confront the emerging threat posed by AI-driven nonconsensual pornography, a challenge that exposes significant gaps in current legal frameworks and digital safety measures.

The Rise of Nudify Apps and Their Impact

Nudify apps like DeepSwap use generative AI to merge real faces with sexually explicit imagery, producing realistic but fabricated photos and videos. The platforms require no technical expertise, making them accessible to virtually anyone, and their rapid growth coincides with the broader AI boom that followed the launch of ChatGPT in late 2022. Victims face profound psychological trauma, including anxiety, paranoia, and health problems. Despite the severity of the harm, the legality of creating such content is often ambiguous when it is not publicly disseminated, as was the case with Ben’s deepfakes, which were stored privately on his device.
“It’s not something that I would wish for on anybody,” said Guistolise, reflecting on the emotional toll of seeing AI-generated images of herself.
Although Guistolise and her friends filed police reports and obtained restraining orders, prosecution stalled because no law clearly prohibits creating nonconsensual deepfakes that are never distributed. Molly Kelley, a law student among those affected, underscored the difficulty: “He did not break any laws that we’re aware of. And that is problematic.”

In response, the group approached Minnesota state Senator Erin Maye Quade, who had previously sponsored legislation criminalizing the nonconsensual dissemination of intimate deepfake content. Building on that work, Maye Quade introduced a bill targeting AI companies that offer nudify services, proposing fines of $500,000 for each violation occurring in the state. Enforcing such a law against companies headquartered overseas remains a significant challenge, however, highlighting the need for coordinated federal and international responses.

Industry and Regulatory Responses to Nudify Services

Major technology companies and app-store operators have taken steps to curb the spread of nudify services. Meta enforces strict policies against ads featuring nudity and sexual content and has removed thousands of advertisements linked to nudify platforms. Apple routinely rejects apps that violate its content guidelines, while Google has not publicly addressed the issue. Despite these efforts, nudify services remain accessible through other channels, including third-party affiliate sites and Discord servers. Research indicates these platforms attract millions of unique visitors each month and generate significant revenue, underscoring the scale and persistence of the problem.

Psychological and Social Impact on Victims

The emotional toll on victims is profound. Kelley, who was six months pregnant when she discovered her image had been manipulated, experienced severe stress that affected her health. Another victim, Megan Hurley, reported heightened paranoia and a constant fear that the AI-generated images of her could be disseminated.
“Everyone is subject to being objectified or pornographied by everyone else,” said Ari Ezra Waldman, law professor at UC Irvine, highlighting the pervasive threat deepfake technology poses to personal dignity and privacy.
Victims often face isolation, avoid social media, and struggle with trusting others, underscoring the urgent need for comprehensive support services alongside legal reforms.

Broader Context: Deepfake Pornography and AI Regulation

Deepfake pornography has been a growing concern since 2018, with platforms hosting tens of thousands of explicit videos involving thousands of individuals. While some sites have shut down following investigations, the technology continues to evolve and proliferate. At the federal level, the 2025 “Take It Down Act” criminalizes the online publication of nonconsensual sexual images, including AI-generated content. However, its effectiveness is limited when content is not publicly shared. Meanwhile, local jurisdictions like San Francisco have pursued civil litigation to hold nudify companies accountable. Experts warn that rapid AI development and investments by major corporations complicate regulatory efforts, with ongoing debates over balancing innovation and protection against misuse.

FinOracleAI — Market View

The emergence of AI-powered nudify services exposes critical vulnerabilities in digital privacy and consent frameworks. As these platforms gain traction, they pose increasing risks to individuals’ reputations and mental health, while challenging existing legal and regulatory structures.
  • Opportunities: Legislative innovation at state and federal levels to address nonconsensual AI-generated content; enhanced collaboration between tech companies and regulators to enforce content policies.
  • Risks: Proliferation of accessible AI nudify tools facilitating abuse; jurisdictional and enforcement challenges against overseas operators; potential chilling effects on AI innovation due to regulatory uncertainty.
Impact: The increasing visibility of nonconsensual AI deepfake pornography is prompting legal reforms and industry responses, yet significant enforcement and ethical challenges remain. Stakeholders must balance technological advancement with robust protections for individual rights to mitigate harms in this evolving landscape.