Minnesota Women Fight Back Against AI-Generated Deepfake Pornography

Mark Eisenberg
In June 2024, Jessica Guistolise, a technology consultant from Minneapolis, received alarming news that exposed a disturbing misuse of artificial intelligence. A mutual acquaintance, Jenny, revealed that her estranged husband, Ben, had used a site called DeepSwap to create deepfake pornographic images of more than 80 women from their social circles, including Guistolise and her friends, using photos taken from their social media accounts.

DeepSwap belongs to a growing category of AI-powered “nudify” applications that generate explicit content by merging real faces with pornographic images. These tools require no technical expertise, making the creation of nonconsensual deepfake pornography alarmingly easy and accessible.

After the distressing discovery, Guistolise and her friends confronted a harsh legal reality: creating such deepfakes without distributing them may not constitute a crime under current laws. Ben’s actions, while morally reprehensible, were not clearly illegal, since the images were never shared publicly and everyone depicted was an adult. Molly Kelley, a law student among the victims, described the situation as “problematic,” highlighting the gap between evolving AI capabilities and existing legal frameworks. That gap has left many victims without adequate protection or recourse.

Psychological Trauma and Social Consequences

The impact on victims has been profound. Kelley, who was pregnant at the time, reported severe stress-related health issues after discovering her face had been used in explicit deepfake images. Megan Hurley, another victim, described feelings of paranoia and social anxiety, fearing the potential spread of the content. Experts warn that nonconsensual deepfake content can cause significant psychological trauma, including suicidal ideation and a persistent fear of exposure; the mere possibility that the images could be disseminated compounds the victims’ distress.

Tech Industry and Regulatory Responses

Major tech companies like Meta and Apple have policies against nudity and sexual content in advertisements, and Meta has taken steps to remove ads promoting nudify services. However, enforcement remains inconsistent, especially given the international nature of many AI deepfake platforms. DeepSwap, operated by a company registered in Ireland but with prior ties to Hong Kong, has been largely unresponsive to inquiries. Its terms of service prohibit uploading images without consent, but enforcement appears minimal.

Legislative Efforts to Combat AI-Driven Abuse

In response to the emerging threat, Minnesota State Senator Erin Maye Quade introduced legislation that would fine companies offering nudify services $500,000 for each nonconsensual explicit deepfake generated within the state. The bill targets creation itself, not just distribution, marking a significant shift in legal strategy. The bill remains under consideration, and Maye Quade acknowledges enforcement challenges, particularly with overseas operators, underscoring the need for federal-level intervention. At the federal level, the “Take It Down Act,” signed into law in May 2025, criminalizes the online publication of nonconsensual sexual images, including AI-generated deepfakes. However, it does not address the creation of such content if it is never disseminated.

The Expanding Market for Nudify Services

Research indicates that nudify services have become mainstream, with millions of monthly visitors and subscription-based revenue models. Platforms advertise on social media and app stores, despite ongoing efforts to remove violating content. The shutdown of prominent deepfake sites like MrDeepFakes has shifted activity to less regulated platforms such as Discord, where users exchange tutorials and request custom deepfake content.

A Call for Awareness and Comprehensive Action

Guistolise and her friends continue to advocate for stronger protections against AI-enabled sexual exploitation. Their experience underscores the urgent need for legal reforms, technological safeguards, and public awareness to address the dark side of generative AI. “It’s so important that people know this is out there, it’s accessible, and it really needs to stop,” Guistolise said.

FinOracleAI — Market View

The rise of AI-generated deepfake pornography via nudify apps presents a multifaceted challenge, combining technological, legal, and ethical dimensions. While the AI sector continues to expand rapidly, this misuse highlights significant regulatory gaps and reputational risks for companies involved in or enabling such services.
  • Opportunities: Development of advanced AI content detection tools and privacy-focused AI applications could create new market niches.
  • Risks: Legal liabilities, regulatory crackdowns, and public backlash against AI platforms facilitating nonconsensual content creation.
  • Regulatory Trends: Increasing momentum for state and federal laws targeting both creation and distribution of AI-generated explicit content.
  • Enforcement Challenges: Cross-border jurisdictional issues complicate efforts against overseas operators.
  • Consumer Awareness: Growing public understanding of AI risks may drive demand for stronger safeguards and responsible AI development.
Impact: The emergence of AI nudify services is a negative market force, exposing vulnerabilities in current legal frameworks and corporate governance. However, it also accelerates the push for comprehensive AI regulation and innovation in content verification technologies.
Mark Eisenberg is a financial analyst and writer with over 15 years of experience in the finance industry. A graduate of the Wharton School of the University of Pennsylvania, Mark specializes in investment strategies, market analysis, and personal finance. His work has been featured in prominent publications like The Wall Street Journal, Bloomberg, and Forbes. Mark’s articles are known for their in-depth research, clear presentation, and actionable insights, making them highly valuable to readers seeking reliable financial advice. He stays updated on the latest trends and developments in the financial sector, regularly attending industry conferences and seminars. With a reputation for expertise, authoritativeness, and trustworthiness, Mark Eisenberg continues to contribute high-quality content that helps individuals and businesses make informed financial decisions.