CNBC Investigation Reveals Risks of ‘Nudify’ AI Apps Creating Nonconsensual Deepfakes

Mark Eisenberg


In the summer of 2024, a group of women in Minneapolis discovered that a male acquaintance had combined their Facebook photos with artificial intelligence to create sexualized deepfake images and videos without their consent. Using an AI platform called DeepSwap, the man generated explicit content depicting more than 80 women in the Twin Cities area, causing profound emotional distress and prompting calls for legislative action.
Despite the traumatic impact, the women found they had no legal recourse: the creator had not distributed the images publicly, and none of the women were minors. Molly Kelley, a law student and victim, stated, “He did not break any laws that we’re aware of. And that is problematic.” The loophole underscores the challenge lawmakers face in addressing AI-generated nonconsensual imagery under existing statutes.

In response, Minnesota Democratic State Senator Erin Maye Quade has proposed legislation to prohibit nudify services in the state. The bill would impose fines on companies that facilitate the creation of explicit deepfakes without consent, drawing parallels to laws against invasive photography.
“We just haven’t grappled with the emergence of AI technology in the same way,” Senator Maye Quade remarked, highlighting the rapid evolution of AI and the lag in legal frameworks.

The Real and Lasting Emotional Toll on Victims

Jessica Guistolise, one of the affected women, continues to experience anxiety and panic attacks triggered by reminders of the incident. She described how the sound of a camera shutter can overwhelm her with fear, recalling the moment she first saw fabricated images of herself engaged in acts she had never committed.
“It makes you feel like you don’t own your own body, that you’ll never be able to take back your own identity,” said Mary Anne Franks, a law professor and president of the Cyber Civil Rights Initiative, comparing the trauma to that caused by revenge porn.

Deepfakes Now Easier Than Ever to Produce

Creating explicit deepfakes once required advanced technical skill; nudify apps have democratized the process by bundling AI models into user-friendly interfaces. All that is needed is an internet connection and a photograph, often sourced from social media platforms such as Facebook. Many of these apps disguise their true purpose, marketing themselves as playful face-swapping tools while doing little to enforce consent policies. Alexios Mantzarlis, an AI security expert at Cornell Tech, noted, “There are apps that present as playful and they are actually primarily meant as pornographic in purpose.”

Opaque Operations of DeepSwap and Similar Platforms

DeepSwap, the AI service used in this case, maintains a low profile and discloses little public information. A July press release carried a Hong Kong dateline and named executives including CEO Penyne Wu and marketing manager Shawn Banks, but CNBC was unable to verify their identities, and the company did not respond to inquiries. The company’s website lists its corporate entity as MINDSPARK AI LIMITED, registered in Dublin, Ireland, with terms governed under Irish law; earlier versions of the site referenced Hong Kong, reflecting inconsistent corporate disclosures.

Federal AI Initiatives May Complicate State Efforts

While Minnesota pursues legislation to penalize companies that enable nonconsensual deepfake creation, there are concerns that federal AI policy could undermine such efforts. In July, the Trump administration issued executive orders promoting AI development as a national security priority, potentially limiting states’ regulatory scope. Molly Kelley expressed hope that federal initiatives will not hinder grassroots attempts to address the harms caused by AI-generated explicit content.

FinOracleAI — Market View

The proliferation of AI-powered nudify apps signals a growing challenge at the intersection of technology, privacy, and law. The ease of creating nonconsensual explicit deepfakes exposes individuals to unprecedented personal and reputational harm, while current legal frameworks lag behind technological advances.
  • Opportunities: Robust AI ethics frameworks and regulation could foster safer digital environments and restore user trust; demand for AI detection and content verification technologies is likely to grow; rising public awareness could pressure tech companies to improve transparency and consent mechanisms.
  • Risks: The absence of clear legislation may embolden malicious actors and complicate enforcement across jurisdictions; conflicts between state-level regulations and federal AI initiatives may delay effective policymaking.
Impact: The unchecked growth of nudify applications presents significant ethical and regulatory challenges, but also drives urgent legislative and technological responses aimed at safeguarding individuals from AI-enabled abuse.
Mark Eisenberg is a financial analyst and writer with over 15 years of experience in the finance industry. A graduate of the Wharton School of the University of Pennsylvania, Mark specializes in investment strategies, market analysis, and personal finance. His work has been featured in prominent publications like The Wall Street Journal, Bloomberg, and Forbes. Mark’s articles are known for their in-depth research, clear presentation, and actionable insights, making them highly valuable to readers seeking reliable financial advice. He stays updated on the latest trends and developments in the financial sector, regularly attending industry conferences and seminars. With a reputation for expertise, authoritativeness, and trustworthiness, Mark Eisenberg continues to contribute high-quality content that helps individuals and businesses make informed financial decisions.