Common Sense Media Flags Google Gemini AI as High Risk for Children and Teens

Lilu Anderson

Common Sense Media, a nonprofit dedicated to evaluating media safety for children, released a risk assessment of Google’s Gemini AI products on September 5, 2025 that was sharply critical of the platform. While the organization acknowledged that Gemini clearly identifies itself as an AI rather than a friend—an important factor in preventing delusional thinking among vulnerable youth—it found significant shortcomings in the platform’s safety features for children and teenagers.

Adult-Focused AI with Limited Child Safeguards

The assessment highlighted that Gemini’s “Under 13” and “Teen Experience” tiers are essentially adult versions of the AI with only superficial safety modifications. Common Sense Media emphasized that truly safe AI for children must be designed from the ground up with their developmental needs in mind, rather than relying on retrofitted adult models.

According to the report, Gemini can still generate content inappropriate for younger users, including material related to sex, drugs, alcohol, and potentially harmful mental health advice. This is particularly alarming given recent incidents where AI interactions have been implicated in teen suicides, including lawsuits against OpenAI and Character.AI following tragic deaths linked to AI conversations.

Implications Amid Apple’s Potential Adoption of Gemini

Leaks suggest that Apple is considering integrating Gemini’s large language model to power its upcoming AI-enabled Siri, scheduled for release next year. This development could expose a broader teen audience to the risks identified unless Apple implements robust safety mitigations.

Common Sense Media further criticized Gemini’s products for failing to differentiate guidance appropriately between younger children and teenagers. Despite the presence of content filters, both age-specific offerings were rated as “High Risk” overall.

Industry Response and Google’s Position

Robbie Torney, Senior Director of AI Programs at Common Sense Media, stated, “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults.”

Google responded by emphasizing its existing policies and safeguards aimed at protecting users under 18. The company noted ongoing efforts to red-team the AI and consult external experts to enhance safety. Google acknowledged that some Gemini responses did not perform as intended and that additional safeguards have been implemented. However, it also questioned whether Common Sense Media had access to the exact queries used during testing and suggested that some referenced features might not be available to minors.

Context Within Broader AI Safety Landscape

Common Sense Media has previously assessed various AI platforms, rating Meta AI and Character.AI as “unacceptable” due to severe risks, Perplexity as high risk, ChatGPT as moderate risk, and Claude as minimal risk for adult users.

This latest evaluation underscores the ongoing challenges AI developers face in balancing advanced capabilities with child safety considerations, particularly as AI becomes increasingly integrated into mainstream consumer technology.

FinOracleAI — Market View

The high-risk rating of Google’s Gemini AI by Common Sense Media may weigh on market perception, especially with the model reportedly under consideration to power Apple’s upcoming Siri upgrade. Safety concerns could invite increased regulatory scrutiny and slow adoption among consumers and enterprises focused on child protection. However, Google’s acknowledgement of the issues and its ongoing improvements could mitigate some of these risks if effectively communicated.

Investors should monitor further developments in Gemini’s safety features, Apple’s integration plans, and potential regulatory responses targeting AI safety in youth applications.

Impact: negative

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A graduate of Stanford University with a degree in Computer Science, Lilu specializes in emerging technologies, software development, and cybersecurity. Her work has been published in renowned tech publications such as Wired, TechCrunch, and Ars Technica. Lilu’s articles are known for their detailed research, clear articulation, and insightful analysis, making them valuable to readers seeking reliable and up-to-date information on technology trends. She actively stays abreast of the latest advancements and regularly participates in industry conferences and tech meetups. With a strong reputation for expertise, authoritativeness, and trustworthiness, Lilu Anderson continues to deliver high-quality content that helps readers understand and navigate the fast-paced world of technology.