November 14, 2025

Empower your students to navigate A.I. ‘hallucinations’ and safety concerns

    Key points summarized for busy educators

  • Students must learn to fact-check A.I.-generated information, since these tools can produce false claims and fabricated sources, commonly known as ‘A.I. hallucinations.’
  • Colleges and college students are creating A.I. tools that are enhancing
    learning, not replacing it.
  • Character.AI’s new safety features are a reminder to schools that students
    need help building self-awareness and balance as they navigate A.I.

Google’s A.I. Gemma and ChatGPT continue to generate false information

The Gist: Recent news stories are shedding light on the growing need for fact-checking skills when navigating A.I. tools. Google recently removed its A.I. model Gemma after it generated false criminal allegations against a U.S. senator. Similarly, reality TV star and entrepreneur Kim Kardashian called ChatGPT her “frenemy,” admitting that its fabricated legal cases and misinformation caused her to fail tests while she used it to help her study. Both examples remind us that A.I. tools are not infallible, and students need to use their best judgment and critical thinking skills when navigating them.

What to Know: According to Google, Gemma was built for developers and researchers who want to design and customize their own A.I. tools; it was never intended as a chatbot for answering the average user’s factual questions. When used outside that purpose, the tool generated false information and even listed fake links. To prevent further confusion and the spread of misinformation, Google has removed public access to Gemma, limiting it to developer use only.

Similarly, Kim Kardashian recently shared her experience encountering misinformation while using chatbots to help her study law, explaining that they sometimes gave answers that were completely incorrect or made up. This phenomenon, in which an A.I. tool produces content or sources that don’t actually exist, has become known as a “hallucination.” Since these tools are designed to always respond, even when they don’t have the right information, users may unknowingly trust inaccurate results.

TSI’s Take: As A.I. becomes part of everyday learning, it’s essential that students understand that these systems are only as reliable as the humans training them. By strengthening fact-checking and media literacy skills, students can navigate A.I. thoughtfully, verify what they read, and make informed choices. Here are a few tips to start helping your students navigate A.I. and the possibility of “hallucinations.”

  • Coach students to fact-check first. Encourage them to verify A.I. responses before sharing or using them in assignments.
  • Model cross-checking. Show how to compare A.I. results with credible sources and discuss what makes a source trustworthy.
  • Prompt curiosity. Ask students reflective questions like, “How do you know this is accurate?” or “What evidence supports that claim?”

As educators, we can help students develop the skills to think critically and lead with character as they navigate tech like A.I. Preview the #WinAtSocial Lesson, Pulling back the curtain on A.I. and its fact-checking mistakes, where students learn more about the potential mistakes A.I. can make and why evaluating online information, verifying credibility, and making informed choices are essential when using technology.

Brown University implements new A.I. tools while students drive innovation

The Gist: Brown University recently launched its first A.I. tool, Transcribe, to help students and faculty record, translate, and organize their work more efficiently. At the same time, two college students have developed Turbo A.I., a note-taking app that has grown to more than five million users in just six months. Together, these stories highlight how higher education and students alike are using A.I. to transform how we learn, study, and create.

What to Know: Transcribe is the first A.I. tool developed directly by Brown’s Office of Information Technology. It can convert speech to text in multiple languages and provides secure, university-approved access that prioritizes student privacy. The university also partners with Google to give students access to tools like Gemini and NotebookLM under strong data-protection standards.

Meanwhile, the student-created Turbo A.I. app is taking other college campuses by storm. Built by two 20-year-olds, the app helps users capture lectures, summarize content, and even create flashcards and quizzes automatically. Their success reflects a growing movement of students using A.I. to enhance their learning and study habits. It’s no secret that A.I. is becoming a bigger part of education, and these stories highlight a shift toward responsible use that promotes creativity, learning, and integrity.

TSI’s Take: As students and educational institutions lean into A.I. as a thought partner, K-12 educators have a responsibility to equip students with the modern life skills to navigate all technology in a balanced and productive way. When students have the right skills, A.I. can strengthen, rather than replace, their learning. You can help students better understand the A.I. in their lives by:

  • Demystifying A.I.: Help students identify the ‘hidden A.I.’ in their daily lives, from music playlists to map apps.
  • Reminding students to protect their privacy: Teach students how artificial intelligence might collect and save the data that a student inputs, and to share only information they are comfortable with being made public.
  • Equipping them to see A.I. as a helper, not a shortcut: Show students how A.I. can support learning, like generating quiz questions or organizing notes, while emphasizing that their ideas and effort matter most.

Explore how you can equip students to navigate A.I. as a thought partner that strengthens their studies by previewing the #WinAtSocial Lesson, Making everyday tasks easier with artificial intelligence.

Character.AI introduces new safety measures for teen users

The Gist: Character.AI, a popular chatbot platform, is changing how students under 18 can interact with its technology. Instead of long, open-ended conversations with the chatbot, students under the age of 18 will now only be able to create videos, stories, and other creative projects with the A.I. These changes, the company says, are part of an effort to build safer, more positive, and more intentional experiences for young users.

What to Know: Character.AI announced these updates, which take effect on November 25, as part of a larger effort to encourage creativity while setting healthy boundaries for teen users. Until then, teen users will have a two-hour chat limit. Along with this update, the company also plans to launch new age-verification tools and establish an A.I. Safety Lab, overseen by an independent non-profit focused on safety research related to A.I. usage.

This trend extends across the tech industry: companies like OpenAI and Meta are introducing new features to strengthen parental controls and support digital well-being. These shifts, encouraged by the concerns of the public and everyday users, mark an important move toward prioritizing balance, responsibility, and safety for students navigating modern technology.

TSI’s Take: Much like social media, A.I. is here to stay and will continue to be integrated into students’ lives. When we equip students with A.I. Literacy that helps them recognize its limitations and challenges while understanding the positive moves they can make with A.I., we are preparing students to be future-ready. You can support your students by: 

  • Encouraging them to navigate A.I. with awareness: While platforms like Character.AI can mimic real conversations, they cannot replace real human connections. Encourage students to be aware of what they are sharing with A.I. chatbots, and always turn to a trusted adult if they need actual advice or help.
  • Finding balance: Learn how to use A.I. in ways that support creativity, curiosity, and well-being rather than constant engagement.
  • Playing to their core: Help them live up to high standards as they navigate technology, ensuring that A.I. supports their goals and character growth.

As A.I. continues to evolve, helping students build self-awareness and responsibility today will prepare them to use these tools thoughtfully and confidently tomorrow. Preview the #WinAtSocial Lesson, Breaking down ChatGPT and the role of artificial intelligence in our lives, where students learn how to navigate A.I. responsibly, strengthen their studies, and lead with integrity in a connected world.

When schools intentionally build A.I. Literacy into their school day, students learn to think critically, verify information, and use emerging tools with confidence and character. If you’re exploring how to bring future-ready A.I. education to your community, we’d love to share how #WinAtSocial can support your goals. Request more information to bring these skills to your students.


The Social Institute (TSI) is the leader in equipping students, families, and educators with modern life skills to impact learning, well-being, and students’ futures. Through #WinAtSocial, our interactive, peer-to-peer learning platform, we integrate teacher PD, family resources, student voice insights, and more to empower entire school communities to make positive choices online and offline. #WinAtSocial Lessons teach essential skills while capturing student voice and actionable insights for educators. These insights help educators maintain a healthy school culture, foster high-impact teaching, and build meaningful relationships with families. Our unique, student-respected approach empowers and equips students authentically, enabling our solution to increase classroom participation and improve student-teacher relationships. Through our one-of-a-kind lesson development process, we create lessons for a variety of core and elective classes, incorporating timely topics such as social media, A.I., screen time, misinformation, and current events to help schools stay proactive in supporting student health, happiness, and academic success.