Chatbots, Deepfakes, and Data—Oh My! The New A.I. World Students Are Navigating
Sora 2: With Great A.I. Power Comes Great Responsibility
The Gist: OpenAI has launched Sora 2, a new social media platform where users can generate and share highly realistic A.I. videos using only a short text prompt. The app makes it easy to create anything you can imagine, from animals doing human activities to fictional characters singing opera. However, questions remain about the app’s ability to adequately monitor copyright use, deepfake content, and conspiratorial videos, raising major concerns about safety and misinformation.
What to Know: Unlike the original Sora, which only allowed short video clips, Sora 2 functions like TikTok or Instagram by letting users generate, post, and endlessly scroll A.I.-made content. This opens exciting opportunities for creativity and connection with friends. The challenge is the lack of strong guardrails. Users can place real faces into videos without permission, recreate copyrighted scenes, or build convincing false “evidence.” Although OpenAI includes watermarks and reporting tools, early adopters have found they do not always work as intended. This could lead to the spread of deepfakes, misinformation, or even targeted harm toward individuals. Students can still use apps like Sora 2 in positive ways when they are equipped with the right A.I. Literacy skills, as A.I. can be a powerful tool for storytelling, projects, and community. But it must be used with awareness and responsibility.
TSI’s Take: Sora 2 shows how quickly A.I. is shaping the future of social media. Rather than avoiding it, educators have the opportunity to equip students with the tools to navigate it wisely and protect one another in digital spaces. Schools can help students stay safe and informed by teaching them to:
- Think before they share: Encourage students to pause and question whether a video looks real or manipulated and to fact-check anything that feels sensational or suspicious online.
- Stand up for others: Empower students with the S.H.I.E.L.D. Method so they know how to respond when a deepfake or harmful A.I.-generated post targets them or someone they know.
- Think of A.I. as a thought-partner, not a shortcut: Encourage students to take ownership of their work by considering whether their posts, essays, and projects reflect their values and demonstrate thoughtful, high-character decision-making.
A.I. platforms will continue to grow, but education and modern life skills can help students make high-character decisions as they navigate it all. Ready to start the conversation? Preview the #WinAtSocial Lesson, Spotting fake videos and pictures made by A.I., to show how having a critical lens on media is more important than ever.
Regulating A.I. Companions: When Chatbots Feel Real, Safety Matters Most
The Gist: California has enacted the first law in the nation to regulate A.I. companion chatbots. The goal is to protect children and vulnerable users by requiring companies to add safety features such as age checks, clear labels disclosing that the chatbot is artificial, and reminders to take breaks. This action comes after growing concerns and real cases that show how A.I. chatbots can sometimes give confusing guidance or blur boundaries. As A.I. companions become more realistic and personal, students need support in understanding their limitations and in recognizing that a chatbot should never replace human support systems.
What to Know: California’s law requires A.I. companies to follow safety standards, prevent inappropriate or misleading content, and clearly inform users when they are interacting with A.I. rather than a real person. While California is leading the way, it is currently the only state with these protections. At the same time, there has been a rise in cases that highlight the dangers of unregulated A.I. companions, including emotional confusion, misinformation, and unqualified advice on well-being. This makes A.I. Literacy more essential than ever for K-12 students.
TSI’s Take: A.I. chatbots can be great for brainstorming and sharing ideas, but they should not replace real friendships or trusted adults when students need support. California’s law is a step in the right direction, but students need skills and guidance now, not later. Schools can help students build healthy habits by encouraging them to:
- Use A.I. for ideas, not connections: A.I. can be a helpful way to brainstorm ideas, like planning weekend activities with friends, but it can’t replace a real conversation with a trusted adult when they need emotional or mental support.
- Think before you trust: Chatbots can sound confident even when they are incorrect, so students should pause, reflect, and double-check any information they receive from A.I.
- Support one another: If a friend is relying too much on A.I., encourage them to take a break and reconnect in real life.
Ready to empower your students to understand A.I.’s limitations and brainstorm ways to look out for A.I. bias? Request a demo of the #WinAtSocial Lesson, Breaking down unseen ethics and biases in A.I., where students learn that A.I. is imperfect and can make mistakes that might even cause harm.
When You Talk to A.I., Meta Listens: Why Privacy Skills Matter More Than Ever
The Gist: Meta has announced that starting December 16, it will begin using people’s conversations with its A.I. chatbot to personalize ads and content on platforms like Instagram and Facebook. If a user chats about hiking, they may start seeing ads for hiking boots or related posts in their feed. This marks a major shift in how social media platforms collect and use data and shows why understanding digital privacy is more important than ever.
What to Know: When students use an A.I. chatbot, it may feel private, but those conversations are actually saved and will soon be used to shape what content and ads appear on their feed. Users will not be able to opt out of this new policy. Even though Meta says certain sensitive content will be filtered out, most chatbot conversations will still influence each user’s online experience. This highlights the importance of knowing how platform features work and being aware that what you share with A.I. becomes part of your digital footprint.
TSI’s Take: This update is a reminder that A.I. tools are not just assistants—they are data collectors. But rather than avoid them, students can learn to use them wisely. Schools can help students protect their privacy and take control of their online experience by encouraging them to:
- Understand how platforms work: Teach students that what they say to A.I. can shape the ads, videos, and posts they see in their feed.
- Protect their privacy settings: Encourage students to review and adjust in-app privacy tools to limit how their data is used.
- Use A.I. with intention: A.I. can be helpful for ideas or learning, but students should stay aware of how their input might be stored or used.
Every time A.I. features evolve, it is a new opportunity to teach students how to stay informed, take ownership of their digital choices, and navigate social media with confidence. Help students understand the importance of protecting their privacy as they interact with A.I. by huddling with them through the #WinAtSocial Lesson, Protecting our personal information from A.I.
A.I. on social media doesn’t have to be something to fear. It can be a powerful learning space where students build modern skills like digital privacy, critical thinking, and responsible use of technology. By equipping students with A.I. Literacy, we can help them understand how their data is used, how platform features work, and how to protect their privacy while still enjoying the benefits of creativity and connection online. When students learn to use A.I. with purpose and high character, they are better prepared to lead, connect, and thrive in every digital space. Request a demo of our Social Media Literacy Lessons to empower students to use technology confidently and responsibly.
The Social Institute (TSI) is the leader in equipping students, families, and educators with modern life skills to impact learning, well-being, and students’ futures. Through #WinAtSocial, our interactive, peer-to-peer learning platform, we integrate teacher PD, family resources, student voice insights, and more to empower entire school communities to make positive choices online and offline. #WinAtSocial Lessons teach essential skills while capturing student voice and actionable insights for educators. These insights help educators maintain a healthy school culture, foster high-impact teaching, and build meaningful relationships with families. Our unique, student-respected approach empowers and equips students authentically, enabling our solution to increase classroom participation and improve student-teacher relationships. Through our one-of-a-kind lesson development process, we create lessons for a variety of core and elective classes, incorporating timely topics such as social media, A.I., screen time, misinformation, and current events to help schools stay proactive in supporting student health, happiness, and academic success.