March 22, 2024

What educators need to know about AI-generated YouTube content targeting younger students

Be sure to check out the suggested Huddle question at the bottom of this article to discuss this important topic with your students in class, if you feel it is appropriate.

From chatbot “psychologists” to deepfake political content, we all know that AI is impacting older students both in and out of the classroom. But the newest AI-generated content targets a new demographic: elementary school-aged students and younger. 

Recently, there’s been a rush of YouTube tutorials—many with eye-catching headlines—boasting how much money you can make using artificial intelligence to generate videos for kids. These tutorials advocate for the use of AI tools like ChatGPT and ElevenLabs to create kid-friendly scripts and generate audio and video production. 

80% of U.S. parents of children 11 years old and younger say their kids watch YouTube, so millions of younger students are likely to encounter AI-generated videos on the platform. This has sparked concern among experts, educators, and caregivers alike, who point to potentially unsettling themes, glitchy visuals, and disturbing soundtracks in unreviewed AI-generated content.

As AI-generated content becomes increasingly sophisticated, it’s becoming harder to tell what’s real and what’s not. This is especially true for younger students, who may not realize that artificial intelligence can create fake content that looks very realistic. That’s why we as educators must help students develop the skills to recognize credible information online and think critically about the content they see.

AI-generated YouTube content is targeting younger students

There’s a new “get-rich-quick” scheme on the internet: video tutorials showing how YouTube channels can quickly generate lucrative children’s videos with the help of artificial intelligence. These tutorials advocate the use of tools like ChatGPT, ElevenLabs, and Adobe Express to automate scripting and audio and video production and are accompanied by eye-catching headlines, such as “IT’S NOT HARD” and “$1.2 Million With AI-generated Videos for Kids?” 

Channels like Yes! Neo and Super Crazy Kids lead the charge, boasting millions of subscribers and views. Their content, characterized by 3D animation and catchy titles, appeals to young audiences and is labeled as “educational” or as a way to learn shapes, colors, and numbers. Critics say these videos lack originality and quality, featuring frenetic and sometimes disturbing imagery, such as floating eyeballs and melting blocks of color.

Some experts, like Common Sense Media’s senior adviser of AI, Tracy Pizzo Frey, are concerned that YouTube itself is not holding creators accountable. Recently, YouTube said it will require creators to “disclose when they’ve created altered or synthetic content that’s realistic.” Since this AI-generated content labeling tool is based on the honor system, where creators report what appears in their videos, it is still unclear how effective this measure will be. 

With 53% of U.S. children using YouTube daily, younger students are bound to encounter AI-generated YouTube content, some of which promotes misinformation, lacks quality, or includes inappropriate themes. Such content can expose younger students to potentially harmful material and influence what they believe is true. Until platforms hold creators accountable for what they post, educators play one of the most important roles in empowering students to navigate AI-generated content online: teaching them the skills to identify misinformation, question what they see, and avoid inappropriate content.

TSI’s Take

Last year, the BBC reported that some YouTube channels are using AI to make videos containing false science information, which the platform recommends to older students as “educational content.” Since then, AI-generated content on YouTube has not only increased but is now targeted toward younger students and children who are not yet in school.

It can be extremely difficult for students of all ages, and even adults, to tell whether online content is AI-generated. This highlights educators’ crucial role in guiding students to make smart and informed online decisions, find credible sources, fact-check, and follow content creators who support their health, happiness, and future success.

Here are some ways educators can empower their younger students to choose positive influencers while navigating AI-generated content online: 

  1. Simplify Fact-Checking: Break down fact-checking steps into simple questions, such as “Is this information from a trusted source?” or “Does it match what we already know?” to help students develop basic fact-checking habits.
  2. Make AI Misinformation Visible: Use age-appropriate examples to illustrate how AI-generated content may look or sound different from human-created content, teaching students to recognize signs like strange language or unrealistic scenarios.
  3. Introduce Positive Role Models: Introduce students to age-appropriate positive influencers, such as children’s authors, scientists, or community leaders, and discuss their contributions to inspire critical thinking and curiosity.

Though it may not initially seem like it, younger students are also significantly impacted by the presence of AI-generated content online. Helping them navigate this kind of content may look different compared to older students, but students are never too young to learn the ins and outs of making positive choices online. Check out #WinAtSocial’s Grade 3 Lesson, Stumbling across new videos, shows, and websites: What do you do?, which empowers students to make positive choices when they encounter age-inappropriate videos or apps.

#WinAtSocial Huddle Question

Huddle with your students
More and more YouTube channels have started creating AI-generated content made specifically for younger students and children who are not yet in school. Sometimes, these videos can contain things that aren’t true, seem loud or overwhelming, or make us feel uncomfortable. How can you tell if a video you’re watching online might not be suitable for students your age? What can you do if you see a strange or confusing video?


The Social Institute (TSI) is the leader in empowering students by understanding students. Through #WinAtSocial, our gamified, peer-to-peer learning platform, we equip students, educators, and families to navigate their social world – in the classroom and beyond, online and offline – in healthy, high-character ways. Our unique, student-respected approach empowers and equips, rather than scares and restricts. We incorporate timely topics about social media, tech use, and current events that are impacting student well-being and learning. #WinAtSocial Lessons teach life skills for the modern day, capture student voice, and provide school leaders with actionable insights. Through these insights, students play an essential role in school efforts to support their own health, happiness, and future success as we enable high-impact teaching, meaningful family conversations, and a healthy school culture.