Navigating Election Misinformation: How Educators Can Help Students Tackle A.I., Deepfakes, and Finding Credible Sources
Earlier this year, Stanford University’s Internet Observatory studied generative A.I. chatbots like ChatGPT and found that they produced inaccurate information when asked about the voting process. The finding illustrates how generative A.I. (artificial intelligence that can create text, images, video, and audio) is already shaping the political information voters see, and it is only the tip of the iceberg. With 75% of students using generative A.I. chatbots and between 33% and 50% having trouble distinguishing deepfakes from authentic content, this technology can make it even more challenging for students to find accurate information.
Equipping students with the tools to navigate generative A.I. deepfakes this election season is key to helping them make informed choices and stopping the spread of misinformation. As many high school seniors become first-time voters and younger students form opinions and make decisions about policy, educators can help ensure those choices are grounded in reliable, accurate facts.
Equipping students with the tools to navigate misinformation and find authentic resources
71% of internet users are unaware of how rampant deepfake content is online, and with good reason: as generative A.I. grows more capable and produces more realistic content, it becomes harder to tell real from fake. Many students also use generative A.I. tools like ChatGPT to find information quickly, and they often take what these models say at face value because of the tools’ conversational tone and air of authority. To help students recognize A.I.-produced disinformation and misinformation, forward-thinking teachers can turn A.I.-generated content from tools like ChatGPT into a learning opportunity. By presenting students with inaccurate or misleading A.I. output and encouraging them to question the accuracy of the information they encounter, teachers empower students to become more critical thinkers and build their tech skills.
As the U.S. election draws near, it’s important to understand that students are sharing viral memes and posts without first checking whether they’re real, and believing deepfake videos of politicians on social media. Many teens are being fed fake A.I.-generated videos of political party leaders discussing policy issues on TikTok, and 58% of teens use TikTok daily, where they often pick up information from fake or unverifiable sources. This puts at risk how they make decisions about the election and how they form opinions on policy issues.
Here are some tips that educators can share with students on spotting the difference between authentic and A.I.-driven content:
When looking at images, here are some things to keep in mind:
- Details, details, details: Look at an image very closely. Are hands missing any fingers? Does the image look airbrushed, as if a filter was applied? These are signs that it might be A.I.-generated.
- Reverse image search: Don’t trust an image just because you saw it online. Plug it into a reverse image search engine and see whether it is linked to a reputable source. If you can’t find a link to a credible site, it might not be authentic.
When looking at videos, here are some things to look out for to tell real footage from fake:
- Weird movements or expressions: If the person in the video seems jerky or stiff, or you notice strange facial expressions, look closer. A.I. isn’t good at recreating the small, subtle ways our bodies move.
- Mismatched audio and video: If the audio doesn’t match the lip or body movements in the video, it might not be real.
- Fact check: If a video makes specific claims, always do a quick search to see if they’re true. This is a good habit to make sure the information you’re getting is accurate.
TSI Take
A.I. is a helpful tool that can boost student success in the classroom. It can personalize learning, make tasks easier, and give quick feedback, helping students stay focused and understand tough concepts better. For example, educators can use tools like Quizizz to create personalized learning assessments and ChatGPT to generate ideas for lesson plans, tests, and assignments.
However, the rise of A.I. is not without its challenges. A 2023 survey found that nearly 60% of teens between the ages of 13 and 17 have been misled by misinformation at least once. This shows how important it is for students to learn how to spot false information, especially as A.I.-generated content becomes more convincing. As educators, we can give students the advice and information they need to strengthen their media literacy skills and make informed decisions online and offline so that they can succeed. Interested in learning more about how The Social Institute’s #WinAtSocial collaborative learning platform helps students tackle misinformation? Check out our lesson, Vetting videos, posts, and articles we find online to make sure they’re real. #WinAtSocial also offers a lesson on Navigating false A.I.-generated content during an election, so students can learn more about navigating deepfakes this election season.
The Social Institute (TSI) is the leader in empowering students by understanding students. Through #WinAtSocial, our gamified, peer-to-peer learning platform, we equip students, educators, and families to navigate their social world – in the classroom and beyond, online and offline – in healthy, high-character ways. Our unique, student-respected approach empowers and equips, rather than scares and restricts. We incorporate timely topics about social media, tech use, and current events that are impacting student well-being and learning. #WinAtSocial Lessons teach life skills for the modern day, capture student voice, and provide school leaders with actionable insights. Through these insights, students play an essential role in school efforts to support their own health, happiness, and future success as we enable high-impact teaching, meaningful family conversations, and a healthy school culture.