Classrooms and curricula are constantly evolving to meet the emergent needs of learners and reflect the times we live in. Coding was once the domain of IT specialists; now the basics are taught alongside reading and writing. And with the internet serving as the single greatest tool for learning, media literacy is a growing concern. As the world changes, firsthand accounts of historical events won’t come from diaries and letters, but from TikToks, Instagram posts, and tweets. In this age of information overload, with widespread use of generative AI on the horizon, equipping young people with the skills to identify misinformation is crucial.
Digital Natives Aren’t Digitally Literate
“Digital natives” is a term used to describe the generation that has grown up with the internet. They’re tech-savvy, able to quickly and intuitively navigate the internet, and consider technology central to their daily lives. A Stanford study revealed that while digital natives are adept at using the internet and technology, they’re not digitally literate. Students struggle with verifying news and information sources and are susceptible to bias and misinformation. The report examined students’ ability to infer credibility from Facebook and X (formerly known as Twitter) posts, comments on news sites, blogs, visual content, and other online sources that inform civic opinion.
There’s an obvious concern that the next generation of voters is at higher risk of falling prey to misinformation because students value social media content over traditional sources. In the study, one of the assessments with high school students presented two Facebook posts announcing Donald Trump’s presidential candidacy. One post was from the official Fox News account, with a verified checkmark visible, while the other was from a fake account.
Surprisingly, only a quarter of the high school students pointed to the verification mark as a significant indication of authenticity. On the other hand, 30% of students believed the fake account was trustworthy, citing its graphics as a sign of credibility. As generative AI becomes more prolific, the ease of creating fake images and content is likely to widen the digital literacy gap. To stem the tide of misinformation, weaving digital literacy into the school curriculum can give learners the confidence to engage online as responsible digital citizens.
Identifying Fake News and Misinformation on Social Media
One of the biggest differences between digital natives and their older counterparts, digital immigrants, is that millennials and Gen Z get their news almost exclusively online. This reliance on nontraditional sources means that platforms and apps like X, Instagram, and TikTok are increasingly used for real-time news updates and often considered more trustworthy than traditional outlets. While studies show that all age groups fall prey to misinformation, the younger generation’s reliance on tech-driven news sources leaves them more susceptible to fake news.
While each social media platform deploys its own version of fact-checkers and guardrails, algorithms can only account for so much. This is why knowing how to identify misinformation is still a vital skill. Experts suggest building exercises in these skills into lessons across subjects, from language arts to history, and using trending topics to keep the content relevant.
Having learners evaluate the differences between clickbait articles and genuine online journalism can go a long way toward preparing them to identify false news. From the telltale signs of catchy headlines and sensational content to bad links and insecure sources, teaching young learners the basic markers of misinformation raises awareness of the topic and equips them to engage online as responsible digital citizens.
Develop Critical Thinking Skills Alongside Digital Literacy Skills
Pairing critical thinking with digital literacy skills is the most effective way to counter misinformation, disinformation, and fake content. Critical thinking is essential to ensuring that young people can navigate the online landscape regardless of what new technologies emerge in the coming decades. Because algorithms are hyper-targeted, teaching learners to recognize the echo chambers they inhabit online can help prevent them from falling prey to content designed to manipulate or otherwise influence opinion.
Subject-matter experts suggest using probing questions to inspire critical thinking about digital habits: What are my likes and dislikes? What type of content do I consume? How do I feel when a video aligns with my political or social beliefs? Helping students understand how every like, comment, and share promotes content that narrows diversity of thought and reinforces their existing beliefs shows them how their social profiles shape the sponsored and targeted content they see.
Using AI Detectors to Screen Content
One of the risks associated with generative AI is its ability to subvert democracy through false content created with deepfake technology. Companies like Synthesia specialize in AI-generated video, and their services have already been misappropriated by governments. Teaching young people how to identify misinformation covers the basics, but sophisticated technology like deepfakes requires AI detectors.
Ensuring learners are aware of AI detectors is another opportunity to combat misinformation. Popular deepfake content creators are already gaining followers across platforms, using the likeness of celebrities like Tom Cruise and Timothée Chalamet. While deepfake technology is currently used for entertainment, it’s not hard to imagine how easily it could be used to incite violence, spread terror, or radicalize groups. Having students practice using AI detectors in class and discussing what stands out in the messaging and imagery can better prepare them to combat misinformation in the future.
Policy Supports Prioritizing Digital Literacy
In California, legislators are getting ahead of the issue by creating a framework to support teachers and schools in building digital literacy skills. The state bill identifies skills that help learners grapple with unverified data and fake news, and it guides learners not only in content consumption but also in responsible content creation.
Interestingly, legislators and experts have pointed out that equipping learners with digital literacy skills has a wider impact, particularly on adolescent mental health. For young people growing up in the era of hyperrealistic editing and filters, social media can have devastating effects on self-esteem and wellbeing. Being able to recognize digitally altered and manipulated images can go a long way toward mitigating the harm done by the unrealistic standards perpetuated on social media.
Conclusion
Misinformation is on the rise, and with our younger generation relying on social media and internet sources for news, they need to be empowered to question and judge for themselves the credibility of the content they consume. We can’t do what we’re not taught, and in an effort to inculcate a culture of curiosity, knowledge and wisdom, learners need to be taught early to be on the lookout for the signs of fake news and AI-generated media.
Equally important is understanding their own role in this and the echo chambers they can create on social media. Critical thinking skills, alongside strong digital literacy programs, can help young people navigate the online world safely, responsibly, and confidently. It’s in the best interest of our democracy, but also of their own long-term wellbeing.