Digital Dementia: Is AI Undermining Our Critical Thinking?

I’m thrilled to sit down with Camille Faivre, a renowned expert in education management who has dedicated her career to shaping the future of learning. In the wake of the global shift to digital platforms, especially post-pandemic, Camille has been at the forefront of helping institutions design and implement innovative open and e-learning programs. Today, we’ll dive into her insights on the growing role of AI in education, the risks of over-reliance on technology, the importance of critical thinking, and how educators can strike a balance between embracing digital tools and preserving essential cognitive skills.

How do you see AI shaping our daily lives and education, particularly as a tool rather than a substitute for human thinking?

AI is undeniably transformative. It’s like having a super-efficient assistant that can crunch data, suggest ideas, or streamline complex tasks in seconds. In education, it can personalize learning experiences or help teachers manage workloads. But I always emphasize that it’s a tool, not a brain. It can’t replicate the depth of human reasoning or emotional intelligence. We must use it to enhance our thinking, not to offload it entirely, or we risk losing the very skills that make us human.

What first drew your attention to the concept of ‘digital dementia’ in the context of AI dependency?

I started noticing this trend when I saw how quickly people, especially students, turned to AI for answers without questioning the output. The term ‘digital dementia’ resonated with me because it captures this idea of cognitive decline—not in a medical sense, but as a metaphor for what happens when we stop exercising our minds. If we let machines do all the heavy lifting, our ability to think critically and creatively could wither, much like muscles do without use. It’s a wake-up call to be mindful of how we engage with technology.

Can you unpack the idea of a ‘cognitive detour’ when we lean too heavily on AI?

Absolutely. By ‘cognitive detour,’ I mean taking a shortcut that bypasses the mental processes needed to build understanding. When we ask AI for an answer and accept it at face value, we skip the steps of analysis, problem-solving, and reflection that solidify learning. It’s like using GPS every time you drive—you might get to your destination, but you won’t learn the route. Over time, these detours can erode our ability to navigate challenges independently, especially in educational settings where those skills are still forming.

Why do you think AI dependency poses a unique risk to students and young learners?

Young learners are at a critical stage where their brains are still building the circuitry for problem-solving, creativity, and critical thinking. If they rely on AI too early, they miss out on struggling through those messy but essential processes. A recent study I came across found that frequent AI use predicted lower critical-thinking performance more strongly than factors like educational background. That's alarming because it suggests we're not just shortcutting tasks—we're potentially stunting long-term cognitive growth in the next generation.

What specific mental skills do you believe are most threatened by overusing AI in education?

I’d point to skills like analysis, synthesis, and evaluation. These are the building blocks of critical thinking. When students use AI to write essays or solve problems without engaging with the material themselves, they don’t practice breaking down information, connecting ideas, or judging the quality of an argument. Memory retention also suffers because there’s less active recall. Over time, this can create a generation that’s great at prompting a machine but struggles to think independently.

How can educators foster critical thinking in students before introducing AI tools into their learning environment?

It starts with building a strong foundation. Educators should focus on teaching students how to ask questions, challenge assumptions, and seek evidence through hands-on, inquiry-based learning. For example, have them debate ideas, solve problems collaboratively, or research topics using primary sources. These activities strengthen mental muscles. Only once those skills are solid should AI be brought in as a support tool, paired with explicit lessons on how to use it critically—not as a crutch.

In your view, how does using AI actually demand more critical thinking, and can you share an example of this in practice?

Contrary to what some might think, AI doesn’t make thinking easier—it often requires more rigor. You have to know what to ask, how to interpret the response, and whether to trust it. For instance, I’ve seen teachers use AI to generate sample essays for a class discussion. Instead of accepting the output as final, they task students with critiquing it—identifying weak arguments or factual errors. This forces students to engage deeply with the content and sharpens their discernment, showing that AI can be a springboard for thinking rather than an endpoint.

Why do so many of us accept AI responses without skepticism, and what can we do to shift that mindset?

It’s largely about convenience. AI delivers quick, polished answers that feel authoritative, and in our fast-paced world, that’s seductive. But this can lull us into complacency. To shift this, we need to cultivate a culture of curiosity and skepticism. Encourage habits like always asking, “Where did this information come from?” or “Does this make sense based on what I know?” If we model and teach this consistently—whether in classrooms or at home—we can rewire our approach to treat AI as a starting point, not the final word.

How can people ensure they’re not misled by AI inaccuracies or so-called ‘hallucinations’?

First, recognize that AI isn't infallible—it can confidently spit out nonsense. Always cross-check its output against trusted sources, whether that's a textbook, a reputable website, or an expert opinion. If AI provides a claim, dig into the evidence behind it. I also suggest rephrasing your prompts to see whether the response holds up from different angles. And if something feels off, trust your gut. Building this habit of verification turns AI into a tool for exploration rather than a source of truth.

What’s your forecast for the role of AI in education over the next decade?

I believe AI will become even more embedded in education, personalizing learning and automating administrative tasks to free up teachers for deeper engagement with students. But I also foresee a growing pushback as we recognize the risks of dependency. My hope is that we’ll see a balanced approach—AI as a powerful ally, but with clear boundaries and a renewed focus on nurturing human skills like creativity and critical thinking. If we get this right, AI could amplify education without compromising the essence of what it means to learn.
