Can School Policy Keep Up With AI’s Growth?

In a world where technology evolves faster than policy can be written, Camille Faivre stands at the intersection of innovation and education. As an expert in education management, she has guided countless institutions through the seismic shifts of digital learning, particularly in the post-pandemic era. Now, as generative AI rapidly embeds itself into the fabric of K-12 education, she offers a crucial perspective on the challenges and opportunities ahead. This conversation explores the alarming gap between AI adoption and institutional guidance, the disconnect in perception between parents and school administrators, the urgent need to redefine academic integrity for the digital age, and the unique landscape of AI in elementary education.

The data shows AI use surged by more than 15 percentage points recently, yet 80% of students report no formal training. What are the biggest risks of this ‘adoption-guidance gap,’ and what immediate, practical steps can schools take to start closing it for their students and staff?

That gap is a chasm, and it’s where responsible learning habits can fall and break. The single biggest risk is that we’re cultivating a generation of students who see AI not as a tool for inquiry, but as an outsourcing service for thinking. It’s the specter of “digital dementia”—where the muscle of critical thinking atrophies because it’s no longer being exercised. When you see a surge of more than 15 percentage points in usage while 80 percent of students feel they’re navigating it alone, it creates a culture of uncertainty and misuse. Immediately, schools need to start small and fast. Forget waiting for a perfect district-wide policy. Teachers can initiate classroom conversations tomorrow, defining what collaboration with AI looks like for the next assignment. Principals can dedicate the first 30 minutes of the next staff meeting to sharing what’s working and what isn’t, creating a support network instead of letting every teacher feel like they are on an island.

There’s a striking disconnect in concern over critical thinking, with 61% of parents worried versus just 22% of district leaders. Why do you believe this perception gap exists, and what kind of dialogue is needed to align these key stakeholders on a unified strategy?

This gap isn’t about one side being right and the other being wrong; it’s about perspective. District leaders are often taking a 30,000-foot view, focused on metrics like efficiency, innovation, and preparing the district for the future. They see AI as a powerful tool to achieve those goals. Parents, on the other hand, are on the ground. They’re sitting at the kitchen table, watching their child grapple with homework, and they feel a visceral fear that their child is losing the ability to struggle, to think, to create independently. That 61 percent figure from parents is an emotional one, while the 22 percent from leaders is often more strategic. To bridge this, the dialogue has to move from the boardroom to the living room. We need town halls and workshops where leaders aren’t just presenting a plan, but are actively listening to those parental anxieties, and where students are sharing their own experiences. The goal isn’t just to inform, but to build a shared understanding of what success looks like for a child in the age of AI.

Given that half of students fear false accusations of cheating and few schools have clear rules, how can educators concretely define academic integrity in the age of AI? Could you share a step-by-step approach for creating and communicating a fair classroom-level AI policy?

That fear is corrosive to the learning environment. When half of your students are more worried about being wrongly accused than about learning the material, you have a serious trust problem. The solution is absolute clarity, co-created with the students themselves. First, start by explicitly acknowledging AI as a tool in the classroom; don’t treat it like a forbidden secret. Second, create a simple, visual guide—think traffic lights. Green-light uses could be brainstorming or checking grammar. Yellow-light uses might include generating an outline, provided students show their revision process. Red-light uses would be submitting an unedited, AI-generated paper as their own. Third, and this is crucial, involve students in defining those zones. When they help build the rules, they are far more likely to respect them. Finally, communicate this policy everywhere: post it online, include it in the syllabus, and, most importantly, have a candid conversation about it before every major project begins.

The report notes nearly half of elementary teachers are already experimenting with AI. What unique opportunities and challenges does this present for younger students, and how can schools introduce age-appropriate AI literacy to build a strong foundation for responsible use in later grades?

Seeing that nearly half of elementary teachers are already on board is both exciting and a little daunting. The opportunity is immense; we can normalize AI as a learning companion from the very beginning, much like we did with calculators. The challenge is ensuring it doesn’t become a developmental crutch, preventing students from mastering foundational skills like handwriting, basic arithmetic, or the simple, beautiful process of forming their own first sentences. For these younger learners, AI literacy isn’t about learning to code or write complex prompts. It’s about planting conceptual seeds. It can be as simple as a discussion about how a video streaming service “knows” what cartoons they like, teaching them that technology makes choices based on data. It’s about building a foundational understanding that these tools are created by humans and can be flawed, which is the bedrock of responsible digital citizenship later on.

What is your forecast for how AI will reshape both teaching and student learning in K-12 education over the next five years?

Over the next five years, I foresee AI becoming a deeply integrated, almost invisible, layer in the educational ecosystem. For students, it will evolve into a truly personalized learning co-pilot, adapting in real time to their individual strengths and weaknesses, offering customized explanations and practice problems on demand. For teachers, AI will be the ultimate administrative assistant, finally freeing them from the mountains of paperwork and grading that consume so much of their time. This will allow them to focus on the deeply human aspects of teaching: mentoring, facilitating complex project-based learning, and fostering social-emotional growth. The most profound shift won’t be in the tools we use, but in what we value. The focus will pivot dramatically from the memorization of information—which AI can do instantly—to the mastery of uniquely human skills: creative problem-solving, ethical reasoning, and the art of asking the right questions.
