
Why AI education debates focus on a false choice

We recently read an article in The Washington Post titled "Schools are teaching AI, and making a massive mistake." As educators who lead in the space of artificial intelligence and education, we feel the piece raises questions that genuinely matter, but we are concerned that it reads as though it were written 18 months ago.

For many schools and the educators leading them, the framing of AI adoption as primarily a tool usage issue has already been left behind. The conversation has moved on, and that’s really worth acknowledging.

On critical thinking, we’re completely with the authors. We would argue that the most effective schools we’ve observed embed problem or project-based learning approaches, where AI is one tool among many that students learn to interrogate rather than simply operate.

The distinction between skills acquisition and genuine agency matters enormously, and fluency with AI tools is only the starting point. Combining that fluency with the judgment to apply it wisely is where the real work is, and plenty of schools are either developing or doing it already, just perhaps not consistently across a cohort.

We work with schools across the US and UK, and internationally, and have never encountered schools that are overwhelmingly focused on prompting mechanics at the expense of genuine understanding. This is a false dichotomy.

Teaching tools and developing understanding are not opposing approaches. Virtually no serious educator advocates for tools in isolation.

The best AI education programs worldwide, from Singapore's AI for Students framework to Finland's Elements of AI, all integrate conceptual understanding with hands-on tool use from the start. The either/or framing doesn't reflect how learning actually works.

Prompting is a higher-order literacy skill

We're afraid the authors' own microscope analogy undermines their argument. In science classes around the world, students are taught how to use microscopes in a hands-on way, rather than first studying the theory of optics.

The constructivist tradition from Piaget through Dewey to Papert is clear: understanding develops through purposeful use, not before it in some abstract vacuum. Agency isn't just knowledge. It's the capacity to act effectively, and capacity comes from doing.

The authors' claim that prompting is a narrow, disposable skill is an out-of-date argument, made by people who assumed AI chatbots in 2026 would instinctively know the intricacies of your request.

What the authors of the article fail to understand is that good prompting is really structured communication. It is articulating what you want, providing context, setting constraints, and iterating on feedback.

These are higher-order literacy skills. Research from Stanford’s HAI group and others consistently shows that the ability to communicate effectively with AI systems is becoming a foundational competence. The specific tools will change; the ability to clearly specify a task and critically evaluate output won’t. That’s transferable, not trivial.

The article claims that the less you know about AI, the more likely you are to use it. But this finding actually supports integrated tool use with critical reflection, not delayed tool use.

If people who lack understanding over-rely on AI, the fastest way to build calibrated judgment is structured practice where students see AI fail, hallucinate and produce biased outputs firsthand. You can’t develop calibrated judgment about a system you’ve never interacted with.

Abstract knowledge about how neural networks function doesn’t tell you when a chatbot is confidently wrong about your history essay. Experience does.

How to spread confidence

The article cites Seckinger High School as evidence that the “holistic approach” produces better outcomes. But this is a classic correlation-causation error.

Seckinger opened as a brand-new, purpose-built school with a specific STEM/AI focus. It attracts more motivated students and families and receives more funding than many other schools. Attributing its results to pedagogical philosophy alone, while ignoring these structural advantages, is a significant evidential leap.

The article continues on shaky ground in its inference that students are poorly taught about AI’s limitations. This may have been broadly accurate in 2023.

In our extensive work, we are seeing schools develop AI literacy pathways, leaders at all levels asking more nuanced and strategic questions, and senior teams refining governance policies that treat healthy skepticism as a feature rather than a gap to fill.

The implied "prescription" shared in the piece (better frameworks, teacher training and curriculum redesign) isn't wrong. We champion these measures.

But it risks inadvertently suggesting that schools already navigating AI thoughtfully, building strong AI committees, engaging their communities meaningfully, and measuring impact rigorously are somehow part of the same undifferentiated problem being described. They’re not, and they deserve better than that.

The real challenge for commentators and policymakers isn’t to describe a sector in need of rescue. Instead, we must identify what’s already working and help share that across the system, resisting the urge to create polarising dichotomies that only serve to remove the nuance from this extremely complex situation education finds itself in.

There are some schools already doing this with confidence and, we would argue, impact. The question is how we scale that confidence, not how we sustain a narrative that the whole system is falling behind.

7 small but powerful actions

Lastly, we want to share an analogy of our own. The idea of deeply understanding AI before practicing with it is like saying students should study music theory for a year before touching an instrument. It sounds rigorous, but as educators, we know this is actually a recipe for disengagement.

Here are some small but powerful actions we see school leaders implementing:

  1. A tool + interrogation lesson. Put AI in front of students and ask them to challenge its output. Teaching tools and critical thinking are not mutually exclusive.
  2. Audit one unit for AI integration. Find where AI could fit in a project-based learning class, as one resource among many. This brings to life the constructivist principle that understanding is born of purposeful doing, not before it.
  3. Create a session where AI does things wrong. Have students or staff test AI on subject-specific content and record issues, including failures, hallucinations, bias and what AI does badly. Student judgment comes as they use and assess the AI system.
  4. Reframe prompting as a literacy lesson, not a tech lesson. Articulating a clear request, providing context, setting constraints and iterating on feedback are the higher-order communication skills.
  5. Apply the cognitive stretch test to one assessment. Ask: could AI do it without the student's unique perspective or judgment? If so, modify and redesign. This dismantles the false choice between AI fluency and deep understanding.
  6. Set up your AI committee before your next policy revision. Schools doing well are governing proactively, creating a strategy and a direction forward. Everything else supports that journey.
  7. Map a multi-stage AI literacy pathway. Progression, rather than a one-off training, signals that your school treats AI skepticism as a feature of the curriculum.
Dan Fitzpatrick and Al Kingsley
Dan Fitzpatrick is an educational strategist, author of the Amazon No. 1 bestselling Infinite Education and a Forbes contributor. He can be reached at [email protected]. Al Kingsley, MBE, is an educational technology specialist, multi-academy trust chair, author and speaker. He can be reached at [email protected].
