Why critical thinking is key to using AI wisely
Returning guest writer Stephanie Simoes is the mind behind Critikid.com, a website that teaches critical thinking to children and teens through interactive courses, worksheets, and lesson plans. This article is meant to help educators (and parents) more effectively teach kids to use large language models and other forms of AI in positive ways.
In the Phaedrus, Plato expressed concerns that if men learned writing, it would “implant forgetfulness in their souls.” A 1975 issue of Science News cited a survey finding that “72 percent of those polled opposed giving every seventh-grade student a calculator to use during his secondary education.”
Generative artificial intelligence is the newest target of that same opposition, and the debate has intensified since the U.S. Department of Education released its Proposed Priority on Advancing AI in Education.
“Advancing AI in education” can mean different things, but it generally falls into three main areas, all of which are addressed in the DoE’s proposals:
Teaching how to use AI—media literacy and how to effectively use LLMs as thinking helpers
Teaching how AI works—expanding computer science lessons to teach the fundamentals of AI systems
Using AI to support instruction—employing AI-driven tools to provide analytics and virtual teaching assistants
Because I teach critical thinking—and because some critics worry that using AI is destroying our ability to think critically—I will explore the first area in this article.
One of the proposed priorities is teaching students to spot AI‑generated misinformation. That one isn’t especially contentious; spotting misinformation, including AI-generated misinformation, is a core part of modern media literacy.
The more controversial question is whether students should use large language models as “thinking partners.” The virality of the recent MIT study, “Your Brain on ChatGPT,” has amplified the fear that LLM use dampens our thinking skills. In the study, 54 adults wore electroencephalogram (EEG) caps while writing short essays. One group wrote unaided, another used a search engine, and a third relied on ChatGPT. Neural activity was highest in the unaided group, lower with search, and lowest with ChatGPT.
Those results, however, come with big caveats: the paper is still a preprint, the sample was small (54 adults), and none of the participants were K–12 students.
Moreover, the reduced neural activity during ChatGPT‑assisted writing may simply indicate cognitive offloading, the practice of using external tools to reduce mental effort. From maps to calculators to written reminders, humans have long engaged in this practice. Cognitive offloading isn’t necessarily a bad thing, as it frees our mental energy for higher‑order tasks. However, it must be implemented carefully in a classroom.
For instance, calculators support higher‑level math education only after students learn arithmetic. Similarly, children should develop basic writing and reasoning skills before using AI as a helper.
Likewise, we need solid subject-specific knowledge before using LLMs as research assistants; otherwise, we lack the expertise to evaluate the results. If we skip those steps, we risk producing a generation of incompetent experts.
But used correctly, AI can be a powerful tool for strengthening students’ critical thinking skills.
Critical thinking is slow, careful thinking. It allows us to question assumptions, spot biases, and weigh evidence. LLM outputs can be flawed or biased like any human source, so their responses deserve the same scrutiny. That scrutiny must sit alongside intellectual humility—recognizing when we don’t (yet) know enough to judge a claim. Students already practice these habits when they evaluate social media posts or websites; LLM outputs are simply the newest arena to apply the same skills.
A drawback of LLMs is that they amplify confirmation bias when we prompt poorly. Ask, “Give me evidence for my belief,” and they may oblige. This flaw can be turned into a lesson about both responsible prompting and confirmation bias. Teach students to prompt, “Show the strongest evidence for and against this claim,” and then point out the human tendency to give more weight to evidence that supports our preconceptions.
Better yet, have students ask the LLM to challenge their beliefs: “Show me evidence that I am wrong about this.” By prompting for dissent, students learn to explore their beliefs and may even change their minds about some unsupported ones.
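For teachers who want to build this habit into a classroom exercise, the balanced-evidence prompt can also be scripted so that the “for and against” framing is applied every time. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and example claim are illustrative placeholders, not part of the original article.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

claim = "Homework improves learning outcomes."  # hypothetical example claim

# Bake the both-sides framing into the system message so that a one-sided
# student prompt can't steer the model toward confirmation bias.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Present the strongest evidence both for and against the "
                "user's claim, clearly labeled, before offering any verdict."
            ),
        },
        {
            "role": "user",
            "content": f"Show the strongest evidence for and against this claim: {claim}",
        },
    ],
)
print(response.choices[0].message.content)
```

Putting the both-sides instruction in the system message, rather than trusting each student to phrase it, is one way to make the counter-evidence step the default rather than an afterthought.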
History shows a pattern when it comes to new technology: panic, adaptation, and, finally, integration. The task of educators isn’t to shut the door on AI, but to teach students to use it wisely.
—Stephanie Simoes | Critikid.com