Why critical thinking is key to using AI wisely

Returning guest writer Stephanie Simoes is the mind behind Critikid.com, a website that teaches critical thinking to children and teens through interactive courses, worksheets, and lesson plans. This article is meant to help educators (and parents) more effectively teach kids to use large language models and other forms of AI in positive ways.

 
In the Phaedrus, Plato expressed concerns that if men learned writing, it would “implant forgetfulness in their souls.” A 1975 issue of Science News referenced a survey that revealed that “72 percent of those polled opposed giving every seventh-grade student a calculator to use during his secondary education.”

Generative artificial intelligence is the newest target of that same opposition, and the debate has intensified since the U.S. Department of Education released its Proposed Priority on Advancing AI in Education.

“Advancing AI in education” can mean different things, but it generally falls into three main areas, all of which are addressed in the Department’s proposal:

  1. Teaching how to use AI—media literacy and how to effectively use LLMs as thinking helpers 

  2. Teaching how AI works—expanding computer science lessons to teach the fundamentals of AI systems

  3. Using AI to support instruction—employing AI-driven tools to provide analytics and virtual teaching assistants

Because I teach critical thinking—and because some critics worry that using AI is destroying our ability to think critically—I will explore the first area in this article.

One of the proposed priorities is teaching students to spot AI‑generated misinformation. That one isn’t especially contentious; spotting misinformation, including AI-generated misinformation, is a core part of modern media literacy.

The more controversial question is whether students should use large language models as “thinking partners.” The virality of the recent MIT study, “Your Brain on ChatGPT,” has amplified the fear that LLM use dampens our thinking skills. In the study, 54 adults wore electroencephalogram (EEG) caps while writing short essays. One group wrote unaided, another used a search engine, and a third relied on ChatGPT. Neural activity was highest in the unaided group, lower with search, and lowest with ChatGPT.

Those results, however, come with big caveats: the paper is still in preprint, the sample was small, and none of the participants were K–12 students.

Moreover, the reduced neural activity during ChatGPT‑assisted writing may simply indicate cognitive offloading, the practice of using external tools to reduce mental effort. From maps to calculators to written reminder lists, humans have long engaged in this practice. Cognitive offloading isn’t necessarily a bad thing, as it allows us to spend our mental energy on higher‑order tasks. However, it must be introduced carefully in a classroom.

For instance, calculators support higher‑level math education only after students learn arithmetic. Similarly, children should develop basic writing and reasoning skills before using AI as a helper.

Likewise, we need solid subject-specific knowledge before using LLMs as research assistants; otherwise, we lack the expertise to evaluate their output. If we skip those steps, we risk producing a generation of incompetent experts.

But used correctly, AI can be a powerful tool for strengthening students’ critical thinking skills.

Critical thinking is slow, careful thinking. It allows us to question assumptions, spot biases, and weigh evidence. LLM outputs can be flawed or biased like any human source, so their responses deserve the same scrutiny. That scrutiny must sit alongside intellectual humility—recognizing when we don’t (yet) know enough to judge a claim. Students already practice these habits when they evaluate social media posts or websites; LLM outputs are simply the newest arena to apply the same skills.

A drawback of LLMs is that they amplify confirmation bias when we prompt them poorly. Ask, “Give me evidence for my belief,” and they may oblige. This flaw can be turned into a lesson about both responsible prompting and confirmation bias. Teach students to prompt instead with “Show the strongest evidence for and against this claim,” and then point out the human tendency to pay more attention to evidence that supports our preconceptions.

Better yet, have students ask the LLM to challenge their beliefs: “Show me evidence that I am wrong about this.” By prompting for dissent, students learn to explore their beliefs and may even change their minds about some unsupported ones.

History shows a pattern when it comes to new technology: panic, adaptation, and, finally, integration. The task of educators isn’t to shut the door on AI, but to teach students to use it wisely.


Stephanie Simoes | Critikid.com

A chat about AI and the new learning landscape

You’ve probably seen funny, intriguing, or scary news items popping up over the past few weeks about Artificial Intelligence (AI) in the form of ChatGPT, Claude, LLaMA, and other interfaces. You might have heard podcasters explaining or bemoaning the revolutionary new world that’s about to replace our old one. At this point, for me, it’s all still pretty confusing in terms of how it will affect my day-to-day life, but I’m curious and very cautiously optimistic.

One thing that’s pretty certain: Our kids’ lives will be dramatically shaped by AI—and in ways we can’t possibly predict.

In education, two big names—Khan Academy and Duolingo—announced last week that they are on board the ChatGPT train, having been granted early access to develop and test ideas. We’ve already seen both learning platforms suggest some of the benefits they anticipate for learners and educators, but let me preface this short summary by saying I have no idea what the real-world educational outcomes will be because we are so early in the exploration stage. Having said that, here’s a rundown of what’s happening at Khan Academy and Duolingo, along with some links that will take you on a deeper dive, if you’re interested. 

GPT-4, which stands for Generative Pre-trained Transformer 4, is a sophisticated AI model that can understand the context of questions and create written responses that are much more “human-like” than those of past versions. Khan Academy, a nonprofit dedicated to free, high-quality education for all, has been around since 2008 and is well known in the alt ed community for its library of outstanding YouTube video lessons on math and a vast array of other subjects. Duolingo is a for-profit language-learning company that has been on the scene for 12 years, offering more than 100 language courses to people all over the world through gamified lessons, primarily via phone apps.

There is a free version of Duolingo, but the new AI-assisted features will be offered only within a $30-per-month paid premium version. One, called Explain My Answer, will give learners more detailed explanations of how a language works. But the most fascinating and useful feature will be the option for learners to role-play and interact with the AI tool like a personal tutor. After a conversation, users will get specific feedback they can use to improve their responses in their next Duolingo conversation or in real life.

Even more powerful possibilities seem to lie in the approach Khan Academy is taking to this new AI tech. Khan is calling its AI-powered platform Khanmigo. It is still in the development stage and open primarily to educators and school districts that are already working with Khan Academy on other projects, but it’s likely that more and more features will be rolled out to ordinary users in the near future.

Because each learner is different, the value of Khanmigo is that it will immediately adapt its tutoring in any subject to meet the individual’s needs, just like a one-on-one tutoring session with a human teacher. So, if a student is struggling with a particular type of math operation, Khanmigo will ask questions that direct the student toward a deeper understanding, rather than providing quick answers. And if a student needs to understand a controversial current news story, the Khanmigo tutor could gather information in order to debate multiple sides of the issue with the student in real time.

Khan Academy also wants its AI tool to become a valuable assistant for educators as well as students, removing some of the time-consuming work of lesson planning and grading so that educators can spend more time engaging with each student.

Sensibly, the folks at Khan are carefully communicating with users and adapting their projects to make sure they are safe and that kids are truly getting an enhanced learning experience rather than simply getting all the answers to their questions from Khanmigo. The short demonstration founder Sal Khan did on YouTube reminded me of a Socratic back-and-forth between students and teachers.

Next week, I’ll take a look at the world that’s opening up for students with disabilities as a result of AI learning tools. If you have hopes, fears, or experiences to share regarding AI and learning, please comment!

Here are a few links to learn more:


Shelley Sperry |
Sperry Editorial