GenAI Can Reduce Critical Thinking: What It Means for School Psychologists
Too busy to read? A podcast version of this study can be accessed here.
Generative AI is creeping into nearly every corner of our professional lives—from writing emails and summarizing documents to analyzing data and drafting psychological reports. But how is this shift impacting our thinking?
A recent study by Lee et al. (2025), "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers," provides one of the most thorough looks yet at how GenAI use is affecting critical thinking among knowledge workers. The research is especially relevant for school psychologists, a group that constantly balances efficiency with ethical, thoughtful decision-making.
What the Study Explored
This large-scale study surveyed 319 knowledge workers, collecting 936 real-world examples of how people use GenAI in day-to-day tasks. The research team aimed to understand:
When and how people engage in critical thinking while using GenAI,
How GenAI affects the mental effort involved in work,
And how task-related and user-related factors influence this process.
The study used a mixed-methods design, combining statistical modeling with qualitative analysis of participants’ free-text responses.
To assess critical thinking, the researchers turned to Bloom’s Taxonomy—a framework familiar to many educators and psychologists. Participants were asked to reflect on their use of GenAI across six cognitive domains:
Knowledge (recall of facts),
Comprehension (understanding concepts),
Application (using information in context),
Analysis (breaking down information),
Synthesis (combining ideas creatively), and
Evaluation (making informed judgments).
They were also asked to compare how much cognitive effort they put into these domains with and without GenAI.
The researchers also examined task-related variables (e.g., task type, confidence in doing the task) and user characteristics such as age, gender, occupation, tendency to reflect on one's work, and trust in GenAI.
Key Findings: A Mixed Picture
1. Confidence in GenAI Reduces Critical Thinking
One of the most striking quantitative findings was a negative correlation between confidence in GenAI and perceived critical thinking:
The more users trusted GenAI to handle the task, the less they felt they were thinking critically.
In contrast, self-confidence in one's own ability to do the task without AI was positively associated with perceived critical thinking.
Similarly, confidence in evaluating AI outputs was associated with more critical thinking.
In short, trusting GenAI too much can dull your thinking. But trusting yourself—and your ability to assess the AI—can sharpen it.
2. Critical Thinking Isn’t Gone—It’s Changing
Participants described a shift in how they think critically when using GenAI. Rather than engaging deeply during task execution, users were more likely to focus on:
Formulating good prompts (goal setting, question crafting),
Inspecting and verifying outputs (checking the AI’s responses against their own knowledge or external sources).
This suggests that critical thinking hasn't disappeared, but it’s moved upstream (how we ask) and downstream (how we judge), rather than occurring during the task itself.
3. Barriers to Critical Thinking with GenAI
The study identified several obstacles that prevent users from thinking critically when using AI:
Awareness Barriers: People may not even realize critical thinking is needed, especially for "low-stakes" or familiar tasks, or when they already trust and rely on the AI.
Motivation Barriers: Users who are pressed for time, or who see critical thinking as outside their job responsibilities, may feel less compelled to engage critically.
Ability Barriers: Some users lack the domain knowledge or prompt-engineering skills needed to properly evaluate GenAI output or to guide it effectively.
4. The Irony of Generative AI: Atrophy of Expertise
The authors draw on Bainbridge’s classic idea—the "irony of automation"—which argues that automating routine tasks can backfire. Why? Because it erodes the very skills we need when things go wrong.
Applied here, the concern is clear: Over-reliance on GenAI—even for basic tasks—could lead to cognitive atrophy. If we’re not regularly practicing skills like analysis, synthesis, and evaluation, we may find ourselves underprepared for the situations that demand them most: complex cases, ambiguous data, ethical dilemmas, and nuanced human problems.
5. Cognitive Effort Isn’t Eliminated—It’s Redistributed
GenAI may make some aspects of our work easier, but it doesn’t eliminate mental effort:
Knowledge and Comprehension: GenAI often reduces effort here by providing facts and summaries.
Synthesis and Evaluation: Effort here doesn't disappear; it shifts toward inspecting, critiquing, and integrating the AI's output.
Prompting itself is a new cognitive task—requiring users to clearly define what they need and how to ask for it.
Implications for School Psychologists
For Practicing School Psychologists
We often face high cognitive loads—from report writing and data interpretation to intervention planning and consultation. GenAI is tempting. It can help us generate summaries, intervention ideas, and even draft content. But the findings of this study suggest that trusting GenAI too much can weaken our thinking.
To prevent this, we must engage with GenAI critically—across all six levels of Bloom’s Taxonomy:
Knowledge: Don’t assume AI outputs are factually correct. Cross-check data, citations, and terminology.
Comprehension: Make sure you understand what the AI is saying. Can you explain it to a colleague or parent?
Application: Can this AI-generated suggestion be realistically applied in your school setting, with this student?
Analysis: Does the reasoning in the output hold up? Are there gaps, biases, or inconsistencies?
Synthesis: How can this output be integrated with your own observations, the student’s background, and your professional expertise?
Evaluation: Is the final recommendation ethical, equitable, and appropriate?
Remember: A polished output isn't the same as a good one.
For Trainees and Early-Career Professionals
For those still building foundational skills, these findings are even more important. GenAI may feel like a shortcut, but if you bypass opportunities to reason through tasks yourself, you may hinder your development.
Training programs and supervisors should:
Encourage critical engagement with GenAI—don’t just accept its outputs.
Include explicit instruction on how to evaluate AI-generated content.
Treat GenAI as a collaborator, not a crutch.
Reinforce Bloom’s levels as a framework for interacting with AI tools: What kind of thinking are you doing with the AI, and what are you at risk of not doing because of it?
Moving Forward Mindfully
GenAI can enhance our efficiency, but it should never replace our judgment. As school psychologists, our most valuable skills aren't speed or productivity; they're nuance, insight, and critical analysis.
This study offers an important reminder: the more we automate, the more intentional we must be about preserving and exercising the cognitive muscles that make us effective, ethical practitioners. In an era of intelligent machines, it’s not enough to ask what the AI can do. We must keep asking what we still need to do ourselves—and why it matters.
Let’s make sure we stay thinkers, not just prompt writers.
Reference:
Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–23). https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf