AI in School Psychology: Insights, Misconceptions, and Ethical Considerations from NASP 2025

Attending the NASP 2025 conference has been an incredible experience! I’ve been energized by the thoughtful conversations, engaging presentations, and the opportunity to connect with colleagues who share a passion for supporting students and schools. One of the most frequent topics in my conversations has been the role of artificial intelligence (AI) in school psychology—a topic that sparks curiosity, excitement, and understandable caution. While I plan to provide a full conference recap soon, I wanted to share some of the most common questions and comments I received during my conversations at NASP. These questions touch on fundamental misconceptions, ethical considerations, and the practical implications of integrating AI into our professional practices. My goal is to provide clarity, encourage thoughtful dialogue, and promote responsible use of this evolving technology.

Q1 - “AI is just a tool like the calculator. This is all hysteria, just like we saw when the calculator came out. Right?”

In My Opinion, the Calculator Analogy for AI Is Wrong—Here’s Why
A calculator does one thing: it follows explicit rules to produce a correct answer (it calculates!). It doesn’t learn, it doesn’t adapt, and it never hallucinates.

Perhaps the PC is a better analogy because it runs multiple programs, but even that falls short. A PC doesn’t generate content, infer meaning, or improve over time. It’s still just executing code.

A better analogy? AI is like an intern. It’s smart, can work quickly, and can even handle complex tasks—but it also makes mistakes, sometimes in surprising ways. It needs guidance, oversight, and a critical eye on its output.

Treat AI like an intern, not a calculator. Otherwise, you’re setting yourself up for trouble.

Q2 - “Should I let my trainees use AI for clinical work?”

In My Opinion, No – School Psychology Trainees Should Not Be Allowed to Use AI for Writing Reports, Planning Interventions, or Similar Tasks, at least not until they have first mastered these skills on their own.

While AI tools offer powerful capabilities for generating reports, summarizing information, and suggesting interventions, allowing school psychology trainees to use AI for these tasks too early presents significant risks to their skill development, ethical practice, and professional competence. Here’s why trainees should not rely on AI in these critical areas:

1. Inhibits Skill Development and Critical Thinking

Writing reports, analyzing data, and planning interventions are essential skills that school psychology trainees must develop through applied practice. Using AI to shortcut these tasks can hinder core competencies like clinical reasoning, hypothesis generation, and data interpretation.

Trainees risk becoming overly dependent on AI, leading to surface-level understanding instead of the deep knowledge required to make sound professional judgments.

2. Undermines Professional Judgment and Accountability

AI-generated suggestions or reports may appear polished but lack nuance or contain subtle errors that inexperienced trainees may overlook. Professionals must own their judgments—not defer to AI outputs without scrutiny.

Relying on AI may blur accountability. If a report or intervention plan is flawed, who is responsible? Ethical guidelines (e.g., NASP Principles for Professional Ethics) emphasize that professionals—not AI—are accountable for their work.

3. Reduces Deep Understanding of Assessment Tools and Intervention Frameworks

Effective school psychologists must deeply understand assessments, including test structure, standardization, and limitations. Using AI to interpret scores can bypass this learning process, leaving professionals ill-equipped to explain or defend their findings in meetings or legal contexts.

Planning interventions requires understanding evidence-based frameworks—something AI suggestions may not always align with.

4. Discourages Reflective Practice and Lifelong Learning

Good school psychologists engage in reflective practice, examining their methods, assumptions, and decisions. Overreliance on AI can short-circuit this process, leading to passive acceptance rather than active engagement in professional growth.

Reflective practice is the foundation of lifelong learning, and AI shortcuts can undermine both.

Balanced Approach: Teaching AI Literacy Without Replacing Core Skills

While trainees shouldn’t rely on AI for core tasks, they can still be educated about AI literacy:

  • Teach how AI works: Understanding AI’s strengths, weaknesses, and ethical implications is valuable.

  • Use AI as a discussion tool: Critiquing AI-generated outputs builds critical analysis skills.

  • Integrate supervised AI exploration: Under faculty guidance, trainees can explore AI’s capabilities and limitations without compromising skill development.

Q3 - “Using AI is just like using Q-global and that’s okay. What’s the difference?”

Comparing AI to Pearson’s Q-global system is misleading because they serve different purposes, operate on different technologies, and raise distinct ethical considerations:

1. Purpose and Functionality

  • Q-global: A closed system for administering, scoring, and reporting standardized assessments. Its outputs are predictable and rooted in validated procedures.

  • AI: A broad technology capable of generating content, interpreting data, or offering recommendations—but prone to hallucinations and potential bias.

2. Validation and Standardization

  • Q-global: Psychometrically validated, ensuring reliable and legally defensible results.

  • AI: Often lacks rigorous validation for psychological use. Models’ large, uncurated training data can perpetuate biases and yield unreliable outputs.

3. Ethical and Legal Considerations

  • Q-global: Complies with established standards (e.g., FERPA, HIPAA) and includes built-in safeguards to mitigate bias.

  • AI: Raises new ethical questions about data privacy, algorithmic bias, and accountability.

4. Human Expertise vs. Automation

  • Q-global: Assists professionals; human interpretation remains essential.

  • AI: Can encourage overreliance, risking ethically questionable conclusions.

5. Transparency and Interpretability

  • Q-global: Transparent scoring algorithms with clear interpretive guidelines.

  • AI: Often a "black box"—its outputs can be speculative and less defensible.

Summary:
Q-global is a validated platform with built-in safeguards. AI is a flexible technology requiring careful validation to prevent errors and bias. Comparing the two oversimplifies the ethical, legal, and professional complexities involved in psychological assessment.

Final Thoughts:
My conversations at NASP 2025 reinforced what many of us already know: AI is here to stay, and its impact on school psychology is growing. However, how we choose to engage with this technology will define whether it becomes a valuable ally or a source of unintended harm. While AI holds promise for improving efficiency and expanding access to information, we must not lose sight of the essential human elements of our work—clinical judgment, empathy, cultural responsiveness, and ethical accountability. Technologies like AI should enhance—not replace—our professional expertise.

Moving forward, it’s crucial to approach AI with both curiosity and caution. By fostering AI literacy, ensuring strong foundational training for students, and upholding ethical standards, we can thoughtfully integrate technology into our practice. Let’s keep this conversation going and work together to harness the benefits of AI while safeguarding the integrity and humanity of our profession.
