When AI Joins the Team: Rethinking Collaboration in School Psychology

As school psychologists, we're always looking to strengthen interpersonal collaboration among students, educators, and support staff, a foundational element of effective school systems. But as artificial intelligence (AI) becomes more prevalent in educational tools and decision-making systems, we face a timely question: What happens when AI isn't just a tool, but potentially a teammate?

That concept, treating AI as a contributing team member rather than an isolated utility, was the focus of a compelling presentation during the National Academies' Hauser Policy Impact Fund webinar series, Navigating the Era of Artificial Intelligence: Achieving Human–AI Harmony. Drawing on insights from this webinar, this blog examines what it truly means to embed AI within collaborative systems, and why that matters deeply for school settings.

1. From Solo Tool to Teammate: Why It Matters

The traditional framing casts AI as a tool: helpful, yes, but separate. Presenters from the National Academies reframed it: AI as a teammate, responsible for mutual support, coordination, and shared decision-making. Human–AI teams can adapt more flexibly, handle complex tasks better, and respond dynamically to changing goals.

For school psychologists, this is transformative. Imagine an AI that helps process behavioral assessment data—but instead of just producing raw scores, it collaborates by flagging trends, suggesting follow-up areas, or generating tailored interventions. That would be a system that “teams” with us—not just works for us.

2. Keys to Effective Human–AI Teaming

According to the Human–AI Teaming framework, effective human–AI teams rest on key foundations:

  • Clear communication, transparency, and trust: AI systems need to “explain themselves” or at least give cues about how they reached their conclusions.

  • Complementary strengths: Human intuition paired with AI's pattern recognition can yield far more than either alone.

  • Bias resilience: Both human and AI teammates have potential biases—anchoring, framing effects, algorithmic drift—so mitigating those requires joint vigilance and systems that support critical decision-making.

These foundations are essential in school settings, where equity and the ethical use of data are non-negotiable. As school psychologists, we must ensure that any AI teammate enhances fairness rather than perpetuating unintended harm.

3. Challenges in Practice: Awareness, Trust, and Role Clarity

  • Shared mental models: Human–AI teams need shared understanding of roles, context, and expectations—otherwise, predicting how the AI will act becomes difficult. That can undermine collaboration, especially in high-stakes or emotionally charged school settings.

  • Situation awareness: AI often collects and processes information differently than humans. Without transparency, educators or psychologists may not know when to rely on AI support or intervene.

  • Trust erosion: Concept drift (when AI performance degrades over time), shifting goals, or ethical missteps can quickly erode trust. Unlike human teammates, AI doesn’t “pace itself” or recalibrate unless designed to do so.

4. Practical Implications for School Psychology

Here are a few ways this can translate into practice:

  • Co-design AI processes with educators and psychologists so systems align with real school workflows, logic, and professional values.

  • Conduct transparency checks: we have to be able to answer questions such as "Why did the AI recommend this intervention?" or "How was this profile generated?"

  • Use AI as a reflective spark, not a directive. Encourage staff to discuss or question AI-generated suggestions—AI prompts, human judgment confirms.

  • Build in review checkpoints, especially where AI might recommend high-stakes changes in assessments or interventions.

 

5. Looking Forward: Mindful AI Integration

We stand at a new frontier: one where school psychologists can leverage AI not as an isolated helper, but as a thoughtful collaborator—one that supports data-informed, equitable, and responsive decision-making.

The National Academies’ webinar serves as a timely call, urging practitioners like us to consider not just what AI can do, but how it integrates into teams that serve young people. When implemented thoughtfully, human–AI teaming could help us better meet diverse student needs—without compromising our ethical responsibilities.

Curious to talk about teamwork strategies for school psychologists using AI? Or want help drafting a staff training module on integrating AI as a teammate—not a boss? I’d love to support that conversation.
