NASP 2026: Artificial Intelligence Continues Expanding in School Psychology

Key Points (For Readers on the Go)

  • I tracked 42 sessions related to artificial intelligence at NASP 2026, up from 17 sessions in 2025.

  • Ethics was a dominant concern, with approximately 28 sessions addressing ethics, privacy, bias, or responsible use.

  • Report writing has become a central AI application, with 11 sessions focused on AI-assisted psychological reports.

  • Adoption among practitioners appears to be increasing rapidly, but confusion remains around ethical implementation, especially informed consent.

  • Overall sentiment at the conference was largely positive, though healthy skepticism and debate remain.

Introduction: AI Is No Longer a Side Conversation at NASP 2026

The 2026 National Association of School Psychologists (NASP) Conference made something very clear: artificial intelligence is no longer a niche topic within school psychology.

Last year I wrote about the growing presence of AI at the conference, where I counted 17 AI-focused presentations. This year, the number increased substantially. After reviewing the conference program, I tracked 42 sessions related to artificial intelligence.

That increase reflects how quickly the conversation around AI is evolving in our field. The focus is shifting from “What is AI?” to more practical questions:

  • How should school psychologists actually use these tools?

  • What ethical guidelines apply?

  • What risks need to be considered before implementation?

The Rapid Growth of AI in School Psychology

The increase from 17 AI-related sessions in 2025 to 42 sessions in 2026 suggests that AI is becoming a mainstream topic in professional conversations.

This growth was visible across many session formats, including:

  • Mini-skills sessions

  • Poster presentations

  • Paper presentations

  • Workshops

The diversity of presentation formats also suggests that AI is being explored across multiple domains of practice and research, rather than remaining confined to a single niche.

For most practitioners attending NASP, AI is no longer hypothetical. It is already influencing how professionals think about assessment, documentation, consultation, and training.

Ethics Remains the Central Concern

One theme stood out more than any other: ethics.

Of the 42 AI-related sessions, approximately 28 addressed ethical issues directly or indirectly. Topics included:

  • Responsible AI use in school psychology

  • Privacy considerations (including FERPA and HIPAA)

  • Bias in AI systems

  • Professional accountability

  • Guardrails for responsible implementation

Many of these sessions were framed explicitly around ethical decision-making. Others embedded ethics into broader conversations about practice.

In conversations throughout the conference, this concern came up repeatedly. Many practitioners expressed interest in AI tools but also uncertainty about issues such as:

  • When informed consent may be required

  • How to maintain professional responsibility when using AI assistance

  • How to ensure student privacy and data protection

Interestingly, while many practitioners are seeking guidance, there were also moments of pushback. Some attendees at my presentations questioned whether obtaining informed consent for AI-assisted workflows is necessary. However, both NASP and APA ethical guidance clearly emphasize transparency and professional responsibility when integrating emerging technologies.

The fact that these debates are happening openly is a healthy sign for the profession.

AI in Report Writing: A Major Area of Interest

Another clear trend was the growing interest in AI-assisted psychological report writing.

Among the 42 sessions, 11 focused specifically on report writing or psychoeducational reports.

This is not surprising. Report writing is one of the most time-intensive tasks in school psychology practice, and many practitioners are understandably interested in tools that might improve efficiency.

However, the conversations at NASP were not simply about speed. Many presenters addressed important questions such as:

  • How to maintain clinical judgment when using AI-assisted drafting

  • How to ensure reports remain accurate and individualized

  • How to avoid overreliance on automated systems

  • How to maintain defensible documentation practices

In other words, the discussion is evolving beyond “Can AI write reports?” (which our research suggests it can) to a more nuanced question:

How can AI assist the process while keeping the psychologist firmly in control?

Prompting, Hallucinations, and AI Reliability

Another emerging theme was the importance of prompt design and reliability.

Several sessions focused on the concept of AI hallucinations, where language models generate inaccurate or fabricated information. Presenters emphasized that poorly structured prompts can increase the risk of these errors.

As a result, there is growing recognition that prompting is not just a technical skill—it is a professional safeguard. When AI tools are used for professional tasks, careful input design and human review are essential.

This area of practice is still evolving, but it is increasingly being treated as a core competency for responsible AI use.

AI, Bias, and Equity Considerations

Multiple sessions also addressed algorithmic bias and equity concerns.

These discussions emphasized that AI systems reflect biases present in their training data. For school psychologists, this raises important questions about fairness and representation.

Presenters discussed strategies for:

  • Recognizing potential biases in AI outputs

  • Maintaining culturally responsive practices

  • Ensuring that AI tools do not reinforce existing inequities

As AI becomes more integrated into educational systems, these conversations will likely become even more important.

Training the Next Generation of School Psychologists

Another theme emerging from the conference was how graduate training and supervision should adapt to the reality of AI use.

Some sessions focused specifically on issues such as:

  • Preventing inappropriate AI use among trainees

  • Teaching responsible AI workflows

  • Helping students understand the ethical boundaries of AI assistance

These conversations highlight an important reality: students are already experimenting with AI tools, whether training programs formally address them or not.

Preparing future school psychologists will require clear guidance, transparency, and thoughtful supervision.

Conversations with Practitioners

Beyond the formal presentations, many of the most valuable insights came from informal conversations with practitioners throughout the conference.

The overall tone was largely positive and curious.

Many attendees expressed appreciation for practical guidance around AI implementation. At the same time, several patterns emerged in those discussions:

  • Adoption is increasing. Many practitioners are already using AI tools in their workflow daily.

  • Ethical uncertainty remains common, particularly around informed consent and transparency.

  • Healthy skepticism persists. Some practitioners voiced serious ethical concerns, while others questioned whether safeguards such as informed consent are necessary at all (which is troubling).

These conversations suggest that the profession is moving through a typical technology adoption cycle: early experimentation followed by deeper reflection about ethical and professional implications.

A Note on AI Vendors at NASP 2026

In previous years, I spent time speaking with AI vendors at NASP and evaluating their transparency, pricing, and security practices.

This year I did not conduct a vendor review because I have a professional relationship with Psychological Assessment Resources (PAR) related to AI-assisted report writing.

However, it was clear that AI vendors are increasingly visible at the conference. Multiple companies were actively marketing AI tools and services throughout the exhibit hall.

The growing vendor presence reflects the increasing demand for tools designed specifically for school-based practice.

Final Takeaways

  • AI discussions at NASP continue to grow rapidly, with 42 sessions in 2026 compared to about 17 in 2025.

  • Ethics remains the dominant concern, with most sessions addressing responsible implementation.

  • Report writing is emerging as a major application area for AI in school psychology.

  • Practitioners are increasingly experimenting with AI tools, but many are still seeking guidance.

  • Ongoing conversations about ethics, transparency, and professional responsibility will remain essential as these tools evolve.

Artificial intelligence is clearly becoming part of the professional landscape for school psychologists. The key challenge moving forward will be ensuring that adoption is guided by ethical standards, thoughtful implementation, and a continued focus on serving students and families effectively.

AI Use Disclosure: Portions of this post were drafted with the assistance of an AI writing tool and revised by the author for accuracy, clarity, and professional judgment.
