Smart Tools, Wise Practice, Part 2: From Policy to Practice

In our first post, we explored how familiar ethical principles from APA and NASP—beneficence, fidelity, integrity, justice, and respect for rights—can guide school psychologists as they begin to integrate artificial intelligence (AI) into their work. These principles remind us that technology does not replace ethics; it simply gives ethics new terrain.

In this second post, we turn to application. How can school psychologists, trainers, and organizations translate ethical values into daily practice? Drawing from our preprint, Smart Tools, Wise Practice: Ethical Integration of AI in School Psychology (available for free on PsyArXiv), this post highlights actionable guidance for practitioners, supervisors, and systems implementing AI responsibly.

1. Building Ethical AI Practice: Guidance for Practitioners

Ethical AI integration begins with intentional structure. Before using any AI system for professional work, practitioners should ensure that it meets three basic conditions: transparency, security, and oversight.

Transparency means understanding what the tool does and how it works. School psychologists should review AI vendors’ documentation to identify data sources, model training dates, and privacy protections. Even if the tool seems user-friendly, a basic understanding of how outputs are generated is essential for evaluating accuracy and potential bias.

Security refers to safeguarding sensitive data. No student information—identified or de-identified—should ever be entered into a system lacking FERPA or HIPAA compliance. When possible, use institutional or district-approved AI platforms that have formal data agreements and clear retention policies.

Oversight keeps the psychologist firmly in the loop. Every AI-assisted product must be reviewed, edited, and verified by the psychologist before it becomes part of a student’s record or decision-making process. This includes verifying facts, checking for biased language, and ensuring that interpretations align with evidence-based practice and professional judgment.

A helpful mindset is to treat AI output as a first draft, not a final answer. Ethical practice means applying the same scrutiny we would to any other professional tool—behavior screeners or progress-monitoring systems—while remembering that AI can introduce new kinds of errors and therefore requires an extra level of caution.

2. Avoiding Professional Deskilling

AI’s convenience can create a subtle risk: deskilling. As technology handles more routine tasks, practitioners may lose confidence and proficiency in areas like report writing, data interpretation, or case conceptualization.

To guard against this, psychologists should deliberately practice without AI assistance for a portion of their work. Periodically writing sections of a report manually or synthesizing data independently ensures that professional reasoning skills stay sharp. Use it or lose it!

The goal is not to reject automation but to preserve expertise. AI should augment—not erode—the psychologist’s capacity to think critically, interpret data contextually, and communicate with empathy.

3. Training and Supervision: Preparing the Next Generation

Graduate training programs are now at the front line of shaping how AI is used in psychology and education. Supervisors and faculty can foster ethical competence by explicitly teaching both the capabilities and limitations of AI tools.

Students should learn how AI systems process data, where bias originates, and how to verify AI-generated information. Ethical training should include case-based discussions that mirror real dilemmas—for example, whether a trainee can use AI to edit a psychoeducational report or how to handle a client’s objection to AI involvement.

Programs can also model best practice through structured policies. These should outline acceptable uses (e.g., literature summaries, document formatting) and prohibited uses (e.g., entering client data or using AI for interpretation).

Finally, supervisors should cultivate AI literacy for themselves. Understanding basic technical and ethical considerations allows supervisors to guide trainees thoughtfully and respond to mistakes developmentally rather than punitively.

4. Organizational and Systemic Responsibilities

Ethical AI use cannot rest solely on individuals. Schools, districts, and professional organizations must develop systems that enable safe, consistent, and equitable use of AI.

AI Governance and Policy Development:
Districts should create policies that specify which AI tools are approved, what kinds of data can be entered, and how outputs will be reviewed. Multidisciplinary oversight committees—comprising school psychologists, IT staff, administrators, members of marginalized groups, and legal counsel—can ensure policies are both technically sound and ethically grounded.

Data Protection and Legal Compliance:
FERPA and HIPAA were not written with AI in mind. Organizations must therefore go beyond minimum compliance, establishing data retention limits, vendor agreements, and audit trails to track how student data are processed.

Professional Development:
Training on AI ethics and application should be ongoing, not a one-time event. As models evolve rapidly, so too must the understanding of their risks and benefits. Districts can support professional learning through workshops, consultation groups, or collaborative communities focused on AI integration.

Equity and Access:
Systems should monitor for disparities in access to AI-enhanced resources. If only certain schools or populations benefit from time-saving tools, inequities will deepen. Ethical governance requires intentional resource allocation to ensure that AI benefits are distributed fairly across settings and communities.

5. A Case Example: Ethical Supervision in the AI Era

Consider the case vignette from our preprint. Jordan, an experienced school psychologist, discovers that her practicum student, Rowan, used an AI tool to help write a psychoeducational report. Rowan believed it was harmless because names were removed—but this still violated confidentiality, program policy, and the spirit of informed consent.

Jordan faced a dilemma: how to uphold ethics while treating Rowan’s error as a learning opportunity. Using NASP’s ethical problem-solving framework, Jordan clarified the issues, consulted with colleagues, considered developmental factors, and responded with proportionate consequences. Rowan was required to rewrite the report independently and reflect on ethical decision-making.

This example illustrates that ethical AI supervision requires more than enforcement—it calls for guidance, transparency, and structured reflection. AI will continue to challenge the boundaries of professional training, but with thoughtful supervision, it can also deepen ethical awareness.

6. Moving Toward Ethical AI Systems

The next stage of ethical AI integration involves systematic monitoring and accountability. Districts can implement regular audits of AI tool use, track outcomes, and identify unintended consequences. Data from these reviews should inform ongoing revisions to policy and training.

Most importantly, psychologists and educators should advocate for human-centered AI—systems designed to amplify, not replace, human expertise. The future of school psychology will depend on maintaining the human connection that technology can never replicate: empathy, judgment, and moral reasoning.

Conclusion and Call to Action

AI in school psychology is no longer theoretical—it’s already embedded in how many practitioners work. What determines whether it strengthens or undermines the field is not the technology itself but the ethical framework guiding its use.

Our preprint, Smart Tools, Wise Practice: Ethical Integration of AI in School Psychology, offers detailed guidance, examples, and implementation recommendations for individuals and institutions.

Ethical innovation requires deliberate action. Start by reviewing your current practices, engaging colleagues in policy discussions, and ensuring that AI tools serve—not substitute—the professional values that define our field.

Full transparency: AI was used to help generate this blog content.
