Smart Tools, Wise Practice, Part 1: Applying Ethical Principles to AI Use

Artificial intelligence (AI) is being adopted across education faster than our ethical frameworks can keep up. School psychologists, like many helping professionals, are experimenting with tools that summarize reports, generate recommendations, and streamline communication. Yet many practitioners are doing so in the absence of clear standards or policies.

Our new preprint, Smart Tools, Wise Practice: Ethical Integration of AI in School Psychology (now available on PsyArXiv), addresses this gap by translating existing APA and NASP ethical principles into guidance for the responsible use of AI. This first post highlights how those principles—well known to psychologists—can serve as a compass for navigating emerging technologies.

Beneficence and Nonmaleficence: Balancing Benefit and Risk

The first principle in the APA Ethics Code calls on psychologists to act for the benefit of others while avoiding harm. AI clearly offers benefits: it can reduce administrative workload, help organize information, and support clearer communication with teams and families. For school psychologists balancing high caseloads, these gains can translate directly into more time with students.

But beneficence has a counterpart—nonmaleficence. AI can reproduce cultural bias, overgeneralize from limited data, or omit the nuance required for sound professional judgment. For that reason, “psychologist-in-the-loop” use is essential. AI should inform—not replace—decision-making. Practitioners must verify outputs, question assumptions, and ensure that any AI-assisted product ultimately reflects their own professional analysis. The ethical goal is not to avoid AI but to use it deliberately, keeping the welfare of students at the center.

Fidelity and Responsibility: Maintaining Trust and Accountability

Fidelity speaks to honesty and trust within professional relationships, and responsibility reminds us that accountability cannot be outsourced to technology. When AI tools assist with writing or data analysis, psychologists remain responsible for the content and decisions that follow.

Transparency is key. Clients have the right to know when AI tools have contributed to reports, summaries, decision making, or communication. Informed consent and disclosure strengthen trust and model ethical transparency for colleagues and trainees. Even when AI improves efficiency, psychologists must remain clear: these tools are extensions of our practice, not replacements for it.

Integrity: Honesty in Representation and Communication

Integrity requires accuracy and truthfulness in how we describe our work. If AI tools contribute to a report or recommendation, presenting the output as solely one’s own is misleading. Ethical integrity means acknowledging the role of technology and ensuring that the resulting product accurately reflects both data and professional judgment.

Misrepresentation can occur in subtle ways—when an AI-generated paragraph is left unedited, or when a report’s tone implies a level of certainty that the underlying model cannot justify. Practicing integrity in the age of AI means developing new habits: verifying sources, correcting overstatements, and communicating honestly about what AI can and cannot do.

Justice: Ensuring Fairness and Access

The principle of justice extends beyond fairness in service delivery—it includes equitable access to the benefits of technology. AI has the potential to expand reach, especially in under-resourced schools, but inequities can widen if some districts or populations lack access, infrastructure, or training.

Bias within AI systems is a related justice concern. Algorithms trained primarily on Western or English-dominant data can produce outputs that overlook or misrepresent diverse communities. School psychologists should evaluate whether AI tools enhance fairness or unintentionally reinforce disparities. Justice demands a critical stance: using AI to serve all students equitably, not just those with the most resources or representation in training data.

Respect for People’s Rights and Dignity: Protecting Privacy and Autonomy

Respect for rights and dignity centers on privacy, consent, and self-determination. In practice, this means never entering personally identifiable student information into systems that are not explicitly FERPA or HIPAA compliant. Even de-identified data can sometimes be reconstructed through inference, creating new privacy risks: in a small district, for example, the combination of grade level, school, and disability category may point to a single student.

Students and families also deserve a say in whether AI is used to process their information. Explaining how AI supports, but does not replace, professional judgment honors client autonomy and prevents misunderstanding. Respect begins with clarity: telling people not just that AI is used, but how and why it is part of their educational or psychological service.

NASP Framework: Complementary Guidance for Practice

The NASP Principles for Professional Ethics mirror the APA framework and bring additional, field-specific guidance.

  • Respecting the Dignity and Rights of All Persons connects directly to informed consent and confidentiality in AI use.

  • Professional Competence and Responsibility reminds practitioners to understand both the strengths and limitations of AI tools—and to ensure that their use aligns with validated practice.

  • Honesty and Integrity in Professional Relationships calls for honest representation when AI contributes to assessment or consultation work.

  • Responsibility to Schools, Families, Communities, the Profession, and Society extends the conversation to policy: school psychologists should help shape district guidelines that balance innovation with ethical safeguards.

Together, APA and NASP frameworks reinforce that ethical AI use is not a departure from established principles—it is their modern application.

Conclusion and Call to Action

Ethical AI use in school psychology begins with principles we already know: doing good while avoiding harm, telling the truth, treating people fairly, and protecting their privacy. These timeless commitments remain the foundation for navigating new tools.

Our full preprint, Smart Tools, Wise Practice: Ethical Integration of AI in School Psychology, expands on these ideas and offers practical steps for implementation, training, and policy development. You can read it now on PsyArXiv [link forthcoming].

Stay tuned for Part 2, where I’ll move from principles to practice—offering concrete recommendations for practitioners, trainers, and school systems building ethical AI infrastructures.
