What Psychologists Were Asking About AI in 2025

For Psychologists on the Go (Key Takeaways)

  • Not everything marketed as “AI” raises ethical consent concerns; generative AI does

  • Publisher scoring software and report narratives are usually not the same as generative AI

  • Informed consent is not triggered only by identifiable data—interpretive influence matters

  • Most psychologists are not yet communicating much about AI—but should begin doing so

  • Ethical guidance is distinct from legal advice; both matter

Ethical Guidance Disclaimer

I am not a lawyer, and nothing in this post should be construed as legal advice. The guidance below reflects ethical and professional considerations related to AI use in psychology, not legal requirements. Practitioners and districts should consult legal counsel regarding applicable laws, policies, and compliance obligations.

Throughout 2025, I’ve given dozens of trainings on AI use in psychology and school psychology: conference sessions, district PD, graduate programs, and invited talks. What’s been striking is not how different the questions are from setting to setting, but how consistent they are.

This post brings together the most common questions I’ve been asked—and how I respond from an ethical and professional perspective. The goal is not to promote AI use or resolve legal questions, but to clarify how transparency, consent, and professional judgment intersect as practice evolves.

1. “Everything is being marketed as AI—what constitutes ‘GenAI’?”

From an ethical standpoint, the line is not whether a tool uses algorithms.

The meaningful distinction is whether a tool:

  • Generates novel content

  • Influences interpretation or recommendations

  • Introduces new risks (e.g., bias, hallucinations, confidentiality concerns)

Many commonly used tools (scoring software, templates, spell checkers) are automated but deterministic: the same input always produces the same output. Generative AI systems are probabilistic and can meaningfully shape professional language and conclusions. That difference is what matters ethically. In practice, generative AI often functions as a co-author.

2. “Are publisher-generated reports (e.g., Q-global) a form of AI?”

Usually, no.

Publisher-generated reports:

  • Apply fixed scoring rules

  • Produce standardized narratives tied directly to scores

  • Are part of the assessment system itself

  • Generate the same output for the same input

They are automated, but they typically do not generate novel reasoning or synthesis.

That said, as publisher tools evolve, it remains important to examine what a system is actually doing, not how it is marketed. If a tool begins generating interpretive language beyond fixed rules, ethical considerations may shift.

3. “Do we only need informed consent if identifiable student or client data are used?”

No. Identifiability is one ethical trigger—but not the only one.

Informed consent may also be warranted when a tool:

  • Meaningfully contributes to interpretation or recommendations (is it essentially acting as a co-author?)

  • Generates authoritative-sounding content

  • Introduces risk of bias or error

  • Shapes how conclusions are communicated

Ethical practice in psychology extends beyond data privacy to include transparency, accountability, and client autonomy.

4. “Should AI disclosure be part of the evaluation consent form, or a separate document?”

Either approach can be ethically appropriate.

What matters is that disclosure is:

  • Clear

  • Explicit

  • Understandable

  • Documented

A separate AI disclosure or consent addendum is often easier to update and avoids burying important information, but embedding AI language in existing consent materials can also work if done thoughtfully.

5. “Is verbal consent sufficient, or does this need to be in writing?”

I would not be comfortable relying on verbal consent alone. Written consent promotes clarity, consistency, and shared understanding.

A useful check is simple:

Is verbal consent sufficient for other assessment-related or data-sharing decisions?

If not, written AI disclosure is the more ethically defensible approach.

6. “What, exactly, are psychologists communicating to parents or clients about AI?”

Based on trainings and survey data, the honest answer is: not enough—yet.

Most psychologists report:

  • Informal AI use

  • Limited or no structured disclosure

  • Uncertainty about what to communicate

This reflects how quickly these tools entered practice, not bad faith. Still, ethical practice calls for greater transparency.

At a minimum, disclosure should address:

  • How AI may be used

  • What AI does not do (it does not replace professional judgment)

  • Known limitations and risks

  • How confidentiality is handled

  • Client or parent rights

Actionable Next Steps

If you are a psychologist currently using—or considering using—AI:

  1. Review your consent process

    Ask whether AI use is clearly explained and documented.

  2. Use structured disclosure language

    Avoid vague references to “technology” or “software.”

  3. Adopt or adapt a written AI consent form

    I have made available a sample informed consent form for AI use that psychologists and districts can adapt to their context.

  4. Stay informed about evolving practice norms

    I have contributed to ongoing surveys of health service psychologists and school psychologists that track how AI is actually being used and where ethical guidance is most needed.

Where I Intentionally Stop

This post does not address legal discoverability, litigation risk, or jurisdiction-specific requirements, though I am asked about all of these frequently. Those are legal questions and should be directed to legal counsel. Ethical guidance is not a substitute for legal review.

AI Disclosure

Portions of this post were drafted with the assistance of an AI writing tool and reviewed, revised, and finalized by the author for accuracy, clarity, and professional judgment.
