APA's Ethical Guidance for AI in Professional Practice
The document, Ethical Guidance for AI in the Professional Practice of Health Service Psychology, is a vital resource. While not official APA policy, it offers essential considerations for health service psychologists, grounded in fundamental ethical principles like Beneficence and Nonmaleficence, Fidelity and Responsibility, Integrity, Justice, and Respect for People’s Rights and Dignity. It's a proactive step toward shaping the ethical, responsible, and equitable use of AI in our field. This document also supplements APA’s official guidance on evaluating AI tools, which I previously blogged about here.
Let's dive into some of the key recommendations that we, as health service psychologists, need to integrate into our practice as these technologies become more prevalent.
Key Ethical Considerations & Recommendations for Health Service Psychologists
The guidance emphasizes several critical areas where our vigilance and informed action are paramount:
1. Transparency & Informed Consent
Be thoughtful in disclosing AI use.
Consider adding information about AI tools in your written informed consent.
Inform patients/clients about AI’s role, its limitations, and its potential risks and benefits, so they can make genuinely autonomous decisions about their care.
Inform patients/clients that they have the right to opt out of certain AI-driven interventions, and be prepared to provide them with alternative options.
Inform patients/clients whom to contact if they have concerns or wish to withdraw consent for AI use in their care.
2. Mitigating Bias & Promoting Equity
Evaluate AI tools for potential biases. This includes looking at how tools were normed and what data the AI was trained on. Are diverse lived experiences included?
3. Data Privacy & Security
Review AI tools for compliance with relevant data privacy and security laws and regulations.
Be knowledgeable about how patient/client data are used, stored, and shared, and inform your patients/clients accordingly.
Make sure that strong cybersecurity measures (e.g., encryption and access controls) are in place.
4. Accuracy & Misinformation Risks
Critically evaluate AI-generated content.
Review AI tools for information on whether they have been validated by health care experts and are supported by transparent, high-quality evidence.
Use products and services that rely on AI models that have undergone rigorous accuracy testing to ensure reliability. Developers should make this testing data readily identifiable.
Integrate tools that disclose their training data source and provide evidence of validation.
5. Human Oversight & Professional Judgment
Establish clear human intervention points ("human in the loop").
Psychologists must have autonomy in approving or rejecting AI-generated recommendations.
6. Liability & Ethical Responsibility
Understand the legal and ethical risks associated with AI tool selection and use.
Negligent reliance on AI without proper validation or oversight could create liability issues.
Transparency and competence in AI are crucial for managing legal risks and ensuring ethical practice.
Participate in continuing education about developments in mental health AI and how to leverage it responsibly.
This blog post draws on information from the Ethical Guidance for AI in the Professional Practice of Health Service Psychology document.