AI in Special Education: Legal Insights and Leadership Imperatives
I recently came across a really insightful article by Thomas A. Mayes titled "Artificial Intelligence, Special Education, and the Law: Risks, Rewards, and Opportunities for Leadership," published in the Journal of Business & Technology Law. It's a must-read for school psychologists because it unpacks the complex relationship between AI, special education, and the law.
Mayes highlights just how pervasive AI has become in our daily lives, from phone security to marketing. Its rapid development has the potential to transform industries and economies, with predictions that it will generate $13 trillion for the global economy by 2030. This growth isn't without risks, however: one study suggests that "50% of AI researchers believe that there is a 10% or greater chance that humans will go extinct from our inability to control AI" (p. 116). This stark warning underscores the need for "meaningful ethical standards" and caution in AI's use.
The article really got me thinking about the specific perils of AI in special education practice. Here are some key takeaways:
Undermining the Team-Based Approach: Special education decisions, such as eligibility and instruction, are made by teams, and those teams operate under procedural safeguards. AI, despite its capabilities, is not a team member; it is merely a tool. We cannot allow AI to take over critical decision-making, as doing so would compromise the rights of team members and the human judgment essential to these processes.
Threat to Individualized Education Programs (IEPs): The Individuals with Disabilities Education Act (IDEA) requires an individualized education program (IEP) tailored to a child's unique needs, designed by appropriately licensed professionals. AI's machine-driven pedagogy, built on proprietary algorithms, risks supplanting human judgment. Research suggests that when we hand drafting over to a program like ChatGPT, we tend to be content with the output, disenfranchising ourselves from critical thinking and revision.
Limitations with Unique and Novel Needs: AI's predictive abilities are strongest with large datasets and stable conditions, but they become less reliable in rapidly changing or novel circumstances. For children with complex or co-occurring disabilities (e.g., autism with intellectual disability or blindness), AI's general outputs may not reliably map onto their truly unique needs. Uncritically accepting AI's output could mean the child serves the AI, rather than the other way around.
Questionable Reliance on Peer-Reviewed Research: Special education instruction should be evidence-based. However, AI is susceptible to "erroneous inputs" and the "garbage in, garbage out" phenomenon from its aggregated data. Popular but weakly evidence-based approaches, like learning styles or whole language instruction, could be suggested by AI due to their prevalence in its data, despite limited scientific support. AI lacks the discernment to evaluate research quality and might even fabricate facts or case citations.
Inherent Bias: AI is particularly susceptible to the biases present in its training data and from its users. Given the long-standing issue of racial disproportionality in special education, there's a significant risk that AI could further entrench existing biases. The "black box" nature of commercial AI makes detecting these biases difficult, necessitating human verification of outputs.
Serious Privacy Concerns: Using commercially available AI risks violating privacy laws like the Family Educational Rights and Privacy Act (FERPA). Inputting personally identifiable information (PII) about students into an AI query, especially a commercial one, likely violates FERPA's consent requirements. That information can become part of the AI's large language model, risking further disclosure through data breaches or sophisticated queries by malicious actors. This is because "you are paying [for 'free' AI engines and tools] with your data" (p. 120).
The Indeterminacy of Special Education: Special education is inherently complex, with evolving evidence, ambiguous data, and the unique humanity of each student. AI, which thrives on predictability, loses its power and reliability in these novel or unpredictable circumstances.
So, how do we navigate this landscape? The article offers excellent recommendations for accessing AI's benefits while avoiding its perils:
AI as a "Rough Drafting Service": AI can be profitably used to prepare initial drafts of routine documents like notices, letters to parents, or plain language summaries of technical evaluation reports.
Support for Compliance and Improvement: AI, particularly custom-built systems, could analyze large datasets of IEPs for patterns (e.g., lookalike goals, alignment with student needs) or identify trends in discipline referrals; the first sketch after this list shows what lookalike-goal detection might involve.
Human Confirmation is Non-Negotiable: All AI-generated outputs must be treated as tentative and subject to human confirmation and verification. An AI-drafted IEP goal might be inappropriate for reasons no one has articulated to the system. Furthermore, AI cannot be used to make final decisions.
Avoid Offshoring Critical Decisions: It is inappropriate for educators to use AI to draft final documents, make conclusions, or allocate resources. This "self-disenfranchisement" goes against the duties of a learned profession. Humans make decisions; AI makes predictions.
Strict Privacy Protocols: Never input personally identifiable data into commercially available AI systems without explicit parental consent, as the school loses control of that information. Schools should set clear parameters for AI use, especially under "bring-your-own-device" policies; the second sketch after this list illustrates one possible redaction step.
Leadership and Education are Key: Education leaders have a responsibility to monitor AI use and educate staff about its risks and rewards. Corrective actions for misuse must target the root cause, whether it's overwork, lack of awareness, or other factors.
Human Oversight for Moral and Ethical Decisions: The article emphasizes that special education has a profound moral and ethical dimension, particularly concerning resource allocation and who benefits. Since AI lacks a moral compass, decisions with ethical dimensions must ultimately be made by humans. Allowing technology to make moral decisions without human oversight is "an abandonment of the moral project" (p. 129) because only humans bear the moral accountability for such actions.
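To make the compliance-and-improvement idea concrete, here is a minimal sketch of how "lookalike" IEP goals might be flagged. This is my own illustration, not anything from Mayes's article: the sample goals, the TF-IDF text-similarity approach (via scikit-learn), and the 0.8 threshold are all assumptions chosen for demonstration.

```python
# Hypothetical illustration (not from the article): flag IEP goals whose
# wording is nearly identical, a possible sign of copy-paste rather than
# individualization. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

goals = [
    "Student will read grade-level text with 90% accuracy in 4 of 5 trials.",
    "Student will read grade-level passages with 90% accuracy in 4 of 5 trials.",
    "Student will solve two-step word problems with 80% accuracy.",
]

# Represent each goal as a TF-IDF vector and compare all pairs.
similarity = cosine_similarity(TfidfVectorizer().fit_transform(goals))

THRESHOLD = 0.8  # illustrative cutoff, not a validated standard
for i in range(len(goals)):
    for j in range(i + 1, len(goals)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Goals {i} and {j} look alike (similarity {similarity[i, j]:.2f})")
```

Note that even this simple analysis only surfaces candidates for human review; deciding whether a flagged goal is actually boilerplate remains a team judgment.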
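And to pair with the privacy recommendation, here is an equally minimal sketch of a redaction pass run before any text reaches a commercial AI tool. Again, this is my illustration rather than the article's: the redact helper and its regex patterns are assumptions, would miss many identifiers, and are no substitute for parental consent or district policy.

```python
# Hypothetical illustration (not from the article): strip obvious student
# identifiers from text before it is ever pasted into a commercial AI tool.
# Pattern-matching like this is incomplete; it supplements, not replaces,
# consent requirements and district privacy policy.
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # social security numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # dates, e.g., birth dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"), # email addresses
]

def redact(text: str, student_names: list[str]) -> str:
    """Replace known student names and common identifier patterns."""
    for name in student_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

note = "Jane Doe (DOB 4/12/2015, jdoe@school.org) struggles with decoding."
print(redact(note, ["Jane Doe"]))
# -> [STUDENT] (DOB [DATE], [EMAIL]) struggles with decoding.
```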
In conclusion, AI offers compelling opportunities to enhance efficiency in special education, but its use must be carefully prescribed and tempered with extreme caution. Our ultimate goal should be to leverage these technologies while always preserving the best of human relationships and judgment. America's children with disabilities deserve nothing less than our diligent, informed, and human-centered decision-making.