Examining bias in school-based risk prediction
Key Points (For Readers on the Go)
School-based risk prediction tools are not neutral. They can reflect and amplify bias already present in school data and systems.
Students from marginalized groups may be harmed most. When biased data train a predictive system, inequitable outcomes can become harder to detect and harder to challenge.
These tools can create false confidence. A risk score may look objective, even when it rests on flawed assumptions or incomplete information.
School psychologists and educators need to stay critical. AI outputs should inform judgment, not replace it.
The real question is not just whether a tool works. It is whether it works fairly, transparently, and in ways that support students rather than stigmatize them.
Introduction
As schools adopt more data tools, many educators are hearing new terms like predictive analytics, early warning systems, and risk scoring. These tools are often presented as efficient, objective ways to identify which students may need support. But that promise deserves careful scrutiny.
This post is informed by the preprint Bias in School-Based Risk Prediction: Challenges for Equitable Practice, which I co-authored with Celeste Malone of Howard University. In that paper, we examine how bias can enter school-based risk prediction systems and why these tools may deepen inequities if schools use them uncritically.
For educators, school psychologists, and school leaders, this matters because these systems are not just technical tools. They shape who gets flagged, who gets watched more closely, and who is seen as “at risk” in the first place.
What Is School-Based Risk Prediction?
School-based risk prediction refers to systems that use student data to estimate the likelihood of a future outcome. That outcome might be dropout, chronic absenteeism, behavior concerns, self-harm risk, or academic failure. The goal is often framed as early identification so schools can intervene sooner.
On the surface, that can sound reasonable. Schools want to support students before problems get worse. But the moment a system starts assigning risk labels, it also starts shaping how adults interpret students.
That is where trouble can begin.
Why School-Based Risk Prediction Can Be Biased
One of the central arguments in our preprint is that bias in these systems is usually not accidental. It often begins with the data used to build them. If a model is trained on past school data that already reflect inequities in discipline, attendance enforcement, identification practices, or access to support, then the model may learn those patterns as if they are legitimate indicators of student risk.
For example, a student may be flagged as high risk not because of an internal trait, but because the student attends an under-resourced school, has experienced exclusionary discipline, or lives in a context shaped by structural disadvantage. A model may treat those patterns as prediction signals rather than evidence of systemic inequity.
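To make this concrete, here is a small, invented simulation (not drawn from our preprint) of how this can happen. In this toy world, students' actual level of need is identical across school types, but historical flags were applied more often in under-resourced schools. A model trained on those flags then learns school context as a "risk signal." All variable names and numbers are made up for illustration.

```python
# A minimal, hypothetical sketch of how a model can learn structural
# disadvantage as a "risk signal." All numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented feature: 1 = student attends an under-resourced school.
under_resourced = rng.binomial(1, 0.4, n)
# Underlying need is the same across school types in this toy world.
actual_need = rng.binomial(1, 0.2, n)

# Historical "risk" labels are biased: students in under-resourced schools
# were flagged more often regardless of actual need (assumed bias rate).
p_flag = 0.1 + 0.5 * actual_need + 0.25 * under_resourced
historical_flag = rng.binomial(1, np.clip(p_flag, 0, 1))

X = np.column_stack([actual_need, under_resourced])
model = LogisticRegression().fit(X, historical_flag)

# The model assigns real predictive weight to school context, not because
# context causes need, but because past flagging practices did.
print("coefficient on actual_need:     ", model.coef_[0][0])
print("coefficient on under_resourced: ", model.coef_[0][1])
```

In this sketch the model is working exactly as designed, and that is the point: it faithfully reproduces the bias in the historical labels.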
This is one reason school-based risk prediction deserves more caution than many vendors acknowledge.
The Problem With “Objective” Risk Scores
Many people assume that a computer-generated score is more objective than human judgment. In practice, that assumption is often too simple.
A risk score can look precise while hiding messy decisions underneath. Someone chose the data inputs. Someone chose what counts as risk. Someone decided how the system would be evaluated. And in many cases, schools are asked to trust a tool without being able to fully inspect how it works. The preprint describes this as a problem of black-box opacity—in plain language, the system gives an answer without clearly showing how it got there.
That lack of transparency matters. If a student is labeled high risk, families and professionals should be able to understand why. If they cannot, it becomes harder to challenge errors and harder to see whether bias is operating.
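Here is a tiny, invented illustration of just one of those hidden decisions: where to draw the cutoff for "high risk." The scores and names below are made up.

```python
# A small illustration (invented numbers) of one hidden human decision:
# where to draw the "high risk" cutoff.
scores = {"Student A": 0.62, "Student B": 0.58, "Student C": 0.71, "Student D": 0.55}

for cutoff in (0.55, 0.60, 0.70):  # each cutoff is a defensible-sounding choice
    flagged = [name for name, s in scores.items() if s >= cutoff]
    print(f"cutoff {cutoff:.2f} -> flagged: {flagged}")

# Same model, same students. Who gets labeled depends on a choice
# the score itself does not reveal.
```

Each cutoff flags a different set of students, and nothing in the score tells a family or practitioner which choice was made or why.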
How Bias Can Get Worse Over Time
One of the most concerning patterns is the possibility of a feedback loop.
Imagine a student is flagged by a system as behaviorally risky. Adults may then watch that student more closely, interpret behavior more negatively, and document more incidents. Those new incidents then become more data that seem to confirm the original prediction. In other words, the system can help create the very pattern it claims to detect.
This is especially dangerous in schools because labels can become self-fulfilling. Once a student is seen through a risk lens, that label can influence referrals, meetings, disciplinary responses, and even how educators interpret ambiguous behavior.
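A deliberately simplified sketch, with invented numbers, can show how quickly this loop takes hold. In this toy scenario, two students behave identically, but one starts out flagged and is therefore watched twice as closely.

```python
# A toy, deterministic sketch (all numbers invented) of the feedback loop:
# two students with identical behavior; one starts out flagged. Flagged
# students are watched more closely, so more of the same behavior gets
# documented, and the documented record drives the next flag.
TRUE_INCIDENTS_PER_YEAR = 2.0  # identical underlying behavior for both
SCRUTINY_MULTIPLIER = 2.0      # assumed: flagged students are watched 2x as closely
FLAG_THRESHOLD = 3.0           # documented incidents per year that triggers a flag

def run_years(flagged, years=4):
    record = []
    for _ in range(years):
        scrutiny = SCRUTINY_MULTIPLIER if flagged else 1.0
        record.append(TRUE_INCIDENTS_PER_YEAR * scrutiny)  # documented, not true
        flagged = record[-1] >= FLAG_THRESHOLD  # next flag uses documented data
    return record

print("documented incidents, starts flagged:  ", run_years(True))   # stays high
print("documented incidents, starts unflagged:", run_years(False))  # stays low
```

The two students never behave differently, yet their documented records diverge permanently based on nothing but the initial flag.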
Why This Matters for School Psychologists
School psychologists may be asked to interpret, explain, or act on these systems even when they were never trained to evaluate them. That is a major concern. The preprint argues that school psychology has important gaps in algorithmic literacy, meaning many practitioners have not been prepared to critically examine how these systems work, how bias enters them, or how to question vendor claims.
There is also a broader professional issue. If practitioners rely on tools they do not fully understand, especially in high-stakes situations, that raises ethical concerns. It can also make it easier for institutional bias to hide behind the language of data and efficiency.
That does not mean school psychologists should avoid all data tools. It means they should approach them with professional skepticism, strong ethical judgment, and attention to equity.
Practical Examples From Schools
Example 1: Early warning for dropout
A district uses an early warning system to flag students at risk of not graduating. The system heavily weighs attendance, discipline, and course failures. Students in historically under-resourced settings are flagged at higher rates. Staff interpret these flags as proof of individual student risk rather than as signs of systemic need.
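A hypothetical sketch of this dynamic, with invented weights and records: two students put in similar academic effort, but one deals with a cut bus route and a school that relies on exclusionary discipline.

```python
# A hypothetical sketch of the kind of weighting such a system might use
# (weights and records invented for illustration).
WEIGHTS = {"absences": 0.5, "discipline_referrals": 1.5, "course_failures": 2.0}

students = {
    # Reliable transportation; school uses restorative discipline.
    "student_1": {"absences": 4, "discipline_referrals": 0, "course_failures": 1},
    # Bus route cut; school relies on exclusionary discipline.
    "student_2": {"absences": 14, "discipline_referrals": 3, "course_failures": 1},
}

for name, record in students.items():
    score = sum(WEIGHTS[k] * v for k, v in record.items())
    print(f"{name}: risk score = {score}")

# student_2 scores far higher, yet the gap reflects transportation and
# discipline policy as much as anything about the student.
```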
Example 2: Behavioral risk monitoring
A school adopts software that identifies students who may pose a behavior or safety concern. Students who are already watched more closely generate more documented data. That increased documentation can reinforce existing disparities.
Example 3: Mental health screening with AI support
A school explores AI tools that analyze writing or digital content for signs of distress. If the tool has not been validated fairly across student groups, some students may be over-flagged while others are missed. Either outcome is a problem.
In each of these examples, the issue is not only whether a prediction is accurate. The issue is whether the system is fair, transparent, and used in a way that protects student dignity.
What Schools Should Do Instead
Schools do not need to reject every tool with predictive features. But they do need stronger guardrails.
A better approach includes:
asking what data were used to build the tool
asking whether the system was validated across different student groups (a sketch of this kind of check appears below)
asking whether the model can be explained in terms educators and families can understand
using predictions as one data point, not as a conclusion
looking for patterns in classrooms, policies, or systems rather than only locating “risk” inside the child
This last point is especially important. A system may flag many students, but the real issue may be something structural: transportation barriers, exclusionary discipline, inconsistent instruction, or poor access to support.
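As one example of what the validation question above can look like in practice, here is a minimal sketch of a disaggregated check a district could request from a vendor. The data, group labels, and column names are hypothetical; a real audit would use validated outcomes and locally meaningful groups.

```python
# A minimal sketch of a disaggregated audit. All records are invented.
from collections import defaultdict

# Each record: (group, flagged_by_tool, actually_needed_support)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_a", False, True), ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"flags": 0, "false_pos": 0, "no_need": 0, "n": 0})
for group, flagged, needed in records:
    c = counts[group]
    c["n"] += 1
    c["flags"] += flagged
    if not needed:
        c["no_need"] += 1
        c["false_pos"] += flagged

for group, c in counts.items():
    print(f"{group}: flag rate = {c['flags']/c['n']:.2f}, "
          f"false positive rate = {c['false_pos']/c['no_need']:.2f}")

# Large gaps between groups on either rate suggest the tool was not
# validated fairly across the student populations it will be used on.
```

A vendor that cannot or will not support this kind of breakdown is asking schools to take fairness on faith.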
Final Takeaways
School-based risk prediction tools can reproduce and intensify existing inequities.
Risk scores are not automatically objective just because they come from a computer.
Educators and school psychologists should treat these systems as decision-support tools, not decision-makers.
Schools should focus on transparency, fairness, and structural causes of student difficulty.
The most important question is not whether a tool is impressive, but whether it helps students in equitable and responsible ways.
Internal Links
For related reading on AI, ethics, and school practice:
Related Blog: Mitigating AI Bias in School Psychology: Why It Matters and What We Can Do
Preprint: Bias in School-Based Risk Prediction: Challenges for Equitable Practice
Call to Action
If your school, district, or professional organization is trying to make sense of AI tools in education, this is exactly the kind of conversation worth having before adoption moves too far ahead of practice. For more practical guidance on ethical AI use in schools, subscribe through the blog page or reach out through Lockwood Educational & Psychological Consulting.
AI Use Disclosure - Portions of this post were drafted with the assistance of an AI writing tool and revised by the author for accuracy, clarity, and professional judgment.