The AI Revolution in Crisis Management: What Practitioners Need to Know

Artificial Intelligence is rapidly transforming how we respond to and manage crises—from large-scale emergencies to individual mental health situations. Whether you’re a psychologist, administrator, or communication professional, it’s becoming essential to understand both the opportunities and limitations AI brings to the table. In this post, we highlight the key takeaways from the emerging use of AI in crisis communication, mental health intervention, and broader crisis decision-making.

One of the most visible impacts of AI is in crisis communication. AI-powered tools, including Large Language Models (LLMs) and chatbots, are being deployed to improve both the speed and precision of crisis responses. These systems can help monitor public sentiment, draft tailored messages, and manage real-time information during critical events. For practitioners, this is an opportunity to automate repetitive tasks, disseminate crucial information rapidly, and engage stakeholders efficiently. AI cannot, however, fully grasp human emotions or navigate ethically sensitive situations on its own. Human oversight remains paramount for interpreting emotional cues and making compassionate decisions, so practitioners should treat AI as an enhancement to human-led strategies, one that enables quicker responses while keeping human expertise at the forefront.

Prompt engineering is becoming a vital skill in this work: it lets practitioners guide AI systems toward contextually relevant, ethical, and strategically effective communications. Organizations should also assess how much verified information is available before deploying chatbots, and recognize that while chatbots excel at delivering instructing information (what happened and what to do), human agents remain essential for adjusting information: the empathetic, reassuring communication people need, especially while a crisis is still unresolved.
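
A quick sketch can make the prompt-engineering point concrete. The snippet below assumes an OpenAI-style chat API; the model name, the system prompt, and the human-review framing are illustrative choices of ours, not tooling endorsed by the work summarized here.

```python
# Illustrative prompt-engineering pattern for crisis messaging.
# Assumes the official openai Python client; the model name and prompt
# wording are hypothetical, not a recommended production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You draft crisis communications for a school district. "
    "Use only the verified facts provided. Write in plain, calm language. "
    "Lead with instructing information (what readers should do), and mark "
    "any claim you are unsure about with [NEEDS HUMAN REVIEW]."
)

def draft_crisis_update(verified_facts: str, audience: str) -> str:
    """Return a draft for human sign-off -- never for direct release."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0.3,      # conservative, low-variance drafting
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Audience: {audience}\n"
                                        f"Verified facts:\n{verified_facts}"},
        ],
    )
    return response.choices[0].message.content

draft = draft_crisis_update(
    verified_facts="Water main break; campus closed today; buses rerouted.",
    audience="Parents and guardians",
)
print(draft)  # a communications professional edits and approves before sending
```

The design decision doing the real work is that the function only ever returns a draft: publication stays with a human, which keeps oversight where it belongs.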

In the realm of mental health crisis intervention, AI, particularly through Natural Language Processing (NLP), offers promising avenues for early detection and intervention. AI systems are being developed to rapidly detect and triage mental health crisis chat messages in telehealth settings, significantly reducing response times, and to monitor student activity for suicide risk, potentially enabling timely interventions. For mental health practitioners, these tools can augment the capacity to identify high-risk individuals and prioritize responses, especially given the sheer volume of incoming data and the limits of manual review. Ethical considerations around data privacy, algorithmic bias, and the need for human evaluation remain critical, however. It is essential to establish clear processes for responding to AI-based alerts, ensuring a coordinated effort between the technology and mental health support services. While AI can provide initial support and flag potential crises, it cannot replace the empathy and nuanced understanding of human mental health professionals, and practitioners must be trained to use these tools within ethical and professional guidelines.
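
To show what such triage looks like mechanically, here is a deliberately tiny sketch: a text classifier that routes flagged messages to a human responder. The four training messages, the model choice, and the 0.5 alert threshold are all invented for the demo; a deployable system would need clinical-grade data, proper validation, bias audits, and an agreed human-response protocol behind every alert.

```python
# Toy NLP triage sketch: flag possible crisis messages for human review.
# Data, model, and threshold are placeholders, not a clinical system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I can't see a way out anymore",                # crisis
    "I want to hurt myself tonight",                # crisis
    "Can I reschedule my appointment?",             # routine
    "Thanks, the breathing exercises are helping",  # routine
]
labels = [1, 1, 0, 0]  # 1 = potential crisis, 0 = routine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def triage(message: str, alert_threshold: float = 0.5) -> str:
    """Route one message; anything at or above threshold goes to a human first."""
    p_crisis = model.predict_proba([message])[0][1]
    if p_crisis >= alert_threshold:
        return f"ESCALATE TO HUMAN RESPONDER (p={p_crisis:.2f})"
    return f"routine queue (p={p_crisis:.2f})"

print(triage("I want to hurt myself"))  # shares wording with a crisis example
print(triage("Can I move my appointment?"))
```

Note that the classifier only prioritizes; the escalation path still ends at a trained human, mirroring the human-evaluation requirement above.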

Beyond communication and mental health, AI is being applied to broader crisis management and decision support. Its ability to process vast amounts of unstructured data from diverse sources allows emerging crises to be identified quickly, and predictive analytics can help anticipate potential crisis scenarios, enabling proactive communication and resource planning. For crisis managers and emergency responders, AI can provide real-time insights, aid resource allocation, and improve overall coordination. Crisis decisions, however, are characterized by complexity, urgency, high stakes, and ethical weight, and relying solely on AI is problematic given its limits in understanding human emotions, cultural contexts, and ethical nuance. A human-centered approach is crucial, one that prioritizes user needs, cross-sector collaboration, and cultural sensitivity. Practitioners should also be aware that increasing automation shifts power toward those who design AI protocols and algorithms, which makes careful scrutiny of potential biases and unintended consequences essential. Trustworthy data and explainable AI (XAI) are needed to ensure transparency and accountability in AI-driven decision-making.
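
As a small illustration of what "explainable" can mean here, the sketch below uses an interpretable linear model and surfaces each feature's contribution to an alert so the person on duty can audit it. The three risk features and the handful of historical rows are invented for the example.

```python
# Minimal XAI-flavored sketch: an interpretable crisis-escalation model
# whose per-feature contributions can be shown to the human on duty.
# Features, data, and scales are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["call_volume_spike", "negative_sentiment", "flood_gauge_level"]
X = np.array([
    [0.9, 0.8, 0.7],   # past event that escalated
    [0.2, 0.1, 0.1],   # past event that did not
    [0.7, 0.9, 0.6],   # escalated
    [0.1, 0.2, 0.2],   # did not
])
y = np.array([1, 0, 1, 0])  # 1 = escalated to a full crisis response

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> None:
    """Print each feature's contribution to the log-odds, largest first."""
    contributions = model.coef_[0] * sample
    for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:20s} {value:+.2f}")
    print(f"  intercept            {model.intercept_[0]:+.2f}")

sample = np.array([0.8, 0.7, 0.5])
print("P(escalation) =", round(float(model.predict_proba([sample])[0][1]), 2))
explain(sample)
```

A linear model is the bluntest form of explainability; the point is that whatever model is used, the responder should see why an alert fired, not just that it fired.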

Key Implications for Practitioners:

  • Prioritize Ethical Considerations: Be acutely aware of the ethical implications of using AI in crisis contexts, including data privacy, algorithmic bias, accountability, and the potential for misuse and mass surveillance (especially of children).

  • Focus on the Human Element: Remember that AI lacks empathy and a deep understanding of human emotions. Human practitioners must provide the crucial emotional intelligence and ethical judgment required in crisis situations.

  • Establish Clear Protocols for AI Integration: Develop well-defined workflows and response plans for utilizing AI-generated insights and alerts, ensuring seamless integration with existing practices (a toy protocol sketch follows this list).

  • Advocate for Trustworthy and Explainable AI: Promote the development and deployment of AI systems that are transparent, accountable, and built on reliable data, especially in sensitive areas like mental health.

  • Continuously Learn and Adapt: Stay informed about the rapid advancements in AI technologies and their applications in crisis management to effectively leverage their potential and mitigate risks.

  • Develop Prompt Engineering Skills: Learn how to effectively interact with and guide AI systems, particularly LLMs, to achieve desired outcomes in crisis communication.
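
The protocol sketch promised in the third bullet: a hedged illustration of how AI alerts might be bound to severities, owners, and response times in code. Every role name and timing below is a placeholder an organization would replace with its own plan.

```python
# Hedged sketch of an alert-handling protocol: every AI alert gets a
# severity, a responsible human, and a maximum response time. All values
# below are placeholders, not recommendations.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    INFO = "info"
    ELEVATED = "elevated"
    CRITICAL = "critical"

@dataclass
class AIAlert:
    source: str        # e.g. the hypothetical "chat-triage-model" above
    severity: Severity
    summary: str

# severity -> (human action, max response time in minutes); placeholder values
PROTOCOL = {
    Severity.INFO:     ("duty officer reviews in the daily digest", 24 * 60),
    Severity.ELEVATED: ("on-call counselor contacts the individual", 60),
    Severity.CRITICAL: ("crisis team lead is paged; emergency procedure starts", 5),
}

def route(alert: AIAlert) -> str:
    """Map an AI alert onto the predefined human workflow."""
    action, minutes = PROTOCOL[alert.severity]
    return f"[{alert.source}] {alert.summary} -> {action} (within {minutes} min)"

print(route(AIAlert("chat-triage-model", Severity.CRITICAL,
                    "high-risk language detected")))
```

Writing the workflow down this explicitly, whatever the medium, is what keeps an AI alert from dead-ending in an unmonitored inbox.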

As AI tools continue to evolve, they are becoming increasingly embedded in how we manage crises at both individual and organizational levels. But just as fire drills need humans to lead, AI tools need thoughtful professionals to guide them. The future of crisis response isn’t about AI replacing us—it’s about humans and machines working together to respond with speed, insight, and compassion.

🎧 Listen to the podcast version of this article
