Between Care and Code: Clinicians’ Perspectives as an Ethical Lens for AI in Mental Health – Insights from an Interview-Based Case Study
Abstract
Artificial intelligence (AI) is rapidly transforming mental health care, opening new possibilities for diagnosis, treatment, documentation, and access to services. While AI tools promise efficiency, scalability, and innovation, they also raise concerns about empathy, ethics, bias, inclusivity, and professional identity. This paper explores clinicians’ responses to the integration of AI in mental health, highlighting the tensions that emerge between efficiency and empathy, automation and therapeutic presence, and access and equity. Drawing on thematic analysis of recent empirical and conceptual literature, the discussion identifies three central themes: (1) clinicians’ ambivalence toward AI reflects both anxiety about displacement and optimism about support; (2) the ethical challenges of AI, including risks of bias, loss of trust, and inequitable access, require careful design and governance; and (3) AI should supplement rather than replace human care, preserving therapeutic relationships as the cornerstone of mental health practice. The paper argues that this ambivalence is itself a valuable ethical resource: it expresses clinicians’ constructive visions for responsible innovation. By integrating these perspectives, the review underscores the need for balanced policies, inclusive design, and ongoing dialogue among clinicians, developers, and policymakers. A conceptual table illustrates the interplay of efficiency, empathy, and equity in shaping clinicians’ experiences and expectations. Overall, the paper contributes to critical debates on the future of AI in mental health, emphasizing that sustainable adoption depends not only on technological advancement but also on protecting the human dimensions of care.
Keywords: Artificial Intelligence, Mental Health, Clinicians, Ethics, Ambivalence, Therapeutic Care
No funding source declared.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.