This study explores the opportunities, risks, and ethical considerations surrounding the use of artificial intelligence (AI) systems in mental healthcare and provides recommendations for their responsible implementation and regulation.
Healthcare is one of the leading sectors for AI deployment in the EU. In mental healthcare, AI systems are used in diverse ways, from administrative tasks to communication platforms, professional decision support, digital therapies such as chatbots and personal sensing technologies, and patient monitoring. Beyond healthcare settings, AI applications that use mental health-related data also extend to criminal justice, consumer products, education, and employment.
AI systems offer significant potential benefits, including improved accessibility to mental health support, particularly for underserved populations, and reduced administrative burdens in healthcare systems. Clinically, proponents advance AI for personalising treatments, improving diagnostic accuracy, and supporting timely interventions. Additionally, AI can accelerate mental health research and facilitate the development of novel therapeutic approaches, such as virtual reality therapies. Socially oriented opportunities include enhancing peer support and practitioner training, promoting networked collaboration, and fostering greater community connections.
However, AI systems in mental health also pose serious risks, which can be identified at three levels: individual, collective and societal. At the individual level, concerns include safety risks, privacy violations and inadequate informed consent. At the collective and societal levels, challenges include entrenching existing inequalities or creating new ones, oversurveillance, reinforcing individualistic views of mental health, depersonalisation of care, and diverting limited resources.
The report highlights key principles and possible courses of action for policymakers to address these risks by balancing innovation with ethical responsibility and human-centred care in AI-driven mental health systems. It argues that AI tools need to be developed with ethics, inclusivity, accuracy, safety and the genuine needs of end users in mind. Possible solutions centre on robust regulation and oversight, transparency and explainability, and human rights-based and co-creation approaches.
The active participation and leadership of affected communities – people with lived experience – should be regarded as a fundamental ethical and political principle guiding all discussions and decisions on digitalisation and mental health. Ensuring that they have an equal voice in shaping policies and innovation is essential to achieving fair and balanced outcomes and preventing the deepening of health inequalities in society.
By Piers Gooding, Hannah van Kolfschooten & Francesca Centola (for Mental Health Europe)
Link to report: https://www.mentalhealtheurope.org/wp-content/uploads/2025/02/Study-on-AI-in-mental-health-care-final-for-publication.pdf

