When people become data points – the potential impact of AI in mental healthcare

Hannah van Kolfschooten & Janneke van Oirschot

Introduction

Artificial intelligence (AI) holds great promise for transforming mental healthcare. From personalized treatment plans to early detection of mental health conditions, AI could make mental health services more accessible and effective. AI systems are already being developed for various applications, including diagnostic tools, therapeutic support (such as chatbots), monitoring of mental health through wearable devices, and even educational tools aimed at promoting mental health literacy. These applications span both clinical and non-clinical settings, addressing a broad spectrum of conditions from depressive disorders and anxiety to non-medical issues such as loneliness. Despite its potential, AI introduces new risks that extend beyond individual patients to broader societal concerns, raising questions about equity, safety, and ethics.

To understand these complexities, it is useful to categorize AI applications in mental healthcare according to their primary purposes: screening, monitoring, diagnosis, treatment, and education. Screening tools often rely on natural language processing (NLP) or machine learning to detect signs of mental health conditions from social media activity or smartphone usage. Monitoring tools, such as wearable devices, track biometric or behavioral data to identify patterns indicative of mental health changes. Diagnostic tools use neuroimaging, voice pattern analysis, and other forms of data to assist in identifying mental health conditions. Treatment applications include AI-driven cognitive behavioral therapy (CBT) chatbots or emotion recognition software for exposure therapy. Finally, educational tools provide psychoeducation and training for clinicians and patients using conversational AI or virtual learning environments. This typology not only underscores the versatility of AI in mental healthcare but also highlights the potential risks that arise with its adoption.
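
To make this typology concrete, the sketch below arranges the five categories as a simple data structure. It is purely illustrative: the class and field names are assumptions introduced here, and the techniques and examples are taken from the descriptions above rather than from any particular product.

```python
from dataclasses import dataclass

@dataclass
class AIApplication:
    """One category in the typology of AI applications in mental healthcare."""
    purpose: str            # primary purpose (screening, monitoring, ...)
    techniques: list[str]   # typical underlying techniques named in the text
    examples: list[str]     # illustrative use cases

TYPOLOGY = [
    AIApplication("screening",
                  ["natural language processing", "machine learning"],
                  ["detecting signs of mental health conditions in social media or smartphone use"]),
    AIApplication("monitoring",
                  ["wearable sensors", "behavioral analytics"],
                  ["tracking biometric or behavioral data to flag changes in mental health"]),
    AIApplication("diagnosis",
                  ["neuroimaging analysis", "voice pattern analysis"],
                  ["assisting clinicians in identifying mental health conditions"]),
    AIApplication("treatment",
                  ["conversational AI", "emotion recognition"],
                  ["CBT chatbots", "emotion recognition software for exposure therapy"]),
    AIApplication("education",
                  ["conversational AI", "virtual learning environments"],
                  ["psychoeducation and training for clinicians and patients"]),
]

for app in TYPOLOGY:
    print(f"{app.purpose}: {', '.join(app.techniques)}")
```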

Potential risks can be identified at three levels. At the individual level, concerns include misdiagnosis, inappropriate treatment recommendations, and privacy breaches. At the collective level, issues such as biased datasets, accessibility barriers, and the marginalization of vulnerable groups come to the forefront. At the societal level, challenges like over-surveillance, erosion of trust in healthcare, and the commodification of mental health services emerge, revealing broader implications for equity and justice. Addressing these risks requires a comprehensive and inclusive approach to AI development and governance. This policy paper applies a multi-level risk framework, based on Nathalie Smuha’s typology of individual, collective, and societal harms, to analyze these challenges and propose actionable solutions. By exploring these risks holistically, this paper aims to provide pathways for responsibly integrating AI into mental healthcare while safeguarding individual rights, collective interests, and societal values.

Individual Risks:

  • Health risks: Mental health apps pose significant health risks through misdiagnosis, misleading feedback, a lack of contextual understanding, and over-reliance on app-generated advice. These shortcomings can delay treatment, produce inappropriate responses in critical situations, and exacerbate users’ distress, as illustrated by Woebot’s mishandling of abuse disclosures and the Eliza chatbot’s reported role in a suicide.
  • Privacy and autonomy: AI mental health tools risk exposing sensitive user data to breaches and unauthorized sharing, undermining trust and autonomy through opaque recommendation processes and inadequate privacy safeguards, as demonstrated by BetterHelp’s 2021 data-sharing controversy.

Collective Risks:

  • Exclusion of marginalized population groups: AI mental health tools often exclude marginalized populations due to biased training data, cultural and linguistic limitations, inaccessibility for low-income or disabled users, and a lack of personalized approaches, leading to disparities in mental health support and effectiveness.
  • Exploitation of vulnerable groups: Companies may exploit vulnerable individuals with mental health conditions by marketing unproven or overpriced AI tools, risking harm, undermining trust in mental healthcare, and leaving patients without effective support.
  • Medicalization and reductionist disease conceptions: The dominance of biological data in health records and the limitations of AI in capturing complex social and emotional aspects of mental health can lead to an overemphasis on biological determinants, medicalization of everyday life, and an unjust focus on individual responsibility for mental health.
  • Additional risks for institutionalized people: AI in institutional settings risks amplifying the loss of autonomy and dignity for people with mental health conditions by automating decision-making, standardizing practices that are hard to opt out of, and perpetuating paternalistic, coercive, and stigmatizing approaches in care.

Societal Risks:

  • Over-surveillance and dehumanization: AI-driven mental health tools risk eroding privacy through excessive data collection, fostering over-surveillance that deepens power imbalances, and diminishing human empathy in therapeutic contexts, leading to dehumanized care.
  • Erosion of trust in healthcare: Errors, opacity, and unreliability in AI decision-making can undermine trust in mental healthcare systems, discouraging patients from seeking care and harming provider-patient relationships.
  • Increasing health disparities: Biases in AI systems and systemic barriers exacerbate health disparities, perpetuating stereotypes, misdiagnoses, and inequitable access to care for marginalized populations, deepening societal inequalities.
  • Efficiency over quality: The prioritization of cost-saving and time-saving AI technologies, driven by financial incentives, risks overshadowing considerations of care quality and patient well-being in mental healthcare.
  • Mental health AI as a profitable business: The commodification of mental healthcare prioritizes profit over patient needs, exploiting sensitive data for commercial gain while concentrating power in private entities, undermining equitable care and public trust.

Addressing Individual Risks of AI in Mental Healthcare

  1. Mandatory Evidence-Based Validation:
    • Introduce national certification requirements that mandate clinical trials for AI systems in mental healthcare before deployment, similar to drug approvals.
    • Develop standardized protocols for testing the safety, efficacy, and reliability of AI tools, with independent oversight to ensure unbiased results.
  2. Emergency Protocols for High-Risk Scenarios:
    • Implement legal mandates requiring AI systems to have “human-in-the-loop” mechanisms for handling mental health crises (a minimal escalation sketch follows this list).
    • Develop a national registry of qualified mental health professionals available for immediate escalation when AI flags emergencies.
  3. Enhanced Privacy Protection:
    • Enact mental health-specific data protection laws that require explicit consent for data use and implement encryption standards for sensitive data storage and transmission.
    • Impose significant financial penalties for data breaches involving mental health information to deter misuse and negligence.
  4. Transparency Certification Programs:
    • Establish government-backed certification systems to verify AI tools’ compliance with ethical and safety standards, requiring clear labeling on approved tools.
    • Require AI developers to provide simplified, user-friendly disclosures on how their systems function, tailored for diverse literacy levels.
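
To illustrate what a “human-in-the-loop” mandate (point 2 above) could look like in practice, here is a minimal escalation sketch in Python. The risk threshold, the CrisisRegistry class, and the function names are hypothetical and introduced only for illustration; any real mechanism would need clinically validated thresholds and a legally defined registry.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical risk threshold above which the AI must hand over to a human.
# In a real system this value would be set and validated clinically.
CRISIS_THRESHOLD = 0.8

@dataclass
class Clinician:
    name: str
    contact: str

class CrisisRegistry:
    """Stand-in for a national registry of qualified professionals (assumption)."""
    def __init__(self, on_call: list[Clinician]):
        self._on_call = on_call

    def next_available(self) -> Optional[Clinician]:
        return self._on_call[0] if self._on_call else None

def handle_message(user_message: str, risk_score: float, registry: CrisisRegistry) -> str:
    """Route a user interaction: automated reply for low risk, human escalation otherwise."""
    if risk_score >= CRISIS_THRESHOLD:
        clinician = registry.next_available()
        if clinician is None:
            # Fail safe: never leave a flagged crisis with an automated answer only.
            return "We are connecting you to emergency services now."
        return f"A mental health professional ({clinician.name}) is being contacted to support you."
    # Low-risk path: the AI system may respond, but interactions remain auditable.
    return "Automated supportive response (subject to human review)."

# Example usage with a dummy registry
registry = CrisisRegistry([Clinician("Dr. Example", "+00 000 000")])
print(handle_message("I can't cope anymore", risk_score=0.92, registry=registry))
```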

Addressing Collective Risks of AI in Mental Healthcare

  1. Diverse Dataset Mandates:
    • Enforce legal requirements for the use of datasets representing diverse demographics, including race, gender, socio-economic status, and geography (a minimal representation-check sketch follows this list).
    • Fund public repositories of diverse mental health datasets to reduce barriers for smaller developers and ensure equity in AI system training.
  2. Digital Inclusion Initiatives:
    • Subsidize access to validated mental health AI tools for underprivileged communities through public healthcare programs.
    • Ensure public health clinics are equipped with AI tools designed for underserved populations, including those with disabilities or non-English speakers.
  3. Ethical Marketing and Pricing Regulations:
    • Prohibit manipulative advertising that targets vulnerable individuals, with regulatory bodies conducting regular audits of AI marketing practices.
    • Introduce pricing caps for AI mental health tools and encourage open-source development through public grants and non-commercial incentives.
  4. Accessibility Standards:
    • Develop and enforce accessibility guidelines for AI tools to ensure usability for individuals with disabilities, language barriers, or low digital literacy.
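
As a concrete illustration of a diverse-dataset mandate (point 1 above), the sketch below checks whether each demographic group reaches a minimum share of a training dataset. The group labels and the 10% threshold are assumptions chosen for illustration, not regulatory values.

```python
from collections import Counter

# Hypothetical minimum share each demographic group must hold in the training data.
MIN_SHARE = 0.10

def representation_report(group_labels: list[str], min_share: float = MIN_SHARE) -> dict[str, dict]:
    """Return each group's share of the dataset and flag groups below the minimum share."""
    total = len(group_labels)
    counts = Counter(group_labels)
    return {
        group: {
            "share": round(count / total, 3),
            "meets_mandate": count / total >= min_share,
        }
        for group, count in counts.items()
    }

# Example with made-up labels standing in for dataset metadata
labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
for group, stats in representation_report(labels).items():
    print(group, stats)
```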

Addressing Societal Risks of AI in Mental Healthcare

  1. Ethical Impact Assessments:
    • Require developers to submit ethical impact assessments as part of the regulatory approval process, including evaluations of equity, fairness, and societal trust.
    • Create standardized templates for these assessments, guided by mental health professionals, ethicists, and civil society groups.
  2. Algorithmic Accountability Laws:
    • Mandate routine audits of AI systems to ensure compliance with ethical and safety standards, with results disclosed in publicly accessible reports (an illustrative audit sketch follows this list).
    • Establish an independent regulatory body for monitoring and addressing complaints related to AI misuse in mental healthcare.
  3. Community Advisory Boards:
    • Form multi-stakeholder advisory boards to oversee the deployment of AI tools, ensuring alignment with societal values and the needs of marginalized communities.
  4. Public Awareness Campaigns:
    • Launch national education initiatives highlighting the appropriate use, limitations, and risks of mental health AI tools.
    • Partner with schools, workplaces, and healthcare providers to distribute accessible educational materials.
  5. Preventing Commodification:
    • Incentivize non-commercial AI development through public funding for tools addressing neglected areas of mental healthcare.
    • Require AI developers to adhere to a “patient-first” ethos, emphasizing quality of care over profit, as a condition for public funding.
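
To show what a routine algorithmic audit (point 2 above) might measure, the sketch below compares false-negative rates across demographic groups on labeled evaluation data and flags gaps above a tolerance. The five-percentage-point tolerance, the field names, and the evaluation records are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical tolerance: maximum acceptable gap in false-negative rates between groups.
MAX_FNR_GAP = 0.05

def false_negative_rate(records: list[dict]) -> float:
    """Share of truly positive cases the system missed."""
    positives = [r for r in records if r["true_label"] == 1]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if r["predicted_label"] == 0)
    return missed / len(positives)

def audit_by_group(records: list[dict]) -> dict:
    """Compute per-group false-negative rates and flag disparities above tolerance."""
    groups = {r["group"] for r in records}
    rates = {g: false_negative_rate([r for r in records if r["group"] == g]) for g in groups}
    gap = max(rates.values()) - min(rates.values())
    return {"per_group_fnr": rates, "gap": round(gap, 3), "within_tolerance": gap <= MAX_FNR_GAP}

# Example with fabricated evaluation records (for illustration only)
records = (
    [{"group": "A", "true_label": 1, "predicted_label": 1}] * 90
    + [{"group": "A", "true_label": 1, "predicted_label": 0}] * 10
    + [{"group": "B", "true_label": 1, "predicted_label": 1}] * 75
    + [{"group": "B", "true_label": 1, "predicted_label": 0}] * 25
)
print(audit_by_group(records))
```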

How can these measures be implemented?

  1. Stakeholder Engagement:
    • Organize bi-annual stakeholder forums involving developers, patients, clinicians, policymakers, and ethicists to review progress and update guidelines.
    • Use participatory approaches, such as patient advisory groups, to ensure real-world applicability of AI tools in mental healthcare.
  2. Bridging Research and Practice:
    • Create a national fund to support the translation of research into clinical applications, prioritizing projects with proven efficacy in controlled settings.
    • Develop a “sandbox” regulatory model allowing experimental deployments in supervised environments before full market release.
  3. Monitoring and Oversight:
    • Establish an independent committee to assess the long-term societal impact of mental health AI, publishing biennial reports to inform adaptive regulations.
    • Introduce metrics to evaluate the success of AI tools, including reductions in healthcare disparities, patient outcomes, and ethical compliance.
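
As one possible way to operationalize the metrics in point 3, the sketch below compares an access-disparity gap and average symptom-score improvement between two reporting periods. The metric definitions and the numbers are illustrative assumptions rather than validated measures.

```python
def disparity_gap(access_rates: dict[str, float]) -> float:
    """Gap between the best- and worst-served groups' access rates."""
    return max(access_rates.values()) - min(access_rates.values())

def evaluate_period(previous: dict, current: dict) -> dict:
    """Compare two reporting periods on disparity reduction and outcome change."""
    return {
        "disparity_reduction": round(
            disparity_gap(previous["access_rates"]) - disparity_gap(current["access_rates"]), 3
        ),
        "avg_outcome_change": round(
            current["avg_symptom_improvement"] - previous["avg_symptom_improvement"], 3
        ),
    }

# Fabricated example data for two biennial reporting periods
period_1 = {"access_rates": {"A": 0.60, "B": 0.35}, "avg_symptom_improvement": 0.12}
period_2 = {"access_rates": {"A": 0.62, "B": 0.45}, "avg_symptom_improvement": 0.18}
print(evaluate_period(period_1, period_2))  # positive values indicate improvement
```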