The use of Artificial Intelligence (AI) medical devices is rapidly growing.
Although AI may benefit the quality and safety of healthcare for older adults,
it simultaneously introduces new ethical and legal issues. Many AI medical devices exhibit age-related biases. The first part of this paper explains
how ‘digital ageism’ is produced throughout the entire lifecycle of medical
AI and may lead to health inequity for older people: systemic, avoidable
differences in the health status of different population groups. This paper
takes digital ageism as a use case to show the potential inequitable effects
of AI, conceptualized as the ‘AI cycle of health inequity’. The second part
of this paper explores how the European Union (EU) regulatory framework
addresses the issue of digital ageism. It argues that the negative effects of age-related bias in AI medical devices are insufficiently recognized within the
regulatory framework of the EU Medical Devices Regulation and the new
AI Act. It concludes that while the EU framework does address some of the
key issues related to technical biases in AI medical devices by stipulating rules
for performance and data quality, it does not account for contextual biases,
therefore neglecting part of the AI cycle of health inequity.
Citation: Hannah van Kolfschooten, The AI cycle of health inequity and digital ageism: mitigating biases through the EU regulatory framework on medical devices, Journal of Law and the Biosciences, Volume 10, Issue 2, July-December 2023, lsad031, https://doi.org/10.1093/jlb/lsad031

