The AI cycle of health inequity and digital ageism: mitigating biases through the EU regulatory framework on medical devices

IF 2.5 · CAS Region 2 (Philosophy) · JCR Q1 (Ethics)
Hannah van Kolfschooten
DOI: 10.1093/jlb/lsad031
Journal: Journal of Law and the Biosciences
Published: 2023-12-07 (Journal Article)
Citations: 0

Abstract

The use of Artificial Intelligence (AI) medical devices is rapidly growing. Although AI may benefit the quality and safety of healthcare for older adults, it simultaneously introduces new ethical and legal issues. Many AI medical devices exhibit age-related biases. The first part of this paper explains how ‘digital ageism’ is produced throughout the entire lifecycle of medical AI and may lead to health inequity for older people: systemic, avoidable differences in the health status of different population groups. This paper takes digital ageism as a use case to show the potential inequitable effects of AI, conceptualized as the ‘AI cycle of health inequity’. The second part of this paper explores how the European Union (EU) regulatory framework addresses the issue of digital ageism. It argues that the negative effects of age-related bias in AI medical devices are insufficiently recognized within the regulatory framework of the EU Medical Devices Regulation and the new AI Act. It concludes that while the EU framework does address some of the key issues related to technical biases in AI medical devices by stipulating rules for performance and data quality, it does not account for contextual biases, therefore neglecting part of the AI cycle of health inequity.
Source journal
Journal of Law and the Biosciences
Category: Medicine (miscellaneous)
CiteScore: 7.40
Self-citation rate: 5.90%
Annual articles: 35
Review time: 13 weeks
Journal description: The Journal of Law and the Biosciences (JLB) is the first fully Open Access peer-reviewed legal journal focused on the advances at the intersection of law and the biosciences. A co-venture between Duke University, Harvard University Law School, and Stanford University, and published by Oxford University Press, this open access, online, and interdisciplinary academic journal publishes cutting-edge scholarship in this important new field. The Journal contains original and response articles, essays, and commentaries on a wide range of topics, including bioethics, neuroethics, genetics, reproductive technologies, stem cells, enhancement, patent law, and food and drug regulation. JLB is published as one volume with three issues per year with new articles posted online on an ongoing basis.