Ethical and regulatory challenges in machine learning-based healthcare systems: A review of implementation barriers and future directions

Shehu Mohammed, Neha Malhotra
DOI: 10.1016/j.tbench.2025.100215
Journal: BenchCouncil Transactions on Benchmarks, Standards and Evaluations, Vol. 5, No. 1, Article 100215
Publication date: 2025-03-01
URL: https://www.sciencedirect.com/science/article/pii/S2772485925000286
Citations: 0

Abstract

Machine learning significantly enhances the quality of clinical decision-making, directly impacting patient care through early diagnosis, personalized treatment, and predictive analytics. Nonetheless, the growing proliferation of ML applications in practice raises ethical and regulatory obstacles that may prevent their widespread adoption in healthcare. Key issues include patient data privacy, algorithmic bias, lack of transparency, and ambiguous legal liability. Regulations such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the FDA's AI/ML guidance offer important approaches to fairness, explainability, and legal compliance; however, the landscape is far from risk-free. AI liability remains a gray area: whether the developers, the physicians, or the institutions are responsible for an AI medical error is still unclear. This study reviews ethical risks and potential opportunities, as well as regulatory frameworks and emerging challenges, in AI-driven healthcare. It proposes solutions to reduce bias, improve transparency, and strengthen legal accountability. By addressing these challenges, the research supports the safe, fair, and effective deployment of ML-based systems in clinical practice, so that patients can trust them, regulators can approve them, and healthcare providers can use them.
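The algorithmic bias the abstract highlights is commonly quantified before deployment. As an illustration only (the reviewed study does not specify a metric, and the data below are invented), one widely used fairness measure is the demographic parity difference: the gap in positive-prediction rates between patient groups.

```python
# Minimal sketch of one common bias metric: demographic parity difference.
# All data here are hypothetical, for illustration only.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical model outputs (1 = patient flagged for early intervention)
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is flagged at rate 3/4, group B at 1/4, so the gap is 0.5
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap near zero suggests both groups receive positive predictions at similar rates; a large gap is one signal, among several, that a clinical model may treat groups unequally and warrants review under the fairness requirements the abstract mentions.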