{"title":"Ethical and regulatory challenges in machine learning-based healthcare systems: A review of implementation barriers and future directions","authors":"Shehu Mohammed, Neha Malhotra","doi":"10.1016/j.tbench.2025.100215","DOIUrl":null,"url":null,"abstract":"<div><div>Machine learning significantly enhances clinical decision-making quality, directly impacting patient care with early diagnosis, personalized treatment, and predictive analytics. Nonetheless, the increasing proliferation of such ML applications in practice raises potential ethical and regulatory obstacles that may prevent their widespread adoption in healthcare. Key issues concern patient data privacy, algorithmic bias, absence of transparency, and ambiguous legal liability. Fortunately, regulations like the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the FDA AI/ML guidance have raised important ways of addressing things like fairness, explainability, legal compliance, etc.; however, the landscape is far from risk-free. AI liability is another one of the gray areas approaching black, worrying about who is liable for an AI medical error — the developers, the physicians, or the institutions. The study reviews ethical risks and potential opportunities, as well as regulatory frameworks and emerging challenges in AI-driven healthcare. It proposes solutions to reduce bias, improve transparency, and enhance legal accountability. This research addresses these challenges to support the safe, fair, and effective deployment of ML-based systems in clinical practice, guaranteeing that patients can trust, regulators can approve, and healthcare can use them.</div></div>","PeriodicalId":100155,"journal":{"name":"BenchCouncil Transactions on Benchmarks, Standards and Evaluations","volume":"5 1","pages":"Article 100215"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BenchCouncil Transactions on Benchmarks, Standards and Evaluations","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772485925000286","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Machine learning significantly enhances the quality of clinical decision-making, directly improving patient care through early diagnosis, personalized treatment, and predictive analytics. Nonetheless, the growing proliferation of ML applications in practice raises ethical and regulatory obstacles that may hinder their widespread adoption in healthcare. Key issues include patient data privacy, algorithmic bias, lack of transparency, and ambiguous legal liability. Regulatory frameworks such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the FDA's AI/ML guidance provide important mechanisms for addressing fairness, explainability, and legal compliance; however, the landscape is far from risk-free. Liability remains a particularly unsettled area: it is unclear who bears responsibility for an AI-related medical error, whether the developers, the physicians, or the institutions. This study reviews ethical risks and potential opportunities, as well as regulatory frameworks and emerging challenges, in AI-driven healthcare. It proposes solutions to reduce bias, improve transparency, and strengthen legal accountability. By addressing these challenges, the research supports the safe, fair, and effective deployment of ML-based systems in clinical practice, so that patients can trust them, regulators can approve them, and healthcare providers can adopt them.
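The bias-reduction solutions the abstract alludes to are, in practice, often operationalized as fairness audits of a model's predictions. The following minimal sketch (not from the paper; the metric choice, group labels, and data are illustrative assumptions) computes the demographic parity gap, a common screening metric, for a hypothetical set of predictions split by patient group:

```python
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """Absolute gap between the highest and lowest positive-prediction
    rates across demographic groups (0.0 means perfectly equal rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: binary model outputs for two patient groups.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds = [1, 1, 1, 0, 1, 0, 0, 0]

gap, rates = demographic_parity_difference(groups, preds)
print(f"positive rates by group: {rates}")   # A: 0.75, B: 0.25
print(f"demographic parity gap: {gap:.2f}")  # 0.50 -> flag for review
```

Demographic parity is only one of several fairness criteria (others, such as equalized odds, condition on ground-truth outcomes), and which criterion is appropriate depends on the clinical context; a large gap is a signal for further review rather than proof of bias.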