Advancing explainable AI in healthcare: Necessity, progress, and future directions

IF 3.1 · JCR Q2 (BIOLOGY) · CAS Tier 4 (Biology)
Rashmita Kumari Mohapatra , Lochan Jolly , Sarada Prasad Dakua
{"title":"在医疗保健领域推进可解释人工智能:必要性、进展和未来方向","authors":"Rashmita Kumari Mohapatra ,&nbsp;Lochan Jolly ,&nbsp;Sarada Prasad Dakua","doi":"10.1016/j.compbiolchem.2025.108599","DOIUrl":null,"url":null,"abstract":"<div><div>Clinicians typically aim to understand the shape of the liver during treatment planning that could potentially minimize any harm to the surrounding healthy tissues and hepatic vessels, thus, constructing a precise geometric model of the liver becomes crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques gaining rapid popularity due to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines to emulate human intelligence, addressing real-world problems. Recent advancements in AI have resulted in widespread industrial adoption, showcasing machine learning systems with superhuman performance in numerous tasks. However, the inherent ambiguity in these systems has hindered their adoption in sensitive yet critical domains like healthcare, where their potential value is immense. This study focuses on the interpretability aspect of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. The provided taxonomy aims to serve as a comprehensive overview of XAI method traits and aspects, catering to beginners, researchers, and practitioners. 
It is found that explainable modelling could potentially contribute to trustworthy AI subject to thorough validation, appropriate data quality, cross validation, and proper regulation.</div></div>","PeriodicalId":10616,"journal":{"name":"Computational Biology and Chemistry","volume":"119 ","pages":"Article 108599"},"PeriodicalIF":3.1000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advancing explainable AI in healthcare: Necessity, progress, and future directions\",\"authors\":\"Rashmita Kumari Mohapatra ,&nbsp;Lochan Jolly ,&nbsp;Sarada Prasad Dakua\",\"doi\":\"10.1016/j.compbiolchem.2025.108599\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Clinicians typically aim to understand the shape of the liver during treatment planning that could potentially minimize any harm to the surrounding healthy tissues and hepatic vessels, thus, constructing a precise geometric model of the liver becomes crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques gaining rapid popularity due to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines to emulate human intelligence, addressing real-world problems. Recent advancements in AI have resulted in widespread industrial adoption, showcasing machine learning systems with superhuman performance in numerous tasks. However, the inherent ambiguity in these systems has hindered their adoption in sensitive yet critical domains like healthcare, where their potential value is immense. This study focuses on the interpretability aspect of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. 
The provided taxonomy aims to serve as a comprehensive overview of XAI method traits and aspects, catering to beginners, researchers, and practitioners. It is found that explainable modelling could potentially contribute to trustworthy AI subject to thorough validation, appropriate data quality, cross validation, and proper regulation.</div></div>\",\"PeriodicalId\":10616,\"journal\":{\"name\":\"Computational Biology and Chemistry\",\"volume\":\"119 \",\"pages\":\"Article 108599\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Biology and Chemistry\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1476927125002609\",\"RegionNum\":4,\"RegionCategory\":\"生物学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Biology and Chemistry","FirstCategoryId":"99","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1476927125002609","RegionNum":4,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Clinicians typically aim to understand the shape of the liver during treatment planning that could potentially minimize any harm to the surrounding healthy tissues and hepatic vessels, thus, constructing a precise geometric model of the liver becomes crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques gaining rapid popularity due to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines to emulate human intelligence, addressing real-world problems. Recent advancements in AI have resulted in widespread industrial adoption, showcasing machine learning systems with superhuman performance in numerous tasks. However, the inherent ambiguity in these systems has hindered their adoption in sensitive yet critical domains like healthcare, where their potential value is immense. This study focuses on the interpretability aspect of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. The provided taxonomy aims to serve as a comprehensive overview of XAI method traits and aspects, catering to beginners, researchers, and practitioners. It is found that explainable modelling could potentially contribute to trustworthy AI subject to thorough validation, appropriate data quality, cross validation, and proper regulation.
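To make the abstract's notion of interpretability concrete: one widely used family of XAI techniques for imaging models is model-agnostic occlusion sensitivity, which masks regions of the input in turn and measures how much the model's output drops. The sketch below is illustrative only and is not from the paper; the `toy_model` scoring function is a hypothetical stand-in for a real segmentation network, and the patch size and baseline value are arbitrary assumptions.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a model's confidence score: here, the mean
    # intensity of a central patch. A real liver-segmentation CNN would
    # replace this function.
    return image[8:24, 8:24].mean()

def occlusion_map(model, image, patch=8, baseline=0.0):
    """Model-agnostic occlusion sensitivity: mask each patch in turn
    and record how much the model's score drops relative to the
    unoccluded reference."""
    h, w = image.shape
    ref = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # A large drop means this patch mattered to the prediction.
            heat[i // patch, j // patch] = ref - model(occluded)
    return heat

rng = np.random.default_rng(0)
img = rng.random((32, 32))
heat = occlusion_map(toy_model, img)
# Patches overlapping the central region receive the highest importance.
```

Because the explanation only queries the model as a black box, the same loop applies unchanged to any segmentation or classification network, which is why occlusion-style methods recur throughout the XAI literature the paper surveys.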
Source journal
Computational Biology and Chemistry (Biology / Computer Science: Interdisciplinary Applications)
CiteScore: 6.10
Self-citation rate: 3.20%
Articles per year: 142
Review time: 24 days
Journal description: Computational Biology and Chemistry publishes original research papers and review articles in all areas of computational life sciences. High quality research contributions with a major computational component in the areas of nucleic acid and protein sequence research, molecular evolution, molecular genetics (functional genomics and proteomics), theory and practice of either biology-specific or chemical-biology-specific modeling, and structural biology of nucleic acids and proteins are particularly welcome. Exceptionally high quality research work in bioinformatics, systems biology, ecology, computational pharmacology, metabolism, biomedical engineering, epidemiology, and statistical genetics will also be considered. Given their inherent uncertainty, protein modeling and molecular docking studies should be thoroughly validated. In the absence of experimental results for validation, the use of molecular dynamics simulations along with detailed free energy calculations, for example, should be used as complementary techniques to support the major conclusions. Submissions of premature modeling exercises without additional biological insights will not be considered. Review articles will generally be commissioned by the editors and should not be submitted to the journal without explicit invitation. However prospective authors are welcome to send a brief (one to three pages) synopsis, which will be evaluated by the editors.