Advancing explainable AI in healthcare: Necessity, progress, and future directions
Rashmita Kumari Mohapatra, Lochan Jolly, Sarada Prasad Dakua
Computational Biology and Chemistry, Volume 119, Article 108599 (published 2025-07-26)
DOI: 10.1016/j.compbiolchem.2025.108599
https://www.sciencedirect.com/science/article/pii/S1476927125002609
Citations: 0
Abstract
Clinicians typically aim to understand the shape of the liver during treatment planning so as to minimize harm to the surrounding healthy tissue and hepatic vessels; constructing a precise geometric model of the liver is therefore crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques rapidly gaining popularity owing to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines that emulate human intelligence to address real-world problems. Recent advances in AI have led to widespread industrial adoption, with machine learning systems achieving superhuman performance on numerous tasks. However, the inherent opacity of these systems has hindered their adoption in sensitive yet critical domains such as healthcare, where their potential value is immense. This study focuses on the interpretability of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches published from 2019 to 2023. The taxonomy aims to provide a comprehensive overview of the traits and aspects of XAI methods, catering to beginners, researchers, and practitioners. It is found that explainable modelling could contribute to trustworthy AI, subject to thorough validation, appropriate data quality, cross-validation, and proper regulation.
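To make the interpretability idea concrete, one widely used model-agnostic XAI technique for imaging models is occlusion sensitivity: systematically masking regions of the input and measuring how much the model's output changes, so that regions whose occlusion degrades the prediction are deemed important. The sketch below is illustrative only (it is not taken from the paper's taxonomy); the `predict` function here is a hypothetical stand-in for a trained segmentation or classification model that returns a scalar score.

```python
import numpy as np

def occlusion_sensitivity(predict, image, patch=8, baseline=0.0):
    """Compute a coarse occlusion-sensitivity map for a 2D image.

    predict: callable mapping an image array to a scalar model output
             (e.g. predicted liver-mask area or class probability).
    Returns a (h//patch, w//patch) heatmap where larger values mean
    occluding that patch caused a larger drop in the model's output.
    """
    h, w = image.shape
    ref = predict(image)  # reference score on the unmodified image
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            # Replace one patch with the baseline value.
            occluded[i:i + patch, j:j + patch] = baseline
            # A larger drop in output => the patch is more important.
            heat[i // patch, j // patch] = ref - predict(occluded)
    return heat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    # Toy stand-in for a trained model: mean intensity of a fixed ROI.
    predict = lambda x: float(x[8:16, 8:16].mean())
    heat = occlusion_sensitivity(predict, img)
    print(heat.shape)  # (4, 4); only the patch covering the ROI scores high
```

For real clinical models, library implementations (e.g. gradient-based attributions or SHAP-style methods) would typically be preferred, but the mechanism above conveys why such maps can help clinicians verify that a segmentation model attends to anatomically plausible regions.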
Journal description:
Computational Biology and Chemistry publishes original research papers and review articles in all areas of computational life sciences. High quality research contributions with a major computational component in the areas of nucleic acid and protein sequence research, molecular evolution, molecular genetics (functional genomics and proteomics), theory and practice of either biology-specific or chemical-biology-specific modeling, and structural biology of nucleic acids and proteins are particularly welcome. Exceptionally high quality research work in bioinformatics, systems biology, ecology, computational pharmacology, metabolism, biomedical engineering, epidemiology, and statistical genetics will also be considered.
Given their inherent uncertainty, protein modeling and molecular docking studies should be thoroughly validated. In the absence of experimental results for validation, the use of molecular dynamics simulations along with detailed free energy calculations, for example, should be used as complementary techniques to support the major conclusions. Submissions of premature modeling exercises without additional biological insights will not be considered.
Review articles will generally be commissioned by the editors and should not be submitted to the journal without explicit invitation. However, prospective authors are welcome to send a brief (one- to three-page) synopsis, which will be evaluated by the editors.