Improving CNN interpretability and evaluation via alternating training and regularization in chest CT scans

Rodrigo Ramos-Díaz , Jesús García-Ramírez , Jimena Olveres , Boris Escalante-Ramírez
DOI: 10.1016/j.ibmed.2025.100211
Journal: Intelligence-based medicine, Volume 11, Article 100211
Publication date: 2025-01-01 (Journal Article)

Abstract

Interpretable machine learning is an emerging trend of significant importance, given the growing impact of machine learning systems on society and human lives. Many interpretability methods are applied to CNNs after training to provide deeper insight into their outcomes, but only a few attempt to promote interpretability during training. The aim of this experimental study is to investigate the interpretability of CNNs. The research was applied to chest computed tomography (CT) scans, as understanding CNN predictions is particularly important in the automatic classification of medical images. We implemented a CNN technique aimed at improving interpretability by relating filters in the last convolutional layer to specific output classes. Variations of this technique were explored and assessed on chest CT images classified by the presence of lungs and lesions. A search was conducted to optimize the hyper-parameters required by the evaluated strategies. A novel strategy employing transfer learning and regularization is proposed. Models obtained with this strategy and the optimized hyper-parameters were statistically compared to standard models, demonstrating greater interpretability without a significant loss in predictive accuracy. We achieved CNN models with improved interpretability, which is crucial for the development of more explainable and reliable AI systems.
Citations: 0
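The abstract describes relating filters in the last convolutional layer to specific output classes via regularization. The paper does not include code; the following is a hypothetical NumPy sketch of one way such a filter–class regularizer could work, penalizing a filter whenever it activates on images of classes other than the one it is assigned to (the function name, signature, and margin parameter are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def filter_class_regularizer(activations, labels, filter_class, margin=0.0):
    """Hypothetical filter-class regularizer (not the authors' code).

    activations  : (batch, n_filters) mean-pooled responses of the
                   last convolutional layer's filters
    labels       : (batch,) integer class labels
    filter_class : (n_filters,) class index each filter is assigned to
    Returns a penalty that grows when a filter fires on samples of
    classes other than its assigned one.
    """
    # mask[i, f] = 1 when filter f is NOT assigned to sample i's class
    mask = (filter_class[None, :] != labels[:, None]).astype(float)
    # penalize off-class activation exceeding the margin
    off_class = np.maximum(activations - margin, 0.0) * mask
    return off_class.mean()
```

Added to a standard classification loss with a small weight, a term like this would push each filter toward responding to a single class, which is one plausible reading of the interpretability-by-regularization strategy the abstract outlines.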


Source journal: Intelligence-based medicine (Health Informatics)
CiteScore: 5.00
Self-citation rate: 0.00%
Review time: 187 days