Joint enhancement of automatic chest x-ray diagnosis and radiological gaze prediction with multistage cooperative learning

IF 3.2 | CAS Tier 2 (Medicine) | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Medical Physics | Pub Date: 2025-07-15 | DOI: 10.1002/mp.17977
Zirui Qiu, Hassan Rivaz, Yiming Xiao
{"title":"多阶段合作学习联合增强胸部x线自动诊断与放射学凝视预测","authors":"Zirui Qiu,&nbsp;Hassan Rivaz,&nbsp;Yiming Xiao","doi":"10.1002/mp.17977","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>As visual inspection is an inherent process during radiological screening, the associated eye gaze data can provide valuable insights into relevant clinical decision processes and facilitate computer-assisted diagnosis. However, the relevant techniques are still under-explored.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>With deep learning becoming the state-of-the-art for computer-assisted diagnosis, integrating human behavior, such as eye gaze data, into these systems is instrumental to help guide machine predictions with clinical diagnostic criteria, thus enhancing the quality of automatic radiological diagnosis. In addition, the ability to predict a radiologist's gaze saliency from a clinical scan along with the automatic diagnostic result could be instrumental for the end users.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>We propose a novel deep learning framework for joint disease diagnosis and prediction of corresponding radiological gaze saliency maps for chest x-ray scans. Specifically, we introduce a new dual-encoder multitask UNet, which leverages both a DenseNet201 backbone and a Residual and Squeeze-and-Excitation block-based encoder to extract diverse features for visual saliency map prediction and a multiscale feature-fusion classifier to perform disease classification. To tackle the issue of asynchronous training schedules of individual tasks in multitask learning, we propose a multistage cooperative learning strategy, with contrastive learning for feature encoder pretraining to boost performance.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Our proposed method is shown to significantly outperform existing techniques for chest radiography diagnosis (AUC = 0.93) and the quality of visual saliency map prediction (correlation coefficient = 0.58).</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Benefiting from the proposed multitask, multistage cooperative learning, our technique demonstrates the benefit of integrating clinicians' eye gaze into radiological AI systems to boost performance and potentially explainability.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 7","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/mp.17977","citationCount":"0","resultStr":"{\"title\":\"Joint enhancement of automatic chest x-ray diagnosis and radiological gaze prediction with multistage cooperative learning\",\"authors\":\"Zirui Qiu,&nbsp;Hassan Rivaz,&nbsp;Yiming Xiao\",\"doi\":\"10.1002/mp.17977\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>As visual inspection is an inherent process during radiological screening, the associated eye gaze data can provide valuable insights into relevant clinical decision processes and facilitate computer-assisted diagnosis. 
However, the relevant techniques are still under-explored.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>With deep learning becoming the state-of-the-art for computer-assisted diagnosis, integrating human behavior, such as eye gaze data, into these systems is instrumental to help guide machine predictions with clinical diagnostic criteria, thus enhancing the quality of automatic radiological diagnosis. In addition, the ability to predict a radiologist's gaze saliency from a clinical scan along with the automatic diagnostic result could be instrumental for the end users.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>We propose a novel deep learning framework for joint disease diagnosis and prediction of corresponding radiological gaze saliency maps for chest x-ray scans. Specifically, we introduce a new dual-encoder multitask UNet, which leverages both a DenseNet201 backbone and a Residual and Squeeze-and-Excitation block-based encoder to extract diverse features for visual saliency map prediction and a multiscale feature-fusion classifier to perform disease classification. To tackle the issue of asynchronous training schedules of individual tasks in multitask learning, we propose a multistage cooperative learning strategy, with contrastive learning for feature encoder pretraining to boost performance.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Our proposed method is shown to significantly outperform existing techniques for chest radiography diagnosis (AUC = 0.93) and the quality of visual saliency map prediction (correlation coefficient = 0.58).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Benefiting from the proposed multitask, multistage cooperative learning, our technique demonstrates the benefit of integrating clinicians' eye gaze into radiological AI systems to boost performance and potentially explainability.</p>\\n </section>\\n </div>\",\"PeriodicalId\":18384,\"journal\":{\"name\":\"Medical physics\",\"volume\":\"52 7\",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/mp.17977\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/mp.17977\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17977","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Background

As visual inspection is an inherent process during radiological screening, the associated eye gaze data can provide valuable insights into relevant clinical decision processes and facilitate computer-assisted diagnosis. However, the relevant techniques are still under-explored.

Purpose

With deep learning becoming the state-of-the-art for computer-assisted diagnosis, integrating human behavior, such as eye gaze data, into these systems can help guide machine predictions with clinical diagnostic criteria, thus enhancing the quality of automatic radiological diagnosis. In addition, the ability to predict a radiologist's gaze saliency from a clinical scan, alongside the automatic diagnostic result, could be valuable for end users.

Methods

We propose a novel deep learning framework for joint disease diagnosis and prediction of the corresponding radiological gaze saliency maps for chest x-ray scans. Specifically, we introduce a new dual-encoder multitask UNet, which leverages both a DenseNet201 backbone and an encoder built from Residual and Squeeze-and-Excitation blocks to extract diverse features for visual saliency map prediction, together with a multiscale feature-fusion classifier to perform disease classification. To tackle the issue of asynchronous training schedules of the individual tasks in multitask learning, we propose a multistage cooperative learning strategy, with contrastive learning for feature encoder pretraining to boost performance.
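
To make the described architecture concrete, below is a minimal PyTorch sketch of the dual-encoder multitask idea: a DenseNet201 branch and a Residual + Squeeze-and-Excitation (SE) branch feed a shared saliency decoder, while pooled fused features drive a disease classifier. The channel widths, fusion scheme, decoder design, and number of disease classes here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet201


class SEResidualBlock(nn.Module):
    """Residual block followed by channel-wise Squeeze-and-Excitation gating."""

    def __init__(self, in_ch, out_ch, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.se = nn.Sequential(  # squeeze-and-excitation: global pool -> bottleneck -> sigmoid gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x) + self.skip(x)
        return torch.relu(y * self.se(y))


class DualEncoderMultitaskNet(nn.Module):
    """Two encoders -> shared saliency decoder + fused disease classifier (illustrative)."""

    def __init__(self, num_classes=3):  # number of disease labels is an assumption
        super().__init__()
        self.dense = densenet201(weights=None).features  # 1920-channel feature map at 1/32 scale
        self.res_se = nn.Sequential(  # lightweight Residual-SE encoder, 1/8 scale
            SEResidualBlock(3, 64), nn.MaxPool2d(2),
            SEResidualBlock(64, 128), nn.MaxPool2d(2),
            SEResidualBlock(128, 256), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(  # coarse gaze-saliency head
            nn.Conv2d(1920 + 256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 1, 1), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(1920 + 256, num_classes)  # classifier on pooled fused features

    def forward(self, x):
        f1 = self.dense(x)                             # (B, 1920, H/32, W/32)
        f2 = self.res_se(x)                            # (B, 256, H/8, W/8)
        f2 = F.adaptive_avg_pool2d(f2, f1.shape[-2:])  # align spatial sizes before fusion
        fused = torch.cat([f1, f2], dim=1)
        saliency = self.decoder(fused)                 # predicted gaze saliency map
        logits = self.classifier(torch.flatten(F.adaptive_avg_pool2d(fused, 1), 1))
        return saliency, logits
```

In a multitask setup of this kind, the saliency head would typically be trained with a saliency loss (e.g., KL divergence against gaze heatmaps) and the classifier with cross-entropy, with the multistage cooperative schedule deciding when each loss is active.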

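The contrastive pretraining of the feature encoders mentioned above is often realized with an NT-Xent (InfoNCE) objective over two augmented views of each image; whether the paper uses this exact formulation is an assumption of the sketch below.

```python
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent (SimCLR-style) loss; z1, z2 are (B, D) projections of two views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D) unit-norm embeddings
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    b = z1.size(0)
    # the positive for view i is the other view of the same image (index i + B, and vice versa)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(z.device)
    return F.cross_entropy(sim, targets)
```
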
Results

Our proposed method significantly outperforms existing techniques in chest radiography diagnosis (AUC = 0.93) and in the quality of visual saliency map prediction (correlation coefficient = 0.58).
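
The two reported metrics are standard: ROC AUC for the disease labels and Pearson's correlation coefficient (CC) between predicted and radiologist gaze saliency maps. A minimal evaluation sketch follows; the paper's exact label set and averaging protocol may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def classification_auc(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Macro-averaged ROC AUC for (N, C) binary labels and predicted probabilities."""
    return float(roc_auc_score(y_true, y_score, average="macro"))


def saliency_cc(pred: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation coefficient between two saliency maps of the same shape."""
    return float(np.corrcoef(pred.ravel(), target.ravel())[0, 1])
```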

Conclusion

Enabled by the proposed multitask, multistage cooperative learning, our technique demonstrates the benefit of integrating clinicians' eye gaze into radiological AI systems to boost performance and, potentially, explainability.

Source journal
Medical Physics (Medicine - Nuclear Medicine)
CiteScore: 6.80
Self-citation rate: 15.80%
Articles per year: 660
Review time: 1.7 months
Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in: 1) basic science developments with high potential for clinical translation; 2) clinical applications of cutting-edge engineering and physics innovations; and 3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.