A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

IF 4.1 | CAS Tier 2 (Medicine) | JCR Q1, Radiology, Nuclear Medicine & Medical Imaging
Qianhe Liu, Jiahui Jiang, Kewei Wu, Yan Zhang, Nan Sun, Jiawen Luo, Te Ba, Aiqing Lv, Chuane Liu, Yiyu Yin, Zhenghan Yang, Hui Xu
{"title":"A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.","authors":"Qianhe Liu, Jiahui Jiang, Kewei Wu, Yan Zhang, Nan Sun, Jiawen Luo, Te Ba, Aiqing Lv, Chuane Liu, Yiyu Yin, Zhenghan Yang, Hui Xu","doi":"10.1186/s13244-025-01995-7","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images.</p><p><strong>Methods: </strong>A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for the model development, and another independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, and DP, then further classifying AP images into EAP or LAP. Model performance and strategy comparison were evaluated.</p><p><strong>Results: </strong>In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288; p < 0.001), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP).</p><p><strong>Conclusion: </strong>The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy.</p><p><strong>Critical relevance statement: </strong>Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data.</p><p><strong>Key points: </strong>Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.</p>","PeriodicalId":13639,"journal":{"name":"Insights into Imaging","volume":"16 1","pages":"139"},"PeriodicalIF":4.1000,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12204963/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Insights into Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13244-025-01995-7","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Objectives: To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images.

Methods: A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and an independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, or DP, then further classifying AP images as EAP or LAP (see the sketch below). Model performance was evaluated and the two strategies were compared.
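The abstract does not include code, so the following is a minimal PyTorch sketch of the two-step inference (Strategy B) described above. The backbone depth (ResNet-18), checkpoint names, class orderings, input shape, and preprocessing are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-step phase classifier (Strategy B).
# Backbone, checkpoint names, and preprocessing are assumptions
# made for illustration, not the authors' released code.
import torch
import torch.nn as nn
from torchvision import models

STEP1_CLASSES = ["AP", "PVP", "DP"]   # step 1: coarse phase
STEP2_CLASSES = ["EAP", "LAP"]        # step 2: arterial sub-phase

def build_resnet(num_classes: int) -> nn.Module:
    """ResNet-18 backbone with a replaced classification head."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

step1 = build_resnet(len(STEP1_CLASSES)).eval()
step2 = build_resnet(len(STEP2_CLASSES)).eval()
# In practice the trained weights would be loaded here, e.g.:
# step1.load_state_dict(torch.load("step1_ap_pvp_dp.pt"))
# step2.load_state_dict(torch.load("step2_eap_lap.pt"))

@torch.no_grad()
def classify_phase(ct_slice: torch.Tensor) -> str:
    """Classify one preprocessed CT slice of shape (1, 3, H, W)."""
    coarse = STEP1_CLASSES[step1(ct_slice).argmax(dim=1).item()]
    if coarse != "AP":
        return coarse                  # PVP or DP: decided in one step
    # Arterial images get a second, dedicated classifier.
    return STEP2_CLASSES[step2(ct_slice).argmax(dim=1).item()]

if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)  # stand-in for a real slice
    print(classify_phase(dummy))
```

A plausible rationale for the split, consistent with the reported results: PVP and DP separate cleanly from arterial images, while EAP versus LAP is a finer-grained distinction that benefits from a second classifier dedicated to arterial-phase data only.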

Results: In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP).
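For reference, the reported metrics follow directly from a confusion matrix: overall accuracy is the trace divided by the total count, and per-class sensitivity (recall) is the diagonal entry divided by the row total. The NumPy sketch below uses made-up counts chosen only to approximate the reported external-test rates; the study's actual confusion matrix is not given in the abstract.

```python
# Overall accuracy and per-class sensitivity from a confusion matrix.
# The counts are illustrative, not the study's data.
import numpy as np

classes = ["EAP", "LAP", "PVP", "DP"]
# conf[i, j] = number of scans with true class i predicted as class j
conf = np.array([
    [39,   2,   0,   0],
    [ 1, 160,   0,   0],
    [ 0,   0, 199,   1],
    [ 0,   0,   1, 199],
])

accuracy = conf.trace() / conf.sum()
sensitivity = conf.diagonal() / conf.sum(axis=1)  # TP / (TP + FN) per class

print(f"overall accuracy: {accuracy:.1%}")
for name, s in zip(classes, sensitivity):
    print(f"sensitivity {name}: {s:.1%}")
```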

Conclusion: The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy.

Critical relevance statement: Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data.

Key points: Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.

Source journal: Insights into Imaging (Medicine-Radiology, Nuclear Medicine and Imaging)
CiteScore: 7.30
Self-citation rate: 4.30%
Annual publications: 182
Review turnaround: 13 weeks
About the journal: Insights into Imaging (I³) is a peer-reviewed open access journal published under the SpringerOpen brand. All content published in the journal is freely available online to anyone, anywhere. I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles, state-of-the-art reviews, and opinions, along with recommendations and statements from the leading radiological societies in Europe. Founded by the European Society of Radiology (ESR), I³ provides a platform for educational material, guidelines, and recommendations, and a forum for topics of controversy. A balanced combination of review articles, original papers, short communications from European radiological congresses, and information on society matters makes I³ an indispensable source of current information in this field. I³ is owned by the ESR; however, authors retain copyright to their articles under the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed, and reused for free, as long as the author of the original work is cited properly. The open access fees (article-processing charges) are sponsored by the ESR for all members. The journal went open access in 2012, which means that all articles published since then are freely available online.