Dual-Stage AI Model for Enhanced CT Imaging: Precision Segmentation of Kidney and Tumors.

IF 2.2 | CAS Zone 4 (Medicine) | JCR Q2: RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Nalan Karunanayake, Lin Lu, Hao Yang, Pengfei Geng, Oguz Akin, Helena Furberg, Lawrence H Schwartz, Binsheng Zhao
{"title":"增强CT成像的双阶段AI模型:肾脏和肿瘤的精确分割。","authors":"Nalan Karunanayake, Lin Lu, Hao Yang, Pengfei Geng, Oguz Akin, Helena Furberg, Lawrence H Schwartz, Binsheng Zhao","doi":"10.3390/tomography11010003","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in Contrast-Enhanced (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors.</p><p><strong>Methods: </strong>The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation.</p><p><strong>Results: </strong>Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model's effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06.</p><p><strong>Conclusions: </strong>The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. 
This underscores the model's significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments.</p>","PeriodicalId":51330,"journal":{"name":"Tomography","volume":"11 1","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11769543/pdf/","citationCount":"0","resultStr":"{\"title\":\"Dual-Stage AI Model for Enhanced CT Imaging: Precision Segmentation of Kidney and Tumors.\",\"authors\":\"Nalan Karunanayake, Lin Lu, Hao Yang, Pengfei Geng, Oguz Akin, Helena Furberg, Lawrence H Schwartz, Binsheng Zhao\",\"doi\":\"10.3390/tomography11010003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in Contrast-Enhanced (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors.</p><p><strong>Methods: </strong>The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation.</p><p><strong>Results: </strong>Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model's effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06.</p><p><strong>Conclusions: </strong>The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. 
This underscores the model's significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments.</p>\",\"PeriodicalId\":51330,\"journal\":{\"name\":\"Tomography\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11769543/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Tomography\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3390/tomography11010003\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tomography","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/tomography11010003","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in contrast-enhanced CT (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors.

Methods: The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation.
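The two-stage design described in Methods (a kidney model that localizes the organ, followed by a tumor model restricted to the kidney region, with tumors bucketed by TNM size) can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: both model calls are stubbed with dummy masks, and the voxel spacing, bounding-box cropping, and size-bucketing logic are hypothetical stand-ins.

```python
# Minimal sketch of a dual-stage kidney/tumor pipeline (hypothetical, not the paper's code).
import numpy as np

def segment_kidney(ct_volume: np.ndarray) -> np.ndarray:
    """Stage 1: ViT-based kidney segmentation (stubbed with a fixed dummy mask)."""
    mask = np.zeros_like(ct_volume, dtype=np.uint8)
    mask[20:60, 30:90, 30:90] = 1  # pretend the kidney occupies this block
    return mask

def segment_tumor(ct_roi: np.ndarray) -> np.ndarray:
    """Stage 2: 3D UNet with attention, applied inside the kidney ROI (stubbed)."""
    mask = np.zeros_like(ct_roi, dtype=np.uint8)
    mask[5:15, 10:25, 10:25] = 1  # pretend a tumor sits here
    return mask

def dual_stage_segmentation(ct_volume: np.ndarray):
    kidney_mask = segment_kidney(ct_volume)

    # Crop a bounding box around the kidney so the tumor model only sees the ROI.
    zs, ys, xs = np.nonzero(kidney_mask)
    z0, z1 = zs.min(), zs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    roi = ct_volume[z0:z1, y0:y1, x0:x1]

    # Run the tumor model on the ROI and paste the result back into full-volume space.
    tumor_mask = np.zeros_like(kidney_mask)
    tumor_mask[z0:z1, y0:y1, x0:x1] = segment_tumor(roi)
    return kidney_mask, tumor_mask

def tumor_size_category(tumor_mask: np.ndarray, spacing_mm=(3.0, 0.8, 0.8)) -> str:
    """Bucket by largest in-plane extent, mirroring the <=4 cm / 4-7 cm / >7 cm split."""
    zs, ys, xs = np.nonzero(tumor_mask)
    if zs.size == 0:
        return "no tumor"
    extent_y = (ys.max() - ys.min() + 1) * spacing_mm[1]
    extent_x = (xs.max() - xs.min() + 1) * spacing_mm[2]
    diameter_cm = max(extent_y, extent_x) / 10.0
    if diameter_cm <= 4:
        return "small"
    if diameter_cm <= 7:
        return "medium"
    return "large"

if __name__ == "__main__":
    ct = np.random.randn(80, 128, 128).astype(np.float32)  # fake CECT volume
    kidney, tumor = dual_stage_segmentation(ct)
    print("kidney voxels:", int(kidney.sum()), "| tumor voxels:", int(tumor.sum()))
    print("tumor size category:", tumor_size_category(tumor))
```

Restricting the second stage to the kidney bounding box is one plausible way to realize the "kidney first, then tumor" cascade the abstract describes; the actual ROI definition used by the authors is not specified here.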

Results: Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model's effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06.
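To make the evaluation metrics in Results concrete, the sketch below computes the Dice overlap score and lesion-level detection counts (sensitivity numerator and false positives per patient). The connected-component labeling via scipy and the "any overlap counts as a hit" matching rule are assumptions for illustration; the paper's exact detection criterion may differ.

```python
# Minimal sketch of segmentation/detection metrics (assumed matching rule, not the paper's exact protocol).
import numpy as np
from scipy import ndimage

def dice_score(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P n R| / (|P| + |R|) for binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

def detection_counts(pred: np.ndarray, ref: np.ndarray):
    """Count detected reference tumors and false-positive predicted components."""
    ref_lbl, n_ref = ndimage.label(ref)
    pred_lbl, n_pred = ndimage.label(pred)

    detected = sum(
        1 for i in range(1, n_ref + 1)
        if np.logical_and(ref_lbl == i, pred > 0).any()      # any overlap counts as a hit
    )
    false_pos = sum(
        1 for j in range(1, n_pred + 1)
        if not np.logical_and(pred_lbl == j, ref > 0).any()  # no overlap with any reference tumor
    )
    return detected, n_ref, false_pos

if __name__ == "__main__":
    ref = np.zeros((40, 64, 64), dtype=np.uint8)
    ref[10:20, 20:30, 20:30] = 1    # one reference (manually annotated) tumor
    pred = np.zeros_like(ref)
    pred[12:22, 22:32, 22:32] = 1   # overlapping prediction -> detected
    pred[30:34, 50:54, 50:54] = 1   # spurious prediction -> one false positive

    print("Dice:", round(dice_score(pred, ref), 3))
    det, total, fp = detection_counts(pred, ref)
    print(f"sensitivity: {det}/{total}, false positives: {fp}")
```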

Conclusions: The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. This underscores the model's significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments.

Source journal: Tomography (Medicine - Radiology, Nuclear Medicine and Imaging)
CiteScore: 2.70
Self-citation rate: 10.50%
Articles per year: 222
Journal description: Tomography™ publishes basic (technical and pre-clinical) and clinical scientific articles which involve the advancement of imaging technologies. Tomography encompasses studies that use single or multiple imaging modalities including, for example, CT, US, PET, SPECT, MR and hyperpolarization technologies, as well as optical modalities (e.g., bioluminescence, photoacoustic, endomicroscopy, fiber optic imaging and optical computed tomography) in basic sciences, engineering, preclinical and clinical medicine. Tomography also welcomes studies involving exploration and refinement of contrast mechanisms and image-derived metrics within and across modalities toward the development of novel imaging probes for image-based feedback and intervention. The use of imaging in biology and medicine provides unparalleled opportunities to noninvasively interrogate tissues to obtain real-time dynamic and quantitative information required for diagnosis and response to interventions and to follow evolving pathological conditions. As multi-modal studies and the complexities of imaging technologies themselves are ever increasing to provide advanced information to scientists and clinicians, Tomography provides a unique publication venue allowing investigators the opportunity to more precisely communicate integrated findings related to the diverse and heterogeneous features associated with underlying anatomical, physiological, functional, metabolic and molecular genetic activities of normal and diseased tissue. Thus, Tomography publishes peer-reviewed articles which involve the broad use of imaging of any tissue and disease type, including both preclinical and clinical investigations. In addition, hardware/software along with chemical and molecular probe advances are welcome, as they are deemed to significantly contribute towards the long-term goal of improving the overall impact of imaging on scientific and clinical discovery.