MRI-based Ovarian Lesion Classification via a Foundation Segmentation Model and Multimodal Analysis: A Multicenter Study.

Impact Factor: 15.2 · CAS Tier 1 (Medicine) · JCR Q1 · RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Radiology · Pub Date: 2025-08-01 · DOI: 10.1148/radiol.243412
Wen-Chi Hsu,Yuli Wang,Yu-Fu Wu,Ruohua Chen,Shadi Afyouni,Jhehong Liu,Somasundaram Vin,Victoria Shi,Maliha Imami,Jill S Chotiyanonta,Ghazal Zandieh,Yeyu Cai,Jeffrey P Leal,Kenichi Oishi,Atif Zaheer,Robert C Ward,Paul J L Zhang,Jing Wu,Zhicheng Jiao,Ihab R Kamel,Gigin Lin,Harrison X Bai
{"title":"MRI-based Ovarian Lesion Classification via a Foundation Segmentation Model and Multimodal Analysis: A Multicenter Study.","authors":"Wen-Chi Hsu,Yuli Wang,Yu-Fu Wu,Ruohua Chen,Shadi Afyouni,Jhehong Liu,Somasundaram Vin,Victoria Shi,Maliha Imami,Jill S Chotiyanonta,Ghazal Zandieh,Yeyu Cai,Jeffrey P Leal,Kenichi Oishi,Atif Zaheer,Robert C Ward,Paul J L Zhang,Jing Wu,Zhicheng Jiao,Ihab R Kamel,Gigin Lin,Harrison X Bai","doi":"10.1148/radiol.243412","DOIUrl":null,"url":null,"abstract":"Background Artificial intelligence may enhance diagnostic accuracy in classifying ovarian lesions on MRI scans; however, its applicability across diverse datasets is uncertain. Purpose To develop an efficient, generalizable pipeline for MRI-based ovarian lesion characterization. Materials and Methods In this retrospective study, multiparametric MRI datasets of patients with ovarian lesions from a primary institution (January 2008 to January 2019) and two external institutions (January 2010 to October 2020) were analyzed. Lesions were automatically segmented using Meta's Segment Anything Model (SAM). A DenseNet-121 deep learning (DL) model incorporating both imaging and clinical data was then trained and validated externally for ovarian lesion classification. Lesions were evaluated by radiologists using the Ovarian-Adnexal Reporting and Data System for MRI and subjective assessment, classifying them as benign or malignant. The classification performances of the DL model and radiologists were compared using the DeLong test. Results The primary dataset included 534 lesions from 448 women (mean age, 52 years ± 15 [SD]) from institution A (United States), whereas the external datasets included 58 lesions from 55 women (mean age, 51 years ± 19) from institution B (United States) and 29 lesions from 29 women (mean age, 49 years ± 10) from institution C (Taiwan). SAM-assisted segmentation had a Dice coefficient of 0.86-0.88, reducing the processing time per lesion by 4 minutes compared with manual segmentation. The DL classification model achieved an area under the receiver operating characteristic curve (AUC) of 0.85 (95% CI: 0.85, 0.85) on the internal test and 0.79 (95% CI: 0.79, 0.79 and 0.78, 0.79) across both external datasets with SAM-segmented images, comparable with the radiologists' performance (AUC: 0.84-0.93; all P > .05). Conclusion These results describe an accurate, efficient pipeline that integrates SAM with DL-based classification for differentiating malignant from benign ovarian lesions on MRI scans. It reduced segmentation time and achieved classification performance comparable with that of radiologists. © RSNA, 2025 Supplemental material is available for this article. See also the editorial by Bhayana and Wang in this issue.","PeriodicalId":20896,"journal":{"name":"Radiology","volume":"39 1","pages":"e243412"},"PeriodicalIF":15.2000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1148/radiol.243412","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background: Artificial intelligence may enhance diagnostic accuracy in classifying ovarian lesions on MRI scans; however, its applicability across diverse datasets is uncertain.

Purpose: To develop an efficient, generalizable pipeline for MRI-based ovarian lesion characterization.

Materials and Methods: In this retrospective study, multiparametric MRI datasets of patients with ovarian lesions from a primary institution (January 2008 to January 2019) and two external institutions (January 2010 to October 2020) were analyzed. Lesions were automatically segmented using Meta's Segment Anything Model (SAM). A DenseNet-121 deep learning (DL) model incorporating both imaging and clinical data was then trained and validated externally for ovarian lesion classification. Lesions were evaluated by radiologists using the Ovarian-Adnexal Reporting and Data System for MRI and subjective assessment, classifying them as benign or malignant. The classification performances of the DL model and radiologists were compared using the DeLong test.

Results: The primary dataset included 534 lesions from 448 women (mean age, 52 years ± 15 [SD]) from institution A (United States), whereas the external datasets included 58 lesions from 55 women (mean age, 51 years ± 19) from institution B (United States) and 29 lesions from 29 women (mean age, 49 years ± 10) from institution C (Taiwan). SAM-assisted segmentation had a Dice coefficient of 0.86-0.88, reducing the processing time per lesion by 4 minutes compared with manual segmentation. The DL classification model achieved an area under the receiver operating characteristic curve (AUC) of 0.85 (95% CI: 0.85, 0.85) on the internal test set and 0.79 (95% CI: 0.79, 0.79 and 0.78, 0.79) across both external datasets with SAM-segmented images, comparable with the radiologists' performance (AUC: 0.84-0.93; all P > .05).

Conclusion: These results describe an accurate, efficient pipeline that integrates SAM with DL-based classification for differentiating malignant from benign ovarian lesions on MRI scans. It reduced segmentation time and achieved classification performance comparable with that of radiologists.

© RSNA, 2025. Supplemental material is available for this article. See also the editorial by Bhayana and Wang in this issue.
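The pipeline described above combines SAM-assisted lesion segmentation, a DenseNet-121 classifier that fuses imaging with clinical data, and Dice-based evaluation of the segmentations. The sketch below is a minimal illustration of that kind of setup in PyTorch, assuming the torchvision and segment-anything packages; the late-fusion layer, the input sizes, the checkpoint path, and the `lesion_box` prompt are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a SAM-segmentation + DenseNet-121 multimodal pipeline.
# Assumes torch/torchvision; the fusion design is illustrative, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121


class OvarianLesionClassifier(nn.Module):
    """DenseNet-121 image backbone fused with tabular clinical features (assumed design)."""

    def __init__(self, n_clinical: int, n_classes: int = 2):
        super().__init__()
        backbone = densenet121(weights=None)       # pretrained weights could be used instead
        self.features = backbone.features          # convolutional trunk -> 1024 channels
        self.head = nn.Linear(1024 + n_clinical, n_classes)

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        x = self.features(image)                              # (B, 1024, H', W')
        x = F.adaptive_avg_pool2d(F.relu(x), 1).flatten(1)    # (B, 1024) pooled image features
        x = torch.cat([x, clinical], dim=1)                   # late fusion with clinical data
        return self.head(x)                                   # benign-vs-malignant logits


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks, the metric used to score SAM segmentations."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().item()
    return (2 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)


# Segmentation with SAM would look roughly like this (checkpoint path and box prompt
# are placeholders; requires Meta's segment-anything package and a downloaded checkpoint):
# from segment_anything import sam_model_registry, SamPredictor
# sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
# predictor = SamPredictor(sam)
# predictor.set_image(rgb_slice)                               # HxWx3 uint8 MRI slice
# masks, scores, _ = predictor.predict(box=lesion_box, multimask_output=False)


if __name__ == "__main__":
    model = OvarianLesionClassifier(n_clinical=4)
    img = torch.randn(2, 3, 224, 224)   # lesion-centered MRI crops (illustrative size)
    clin = torch.randn(2, 4)            # e.g. age plus other clinical variables
    print(model(img, clin).shape)       # torch.Size([2, 2])
```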
Source journal
Radiology (Medicine - Nuclear Medicine)
CiteScore: 35.20
Self-citation rate: 3.00%
Articles per year: 596
Review time: 3.6 months
Journal description: Published regularly since 1923 by the Radiological Society of North America (RSNA), Radiology has long been recognized as the authoritative reference for the most current, clinically relevant and highest quality research in the field of radiology. Each month the journal publishes approximately 240 pages of peer-reviewed original research, authoritative reviews, well-balanced commentary on significant articles, and expert opinion on new techniques and technologies. Radiology publishes cutting-edge and impactful imaging research articles in radiology and medical imaging in order to help improve human health.