A Study on Automatic O-RADS Classification of Sonograms of Ovarian Adnexal Lesions Based on Deep Convolutional Neural Networks

IF 2.4 · CAS Tier 3 (Medicine) · JCR Q2 (Acoustics)
Tao Liu, Kuo Miao, Gaoqiang Tan, Hanqi Bu, Xiaohui Shao, Siming Wang, Xiaoqiu Dong
Journal: Ultrasound in Medicine and Biology, Volume 51, Issue 2, Pages 387–395
Publication date: 2025-02-01
DOI: 10.1016/j.ultrasmedbio.2024.11.009
URL: https://www.sciencedirect.com/science/article/pii/S0301562924004307
Citations: 0

Abstract

Objective

This study explored a new method for automatic O-RADS classification of sonograms based on a deep convolutional neural network (DCNN).

Methods

A development dataset (DD) of 2,455 2D grayscale sonograms of 870 ovarian adnexal lesions and an intertemporal validation dataset (IVD) of 426 sonograms of 280 lesions were collected and classified according to O-RADS v2022 (categories 2–5) by three senior sonographers. Classification results whose malignancy rates a two-tailed z-test confirmed to be consistent with O-RADS v2022 (indicating diagnostic performance comparable to that of a previous study) were used for training; otherwise, the classification was repeated by two different sonographers. The DD was used to develop three DCNN models (ResNet34, DenseNet121, and ConvNeXt-Tiny) that employed transfer learning techniques. Model performance was assessed for accuracy, precision, and F1 score, among other metrics. The optimal model was selected and validated over time using the IVD, and its assistance was then analyzed to determine whether it improved O-RADS classification efficiency for three sonographers with different years of experience.
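The malignancy-rate check described above is a standard two-tailed one-proportion z-test. A minimal sketch using the normal approximation, with purely illustrative counts (the reference rate and lesion numbers below are assumptions for the example, not values from the paper):

```python
import math

def one_proportion_ztest(successes, n, p0):
    """Two-tailed one-proportion z-test of an observed rate
    against a reference proportion p0 (normal approximation)."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    # Two-tailed p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 12 malignant of 200 lesions in one risk
# category, tested against an assumed reference rate of 0.10.
z, p = one_proportion_ztest(12, 200, 0.10)
```

A non-significant p-value (p > 0.05) would indicate the observed malignancy rate is consistent with the reference rate, the condition under which the classifications were accepted for training.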

Results

The proportion of malignant tumors in the DD and IVD in each O-RADS-defined risk category was verified using a two-tailed z-test. Malignant lesions (O-RADS categories 4 and 5) were diagnosed in the DD and IVD with sensitivities of 0.949 and 0.962 and specificities of 0.892 and 0.842, respectively. ResNet34, DenseNet121, and ConvNeXt-Tiny had overall accuracies of 0.737, 0.752, and 0.878, respectively, for sonogram prediction in the DD. The ConvNeXt-Tiny model's accuracy for sonogram prediction in the IVD was 0.859, with no significant difference between test sets. The modeling aid significantly reduced O-RADS classification time for three sonographers (Cohen's d = 5.75).
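The performance figures reported above reduce to standard definitions over the malignant-vs-benign split (malignant = O-RADS 4/5, benign = O-RADS 2/3) and, for the timing comparison, Cohen's d. A minimal sketch with purely illustrative numbers (none taken from the paper):

```python
import math
import statistics

def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from a 2x2
    confusion matrix (malignant = positive class)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

def cohens_d(a, b):
    """Cohen's d for two independent samples, pooled-SD form."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(
        ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b))
        / (na + nb - 2)
    )
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Illustrative counts only:
sens, spec, acc = binary_metrics(tp=94, fn=6, tn=84, fp=16)
# Hypothetical classification times (minutes) without vs. with
# model assistance; tightly clustered samples yield a large d:
d = cohens_d([10, 12, 11, 13], [5, 6, 5, 6])
```

Very large d values such as the reported 5.75 arise when the between-condition mean difference dwarfs the within-condition spread, as in the hypothetical timing samples above.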

Conclusion

ConvNeXt-Tiny showed robust and stable performance in classifying O-RADS categories 2–5, improving sonographers' classification efficiency.
Source journal metrics: CiteScore 6.20; self-citation rate 6.90%; annual output 325 articles; review time 70 days.
Journal description: Ultrasound in Medicine and Biology is the official journal of the World Federation for Ultrasound in Medicine and Biology. The journal publishes original contributions that demonstrate a novel application of an existing ultrasound technology in clinical diagnostic, interventional and therapeutic applications, new and improved clinical techniques, the physics, engineering and technology of ultrasound in medicine and biology, and the interactions between ultrasound and biological systems, including bioeffects. Papers that simply utilize standard diagnostic ultrasound as a measuring tool will be considered out of scope. Extended critical reviews of subjects of contemporary interest in the field are also published, in addition to occasional editorial articles, clinical and technical notes, book reviews, letters to the editor and a calendar of forthcoming meetings. It is the aim of the journal fully to meet the information and publication requirements of the clinicians, scientists, engineers and other professionals who constitute the biomedical ultrasonic community.