Ultrasonographic diagnosis of ovarian tumors through the deep convolutional neural network

Min Xi, Runan Zheng, Mingyue Wang, Xiu Shi, Chaomei Chen, Jun Qian, Xinxian Gu, Jinhua Zhou
{"title":"卵巢肿瘤的深层卷积神经网络超声诊断。","authors":"Min Xi,&nbsp;Runan Zheng,&nbsp;Mingyue Wang,&nbsp;Xiu Shi,&nbsp;Chaomei Chen,&nbsp;Jun Qian,&nbsp;Xinxian Gu,&nbsp;Jinhua Zhou","doi":"10.5603/gpl.94956","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>The objective of this study was to develop and validate an ovarian tumor ultrasonographic diagnostic model based on deep convolutional neural networks (DCNN) and compare its diagnostic performance with that of human experts.</p><p><strong>Material and methods: </strong>We collected 486 ultrasound images of 192 women with malignant ovarian tumors and 617 ultrasound images of 213 women with benign ovarian tumors, all confirmed by pathological examination. The image dataset was split into a training set and a validation set according to a 7:3 ratio. We selected 5 DCNNs to develop our model: MobileNet, Xception, Inception, ResNet and DenseNet. We compared the performance of the five models through the area under the curve (AUC), sensitivity, specificity, and accuracy. We then randomly selected 200 images from the validation set as the test set. We asked three expert radiologists to diagnose the images to compare the performance of radiologists and the DCNN model.</p><p><strong>Results: </strong>In the validation set, AUC of DenseNet was 0.997 while AUC was 0.988 of ResNet, 0.987 of Inception, 0.968 of Xception and 0.836 of MobileNet. In the test set, the accuracy was 0.975 with the DenseNet model versus 0.825 (p < 0.0001) with the radiologists, and sensitivity was 0.975 versus 0.700 (p < 0.0001), and specificity was 0.975 versus 0.908 (p < 0.001).</p><p><strong>Conclusions: </strong>DensNet performed better than other DCNNs and expert radiologists in identifying malignant ovarian tumors from benign ovarian tumors based on ultrasound images, a finding that needs to be further explored in clinical trials.</p>","PeriodicalId":94021,"journal":{"name":"Ginekologia polska","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ultrasonographic diagnosis of ovarian tumors through the deep convolutional neural network.\",\"authors\":\"Min Xi,&nbsp;Runan Zheng,&nbsp;Mingyue Wang,&nbsp;Xiu Shi,&nbsp;Chaomei Chen,&nbsp;Jun Qian,&nbsp;Xinxian Gu,&nbsp;Jinhua Zhou\",\"doi\":\"10.5603/gpl.94956\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>The objective of this study was to develop and validate an ovarian tumor ultrasonographic diagnostic model based on deep convolutional neural networks (DCNN) and compare its diagnostic performance with that of human experts.</p><p><strong>Material and methods: </strong>We collected 486 ultrasound images of 192 women with malignant ovarian tumors and 617 ultrasound images of 213 women with benign ovarian tumors, all confirmed by pathological examination. The image dataset was split into a training set and a validation set according to a 7:3 ratio. We selected 5 DCNNs to develop our model: MobileNet, Xception, Inception, ResNet and DenseNet. We compared the performance of the five models through the area under the curve (AUC), sensitivity, specificity, and accuracy. We then randomly selected 200 images from the validation set as the test set. 
We asked three expert radiologists to diagnose the images to compare the performance of radiologists and the DCNN model.</p><p><strong>Results: </strong>In the validation set, AUC of DenseNet was 0.997 while AUC was 0.988 of ResNet, 0.987 of Inception, 0.968 of Xception and 0.836 of MobileNet. In the test set, the accuracy was 0.975 with the DenseNet model versus 0.825 (p < 0.0001) with the radiologists, and sensitivity was 0.975 versus 0.700 (p < 0.0001), and specificity was 0.975 versus 0.908 (p < 0.001).</p><p><strong>Conclusions: </strong>DensNet performed better than other DCNNs and expert radiologists in identifying malignant ovarian tumors from benign ovarian tumors based on ultrasound images, a finding that needs to be further explored in clinical trials.</p>\",\"PeriodicalId\":94021,\"journal\":{\"name\":\"Ginekologia polska\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ginekologia polska\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5603/gpl.94956\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ginekologia polska","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5603/gpl.94956","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Objectives: The objective of this study was to develop and validate an ovarian tumor ultrasonographic diagnostic model based on deep convolutional neural networks (DCNN) and compare its diagnostic performance with that of human experts.

Material and methods: We collected 486 ultrasound images of 192 women with malignant ovarian tumors and 617 ultrasound images of 213 women with benign ovarian tumors, all confirmed by pathological examination. The image dataset was split into a training set and a validation set in a 7:3 ratio. We selected five DCNNs to develop our model: MobileNet, Xception, Inception, ResNet, and DenseNet. We compared the performance of the five models using the area under the curve (AUC), sensitivity, specificity, and accuracy. We then randomly selected 200 images from the validation set as the test set. We asked three expert radiologists to diagnose the images to compare the performance of the radiologists with that of the DCNN model.
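
The abstract does not describe the implementation. The following is a minimal sketch, not the authors' code, of how a pretrained DenseNet could be fine-tuned for benign-vs-malignant classification on a 7:3 train/validation split as described; it assumes PyTorch/torchvision, ImageNet-pretrained weights, and a hypothetical data/benign and data/malignant directory layout.

```python
# Minimal sketch (assumed, not from the paper): fine-tuning DenseNet-121
# for benign-vs-malignant classification of ovarian ultrasound images.
# Assumes images are organized as data/benign/*.png and data/malignant/*.png.
import torch
from torch import nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

full_set = datasets.ImageFolder("data", transform=transform)
n_train = int(0.7 * len(full_set))                      # 7:3 split as in the paper
train_set, val_set = random_split(full_set, [n_train, len(full_set) - n_train])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Replace the ImageNet classifier head with a 2-class output (benign / malignant)
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                                 # epoch count is illustrative
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same loop can be reused for the other four architectures by swapping the backbone constructor, which is presumably how the five models were compared on identical data splits.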

Results: In the validation set, the AUC of DenseNet was 0.997, compared with 0.988 for ResNet, 0.987 for Inception, 0.968 for Xception, and 0.836 for MobileNet. In the test set, accuracy was 0.975 with the DenseNet model versus 0.825 with the radiologists (p < 0.0001), sensitivity was 0.975 versus 0.700 (p < 0.0001), and specificity was 0.975 versus 0.908 (p < 0.001).
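
For reference, the reported metrics can be computed from model outputs as in the sketch below (scikit-learn, with placeholder labels and a 0.5 decision threshold; neither the threshold nor the evaluation code is specified in the abstract).

```python
# Minimal sketch (assumed, not from the paper): AUC, accuracy, sensitivity,
# and specificity from predicted malignancy scores on a held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

# y_true: 1 = malignant, 0 = benign; y_score: predicted probability of malignancy
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])              # placeholder labels
y_score = np.array([0.92, 0.10, 0.85, 0.77, 0.30, 0.05, 0.60, 0.45])

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)                    # 0.5 threshold assumed
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = accuracy_score(y_true, y_pred)
sensitivity = tp / (tp + fn)                             # true positive rate
specificity = tn / (tn + fp)                             # true negative rate
print(f"AUC={auc:.3f} acc={accuracy:.3f} "
      f"sens={sensitivity:.3f} spec={specificity:.3f}")
```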

Conclusions: DenseNet performed better than the other DCNNs and expert radiologists in distinguishing malignant from benign ovarian tumors on ultrasound images, a finding that needs to be further explored in clinical trials.
