Authors: Tsukasa Saida, Kensaku Mori, Sodai Hoshiai, Masafumi Sakai, Aiko Urushibara, Toshitaka Ishiguro, Toyomi Satoh, Takahito Nakajima
Journal: Polish Journal of Radiology
DOI: 10.5114/pjr.2022.119806
Published: 2022-09-21 (eCollection)
Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/67/54/PJR-87-47888.PMC9536210.pdf
Differentiation of carcinosarcoma from endometrial carcinoma on magnetic resonance imaging using deep learning.
Purpose: To investigate whether deep learning can differentiate carcinosarcoma (CS) from endometrial carcinoma (EC) across several magnetic resonance imaging (MRI) sequences.
Material and methods: This retrospective study included 52 patients with CS and 279 patients with EC. A convolutional neural network (CNN)-based deep-learning model was trained, for each sequence, on 572 T2-weighted images (T2WI) from 42 patients with CS, 488 apparent diffusion coefficient (ADC) maps from 33 patients with CS, and 539 fat-saturated contrast-enhanced T1-weighted images from 40 patients with CS, together with 1612 images from 223 patients with EC. For each sequence, the models were then tested on 9-10 images from 9-10 patients with CS and 56 images from 56 patients with EC. Three experienced radiologists independently interpreted the same test images. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) for each sequence were compared between the CNN models and the radiologists.
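A key detail in the design above is that training and test sets were separated at the patient level (all images from a given patient fall on one side of the split), which prevents slices from the same patient leaking across sets. The paper does not publish its splitting code; the sketch below is a minimal, hypothetical illustration of such a patient-level split, with all names (`patient_level_split`, `images_by_patient`) invented for this example.

```python
import random

def patient_level_split(images_by_patient, n_test_patients, seed=0):
    """Split images into train/test sets at the patient level.

    images_by_patient: dict mapping patient ID -> list of image identifiers.
    Every image from a given patient lands in exactly one set, so no
    patient's slices leak from training into testing.
    """
    rng = random.Random(seed)
    patients = sorted(images_by_patient)   # sort for reproducibility
    rng.shuffle(patients)
    test_ids = set(patients[:n_test_patients])
    train = [img for p in patients if p not in test_ids
             for img in images_by_patient[p]]
    test = [img for p in patients if p in test_ids
            for img in images_by_patient[p]]
    return train, test
```

With a split like this, the "9-10 images of 9-10 patients" test set follows naturally: roughly one image per held-out patient.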
Results: The CNN models achieved, across sequences, a sensitivity of 0.89-0.93, specificity of 0.44-0.70, accuracy of 0.83-0.89, and AUC of 0.80-0.94, showing diagnostic performance equivalent to or better than that of the 3 readers (sensitivity 0.43-0.91, specificity 0.30-0.78, accuracy 0.45-0.88, and AUC 0.49-0.92). The CNN model performed best on T2WI (sensitivity 0.93, specificity 0.70, accuracy 0.89, and AUC 0.94).
Conclusions: Deep learning provided diagnostic performance comparable to or better than that of experienced radiologists in distinguishing CS from EC on MRI.
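The metrics reported throughout the abstract derive from standard confusion-matrix counts and a pairwise ranking statistic. As a minimal sketch (not the authors' code, and with the example counts purely illustrative), they can be computed as:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    Here a 'positive' would be a CS case, a 'negative' an EC case.
    """
    sensitivity = tp / (tp + fn)                 # CS correctly identified
    specificity = tn / (tn + fp)                 # EC correctly identified
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    return sensitivity, specificity, accuracy

def rank_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half); equivalent to the Mann-Whitney U statistic."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, 9 true positives, 1 false negative, 7 true negatives, and 3 false positives would give a sensitivity of 0.90, a specificity of 0.70, and an accuracy of 0.80, in the same range as the T2WI results above.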