Jian Wang, HongTian Tian, Xin Yang, HuaiYu Wu, XiLiang Zhu, RuSi Chen, Ao Chang, YanLin Chen, HaoRan Dou, RuoBing Huang, Jun Cheng, YongSong Zhou, Rui Gao, KeEn Yang, GuoQiu Li, Jing Chen, Dong Ni, JinFeng Xu, Ning Gu, FaJin Dong
Artificial Intelligence in Breast US Diagnosis and Report Generation.
Radiology: Artificial Intelligence, e240625. Published July 1, 2025. DOI: 10.1148/ryai.240625

Purpose: To develop and evaluate an artificial intelligence (AI) system for generating breast US reports.

Materials and Methods: This retrospective study included 104 364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82 896 cases, validated on 10 385 cases, and tested on an internal set (10 383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (each with >10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (with 7 years of experience), as well as reports from three junior radiologists (each with 2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for noninferiority and significance testing.

Results: In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% (285 of 300) versus 92.33% (277 of 300) (P < .001, noninferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% (348 of 400) versus 90.75% (363 of 400) (P = .06), 86.50% (346 of 400) versus 92.00% (368 of 400) (P = .007), and 84.75% (339 of 400) versus 90.25% (361 of 400) (P = .02) without and with AI assistance, respectively.

Conclusion: The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification.

Keywords: Neural Networks, Computer-aided Diagnosis, CAD, Ultrasound. Supplemental material is available for this article. © RSNA, 2025.
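The acceptance-rate comparisons above use McNemar tests on paired reader outcomes (the same 400 cases read with and without AI assistance). As a minimal sketch of the two-sided exact variant, using only the standard library and illustrative discordant-pair counts that are not taken from the study:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test p value.

    b, c: counts of discordant pairs, e.g., cases where a report's
    BI-RADS category was accepted without AI assistance but not with
    it, and vice versa. Under the null hypothesis of no difference,
    the smaller count follows Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence either way
    k = min(b, c)
    # One-sided binomial tail P(X <= k), then double and cap at 1.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative example: 5 vs 15 discordant pairs.
p = mcnemar_exact(5, 15)
print(f"p = {p:.4f}")
```

The one-sided noninferiority test in the study additionally builds in the prespecified 10% margin; the sketch above covers only the plain paired comparison.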