Artificial Intelligence in Breast US Diagnosis and Report Generation.

IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jian Wang, HongTian Tian, Xin Yang, HuaiYu Wu, XiLiang Zhu, RuSi Chen, Ao Chang, YanLin Chen, HaoRan Dou, RuoBing Huang, Jun Cheng, YongSong Zhou, Rui Gao, KeEn Yang, GuoQiu Li, Jing Chen, Dong Ni, JinFeng Xu, Ning Gu, FaJin Dong
{"title":"人工智能在乳腺诊断和报告生成中的应用。","authors":"Jian Wang, HongTian Tian, Xin Yang, HuaiYu Wu, XiLiang Zhu, RuSi Chen, Ao Chang, YanLin Chen, HaoRan Dou, RuoBing Huang, Jun Cheng, YongSong Zhou, Rui Gao, KeEn Yang, GuoQiu Li, Jing Chen, Dong Ni, JinFeng Xu, Ning Gu, FaJin Dong","doi":"10.1148/ryai.240625","DOIUrl":null,"url":null,"abstract":"<p><p>Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast US reports. Materials and Methods This retrospective study included 104 364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82 896 cases, validated on 10 385 cases, and tested on an internal set (10 383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (each with >10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (with 7 years of experience), as well as reports from three junior radiologists (each with 2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for noninferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% (285 of 300) versus 92.33% (277 of 300) (<i>P</i> < .001, noninferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% (348 of 400) versus 90.75% (363 of 400) (<i>P</i> = .06), 86.50% (346 of 400) versus 92.00% (368 of 400) (<i>P</i> = .007), and 84.75% (339 of 400) versus 90.25% (361 of 400) (<i>P</i> = .02) without and with AI assistance, respectively. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. <b>Keywords:</b> Neural Networks, Computer-aided Diagnosis, CAD, Ultrasound <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240625"},"PeriodicalIF":13.2000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence in Breast US Diagnosis and Report Generation.\",\"authors\":\"Jian Wang, HongTian Tian, Xin Yang, HuaiYu Wu, XiLiang Zhu, RuSi Chen, Ao Chang, YanLin Chen, HaoRan Dou, RuoBing Huang, Jun Cheng, YongSong Zhou, Rui Gao, KeEn Yang, GuoQiu Li, Jing Chen, Dong Ni, JinFeng Xu, Ning Gu, FaJin Dong\",\"doi\":\"10.1148/ryai.240625\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast US reports. Materials and Methods This retrospective study included 104 364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82 896 cases, validated on 10 385 cases, and tested on an internal set (10 383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (each with >10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (with 7 years of experience), as well as reports from three junior radiologists (each with 2-3 years of experience) with and without AI assistance. 
The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for noninferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% (285 of 300) versus 92.33% (277 of 300) (<i>P</i> < .001, noninferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% (348 of 400) versus 90.75% (363 of 400) (<i>P</i> = .06), 86.50% (346 of 400) versus 92.00% (368 of 400) (<i>P</i> = .007), and 84.75% (339 of 400) versus 90.25% (361 of 400) (<i>P</i> = .02) without and with AI assistance, respectively. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. <b>Keywords:</b> Neural Networks, Computer-aided Diagnosis, CAD, Ultrasound <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>\",\"PeriodicalId\":29787,\"journal\":{\"name\":\"Radiology-Artificial Intelligence\",\"volume\":\" \",\"pages\":\"e240625\"},\"PeriodicalIF\":13.2000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Radiology-Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1148/ryai.240625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.240625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. The article will undergo copyediting, layout, and proof review before publication in its final form. Please note that during production of the final copyedited article, errors may be discovered that could affect the content.

Purpose: To develop and evaluate an artificial intelligence (AI) system for generating breast US reports.
Materials and Methods: This retrospective study included 104 364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82 896 cases, validated on 10 385 cases, and tested on an internal set (10 383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (each with >10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (with 7 years of experience), as well as reports from three junior radiologists (each with 2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for noninferiority and significance testing.
Results: In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% (285 of 300) versus 92.33% (277 of 300) (P < .001, noninferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% (348 of 400) versus 90.75% (363 of 400) (P = .06), 86.50% (346 of 400) versus 92.00% (368 of 400) (P = .007), and 84.75% (339 of 400) versus 90.25% (361 of 400) (P = .02) without and with AI assistance, respectively.
Conclusion: The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification.
Keywords: Neural Networks, Computer-aided Diagnosis, CAD, Ultrasound
Supplemental material is available for this article. © RSNA, 2025.
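The paired comparison described in the abstract (a McNemar test on matched acceptance outcomes, judged against a prespecified 10% noninferiority margin) can be illustrated with a minimal sketch. This is not the authors' code: the 2 x 2 cell counts below are hypothetical placeholders chosen only so the marginals match the reported 95.00% and 92.33% acceptance rates, and the Wald-type upper bound is one common way to check a paired noninferiority margin, not necessarily the exact procedure used in the study.

# Illustrative sketch of a paired McNemar analysis with a 10% noninferiority margin.
# Cell counts are hypothetical; only the marginal rates match the abstract.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table for the 300 cases of external test set 1 (illustrative):
# rows = midlevel radiologist report accepted (yes/no)
# cols = AI-generated report accepted (yes/no)
table = np.array([
    [270, 15],   # both accepted / only the radiologist's report accepted
    [  7,  8],   # only the AI report accepted / neither accepted
])

# Two-sided exact McNemar test on the discordant pairs.
result = mcnemar(table, exact=True)
print(f"McNemar statistic={result.statistic:.0f}, p={result.pvalue:.4f}")

# Paired difference in acceptance rates (radiologist minus AI) with a
# Wald-type one-sided 95% upper bound; noninferiority at a 10% margin
# holds if that upper bound stays below 0.10.
n = table.sum()
p_rad = table[0].sum() / n          # 285/300 in this illustrative table
p_ai = table[:, 0].sum() / n        # 277/300 in this illustrative table
diff = p_rad - p_ai
b, c = table[0, 1], table[1, 0]     # discordant counts
se = np.sqrt((b + c) - (b - c) ** 2 / n) / n
upper = diff + 1.645 * se
print(f"difference={diff:.4f}, upper bound={upper:.4f}, "
      f"noninferior at 10% margin: {upper < 0.10}")

With these placeholder counts the upper bound falls well below 0.10, mirroring the kind of result the abstract reports; with real data the conclusion depends on the observed discordant pairs.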

Source journal: Radiology: Artificial Intelligence. CiteScore: 16.20. Self-citation rate: 1.00%.
About the journal: Radiology: Artificial Intelligence is a bi-monthly publication that focuses on the emerging applications of machine learning and artificial intelligence in the field of imaging across various disciplines. This journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.