Pilar López-Úbeda PhD, Teodoro Martín-Noguerol MD, Jorge Escartín MD, Antonio Luna MD, PhD
{"title":"自然语言处理在自动检测放射学报告中的意外发现中的作用:RoBERTa、CNN 和 ChatGPT 的比较研究。","authors":"Pilar López-Úbeda PhD , Teodoro Martín-Noguerol MD , Jorge Escartín MD , Antonio Luna MD, PhD","doi":"10.1016/j.acra.2024.07.057","DOIUrl":null,"url":null,"abstract":"<div><h3>Rationale and Objectives</h3><div>Large Language Models can capture the context of radiological reports, offering high accuracy in detecting unexpected findings. We aim to fine-tune a Robustly Optimized BERT Pretraining Approach (RoBERTa) model for the automatic detection of unexpected findings in radiology reports to assist radiologists in this relevant task. Second, we compared the performance of RoBERTa with classical convolutional neural network (CNN) and with GPT4 for this goal.</div></div><div><h3>Materials and Methods</h3><div>For this study, a dataset consisting of 44,631 radiological reports for training and 5293 for the initial test set was used. A smaller subset comprising 100 reports was utilized for the comparative test set. The complete dataset was obtained from our institution's Radiology Information System, including reports from various dates, examinations, genders, ages, etc. For the study's methodology, we evaluated two Large Language Models, specifically performing fine-tuning on RoBERTa and developing a prompt for ChatGPT. Furthermore, extending previous studies, we included a CNN in our comparison.</div></div><div><h3>Results</h3><div>The results indicate an accuracy of 86.15% in the initial test set using the RoBERTa model. Regarding the comparative test set, RoBERTa achieves an accuracy of 79%, ChatGPT 64%, and the CNN 49%. Notably, RoBERTa outperforms the other systems by 30% and 15%, respectively.</div></div><div><h3>Conclusion</h3><div>Fine-tuned RoBERTa model can accurately detect unexpected findings in radiology reports outperforming the capability of CNN and ChatGPT for this task.</div></div>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"31 12","pages":"Pages 4833-4842"},"PeriodicalIF":3.8000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Role of Natural Language Processing in Automatic Detection of Unexpected Findings in Radiology Reports: A Comparative Study of RoBERTa, CNN, and ChatGPT\",\"authors\":\"Pilar López-Úbeda PhD , Teodoro Martín-Noguerol MD , Jorge Escartín MD , Antonio Luna MD, PhD\",\"doi\":\"10.1016/j.acra.2024.07.057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Rationale and Objectives</h3><div>Large Language Models can capture the context of radiological reports, offering high accuracy in detecting unexpected findings. We aim to fine-tune a Robustly Optimized BERT Pretraining Approach (RoBERTa) model for the automatic detection of unexpected findings in radiology reports to assist radiologists in this relevant task. Second, we compared the performance of RoBERTa with classical convolutional neural network (CNN) and with GPT4 for this goal.</div></div><div><h3>Materials and Methods</h3><div>For this study, a dataset consisting of 44,631 radiological reports for training and 5293 for the initial test set was used. A smaller subset comprising 100 reports was utilized for the comparative test set. The complete dataset was obtained from our institution's Radiology Information System, including reports from various dates, examinations, genders, ages, etc. 
For the study's methodology, we evaluated two Large Language Models, specifically performing fine-tuning on RoBERTa and developing a prompt for ChatGPT. Furthermore, extending previous studies, we included a CNN in our comparison.</div></div><div><h3>Results</h3><div>The results indicate an accuracy of 86.15% in the initial test set using the RoBERTa model. Regarding the comparative test set, RoBERTa achieves an accuracy of 79%, ChatGPT 64%, and the CNN 49%. Notably, RoBERTa outperforms the other systems by 30% and 15%, respectively.</div></div><div><h3>Conclusion</h3><div>Fine-tuned RoBERTa model can accurately detect unexpected findings in radiology reports outperforming the capability of CNN and ChatGPT for this task.</div></div>\",\"PeriodicalId\":50928,\"journal\":{\"name\":\"Academic Radiology\",\"volume\":\"31 12\",\"pages\":\"Pages 4833-4842\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2024-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Academic Radiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1076633224005622\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Radiology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1076633224005622","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0
摘要
理由和目标:大语言模型可以捕捉放射学报告的上下文,在检测意外发现方面具有很高的准确性。我们的目标是对用于自动检测放射学报告中意外发现的鲁棒优化 BERT 预训练方法(RoBERTa)模型进行微调,以协助放射科医生完成这项相关任务。其次,我们将 RoBERTa 的性能与经典卷积神经网络(CNN)和 GPT4 进行了比较:在这项研究中,我们使用了一个由 44631 份放射报告组成的数据集作为训练集,5293 份报告作为初始测试集。对比测试集则使用了由 100 份报告组成的较小子集。完整的数据集来自本机构的放射学信息系统,包括不同日期、检查、性别、年龄等的报告。在研究方法上,我们评估了两个大型语言模型,特别是对 RoBERTa 进行了微调,并为 ChatGPT 开发了一个提示。此外,在以往研究的基础上,我们还在比较中加入了 CNN:结果表明,使用 RoBERTa 模型,初始测试集的准确率为 86.15%。在比较测试集中,RoBERTa 的准确率为 79%,ChatGPT 为 64%,CNN 为 49%。值得注意的是,RoBERTa 的准确率分别比其他系统高出 30% 和 15%:经过微调的 RoBERTa 模型可以准确检测出放射学报告中的意外发现,在这项任务中的表现优于 CNN 和 ChatGPT。
Role of Natural Language Processing in Automatic Detection of Unexpected Findings in Radiology Reports: A Comparative Study of RoBERTa, CNN, and ChatGPT
Rationale and Objectives
Large Language Models can capture the context of radiology reports, offering high accuracy in detecting unexpected findings. We aimed to fine-tune a Robustly Optimized BERT Pretraining Approach (RoBERTa) model for the automatic detection of unexpected findings in radiology reports, to assist radiologists in this task. We also compared the performance of RoBERTa with that of a classical convolutional neural network (CNN) and of GPT-4 for this goal.
Materials and Methods
For this study, a dataset of 44,631 radiology reports was used for training and 5,293 for the initial test set. A smaller subset of 100 reports was used for the comparative test set. The complete dataset was obtained from our institution's Radiology Information System and includes reports spanning various dates, examination types, genders, and ages. Methodologically, we evaluated two Large Language Models: we fine-tuned RoBERTa and developed a prompt for ChatGPT. Furthermore, extending previous studies, we included a CNN in the comparison.
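To make the fine-tuning step concrete, below is a minimal sketch of binary classification of report text ("unexpected finding" vs. "none") with a RoBERTa checkpoint using the Hugging Face transformers library. The checkpoint name, file names, column names, and hyperparameters are illustrative assumptions, not the configuration reported in the study.

```python
# Minimal fine-tuning sketch (assumptions: CSV files with "report" and "label" columns,
# a generic "roberta-base" checkpoint, and illustrative hyperparameters).
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

CHECKPOINT = "roberta-base"  # the study likely used a language/domain-specific variant

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# Hypothetical files: one row per report, label 1 = unexpected finding present.
data = load_dataset("csv", data_files={"train": "train_reports.csv",
                                       "test": "test_reports.csv"})

def tokenize(batch):
    return tokenizer(batch["report"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True).rename_column("label", "labels")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-unexpected-findings",
                           num_train_epochs=3,
                           per_device_train_batch_size=16,
                           learning_rate=2e-5),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # accuracy on the held-out test split
```

For the prompt-based comparison, each report can be sent to a GPT-4-class model with instructions to answer with a single label. The sketch below uses the OpenAI Python client; the prompt wording and model name are assumptions, not the authors' actual prompt.

```python
# Minimal prompt-based classification sketch; prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_report(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You label radiology reports. Answer only 'unexpected' or "
                        "'expected' depending on whether the report describes an "
                        "unexpected finding."},
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()
```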
Results
The RoBERTa model reached an accuracy of 86.15% on the initial test set. On the comparative test set, RoBERTa achieved an accuracy of 79%, ChatGPT 64%, and the CNN 49%; that is, RoBERTa outperformed ChatGPT and the CNN by 15 and 30 percentage points, respectively.
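The percentage-point gaps follow directly from the reported comparative-test-set accuracies; a quick arithmetic check:

```python
# Arithmetic check of the comparative-test-set gaps reported above.
accuracies = {"RoBERTa": 79, "ChatGPT": 64, "CNN": 49}  # accuracy in %
for system in ("ChatGPT", "CNN"):
    gap = accuracies["RoBERTa"] - accuracies[system]
    print(f"RoBERTa outperforms {system} by {gap} percentage points")
# RoBERTa outperforms ChatGPT by 15 percentage points
# RoBERTa outperforms CNN by 30 percentage points
```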
Conclusion
The fine-tuned RoBERTa model can accurately detect unexpected findings in radiology reports, outperforming both the CNN and ChatGPT on this task.
About the journal:
Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.