ChatGPT's diagnostic performance based on textual vs. visual information compared to radiologists' diagnostic performance in musculoskeletal radiology.

IF 4.7 · CAS Medicine Tier 2 · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
European Radiology · Pub Date: 2025-01-01 · Epub Date: 2024-07-12 · DOI: 10.1007/s00330-024-10902-5
Daisuke Horiuchi, Hiroyuki Tatekawa, Tatsushi Oura, Taro Shimono, Shannon L Walston, Hirotaka Takita, Shu Matsushita, Yasuhito Mitsuyama, Yukio Miki, Daiju Ueda
Citations: 0

Abstract

Objectives: To compare the diagnostic accuracy of Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V) based ChatGPT, and radiologists in musculoskeletal radiology.

Materials and methods: We included 106 "Test Yourself" cases from Skeletal Radiology between January 2014 and September 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, then both generated a diagnosis for each case. Two radiologists (a radiology resident and a board-certified radiologist) independently provided diagnoses for all cases. The diagnostic accuracy rates were determined based on the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists.
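The two input modes differ only in how the case material reaches the model: textual findings for GPT-4, the images themselves for GPT-4V. A minimal sketch of how such requests might be structured as OpenAI-style chat message payloads is below; the prompt wording and helper names are illustrative assumptions, not the authors' actual prompts.

```python
# Hedged sketch: building the two kinds of request payloads the study
# describes. Prompt text and function names are our assumptions; only the
# message/content structure follows the OpenAI chat format.

def text_mode_messages(history: str, findings: str) -> list:
    """GPT-4 mode: medical history plus a textual description of findings."""
    return [{
        "role": "user",
        "content": (f"Medical history: {history}\n"
                    f"Imaging findings: {findings}\n"
                    "What is the most likely diagnosis?"),
    }]

def vision_mode_messages(history: str, image_urls: list) -> list:
    """GPT-4V mode: medical history plus the case images themselves."""
    content = [{"type": "text",
                "text": (f"Medical history: {history}\n"
                         "What is the most likely diagnosis?")}]
    content += [{"type": "image_url", "image_url": {"url": url}}
                for url in image_urls]
    return [{"role": "user", "content": content}]
```

The payloads are constructed offline here; sending them to a model endpoint is a separate step and would require credentials.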

Results: GPT-4-based ChatGPT significantly outperformed GPT-4V-based ChatGPT (p < 0.001) with accuracy rates of 43% (46/106) and 8% (9/106), respectively. The radiology resident and the board-certified radiologist achieved accuracy rates of 41% (43/106) and 53% (56/106). The diagnostic accuracy of GPT-4-based ChatGPT was comparable to that of the radiology resident, but was lower than that of the board-certified radiologist although the differences were not significant (p = 0.78 and 0.22, respectively). The diagnostic accuracy of GPT-4V-based ChatGPT was significantly lower than those of both radiologists (p < 0.001 and < 0.001, respectively).
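The reported counts are enough to reconstruct the chi-square comparisons. The sketch below uses a plain Pearson chi-square on a 2×2 correct/incorrect table, with no continuity correction, so the p-values can differ slightly from the published ones (which may use Yates' correction); the direction and significance of each comparison are unaffected.

```python
import math

N = 106  # number of "Test Yourself" cases

def chi2_2x2(correct_a: int, correct_b: int, n: int = N) -> float:
    """Pearson chi-square test (1 dof, no continuity correction) on a 2x2
    correct/incorrect table; returns the two-sided p-value."""
    a, b = correct_a, n - correct_a  # reader A: correct, incorrect
    c, d = correct_b, n - correct_b  # reader B: correct, incorrect
    chi2 = (a + b + c + d) * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, chi2 = z**2, so p = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

p_gpt4_vs_gpt4v = chi2_2x2(46, 9)     # GPT-4 (46/106) vs GPT-4V (9/106)
p_gpt4_vs_resident = chi2_2x2(46, 43) # GPT-4 vs radiology resident (43/106)
print(f"GPT-4 vs GPT-4V:   p = {p_gpt4_vs_gpt4v:.1e}")  # well below 0.001
print(f"GPT-4 vs resident: p = {p_gpt4_vs_resident:.2f}")  # not significant
```

Running this reproduces the pattern in the abstract: GPT-4 vs GPT-4V is highly significant, while GPT-4 vs the resident (and vs the board-certified radiologist, 56/106) is not significant at the 0.05 level.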

Conclusion: GPT-4-based ChatGPT demonstrated significantly higher diagnostic accuracy than GPT-4V-based ChatGPT. While GPT-4-based ChatGPT's diagnostic performance was comparable to radiology residents, it did not reach the performance level of board-certified radiologists in musculoskeletal radiology.

Clinical relevance statement: GPT-4-based ChatGPT outperformed GPT-4V-based ChatGPT and was comparable to radiology residents, but it did not reach the level of board-certified radiologists in musculoskeletal radiology. Radiologists should comprehend ChatGPT's current performance as a diagnostic tool for optimal utilization.

Key points: This study compared the diagnostic performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists in musculoskeletal radiology. GPT-4-based ChatGPT was comparable to radiology residents, but did not reach the level of board-certified radiologists. When utilizing ChatGPT, it is crucial to input appropriate descriptions of imaging findings rather than the images.


Source journal

European Radiology (Medicine – Nuclear Medicine)
CiteScore: 11.60
Self-citation rate: 8.50%
Articles per year: 874
Review turnaround: 2-4 weeks
Journal description: European Radiology (ER) continuously updates scientific knowledge in radiology by publishing strong original articles and state-of-the-art reviews written by leading radiologists. A well-balanced combination of review articles, original papers, short communications from European radiological congresses, and information on society matters makes ER an indispensable source of current information in this field. It is the journal of the European Society of Radiology and the official journal of a number of societies. From 2004 to 2008, supplements to European Radiology were published under its companion title, European Radiology Supplements (ISSN 1613-3749).