Comparative Performance of Anthropic Claude and OpenAI GPT Models in Basic Radiological Imaging Tasks.

Impact Factor 2.2 · CAS Tier 4 (Medicine) · JCR Q2: RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Cindy Nguyen, Daniel Carrion, Mohamed K Badawy
{"title":"Comparative Performance of Anthropic Claude and OpenAI GPT Models in Basic Radiological Imaging Tasks.","authors":"Cindy Nguyen, Daniel Carrion, Mohamed K Badawy","doi":"10.1111/1754-9485.13858","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities on these models could enhance radiology workflows. Evaluating their performance in radiological image interpretation is vital to their potential integration into practice.</p><p><strong>Aim: </strong>This study aims to evaluate the proficiency and consistency of the publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations in basic image interpretation tasks.</p><p><strong>Method: </strong>Subsets from publicly available datasets, ROCOv2 and MURAv1.1, were used to evaluate 6 VLMs. A system prompt and image were input into each model three times. The outputs were compared to the dataset captions to evaluate each model's accuracy in recognising the modality, anatomy, and detecting fractures on radiographs. The consistency of the output across iterations was also analysed.</p><p><strong>Results: </strong>Evaluation of the ROCOv2 dataset showed high accuracy in modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% accuracy across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition with 57% accuracy, while GPT-4o had the best fracture detection with 62% accuracy. Claude-3.5-Sonnet was the most consistent model, with 83% and 92% consistency in anatomy and fracture detection, respectively.</p><p><strong>Conclusion: </strong>Given Claude and GPT's current accuracy and reliability, the integration of these models into clinical settings is not yet feasible. This study highlights the need for ongoing development and establishment of standardised testing techniques to ensure these models achieve reliable performance.</p>","PeriodicalId":16218,"journal":{"name":"Journal of Medical Imaging and Radiation Oncology","volume":" ","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Imaging and Radiation Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1111/1754-9485.13858","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Publicly available artificial intelligence (AI) Vision Language Models (VLMs) are constantly improving. The advent of vision capabilities on these models could enhance radiology workflows. Evaluating their performance in radiological image interpretation is vital to their potential integration into practice.

Aim: This study aims to evaluate the proficiency and consistency of the publicly available VLMs, Anthropic's Claude and OpenAI's GPT, across multiple iterations in basic image interpretation tasks.

Method: Subsets of the publicly available ROCOv2 and MURAv1.1 datasets were used to evaluate six VLMs. A system prompt and an image were input into each model three times. The outputs were compared against the dataset captions to evaluate each model's accuracy in recognising the modality and anatomy, and in detecting fractures on radiographs. The consistency of the outputs across iterations was also analysed.
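The abstract does not include the evaluation harness itself. The sketch below shows how such a repeated-query protocol could be scripted in Python; the query_model wrapper, the prompt wording, and the caption-derived ground-truth fields (modality, anatomy, fracture) are all assumptions for illustration, not details taken from the paper.

import base64
from collections import Counter
from pathlib import Path

N_ITERATIONS = 3  # each prompt/image pair is submitted three times

SYSTEM_PROMPT = (
    "You are reviewing a radiological image. State the imaging "
    "modality, the anatomy shown, and whether a fracture is present."
)

def query_model(model_name: str, system_prompt: str, image_b64: str) -> dict:
    # Hypothetical wrapper around a vendor SDK (Anthropic or OpenAI);
    # a real harness would call the SDK here and parse the reply into
    # {'modality': ..., 'anatomy': ..., 'fracture': ...}.
    raise NotImplementedError("plug in the vendor SDK call here")

def evaluate(model_name: str, cases: list[dict]) -> dict:
    # cases: [{'image_path': ..., 'modality': ..., 'anatomy': ..., 'fracture': ...}]
    correct = Counter()
    for case in cases:
        image_b64 = base64.b64encode(Path(case["image_path"]).read_bytes()).decode()
        outputs = [query_model(model_name, SYSTEM_PROMPT, image_b64)
                   for _ in range(N_ITERATIONS)]
        for field in ("modality", "anatomy", "fracture"):
            # score each of the three iterations against the caption label
            correct[field] += sum(o[field] == case[field] for o in outputs) / N_ITERATIONS
    n = len(cases)
    return {field: correct[field] / n for field in ("modality", "anatomy", "fracture")}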

Results: Evaluation of the ROCOv2 dataset showed high accuracy in modality recognition, with some models achieving 100%. Anatomical recognition ranged between 61% and 85% accuracy across all models tested. On the MURAv1.1 dataset, Claude-3.5-Sonnet had the highest anatomical recognition with 57% accuracy, while GPT-4o had the best fracture detection with 62% accuracy. Claude-3.5-Sonnet was the most consistent model, with 83% and 92% consistency in anatomy and fracture detection, respectively.
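The abstract does not define its consistency metric. One plausible reading, sketched below under that assumption, is the proportion of images for which all three iterations returned the same answer, regardless of whether that answer was correct.

def consistency(outputs_per_image: list[list[str]]) -> float:
    # Fraction of images whose repeated outputs all agree. One inner
    # list per image, holding the answer (e.g. the anatomy label) from
    # each of the three iterations. Assumed definition, not the paper's.
    unanimous = sum(len(set(answers)) == 1 for answers in outputs_per_image)
    return unanimous / len(outputs_per_image)

# Three images, three iterations each: two unanimous out of three.
print(consistency([["wrist"] * 3, ["wrist", "hand", "wrist"], ["elbow"] * 3]))  # ~0.667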

Conclusion: Given Claude and GPT's current accuracy and reliability, the integration of these models into clinical settings is not yet feasible. This study highlights the need for ongoing development and establishment of standardised testing techniques to ensure these models achieve reliable performance.

Source journal: Journal of Medical Imaging and Radiation Oncology
CiteScore: 3.30 · Self-citation rate: 6.20% · Annual output: 133 articles · Review time: 6-12 weeks
Journal introduction: Journal of Medical Imaging and Radiation Oncology (formerly Australasian Radiology) is the official journal of The Royal Australian and New Zealand College of Radiologists, publishing articles of scientific excellence in radiology and radiation oncology. Manuscripts are judged on the basis of their contribution of original data and ideas or interpretation. All articles are peer reviewed.