Assessing appropriate responses to ACR urologic imaging scenarios using ChatGPT and Bard

IF 1.5 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Sishir Doddi, Taryn Hibshman, Oscar Salichs, Kaustav Bera, Charit Tippareddy, Nikhil Ramaiya, Sree Harsha Tirumani
DOI: 10.1067/j.cpradiol.2023.10.022
Journal: Current Problems in Diagnostic Radiology, Volume 53, Issue 2, Pages 226-229
Published: 2024-03-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0363018823001755
Citations: 0

Abstract

Artificial intelligence (AI) has recently become a trending tool and topic for productivity, especially with publicly available free services such as ChatGPT and Bard. In this report, we investigate whether two widely available chatbots, ChatGPT and Bard, can give consistently accurate responses about the best imaging modality for urologic clinical situations, and whether their responses are in line with the American College of Radiology (ACR) Appropriateness Criteria (AC). All clinical scenarios provided by the ACR were input into ChatGPT and Bard, and the results were compared with the ACR AC and recorded. Both chatbots suggested an appropriate imaging modality at a rate of 62%, and no significant difference in the proportion of correct imaging modalities was found overall between the two services (p>0.05). Our study found that ChatGPT and Bard are similar in their ability to suggest the most appropriate imaging modality in a variety of urologic scenarios based on the ACR AC. Nonetheless, both chatbots lack consistent accuracy, and further development is necessary before implementation in clinical settings. For proper use of these AI services in clinical decision making, further developments are needed to improve the workflow of physicians.
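The abstract's comparison of the two chatbots hinges on a test of two proportions (proportion of appropriate responses for ChatGPT vs. Bard, with p>0.05 meaning no significant difference). A minimal sketch of such a pooled two-proportion z-test is shown below; the scenario counts are hypothetical (the abstract reports only the 62% rate, not the denominator), and the paper's exact statistical method is not stated here, so this is illustrative rather than a reproduction of the study's analysis.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 62% appropriate responses for each chatbot,
# assuming 50 scenarios per service (not taken from the paper).
z, p = two_proportion_z_test(31, 50, 31, 50)
print(f"z = {z:.3f}, p = {p:.3f}")
```

With identical observed proportions the statistic is zero and p = 1.0, consistent with the abstract's finding of no significant difference between the two services.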

Source journal

Current Problems in Diagnostic Radiology
CiteScore: 3.00
Self-citation rate: 0.00%
Articles per year: 113
Review time: 46 days
Journal description: Current Problems in Diagnostic Radiology covers important and controversial topics in radiology. Each issue presents important viewpoints from leading radiologists. High-quality reproductions of radiographs, CT scans, MR images, and sonograms clearly depict what is being described in each article. Also included are valuable updates relevant to other areas of practice, such as medical-legal issues or archiving systems. With its multi-topic format and image-intensive style, Current Problems in Diagnostic Radiology offers an outstanding, time-saving investigation into current topics most relevant to radiologists.