Assessing ChatGPT responses to frequently asked questions regarding total shoulder arthroplasty

Jeremy M. Adelstein MD, Margaret A. Sinkler MD, Lambert T. Li MD, Raymond Chen MD, Robert J. Gillespie MD, Jacob Calcei MD

Seminars in Arthroplasty, Vol. 34, No. 2, pp. 416–424. Published 2024-02-08. DOI: 10.1053/j.sart.2024.01.003. JCR: Q4 (Medicine).

Abstract

Background

“Dr. Google” has long been a resource for health information-seeking individuals. With the well-established presence of artificial intelligence in the healthcare world, it is reasonable to imagine that ChatGPT, an artificial intelligence-powered online chatbot, could become the next outlet for seeking medical advice online. Similar to Mika et al., this study aims to analyze ChatGPT’s ability to answer frequently asked questions (FAQs) regarding total shoulder arthroplasty (TSA).

Methods

Ten FAQs regarding TSA were presented to ChatGPT and initial responses were recorded and analyzed against evidence-based literature. Responses were rated as “excellent response requiring no clarification,” “satisfactory response requiring minimal clarification,” “satisfactory response requiring moderate clarification,” or “unsatisfactory response requiring substantial clarification.”

Results

Only one response from ChatGPT was rated unsatisfactory and required substantial clarification. While no responses received an excellent rating, the average response was rated as requiring only minimal or moderate clarification.

Conclusion

ChatGPT was able to provide largely accurate responses to FAQs regarding TSA while appropriately reiterating the importance of always consulting a medical professional. ChatGPT could prove to be another avenue for supplementary medical information regarding TSA.
