Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.

Impact Factor: 1.5 · JCR Q3 (Orthopedics)
Shoulder and Elbow · Pub Date: 2024-07-01 · Epub Date: 2024-09-20 · DOI: 10.1177/17585732241259754
Ariana L Shaari, Adam N Fano, Oke Anakwenze, Christopher Klifto
Shoulder and Elbow, vol. 16, no. 4, pp. 429–435. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418706/pdf/
Citations: 0

Abstract

Background: Artificial intelligence (AI) has progressed rapidly. ChatGPT, a fast-growing AI platform, has an expanding range of applications in medicine and patient care. However, its ability to provide high-quality answers to patient questions about orthopedic procedures such as Tommy John surgery is unknown. Our objective is to evaluate the quality of information provided by ChatGPT 3.5 and 4.0 in response to patient questions regarding Tommy John surgery.

Methods: Twenty-five patient questions regarding Tommy John surgery were posed to ChatGPT 3.5 and 4.0. Readability was assessed via the Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, Gunning Fog Score, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. The quality of each response was graded using a 5-point Likert scale.
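The two Flesch Kincaid measures named above are closed-form formulas over sentence, word, and syllable counts, so the scoring step can be sketched directly. The snippet below is a minimal illustration, not the tooling the authors used; in particular, the syllable counter is a rough vowel-group heuristic, whereas readability software typically uses dictionary-based counts.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, drop a silent trailing 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    return {
        # Flesch Reading Ease: higher = easier; ~60-70 is plain English.
        "reading_ease": 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w),
        # Flesch Kincaid Grade Level: approximate US school grade.
        "grade_level": 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59,
    }
```

Because both formulas penalize long sentences and polysyllabic words, jargon-heavy medical prose scores many grade levels above a short plain-English sentence, which is the pattern the study reports for ChatGPT's output.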

Results: ChatGPT generated information at an educational level that greatly exceeds the recommended level. ChatGPT 4.0 produced slightly better responses, with fewer inaccuracies, than ChatGPT 3.5.

Conclusion: Although ChatGPT can provide accurate information regarding Tommy John surgery, its responses may not be easily comprehended by the average patient. As AI platforms become more accessible to the public, patients must be aware of their limitations.

Source journal: Shoulder and Elbow (Medicine – Rehabilitation) · CiteScore: 2.80 · Self-citation rate: 0.00% · Articles published: 91