ChatGPT provides acceptable responses to patient questions regarding common shoulder pathology.

IF 1.5 Q3 ORTHOPEDICS
Umar Ghilzai, Benjamin Fiedler, Abdullah Ghali, Aaron Singh, Benjamin Cass, Allan Young, Adil Shahzad Ahmed
{"title":"ChatGPT 可就患者提出的有关常见肩部病理的问题提供可接受的答复。","authors":"Umar Ghilzai, Benjamin Fiedler, Abdullah Ghali, Aaron Singh, Benjamin Cass, Allan Young, Adil Shahzad Ahmed","doi":"10.1177/17585732241283971","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>ChatGPT is rapidly becoming a source of medical knowledge for patients. This study aims to assess the completeness and accuracy of ChatGPT's answers to the most frequently asked patients' questions about shoulder pathology.</p><p><strong>Methods: </strong>ChatGPT (version 3.5) was queried to produce the five most common shoulder pathologies: biceps tendonitis, rotator cuff tears, shoulder arthritis, shoulder dislocation and adhesive capsulitis. Subsequently, it generated the five most common patient questions regarding these pathologies and was queried to respond. Responses were evaluated by three shoulder and elbow fellowship-trained orthopedic surgeons with a mean of 9 years of independent practice, on Likert scales for accuracy (1-6) and completeness (rated 1-3).</p><p><strong>Results: </strong>For all questions, responses were deemed acceptable, rated at least \"nearly all correct,\" indicated by a score of 5 or greater for accuracy, and \"adequately complete,\" indicated by a minimum of 2 for completeness. The mean scores for accuracy and completeness, respectively, were 5.5 and 2.6 for rotator cuff tears, 5.8 and 2.7 for shoulder arthritis, 5.5 and 2.3 for shoulder dislocations, 5.1 and 2.4 for adhesive capsulitis, 5.8 and 2.9 for biceps tendonitis.</p><p><strong>Conclusion: </strong>ChatGPT provides both accurate and complete responses to the most common patients' questions about shoulder pathology. These findings suggest that Large Language Models might play a role as a patient resource; however, patients should always verify online information with their physician.</p><p><strong>Level of evidence: </strong>Level V Expert Opinion.</p>","PeriodicalId":36705,"journal":{"name":"Shoulder and Elbow","volume":" ","pages":"17585732241283971"},"PeriodicalIF":1.5000,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11559869/pdf/","citationCount":"0","resultStr":"{\"title\":\"ChatGPT provides acceptable responses to patient questions regarding common shoulder pathology.\",\"authors\":\"Umar Ghilzai, Benjamin Fiedler, Abdullah Ghali, Aaron Singh, Benjamin Cass, Allan Young, Adil Shahzad Ahmed\",\"doi\":\"10.1177/17585732241283971\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>ChatGPT is rapidly becoming a source of medical knowledge for patients. This study aims to assess the completeness and accuracy of ChatGPT's answers to the most frequently asked patients' questions about shoulder pathology.</p><p><strong>Methods: </strong>ChatGPT (version 3.5) was queried to produce the five most common shoulder pathologies: biceps tendonitis, rotator cuff tears, shoulder arthritis, shoulder dislocation and adhesive capsulitis. Subsequently, it generated the five most common patient questions regarding these pathologies and was queried to respond. 
Responses were evaluated by three shoulder and elbow fellowship-trained orthopedic surgeons with a mean of 9 years of independent practice, on Likert scales for accuracy (1-6) and completeness (rated 1-3).</p><p><strong>Results: </strong>For all questions, responses were deemed acceptable, rated at least \\\"nearly all correct,\\\" indicated by a score of 5 or greater for accuracy, and \\\"adequately complete,\\\" indicated by a minimum of 2 for completeness. The mean scores for accuracy and completeness, respectively, were 5.5 and 2.6 for rotator cuff tears, 5.8 and 2.7 for shoulder arthritis, 5.5 and 2.3 for shoulder dislocations, 5.1 and 2.4 for adhesive capsulitis, 5.8 and 2.9 for biceps tendonitis.</p><p><strong>Conclusion: </strong>ChatGPT provides both accurate and complete responses to the most common patients' questions about shoulder pathology. These findings suggest that Large Language Models might play a role as a patient resource; however, patients should always verify online information with their physician.</p><p><strong>Level of evidence: </strong>Level V Expert Opinion.</p>\",\"PeriodicalId\":36705,\"journal\":{\"name\":\"Shoulder and Elbow\",\"volume\":\" \",\"pages\":\"17585732241283971\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11559869/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Shoulder and Elbow\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/17585732241283971\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Shoulder and Elbow","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/17585732241283971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Citations: 0

Abstract


Background: ChatGPT is rapidly becoming a source of medical knowledge for patients. This study aims to assess the completeness and accuracy of ChatGPT's answers to the most frequently asked patient questions about shoulder pathology.

Methods: ChatGPT (version 3.5) was queried to list the five most common shoulder pathologies: biceps tendonitis, rotator cuff tears, shoulder arthritis, shoulder dislocation and adhesive capsulitis. It was then prompted to generate the five most common patient questions regarding these pathologies and to answer them. Responses were evaluated by three shoulder and elbow fellowship-trained orthopedic surgeons with a mean of 9 years of independent practice, on Likert scales for accuracy (1-6) and completeness (1-3).
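A minimal sketch of the querying workflow is given below, assuming reproduction through the OpenAI chat completions API rather than the ChatGPT interface used in the study; the model name, prompt wording and the ask() helper are illustrative assumptions, not taken from the paper.

```python
# Sketch of the two-step querying workflow described in the Methods,
# assuming the OpenAI chat completions API (model name and prompts are
# illustrative assumptions, not the study's exact protocol).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: elicit the five most common shoulder pathologies.
pathologies_reply = ask("List the five most common shoulder pathologies.")

# Step 2: for each pathology, elicit the five most common patient
# questions, then ask the model to answer them.
pathologies = [
    "biceps tendonitis", "rotator cuff tears", "shoulder arthritis",
    "shoulder dislocation", "adhesive capsulitis",
]
for pathology in pathologies:
    questions = ask(
        f"What are the five most common questions patients ask about {pathology}?"
    )
    answers = ask(
        f"Answer each of the following patient questions about {pathology}:\n{questions}"
    )
    print(pathology, answers, sep="\n")
```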

Results: For all questions, responses were deemed acceptable: each was rated at least "nearly all correct" (a score of 5 or greater for accuracy) and "adequately complete" (a score of 2 or greater for completeness). The mean scores for accuracy and completeness, respectively, were 5.5 and 2.6 for rotator cuff tears, 5.8 and 2.7 for shoulder arthritis, 5.5 and 2.3 for shoulder dislocations, 5.1 and 2.4 for adhesive capsulitis, and 5.8 and 2.9 for biceps tendonitis.
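The acceptability rule can be made concrete with a short sketch. The rater scores below are hypothetical (the abstract reports only per-pathology means), and it is assumed here that the thresholds apply to the mean across the three reviewers.

```python
# Illustrative acceptability check using hypothetical reviewer scores;
# only per-pathology means are reported in the abstract.
from statistics import mean

# Hypothetical Likert ratings from three reviewers for one response:
# (accuracy on the 1-6 scale, completeness on the 1-3 scale).
ratings = [(6, 3), (5, 2), (6, 3)]

accuracy_mean = mean(a for a, _ in ratings)       # ~5.7
completeness_mean = mean(c for _, c in ratings)   # ~2.7

# Thresholds from the abstract: accuracy >= 5 ("nearly all correct")
# and completeness >= 2 ("adequately complete").
acceptable = accuracy_mean >= 5 and completeness_mean >= 2
print(f"accuracy={accuracy_mean:.1f}, completeness={completeness_mean:.1f}, acceptable={acceptable}")
```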

Conclusion: ChatGPT provides both accurate and complete responses to the most common patient questions about shoulder pathology. These findings suggest that large language models may serve as a patient resource; however, patients should always verify online information with their physician.

Level of evidence: Level V Expert Opinion.

Source journal: Shoulder and Elbow (Medicine - Rehabilitation)
CiteScore: 2.80
Self-citation rate: 0.00%
Articles per year: 91