The assessment of ChatGPT-4's performance compared to expert's consensus on chronic lateral ankle instability

IF 2.7 Q2 ORTHOPEDICS
Takuji Yokoe, Giulia Roversi, Nuno Sevivas, Naosuke Kamei, Pedro Diniz, Hélder Pereira
{"title":"The assessment of ChatGPT-4's performance compared to expert's consensus on chronic lateral ankle instability","authors":"Takuji Yokoe,&nbsp;Giulia Roversi,&nbsp;Nuno Sevivas,&nbsp;Naosuke Kamei,&nbsp;Pedro Diniz,&nbsp;Hélder Pereira","doi":"10.1002/jeo2.70393","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Purpose</h3>\n \n <p>To evaluate the accuracy of answers to clinical questions on the surgical treatment of chronic lateral ankle instability (CLAI) using ChatGPT-4 as a reference for consensus statements developed by the ESSKA-AFAS Ankle Instability Group (AIG). This study simulated the clinical settings where non-expert clinicians treat patients with CLAI.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The large language model (LLM) ChatGPT-4 was used on 10 February 2025 to answer a total of 17 questions regarding the surgical management of CLAI that were developed by the ESSKA-AFAS AIG. The ChatGPT responses were compared with the consensus statements developed by ESSKA-AFAS AIG. The consistency and accuracy of the answers by ChatGPT as a reference for the experts' answers were evaluated. The consistency of ChatGPT's answers to the consensus statements was assessed by the question, 'Is the answer by ChatGPT agreement with those by the experts? (Yes or No)'. Four scoring categories: Accuracy, Overconclusiveness (proposed recommendation despite the lack of consensus), Supplementary (additional information not covered by the consensus statement), and Incompleteness, were used to evaluate the quality of ChatGPT's answers.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Of the 17 questions on the surgical management of CLAI, 11 answers (64.7%) were agreement with the consensus statements by the experts. The percentages of ChatGPT's answers that were considered ‘Yes’ in the Accuracy and Supplementary were 64.7% (11/17) and 70.6% (12/17), respectively. The percentages of ChatGPT's answers that were considered “No” in the Overconclusiveness and Incompleteness were 76.5% (13/17) and 88.2% (15/17), respectively.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>The present study showed that ChatGPT-4 could not provide answers to queries on the surgical management of CLAI, such as foot and ankle experts. However, ChatGPT also showed its promising potential for its application when managing patients with CLAI.</p>\n </section>\n \n <section>\n \n <h3> Level of Evidence</h3>\n \n <p>Level Ⅳ.</p>\n </section>\n </div>","PeriodicalId":36909,"journal":{"name":"Journal of Experimental Orthopaedics","volume":"12 3","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jeo2.70393","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Orthopaedics","FirstCategoryId":"1085","ListUrlMain":"https://esskajournals.onlinelibrary.wiley.com/doi/10.1002/jeo2.70393","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose

To evaluate the accuracy of ChatGPT-4's answers to clinical questions on the surgical treatment of chronic lateral ankle instability (CLAI), using the consensus statements developed by the ESSKA-AFAS Ankle Instability Group (AIG) as the reference standard. The study simulated a clinical setting in which non-expert clinicians treat patients with CLAI.

Methods

The large language model (LLM) ChatGPT-4 was queried on 10 February 2025 with 17 questions on the surgical management of CLAI that were developed by the ESSKA-AFAS AIG. The responses were compared with the consensus statements developed by the ESSKA-AFAS AIG, and their consistency and accuracy were evaluated against the experts' answers. Consistency with the consensus statements was assessed with the question, 'Is the answer by ChatGPT in agreement with those by the experts? (Yes or No)'. Four scoring categories were used to evaluate the quality of ChatGPT's answers: Accuracy, Overconclusiveness (a recommendation proposed despite the lack of consensus), Supplementary (additional information not covered by the consensus statement) and Incompleteness.
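The abstract does not reproduce the authors' querying procedure. The sketch below shows one way such an evaluation could be scripted with the OpenAI Python client; the QUESTIONS list and the grading note are illustrative placeholders, not the study's materials.

```python
# Minimal sketch (not the authors' script): posing consensus questions to
# ChatGPT-4 through the OpenAI Python client and collecting the answers
# for later expert grading. QUESTIONS is an illustrative placeholder for
# the 17 ESSKA-AFAS AIG items, which are not reproduced here.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUESTIONS = [
    "Is there an indication for surgical treatment of CLAI after failed "
    "non-operative management?",  # placeholder wording, not the AIG item
    # ... the remaining consensus questions ...
]

answers = []
for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    answers.append(response.choices[0].message.content)

# Each collected answer would then be graded by reviewers against the
# matching consensus statement: consistency (Yes/No) plus the four quality
# categories (Accuracy, Overconclusiveness, Supplementary, Incompleteness).
```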

Results

Of the 17 questions on the surgical management of CLAI, 11 answers (64.7%) were in agreement with the experts' consensus statements. The percentages of ChatGPT's answers rated 'Yes' for Accuracy and Supplementary were 64.7% (11/17) and 70.6% (12/17), respectively. The percentages rated 'No' for Overconclusiveness and Incompleteness were 76.5% (13/17) and 88.2% (15/17), respectively.
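For readers who want to check the arithmetic, the following snippet derives the reported percentages from the Yes/No counts over the 17 questions; the counts are those stated above.

```python
# Reproduces the percentages reported above from the raw Yes/No counts.
counts = {
    "Consistency (Yes)": 11,
    "Accuracy (Yes)": 11,
    "Supplementary (Yes)": 12,
    "Overconclusiveness (No)": 13,
    "Incompleteness (No)": 15,
}
TOTAL = 17
for category, n in counts.items():
    print(f"{category}: {n}/{TOTAL} = {n / TOTAL:.1%}")
# e.g., Accuracy (Yes): 11/17 = 64.7%
```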

Conclusion

The present study showed that ChatGPT-4 could not answer queries on the surgical management of CLAI at the level of foot and ankle experts. However, ChatGPT also showed promising potential as a supporting tool when managing patients with CLAI.

Level of Evidence

Level IV.


Source journal: Journal of Experimental Orthopaedics (Medicine: Orthopedics and Sports Medicine). CiteScore 3.20; self-citation rate 5.60%; annual publications 114; review time 13 weeks.