Assessing the quality of ChatGPT's responses to commonly asked questions about trigger finger treatment.

Mehmet Can Gezer, Mehmet Armangil
{"title":"Assessing the quality of ChatGPT's responses to commonly asked questions about trigger finger treatment.","authors":"Mehmet Can Gezer, Mehmet Armangil","doi":"10.14744/tjtes.2025.32735","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>This study aims to evaluate the accuracy and reliability of Generative Pre-trained Transformer (ChatGPT; OpenAI, San Francisco, California) in answering patient-related questions about trigger finger. This evaluation has the potential to enhance patient education prior to treatment and provides insight into the role of artificial intelligence (AI)-based systems in the patient educa-tion process.</p><p><strong>Methods: </strong>The ten most frequently asked questions regarding trigger finger were compiled from patient education websites and a literature review, then posed to ChatGPT. Two orthopedic specialists evaluated the responses using the Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN instrument (A Tool for Judging the Quality of Written Consumer Health Information on Treatment Choices). Additionally, the readability of the responses was assessed using the Flesch-Kincaid Grade Level.</p><p><strong>Results: </strong>The DISCERN scores for ChatGPT's responses to trigger finger questions ranged from 35 to 47, with an average of 42, indicating \"moderate\" quality. While 60% of the responses were satisfactory, 40% contained deficiencies. According to the JAMA Benchmark criteria, the absence of scientific references was a significant drawback. The average readability level corresponded to the university level, making the information difficult to understand for patients with low health literacy. Improvements are needed to enhance the accessibility and comprehensibility of the content for a broader patient population.</p><p><strong>Conclusion: </strong>To the best of our knowledge, this is the first study to investigate the use of ChatGPT in the context of trigger finger. While ChatGPT shows reasonable effectiveness in providing general information on trigger finger, expert oversight is necessary before it can be relied upon as a primary source for patient education.</p>","PeriodicalId":94263,"journal":{"name":"Ulusal travma ve acil cerrahi dergisi = Turkish journal of trauma & emergency surgery : TJTES","volume":"31 4","pages":"389-393"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12000978/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ulusal travma ve acil cerrahi dergisi = Turkish journal of trauma & emergency surgery : TJTES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14744/tjtes.2025.32735","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Background: This study aims to evaluate the accuracy and reliability of Generative Pre-trained Transformer (ChatGPT; OpenAI, San Francisco, California) in answering patient-related questions about trigger finger. Such an evaluation has the potential to enhance patient education prior to treatment and to provide insight into the role of artificial intelligence (AI)-based systems in the patient education process.

Methods: The ten most frequently asked questions regarding trigger finger were compiled from patient education websites and a literature review, then posed to ChatGPT. Two orthopedic specialists evaluated the responses using the Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN instrument (A Tool for Judging the Quality of Written Consumer Health Information on Treatment Choices). Additionally, the readability of the responses was assessed using the Flesch-Kincaid Grade Level.
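For reference, the Flesch-Kincaid Grade Level used in the assessment is computed as 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59, with scores of roughly 13 and above corresponding to university-level text. A minimal Python sketch follows; the regex-based syllable counter is an illustrative approximation, not the tool the authors used:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a silent trailing 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```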

Results: The DISCERN scores for ChatGPT's responses to trigger finger questions ranged from 35 to 47, with an average of 42, indicating "moderate" quality. While 60% of the responses were satisfactory, 40% contained deficiencies. Under the JAMA Benchmark criteria, the absence of scientific references was a significant drawback. The average readability corresponded to a university reading level, making the information difficult for patients with low health literacy to understand. Improvements are needed to make the content more accessible and comprehensible to a broader patient population.
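For context, the DISCERN instrument comprises 16 items each rated 1-5, giving totals from 16 to 80 that are often read against conventional quality bands; under one commonly cited banding, the study's mean of 42 falls in the "fair" (moderate) band. A small sketch of that mapping, noting that the cutoffs are a convention that varies slightly between studies and is not defined in this abstract:

```python
def discern_band(total: int) -> str:
    """Map a DISCERN total (16 items x 1-5, range 16-80) to a commonly
    used quality band; cutoffs vary slightly across studies."""
    if not 16 <= total <= 80:
        raise ValueError("DISCERN totals range from 16 to 80")
    if total >= 63:
        return "excellent"
    if total >= 51:
        return "good"
    if total >= 39:
        return "fair"  # the reported mean of 42 falls here
    if total >= 27:
        return "poor"
    return "very poor"
```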

Conclusion: To the best of our knowledge, this is the first study to investigate the use of ChatGPT in the context of trigger finger. While ChatGPT shows reasonable effectiveness in providing general information on trigger finger, expert oversight is necessary before it can be relied upon as a primary source for patient education.
