ChatGPT Provides Accurate but Incomplete Responses and Reliably Adjusts Readability to Prompts for Hamstring Injury Frequently Asked Questions

Thomas W. Fenn M.D., Dominic M. Farronato M.D., Douglas K. Wells M.D., George B. Reahl M.D., F. Winston Gwathmey M.D., Charles A. Su M.D., Ph.D.
{"title":"ChatGPT Provides Accurate but Incomplete Responses and Reliably Adjusts Readability to Prompts for Hamstring Injury Frequently Asked Questions","authors":"Thomas W. Fenn M.D.,&nbsp;Dominic M. Farronato M.D.,&nbsp;Douglas K. Wells M.D.,&nbsp;George B. Reahl M.D.,&nbsp;F. Winston Gwathmey M.D.,&nbsp;Charles A. Su M.D., Ph.D.","doi":"10.1016/j.asmr.2025.101200","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><div>To evaluate the accuracy of ChatGPT’s responses to frequently asked questions (FAQs) about hamstring injuries and to determine, if prompted, whether ChatGPT could appropriately tailor the reading level to that suggested.</div></div><div><h3>Methods</h3><div>A preliminary list of 15 questions on hamstring injuries was developed from various FAQ sections on patient education websites from a variety of institutions, from which the 10 most frequently cited questions were selected. Three queries were performed, inputting the questions into ChatGPT-4.0: (1) unprompted, naïve, (2) additional prompt specifying the response being tailored to a seventh-grade reading level, and (3) additional prompt specifying the response being tailored to a college graduate reading level. The responses from the unprompted query were independently evaluated by two of the authors. To assess the quality of the answers, a grading system was applied: (A) correct and sufficient response; (B) correct but insufficient response; (C) response containing both correct and incorrect information; and (D) incorrect response. In addition, the readability of each response was measured using the Flesch-Kinkaid Reading Ease Score (FRES) and Grade Level (FKGL) scales.</div></div><div><h3>Results</h3><div>Ten responses were evaluated. Inter-rater reliability was 0.6 regarding grading. Of the initial query, 2 of 10 responses received a grade of A, seven were graded as B, and one were graded as C. The average cumulative FRES and FKGL scores of the initial query was 61.64 and 10.28, respectively. The average cumulative FRES and FKGL scores of the secondary query were 75.2 and 6.1, respectively. Finally, the average FRES and FKGL scores of the third query were 12.08 and 17.23.</div></div><div><h3>Conclusions</h3><div>ChatGPT showed generally satisfactory accuracy in responding to questions regarding hamstring injuries, although certain responses lacked completeness or specificity. On initial, unprompted queries, the readability of responses aligned with a tenth-grade level. However, when explicitly prompted, ChatGPT reliably adjusted the complexity of its responses to both a seventh-grade and a graduate-level reading standard. These findings suggest that although ChatGPT may not consistently deliver fully comprehensive medical information, it possesses the capacity to adapt its output to meet specific readability targets.</div></div><div><h3>Clinical Relevance</h3><div>Artificial intelligence models like ChatGPT have the potential to serve as a supplemental educational tool for patients with orthopaedic to aid medical-decision making. 
It is important that we continually review the quality of they medical information generated by these artificial models as the evolve and improve.</div></div>","PeriodicalId":34631,"journal":{"name":"Arthroscopy Sports Medicine and Rehabilitation","volume":"7 4","pages":"Article 101200"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Arthroscopy Sports Medicine and Rehabilitation","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666061X25001269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose

To evaluate the accuracy of ChatGPT’s responses to frequently asked questions (FAQs) about hamstring injuries and to determine whether, when prompted, ChatGPT could appropriately tailor the reading level of its responses to the level requested.

Methods

A preliminary list of 15 questions on hamstring injuries was developed from the FAQ sections of patient education websites across a variety of institutions, and the 10 most frequently cited questions were selected. Each question was input into ChatGPT-4.0 under three query conditions: (1) unprompted (naïve); (2) with an additional prompt specifying that the response be tailored to a seventh-grade reading level; and (3) with an additional prompt specifying that the response be tailored to a college-graduate reading level. The responses from the unprompted query were independently evaluated by two of the authors. To assess the quality of the answers, a grading system was applied: (A) correct and sufficient response; (B) correct but insufficient response; (C) response containing both correct and incorrect information; and (D) incorrect response. In addition, the readability of each response was measured using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) scales.
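For concreteness, FRES and FKGL are simple functions of sentence, word, and syllable counts. The sketch below applies the standard published formulas to a response string; the query templates, the example question wording, and the vowel-group syllable counter are illustrative assumptions, not details taken from the study, which does not report its scoring tool.

```python
import re

# Illustrative prompt variants for one FAQ; the study's exact wording is not given.
QUESTION = "What causes hamstring injuries?"
QUERIES = [
    QUESTION,                                                   # (1) unprompted, naive
    QUESTION + " Answer at a seventh-grade reading level.",     # (2) seventh-grade prompt
    QUESTION + " Answer at a college-graduate reading level.",  # (3) graduate-level prompt
]

def count_syllables(word: str) -> int:
    # Crude approximation: count runs of consecutive vowels.
    # Dedicated readability tools use dictionaries or finer rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) using the standard Flesch/Flesch-Kincaid formulas."""
    n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    wps = n_words / n_sentences   # average words per sentence
    spw = n_syllables / n_words   # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease Score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return fres, fkgl
```

Higher FRES indicates easier text (the unprompted queries' 61.64 falls in the "plain English" band), while FKGL maps directly onto a U.S. school grade, consistent with the roughly tenth-grade level reported in the Results.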

Results

Ten responses were evaluated. Inter-rater reliability for grading was 0.6. In the initial query, 2 of 10 responses received a grade of A, 7 were graded B, and 1 was graded C. The average FRES and FKGL scores for the initial query were 61.64 and 10.28, respectively. The average FRES and FKGL scores for the second query were 75.2 and 6.1, respectively. Finally, the average FRES and FKGL scores for the third query were 12.08 and 17.23, respectively.
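The abstract reports inter-rater reliability without naming the statistic; for two raters assigning categorical grades, Cohen's kappa is a common choice. A minimal sketch under that assumption, with hypothetical grade lists:

```python
from collections import Counter

def cohens_kappa(r1: list[str], r2: list[str]) -> float:
    # Observed agreement: fraction of items both raters graded identically.
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement from each rater's marginal grade frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[g] * c2[g] for g in set(r1) | set(r2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grade assignments for the 10 responses (illustrative only,
# not the study's data).
rater1 = ["A", "B", "B", "B", "B", "A", "B", "B", "C", "B"]
rater2 = ["A", "B", "B", "B", "A", "A", "B", "C", "C", "B"]
print(round(cohens_kappa(rater1, rater2), 2))  # ~0.65 with these made-up grades
```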

Conclusions

ChatGPT showed generally satisfactory accuracy in responding to questions regarding hamstring injuries, although certain responses lacked completeness or specificity. On initial, unprompted queries, the readability of responses aligned with a tenth-grade level. However, when explicitly prompted, ChatGPT reliably adjusted the complexity of its responses to both a seventh-grade and a graduate-level reading standard. These findings suggest that although ChatGPT may not consistently deliver fully comprehensive medical information, it possesses the capacity to adapt its output to meet specific readability targets.

Clinical Relevance

Artificial intelligence models like ChatGPT have the potential to serve as a supplemental educational tool for patients with orthopaedic injuries and to aid medical decision-making. It is important that we continually review the quality of the medical information generated by these artificial intelligence models as they evolve and improve.