ChatGPT Provides Accurate but Incomplete Responses and Reliably Adjusts Readability to Prompts for Hamstring Injury Frequently Asked Questions
Thomas W. Fenn M.D., Dominic M. Farronato M.D., Douglas K. Wells M.D., George B. Reahl M.D., F. Winston Gwathmey M.D., Charles A. Su M.D., Ph.D.
Arthroscopy Sports Medicine and Rehabilitation, Volume 7, Issue 4, Article 101200, August 2025. doi:10.1016/j.asmr.2025.101200
Abstract
Purpose
To evaluate the accuracy of ChatGPT’s responses to frequently asked questions (FAQs) about hamstring injuries and to determine whether, when prompted, ChatGPT could appropriately tailor its responses to a suggested reading level.
Methods
A preliminary list of 15 questions on hamstring injuries was developed from the FAQ sections of patient education websites from a variety of institutions, from which the 10 most frequently cited questions were selected. Three queries were performed by inputting the questions into ChatGPT-4.0: (1) an unprompted, naïve query; (2) a query with an additional prompt specifying that responses be tailored to a seventh-grade reading level; and (3) a query with an additional prompt specifying that responses be tailored to a college-graduate reading level. The responses from the unprompted query were independently evaluated by two of the authors. To assess the quality of the answers, a grading system was applied: (A) correct and sufficient response; (B) correct but insufficient response; (C) response containing both correct and incorrect information; and (D) incorrect response. In addition, the readability of each response was measured using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) scales.
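As a brief illustration (not the authors' code), the sketch below computes the two readability metrics named above from a block of response text using the published Flesch formulas. The study does not specify which software calculated its scores, so the regex-based sentence splitter and vowel-group syllable counter here are simplifying assumptions for demonstration only.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    # Flesch Reading Ease Score: higher values indicate easier reading.
    fres = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Flesch-Kincaid Grade Level: approximate U.S. school grade needed to read the text.
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fres, fkgl

# Example with a hypothetical one-sentence answer about hamstring injuries.
fres, fkgl = readability(
    "Most hamstring strains heal with rest, ice, and a gradual return to activity."
)
print(f"FRES = {fres:.1f}, FKGL = {fkgl:.1f}")
```

In practice, readability calculators differ slightly in how they count sentences and syllables, so values computed with this sketch may not exactly reproduce the scores reported in the study.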
Results
Ten responses were evaluated. Inter-rater reliability for grading was 0.6. In the initial query, 2 of 10 responses received a grade of A, seven were graded B, and one was graded C. The average cumulative FRES and FKGL scores of the initial query were 61.64 and 10.28, respectively. The average cumulative FRES and FKGL scores of the secondary query were 75.2 and 6.1, respectively. Finally, the average FRES and FKGL scores of the third query were 12.08 and 17.23, respectively.
Conclusions
ChatGPT showed generally satisfactory accuracy in responding to questions regarding hamstring injuries, although certain responses lacked completeness or specificity. On initial, unprompted queries, the readability of responses aligned with a tenth-grade level. However, when explicitly prompted, ChatGPT reliably adjusted the complexity of its responses to both a seventh-grade and a graduate-level reading standard. These findings suggest that although ChatGPT may not consistently deliver fully comprehensive medical information, it possesses the capacity to adapt its output to meet specific readability targets.
Clinical Relevance
Artificial intelligence models like ChatGPT have the potential to serve as a supplemental educational tool for patients with orthopaedic injuries and to aid medical decision making. It is important that we continually review the quality of the medical information generated by these artificial intelligence models as they evolve and improve.