Benjamin T. Lack, Edwin Mouhawasse, Justin T. Childers, Garrett R. Jackson, Shay V. Daji, Payton Yerke-Hansen, Filippo Familiari, Derrick M. Knapik, Vani J. Sabesan
{"title":"ChatGPT 能否回答患者有关反向肩关节置换术的问题?","authors":"Benjamin T. Lack , Edwin Mouhawasse , Justin T. Childers , Garrett R. Jackson , Shay V. Daji , Payton Yerke-Hansen , Filippo Familiari , Derrick M. Knapik , Vani J. Sabesan","doi":"10.1016/j.jisako.2024.100323","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>In recent years, artificial intelligence (AI) has seen substantial progress in its utilization, with Chat Generated Pre-Trained Transformer (ChatGPT) is emerging as a popular language model. The purpose of this study was to test the accuracy and reliability of ChatGPT's responses to frequently asked questions (FAQ) pertaining to reverse shoulder arthroplasty (RSA).</div></div><div><h3>Methods</h3><div>The ten most common FAQs were queried from institution patient education websites. These ten questions were then input into the chatbot during a single session without additional contextual information. The responses were then critically analyzed by two orthopedic surgeons for clarity, accuracy, and the quality of evidence-based information using The Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN score. The readability of the responses was analyzed using the Flesch-Kincaid Grade Level.</div></div><div><h3>Results</h3><div>In response to the ten questions, the average DISCERN score was 44 (range 38–51). Seven responses were classified as fair and three were poor. The JAMA Benchmark criteria score was 0 for all responses. Furthermore, the average Flesch-Kincaid Grade Level was 14.35, which correlates to a college graduate reading level.</div></div><div><h3>Conclusion</h3><div>Overall, ChatGPT was able to provide fair responses to common patient questions. However, the responses were all written at a college graduate reading level and lacked reliable citations. The readability greatly limits its utility. Thus, adequate patient education should be done by orthopedic surgeons. This study underscores the need for patient education resources that are reliable, accessible, and comprehensible.</div></div><div><h3>Level of evidence</h3><div>IV.</div></div>","PeriodicalId":36847,"journal":{"name":"Journal of ISAKOS Joint Disorders & Orthopaedic Sports Medicine","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can ChatGPT answer patient questions regarding reverse shoulder arthroplasty?\",\"authors\":\"Benjamin T. Lack , Edwin Mouhawasse , Justin T. Childers , Garrett R. Jackson , Shay V. Daji , Payton Yerke-Hansen , Filippo Familiari , Derrick M. Knapik , Vani J. Sabesan\",\"doi\":\"10.1016/j.jisako.2024.100323\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Introduction</h3><div>In recent years, artificial intelligence (AI) has seen substantial progress in its utilization, with Chat Generated Pre-Trained Transformer (ChatGPT) is emerging as a popular language model. The purpose of this study was to test the accuracy and reliability of ChatGPT's responses to frequently asked questions (FAQ) pertaining to reverse shoulder arthroplasty (RSA).</div></div><div><h3>Methods</h3><div>The ten most common FAQs were queried from institution patient education websites. These ten questions were then input into the chatbot during a single session without additional contextual information. 
The responses were then critically analyzed by two orthopedic surgeons for clarity, accuracy, and the quality of evidence-based information using The Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN score. The readability of the responses was analyzed using the Flesch-Kincaid Grade Level.</div></div><div><h3>Results</h3><div>In response to the ten questions, the average DISCERN score was 44 (range 38–51). Seven responses were classified as fair and three were poor. The JAMA Benchmark criteria score was 0 for all responses. Furthermore, the average Flesch-Kincaid Grade Level was 14.35, which correlates to a college graduate reading level.</div></div><div><h3>Conclusion</h3><div>Overall, ChatGPT was able to provide fair responses to common patient questions. However, the responses were all written at a college graduate reading level and lacked reliable citations. The readability greatly limits its utility. Thus, adequate patient education should be done by orthopedic surgeons. This study underscores the need for patient education resources that are reliable, accessible, and comprehensible.</div></div><div><h3>Level of evidence</h3><div>IV.</div></div>\",\"PeriodicalId\":36847,\"journal\":{\"name\":\"Journal of ISAKOS Joint Disorders & Orthopaedic Sports Medicine\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of ISAKOS Joint Disorders & Orthopaedic Sports Medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2059775424001706\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of ISAKOS Joint Disorders & Orthopaedic Sports Medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2059775424001706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Can ChatGPT answer patient questions regarding reverse shoulder arthroplasty?
Introduction
In recent years, the use of artificial intelligence (AI) has expanded substantially, with Chat Generative Pre-trained Transformer (ChatGPT) emerging as a popular language model. The purpose of this study was to test the accuracy and reliability of ChatGPT's responses to frequently asked questions (FAQs) pertaining to reverse shoulder arthroplasty (RSA).
Methods
The ten most common FAQs were identified from institutional patient education websites. These ten questions were then input into the chatbot during a single session without additional contextual information. The responses were critically analyzed by two orthopedic surgeons for clarity, accuracy, and quality of evidence-based information using the Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN score. Readability of the responses was assessed using the Flesch-Kincaid Grade Level.
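For reference, the Flesch-Kincaid Grade Level is a standard readability formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59, with the resulting number approximating a U.S. school grade. The Python sketch below illustrates the calculation using a simple vowel-group syllable heuristic; it is an illustrative implementation of the metric, not the specific tool used by the authors.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Split the text into sentences and words with simple regexes.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid Grade Level formula.
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Example usage: a score near 14 corresponds to a college-graduate reading level.
print(round(flesch_kincaid_grade("Reverse shoulder arthroplasty replaces the damaged joint surfaces with a prosthesis."), 2))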
Results
Across the ten questions, the average DISCERN score was 44 (range 38–51). Seven responses were classified as fair and three as poor. The JAMA Benchmark criteria score was 0 for all responses. Furthermore, the average Flesch-Kincaid Grade Level was 14.35, which corresponds to a college graduate reading level.
Conclusion
Overall, ChatGPT was able to provide fair responses to common patient questions. However, the responses were all written at a college graduate reading level and lacked reliable citations; this poor readability greatly limits their utility. Thus, adequate patient education should be provided by orthopedic surgeons. This study underscores the need for patient education resources that are reliable, accessible, and comprehensible.
Level of evidence
IV.