William L. Johns M.D. , Alec Kellish M.D. , Dominic Farronato B.S. , Michael G. Ciccotti M.D. , Sommer Hammoud M.D.
ChatGPT Can Offer Satisfactory Responses to Common Patient Questions Regarding Elbow Ulnar Collateral Ligament Reconstruction
Purpose
To determine whether ChatGPT effectively responds to 10 commonly asked questions concerning ulnar collateral ligament (UCL) reconstruction.
Methods
A comprehensive list of 90 UCL reconstruction questions was initially created, from which a final set of 10 “most commonly asked” questions was selected. Each question was presented to ChatGPT, and its responses were documented. Responses were evaluated independently by 3 authors using an evidence-based methodology and graded as follows: (1) excellent response not requiring clarification; (2) satisfactory requiring minimal clarification; (3) satisfactory requiring moderate clarification; and (4) unsatisfactory requiring substantial clarification.
Results
Six of 10 responses were rated as “excellent” or “satisfactory.” Of those 6 responses, 2 were determined to be “excellent response not requiring clarification,” 3 were “satisfactory requiring minimal clarification,” and 1 was “satisfactory requiring moderate clarification.” Four questions, including “What are the potential risks of UCL reconstruction surgery?” “Which type of graft should be used for my UCL reconstruction?” and “Should I have UCL reconstruction or repair?” were rated as “unsatisfactory requiring substantial clarification.”
Conclusions
ChatGPT exhibited the potential to improve a patient’s basic understanding of UCL reconstruction and provided responses deemed satisfactory to excellent for 60% of the most commonly asked questions. For the remaining 40% of questions, ChatGPT gave unsatisfactory responses, primarily because they lacked relevant details or required further explanation.
Clinical Relevance
ChatGPT can assist in patient education regarding UCL reconstruction; however, its ability to appropriately answer more complex questions remains an area of skepticism and a target for future improvement.