ChatGPT provides acceptable responses to patient questions regarding common shoulder pathology.

Umar Ghilzai, Benjamin Fiedler, Abdullah Ghali, Aaron Singh, Benjamin Cass, Allan Young, Adil Shahzad Ahmed

Shoulder and Elbow, published 2024-09-25. DOI: 10.1177/17585732241283971
Background: ChatGPT is rapidly becoming a source of medical knowledge for patients. This study aims to assess the completeness and accuracy of ChatGPT's answers to patients' most frequently asked questions about shoulder pathology.
Methods: ChatGPT (version 3.5) was prompted to list the five most common shoulder pathologies: biceps tendonitis, rotator cuff tears, shoulder arthritis, shoulder dislocation and adhesive capsulitis. It was then prompted to generate the five most common patient questions about these pathologies and to answer them. Responses were evaluated by three shoulder and elbow fellowship-trained orthopedic surgeons with a mean of 9 years of independent practice, using Likert scales for accuracy (rated 1-6) and completeness (rated 1-3).
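A minimal sketch of how this querying workflow could be reproduced programmatically, assuming access to the OpenAI API with the gpt-3.5-turbo model; the study does not state whether the web interface or the API was used, and the prompts and model name here are illustrative, not the authors' exact protocol:

    # Hypothetical reproduction of the querying workflow; prompts, model name,
    # and loop structure are assumptions, not the study's exact protocol.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single user prompt and return the model's text reply."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Step 1: elicit the five most common shoulder pathologies.
    pathologies = ask("List the five most common shoulder pathologies.")

    # Step 2: for each pathology, elicit the five most common patient questions,
    # then have the model answer them so the answers can be rated by reviewers.
    for pathology in ["biceps tendonitis", "rotator cuff tears", "shoulder arthritis",
                      "shoulder dislocation", "adhesive capsulitis"]:
        questions = ask(f"What are the five most common patient questions about {pathology}?")
        answers = ask(f"Answer each of the following patient questions about {pathology}:\n{questions}")
        print(pathology, answers, sep="\n")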
Results: For all questions, responses were deemed acceptable: rated at least "nearly all correct" (an accuracy score of 5 or greater) and "adequately complete" (a completeness score of at least 2). The mean accuracy and completeness scores, respectively, were 5.5 and 2.6 for rotator cuff tears, 5.8 and 2.7 for shoulder arthritis, 5.5 and 2.3 for shoulder dislocations, 5.1 and 2.4 for adhesive capsulitis, and 5.8 and 2.9 for biceps tendonitis.
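As a worked illustration of how such per-pathology means are obtained, the sketch below averages Likert ratings from three reviewers across five questions for one pathology; the individual ratings are invented for illustration and are not the study's data:

    from statistics import mean

    # Invented ratings from three reviewers for one pathology (not study data):
    # accuracy on a 1-6 scale, completeness on a 1-3 scale, 3 reviewers x 5 questions.
    accuracy_ratings = [6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 5]
    completeness_ratings = [3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3]

    print(f"mean accuracy: {mean(accuracy_ratings):.1f}")          # 5.5 on the 1-6 scale
    print(f"mean completeness: {mean(completeness_ratings):.1f}")  # 2.6 on the 1-3 scale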
Conclusion: ChatGPT provides both accurate and complete responses to the most common patient questions about shoulder pathology. These findings suggest that large language models might play a role as a patient resource; however, patients should always verify online information with their physician.

Level of evidence: Level V, expert opinion.