Onur Gültekin, Jumpei Inoue, Baris Yilmaz, Mehmet Halis Cerci, Bekir Eray Kilinc, Hüsnü Yilmaz, Robert Prill, Mahmut Enes Kayaalp
Knee Surgery, Sports Traumatology, Arthroscopy 33(8): 3025–3031. Published 1 June 2025. DOI: 10.1002/ksa.12711 (PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ksa.12711)
Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information
Purpose
This study compares ChatGPT-4o, equipped with its deep research feature, and DeepSeek R1, equipped with its deepthink feature, both of which enable real-time online data access, in generating responses to frequently asked questions (FAQs) about anterior cruciate ligament (ACL) surgery. The aim is to evaluate and compare their performance in terms of accuracy, clarity, completeness, consistency, and readability for evidence-based patient education.
Methods
A list of ten FAQs about ACL surgery was compiled after reviewing the Sports Medicine Fellowship Institution's webpages. These questions were posed to ChatGPT and DeepSeek in research-enabled modes. Orthopaedic sports surgeons evaluated the responses for accuracy, clarity, completeness, and consistency using a 4-point Likert scale. Inter-rater reliability of the evaluations was assessed using intraclass correlation coefficients (ICCs). In addition, a readability analysis was conducted using the Flesch–Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) metrics via an established online calculator to objectively measure textual complexity. Paired t tests were used to compare the mean scores of the two models for each criterion, with significance set at p < 0.05.
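The FKGL and FRES metrics used above are computed from the standard Flesch formulas, which depend only on sentence, word, and syllable counts. A minimal sketch follows; the vowel-group syllable counter is a crude illustrative assumption, not the online calculator the authors used, so its scores will only approximate published values.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, at least one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FRES) via the standard Flesch formulas."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences      # average words per sentence
    spw = syllables / len(words)      # average syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    return round(fkgl, 1), round(fres, 1)
```

Longer sentences and more polysyllabic words raise FKGL (a U.S. school-grade estimate) and lower FRES (higher means easier), which is why DeepSeek's shorter, plainer answers score as more readable.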
Results
Both models demonstrated high accuracy (mean scores of 3.9/4) and consistency (4/4). Significant differences were observed in clarity and completeness: ChatGPT provided more comprehensive responses (mean completeness 4.0 vs. 3.2, p < 0.001), while DeepSeek's answers were clearer and more accessible to laypersons (mean clarity 3.9 vs. 3.0, p < 0.001). DeepSeek had lower FKGL (8.9 vs. 14.2, p < 0.001) and higher FRES (61.3 vs. 32.7, p < 0.001), indicating greater ease of reading for a general audience. ICC analysis indicated substantial inter-rater agreement (composite ICC = 0.80).
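The paired t tests behind these comparisons reduce to the mean of the per-question score differences divided by its standard error. A minimal sketch, with invented Likert ratings purely for illustration (the study's raw scores are not reproduced here):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x: list[float], y: list[float]) -> float:
    """Paired t statistic: mean pairwise difference over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical 4-point Likert completeness ratings for the same questions
chatgpt = [4, 4, 3, 4]
deepseek = [3, 3, 3, 2]
t = paired_t(chatgpt, deepseek)
# Compare t against the t distribution with n - 1 degrees of freedom
# (e.g., scipy.stats.ttest_rel) to obtain the p value.
```

Pairing by question removes between-question variability, which is why it is the appropriate test when both models answer an identical FAQ list.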
Conclusion
ChatGPT-4o, leveraging its deep research feature, and DeepSeek R1, utilizing its deepthink feature, both deliver high-quality, accurate information for ACL surgery patient education. While ChatGPT excels in comprehensiveness, DeepSeek outperforms in clarity and readability, suggesting that integrating the strengths of both models could optimize patient education outcomes.
About the journal:
Few other areas of orthopedic surgery and traumatology have undergone such a dramatic evolution in the last 10 years as knee surgery, arthroscopy and sports traumatology. Ranked among the top 33% of journals in both Orthopedics and Sports Sciences, this European journal aims to publish papers about innovative knee surgery, sports trauma surgery and arthroscopy. Each issue features a series of peer-reviewed articles that deal with diagnosis and management and with basic research. Each issue also contains at least one review article about an important clinical problem. Case presentations or short notes about technical innovations are also accepted for publication.
The articles cover all aspects of knee surgery and all types of sports trauma; in addition, epidemiology, diagnosis, treatment and prevention, and all types of arthroscopy (not only the knee but also the shoulder, elbow, wrist, hip, ankle, etc.) are addressed. Articles on new diagnostic techniques such as MRI and ultrasound and high-quality articles about the biomechanics of joints, muscles and tendons are included. Although this is largely a clinical journal, it is also open to basic research with clinical relevance.
Because the journal is supported by a distinguished European Editorial Board, assisted by an international Advisory Board, you can be assured that the journal maintains the highest standards.
Official Clinical Journal of the European Society of Sports Traumatology, Knee Surgery and Arthroscopy (ESSKA).