Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.
Onur Gültekin, Jumpei Inoue, Baris Yilmaz, Mehmet Halis Cerci, Bekir Eray Kilinc, Hüsnü Yilmaz, Robert Prill, Mahmut Enes Kayaalp
{"title":"Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.","authors":"Onur Gültekin, Jumpei Inoue, Baris Yilmaz, Mehmet Halis Cerci, Bekir Eray Kilinc, Hüsnü Yilmaz, Robert Prill, Mahmut Enes Kayaalp","doi":"10.1002/ksa.12711","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>This study compares ChatGPT-4o, equipped with its deep research feature, and DeepSeek R1, equipped with its deepthink feature-both enabling real-time online data access-in generating responses to frequently asked questions (FAQs) about anterior cruciate ligament (ACL) surgery. The aim is to evaluate and compare their performance in terms of accuracy, clarity, completeness, consistency and readibility for evidence-based patient education.</p><p><strong>Methods: </strong>A list of ten FAQs about ACL surgery was compiled after reviewing the Sports Medicine Fellowship Institution's webpages. These questions were posed to ChatGPT and DeepSeek in research-enabled modes. Orthopaedic sports surgeons evaluated the responses for accuracy, clarity, completeness, and consistency using a 4-point Likert scale. Inter-rater reliability of the evaluations was assessed using intraclass correlation coefficients (ICCs). In addition, a readability analysis was conducted using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) metrics via an established online calculator to objectively measure textual complexity. Paired t tests were used to compare the mean scores of the two models for each criterion, with significance set at p < 0.05.</p><p><strong>Results: </strong>Both models demonstrated high accuracy (mean scores of 3.9/4) and consistency (4/4). Significant differences were observed in clarity and completeness: ChatGPT provided more comprehensive responses (mean completeness 4.0 vs. 3.2, p < 0.001), while DeepSeek's answers were clearer and more accessible to laypersons (mean clarity 3.9 vs. 3.0, p < 0.001). DeepSeek had lower FKGL (8.9 vs. 14.2, p < 0.001) and higher FRES (61.3 vs. 32.7, p < 0.001), indicating greater ease of reading for a general audience. ICC analysis indicated substantial inter-rater agreement (composite ICC = 0.80).</p><p><strong>Conclusion: </strong>ChatGPT-4o, leveraging its deep research feature, and DeepSeek R1, utilizing its deepthink feature, both deliver high-quality, accurate information for ACL surgery patient education. While ChatGPT excels in comprehensiveness, DeepSeek outperforms in clarity and readability, suggesting that integrating the strengths of both models could optimize patient education outcomes.</p><p><strong>Level of evidence: </strong>Level V.</p>","PeriodicalId":520702,"journal":{"name":"Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/ksa.12711","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Purpose: This study compares ChatGPT-4o, equipped with its DeepResearch feature, and DeepSeek R1, equipped with its DeepThink feature (both enabling real-time online data access), in generating responses to frequently asked questions (FAQs) about anterior cruciate ligament (ACL) surgery. The aim is to evaluate and compare their performance in terms of accuracy, clarity, completeness, consistency and readability for evidence-based patient education.
Methods: A list of ten FAQs about ACL surgery was compiled after reviewing the Sports Medicine Fellowship Institution's webpages. These questions were posed to ChatGPT and DeepSeek in their research-enabled modes. Orthopaedic sports surgeons evaluated the responses for accuracy, clarity, completeness and consistency using a 4-point Likert scale. Inter-rater reliability of the evaluations was assessed using intraclass correlation coefficients (ICCs). In addition, a readability analysis was conducted using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) metrics via an established online calculator to objectively measure textual complexity. Paired t-tests were used to compare the mean scores of the two models on each criterion, with significance set at p < 0.05.
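For context, FKGL and FRES reduce to two fixed formulas over word, sentence and syllable counts. The sketch below is illustrative only: the study used an established online calculator, and the vowel-group syllable heuristic here is a simplifying assumption, not that calculator's method.

```python
import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic; real calculators use pronunciation data."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a silent trailing 'e'
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FRES) using the standard Flesch formulas."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    return round(fkgl, 1), round(fres, 1)

print(readability("The anterior cruciate ligament stabilizes the knee. "
                  "Surgery reconstructs the torn ligament with a graft."))
```

Longer sentences and more syllables per word push FKGL up and FRES down, which is why DeepSeek's shorter, plainer answers score as more readable below.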
Results: Both models demonstrated high accuracy (mean scores of 3.9/4) and consistency (4/4). Significant differences were observed in clarity and completeness: ChatGPT provided more comprehensive responses (mean completeness 4.0 vs. 3.2, p < 0.001), while DeepSeek's answers were clearer and more accessible to laypersons (mean clarity 3.9 vs. 3.0, p < 0.001). DeepSeek had lower FKGL (8.9 vs. 14.2, p < 0.001) and higher FRES (61.3 vs. 32.7, p < 0.001), indicating greater ease of reading for a general audience. ICC analysis indicated substantial inter-rater agreement (composite ICC = 0.80).
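The comparisons reported above correspond to standard paired t-tests over the ten shared questions and an ICC over the raters' scores. A minimal sketch with hypothetical ratings follows; the study's actual scores are not reproduced, and `scipy`/`pingouin` are assumed tooling choices, not the authors' stated software.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
import pingouin as pg

# Hypothetical per-question clarity scores on the 1-4 Likert scale.
chatgpt  = np.array([3, 3, 3, 3, 2, 3, 3, 3, 3, 4])
deepseek = np.array([4, 4, 4, 4, 4, 3, 4, 4, 4, 4])

# Paired t-test across the same ten questions, alpha = 0.05.
t, p = ttest_rel(deepseek, chatgpt)
print(f"clarity: t = {t:.2f}, p = {p:.4f}")

# Inter-rater reliability: long-format (question, rater, rating) table
# with randomly generated placeholder ratings from three raters.
rng = np.random.default_rng(42)
long = pd.DataFrame({
    "question": np.repeat(np.arange(10), 3),
    "rater": np.tile(["A", "B", "C"], 10),
    "rating": rng.integers(3, 5, size=30),
})
icc = pg.intraclass_corr(data=long, targets="question",
                         raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])
```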
Conclusion: ChatGPT-4o, leveraging its DeepResearch feature, and DeepSeek R1, utilizing its DeepThink feature, both deliver high-quality, accurate information for ACL surgery patient education. While ChatGPT excels in comprehensiveness, DeepSeek outperforms it in clarity and readability, suggesting that integrating the strengths of both models could optimize patient education outcomes.
Level of evidence: Level V.