Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.

Onur Gültekin, Jumpei Inoue, Baris Yilmaz, Mehmet Halis Cerci, Bekir Eray Kilinc, Hüsnü Yilmaz, Robert Prill, Mahmut Enes Kayaalp
{"title":"Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.","authors":"Onur Gültekin, Jumpei Inoue, Baris Yilmaz, Mehmet Halis Cerci, Bekir Eray Kilinc, Hüsnü Yilmaz, Robert Prill, Mahmut Enes Kayaalp","doi":"10.1002/ksa.12711","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>This study compares ChatGPT-4o, equipped with its deep research feature, and DeepSeek R1, equipped with its deepthink feature-both enabling real-time online data access-in generating responses to frequently asked questions (FAQs) about anterior cruciate ligament (ACL) surgery. The aim is to evaluate and compare their performance in terms of accuracy, clarity, completeness, consistency and readibility for evidence-based patient education.</p><p><strong>Methods: </strong>A list of ten FAQs about ACL surgery was compiled after reviewing the Sports Medicine Fellowship Institution's webpages. These questions were posed to ChatGPT and DeepSeek in research-enabled modes. Orthopaedic sports surgeons evaluated the responses for accuracy, clarity, completeness, and consistency using a 4-point Likert scale. Inter-rater reliability of the evaluations was assessed using intraclass correlation coefficients (ICCs). In addition, a readability analysis was conducted using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) metrics via an established online calculator to objectively measure textual complexity. Paired t tests were used to compare the mean scores of the two models for each criterion, with significance set at p < 0.05.</p><p><strong>Results: </strong>Both models demonstrated high accuracy (mean scores of 3.9/4) and consistency (4/4). Significant differences were observed in clarity and completeness: ChatGPT provided more comprehensive responses (mean completeness 4.0 vs. 3.2, p < 0.001), while DeepSeek's answers were clearer and more accessible to laypersons (mean clarity 3.9 vs. 3.0, p < 0.001). DeepSeek had lower FKGL (8.9 vs. 14.2, p < 0.001) and higher FRES (61.3 vs. 32.7, p < 0.001), indicating greater ease of reading for a general audience. ICC analysis indicated substantial inter-rater agreement (composite ICC = 0.80).</p><p><strong>Conclusion: </strong>ChatGPT-4o, leveraging its deep research feature, and DeepSeek R1, utilizing its deepthink feature, both deliver high-quality, accurate information for ACL surgery patient education. While ChatGPT excels in comprehensiveness, DeepSeek outperforms in clarity and readability, suggesting that integrating the strengths of both models could optimize patient education outcomes.</p><p><strong>Level of evidence: </strong>Level V.</p>","PeriodicalId":520702,"journal":{"name":"Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/ksa.12711","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose: This study compares ChatGPT-4o, equipped with its deep research feature, and DeepSeek R1, equipped with its deepthink feature (both enabling real-time online data access), in generating responses to frequently asked questions (FAQs) about anterior cruciate ligament (ACL) surgery. The aim is to evaluate and compare their performance in terms of accuracy, clarity, completeness, consistency, and readability for evidence-based patient education.

Methods: A list of ten FAQs about ACL surgery was compiled after reviewing the Sports Medicine Fellowship Institution's webpages. These questions were posed to ChatGPT and DeepSeek in their research-enabled modes. Orthopaedic sports surgeons evaluated the responses for accuracy, clarity, completeness, and consistency using a 4-point Likert scale. Inter-rater reliability of the evaluations was assessed using intraclass correlation coefficients (ICCs). In addition, a readability analysis was conducted using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) metrics via an established online calculator to objectively measure textual complexity. Paired t-tests were used to compare the mean scores of the two models for each criterion, with significance set at p < 0.05.
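
Both readability metrics are standard linear formulas over average sentence length and average syllables per word, and the paired comparison is a routine t-test across the ten shared questions. The sketch below (Python with SciPy; the word, sentence, and syllable counts and the Likert scores are illustrative placeholders, not data from the study) shows how both computations work.

```python
# A minimal sketch, assuming SciPy is installed; all input values below
# are illustrative placeholders, not the study's data.
from scipy import stats

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """FRES: higher means easier; roughly 60-70 corresponds to plain English."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """FKGL: approximate US school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

print(f"FRES: {flesch_reading_ease(200, 12, 310):.1f}")  # sample text counts
print(f"FKGL: {flesch_kincaid_grade(200, 12, 310):.1f}")

# Paired t-test of hypothetical per-question clarity scores (4-point Likert).
chatgpt  = [3, 3, 2, 3, 3, 3, 3, 3, 3, 3]
deepseek = [4, 4, 4, 4, 4, 3, 4, 4, 4, 4]
t_stat, p_value = stats.ttest_rel(chatgpt, deepseek)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The study itself used an established online calculator for the readability counts; in practice the only nontrivial step is syllable counting, which dedicated tools handle with dictionaries and heuristics.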

Results: Both models demonstrated high accuracy (mean scores of 3.9/4) and consistency (4/4). Significant differences were observed in clarity and completeness: ChatGPT provided more comprehensive responses (mean completeness 4.0 vs. 3.2, p < 0.001), while DeepSeek's answers were clearer and more accessible to laypersons (mean clarity 3.9 vs. 3.0, p < 0.001). DeepSeek had lower FKGL (8.9 vs. 14.2, p < 0.001) and higher FRES (61.3 vs. 32.7, p < 0.001), indicating greater ease of reading for a general audience. ICC analysis indicated substantial inter-rater agreement (composite ICC = 0.80).
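
For context, the composite ICC of 0.80 reflects substantial agreement among the rating surgeons. A minimal sketch of how such an ICC could be computed from long-format ratings, assuming the pingouin library (the abstract does not state which software the authors used, and the ratings below are hypothetical):

```python
# Hypothetical long-format rating data: each surgeon rates each response.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "question": [q for q in range(1, 5) for _ in range(3)],  # 4 questions
    "rater":    ["A", "B", "C"] * 4,                         # 3 raters
    "score":    [4, 4, 3, 3, 4, 3, 4, 4, 4, 3, 3, 4],        # 4-point Likert
})

# pingouin reports all six ICC variants (ICC1-ICC3k) in one table.
icc = pg.intraclass_corr(data=ratings, targets="question",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```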

Conclusion: ChatGPT-4o, leveraging its deep research feature, and DeepSeek R1, utilizing its deepthink feature, both deliver high-quality, accurate information for ACL surgery patient education. While ChatGPT excels in comprehensiveness, DeepSeek outperforms in clarity and readability, suggesting that integrating the strengths of both models could optimize patient education outcomes.

Level of evidence: Level V.
