How reliable are ChatGPT and Google's answers to frequently asked questions about unicondylar knee arthroplasty from a scientific perspective?

Impact Factor: 1.6 · CAS Region 4 · Medicine
Journal of Orthopaedic Surgery · Pub Date: 2025-05-01 · Epub Date: 2025-06-10 · DOI: 10.1177/10225536251350411
Ali Aydilek, Ömer Levent Karadamar
Journal of Orthopaedic Surgery, Vol. 33, No. 2, Article 10225536251350411.
Citations: 0

Abstract

Introduction: Unicondylar knee arthroplasty (UKA) is a minimally invasive surgical technique that replaces a specific compartment of the knee joint. Patients increasingly rely on digital tools such as Google and ChatGPT for healthcare information. This study aims to compare the accuracy, reliability, and applicability of the information provided by these two platforms regarding unicondylar knee arthroplasty.

Materials and Methods: This study was conducted using a descriptive and comparative content analysis approach. Twelve frequently asked questions regarding unicondylar knee arthroplasty were identified through Google's "People Also Ask" section and then directed to ChatGPT-4. The responses were compared on scientific accuracy, level of detail, source reliability, applicability, and consistency. Readability was analyzed using DISCERN, FKGL, SMOG, and FRES scores.

Results: A total of 83.3% of ChatGPT's responses were consistent with academic sources, compared with 58.3% for Google. ChatGPT's answers averaged 142.8 words, versus Google's average of 85.6 words. Regarding source reliability, 66.7% of ChatGPT's responses were based on academic guidelines, compared with 41.7% for Google. The DISCERN score for ChatGPT was 64.4, versus 48.7 for Google. Google had a higher FRES score.

Conclusion: ChatGPT provides more scientifically accurate information than Google, while Google offers simpler and more comprehensible content. However, the academic language used by ChatGPT may be challenging for some patient groups, and the superficiality of Google's information is a significant limitation. In the future, the development of artificial-intelligence-based medical information tools could help improve patient safety and the quality of information dissemination.
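The readability metrics named in the methods (FRES, FKGL, SMOG) are standard closed-form formulas over sentence, word, and syllable counts. The sketch below shows how such scores are typically computed; the regex-based syllable counter is a crude heuristic of my own, not the tooling the authors used, and production analyses usually rely on dictionary-backed libraries.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups; real pipelines
    # use pronunciation dictionaries (e.g. CMUdict) for accuracy.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word

    # Flesch Reading Ease Score: higher = easier to read.
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    # SMOG index, normalized to a 30-sentence sample.
    smog = 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
    return {"FRES": fres, "FKGL": fkgl, "SMOG": smog}
```

This illustrates why Google's shorter, plainer answers score higher on FRES while ChatGPT's longer, more academic prose yields higher grade-level scores: both effects fall directly out of the words-per-sentence and syllables-per-word terms.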

Source Journal
Self-citation rate: 0.00% · Articles published: 91
Journal overview: Journal of Orthopaedic Surgery is an open access peer-reviewed journal publishing original reviews and research articles on all aspects of orthopaedic surgery. It is the official journal of the Asia Pacific Orthopaedic Association. The journal welcomes and will publish materials of a diverse nature, from basic science research to clinical trials and surgical techniques. The journal encourages contributions from all parts of the world, but special emphasis is given to research of particular relevance to the Asia Pacific region.