Title: How reliable are ChatGPT and Google's answers to frequently asked questions about unicondylar knee arthroplasty from a scientific perspective?
Authors: Ali Aydilek, Ömer Levent Karadamar
DOI: 10.1177/10225536251350411
Journal: Journal of Orthopaedic Surgery, 33(2): 10225536251350411
Impact factor: 1.6
Publication date: 2025-05-01 (Epub 2025-06-10)
Citations: 0
Abstract
Introduction: Unicondylar knee arthroplasty (UKA) is a minimally invasive surgical technique that replaces a single compartment of the knee joint. Patients increasingly rely on digital tools such as Google and ChatGPT for healthcare information. This study aims to compare the accuracy, reliability, and applicability of the information these two platforms provide regarding unicondylar knee arthroplasty.
Materials and Methods: This study used a descriptive and comparative content-analysis approach. Twelve frequently asked questions about unicondylar knee arthroplasty were identified through Google's "People Also Ask" section and then posed to ChatGPT-4. The responses were compared on scientific accuracy, level of detail, source reliability, applicability, and consistency. Readability was assessed using DISCERN, FKGL, SMOG, and FRES scores.
Results: A total of 83.3% of ChatGPT's responses were consistent with academic sources, versus 58.3% for Google. ChatGPT's answers averaged 142.8 words, compared with Google's 85.6-word average. Regarding source reliability, 66.7% of ChatGPT's responses were based on academic guidelines, versus 41.7% for Google. The DISCERN score was 64.4 for ChatGPT and 48.7 for Google. Google had the higher FRES score.
Conclusion: ChatGPT provides more scientifically accurate information than Google, while Google offers simpler and more comprehensible content. However, the academic language used by ChatGPT may be challenging for some patient groups, whereas the superficial nature of Google's information is a significant limitation. In the future, the development of artificial intelligence-based medical information tools could help improve patient safety and the quality of information dissemination.
Journal description:
Journal of Orthopaedic Surgery is an open access peer-reviewed journal publishing original reviews and research articles on all aspects of orthopaedic surgery. It is the official journal of the Asia Pacific Orthopaedic Association.
The journal welcomes and publishes material of a diverse nature, from basic science research to clinical trials and surgical techniques. Contributions from all parts of the world are encouraged, with special emphasis on research of particular relevance to the Asia Pacific region.