Daniel E Pereira, Ndeye F Guisse, Rohit Siddabattula, Julia Perugini, Pooya Hosseinzadeh
{"title":"从算法到答案:流行搜索引擎与大型语言模型对畸形足患者教育的比较分析。","authors":"Daniel E Pereira, Ndeye F Guisse, Rohit Siddabattula, Julia Perugini, Pooya Hosseinzadeh","doi":"10.1097/BPB.0000000000001287","DOIUrl":null,"url":null,"abstract":"<p><p>This study evaluates Chat Generative Pre-Trained Transformer 4o's (ChatGPT-4o's) utility in clinical relevance and accuracy compared with Google for pediatric clubfoot treatment questions. Both were queried for the 15 most frequently asked questions related to pediatric clubfoot treatment, with Google as control. Questions were classified using the modified Rothwell criteria for online sources. Questions and answers were independently graded for clinical relevance (0 = not clinically relevance, 1 = some clinical relevance, 2 = very clinically relevant) and clinical accuracy (0 = inaccurate, 1 = somewhat accurate, 2 = accurate), respectively (D.E.P. and N.G.). Questions and answers were validated by an expert, board-certified pediatric orthopedic surgeon (P.H.), who also resolved any discrepancies in grading. Per modified Rothwell criteria, Google responses were most frequently classified as either 'notion' or 'indications/management' while ChatGPT-4o responses were most likely addressed as 'notion' or 'longevity'. Google sources were primarily from academic and government platforms, while ChatGPT-4o exclusively used academic sources. ChatGPT-4o questions scored higher for clinical relevance (P = 0.006); however, clinical accuracy of answers was equivalent (P = 0.570). ChatGPT-4o provides clinically relevant questions, more so than Google with regard to pediatric clubfoot treatment. Furthermore, ChatGPT-4o uses a greater proportion of academic sources compared with Google. 
While both sources provided clinically accurate answers, large language models appeared to provide information that was more relevant and scholarly to patients' concerns regarding clubfoot; however, further validation and extensive testing are required to prevent the unnecessary spread of misinformation and its utilization in a clinical setting.</p>","PeriodicalId":50092,"journal":{"name":"Journal of Pediatric Orthopaedics-Part B","volume":" ","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From algorithms to answers: a comparative analysis of popular search engines and large language models on clubfoot patient education.\",\"authors\":\"Daniel E Pereira, Ndeye F Guisse, Rohit Siddabattula, Julia Perugini, Pooya Hosseinzadeh\",\"doi\":\"10.1097/BPB.0000000000001287\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This study evaluates Chat Generative Pre-Trained Transformer 4o's (ChatGPT-4o's) utility in clinical relevance and accuracy compared with Google for pediatric clubfoot treatment questions. Both were queried for the 15 most frequently asked questions related to pediatric clubfoot treatment, with Google as control. Questions were classified using the modified Rothwell criteria for online sources. Questions and answers were independently graded for clinical relevance (0 = not clinically relevance, 1 = some clinical relevance, 2 = very clinically relevant) and clinical accuracy (0 = inaccurate, 1 = somewhat accurate, 2 = accurate), respectively (D.E.P. and N.G.). Questions and answers were validated by an expert, board-certified pediatric orthopedic surgeon (P.H.), who also resolved any discrepancies in grading. 
Per modified Rothwell criteria, Google responses were most frequently classified as either 'notion' or 'indications/management' while ChatGPT-4o responses were most likely addressed as 'notion' or 'longevity'. Google sources were primarily from academic and government platforms, while ChatGPT-4o exclusively used academic sources. ChatGPT-4o questions scored higher for clinical relevance (P = 0.006); however, clinical accuracy of answers was equivalent (P = 0.570). ChatGPT-4o provides clinically relevant questions, more so than Google with regard to pediatric clubfoot treatment. Furthermore, ChatGPT-4o uses a greater proportion of academic sources compared with Google. While both sources provided clinically accurate answers, large language models appeared to provide information that was more relevant and scholarly to patients' concerns regarding clubfoot; however, further validation and extensive testing are required to prevent the unnecessary spread of misinformation and its utilization in a clinical setting.</p>\",\"PeriodicalId\":50092,\"journal\":{\"name\":\"Journal of Pediatric Orthopaedics-Part B\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Pediatric Orthopaedics-Part B\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/BPB.0000000000001287\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pediatric Orthopaedics-Part 
B","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/BPB.0000000000001287","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
From algorithms to answers: a comparative analysis of popular search engines and large language models on clubfoot patient education.
This study evaluates the utility of Chat Generative Pre-Trained Transformer 4o (ChatGPT-4o) in clinical relevance and accuracy compared with Google for pediatric clubfoot treatment questions. Both were queried for the 15 most frequently asked questions related to pediatric clubfoot treatment, with Google as control. Questions were classified using the modified Rothwell criteria for online sources. Questions and answers were independently graded for clinical relevance (0 = not clinically relevant, 1 = some clinical relevance, 2 = very clinically relevant) and clinical accuracy (0 = inaccurate, 1 = somewhat accurate, 2 = accurate), respectively (D.E.P. and N.G.). Questions and answers were validated by an expert, board-certified pediatric orthopedic surgeon (P.H.), who also resolved any discrepancies in grading. Per the modified Rothwell criteria, Google responses were most frequently classified as either 'notion' or 'indications/management', while ChatGPT-4o responses were most frequently classified as 'notion' or 'longevity'. Google sources were primarily from academic and government platforms, while ChatGPT-4o exclusively used academic sources. ChatGPT-4o questions scored higher for clinical relevance (P = 0.006); however, clinical accuracy of answers was equivalent (P = 0.570). ChatGPT-4o provides more clinically relevant questions than Google with regard to pediatric clubfoot treatment. Furthermore, ChatGPT-4o uses a greater proportion of academic sources than Google. While both sources provided clinically accurate answers, the large language model appeared to provide information that was more relevant to patients' concerns regarding clubfoot and drew more heavily on scholarly sources; however, further validation and extensive testing are required before clinical use, to prevent the spread of misinformation.
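The abstract reports p-values for comparing ordinal 0-2 grades between the two sources but does not name the statistical test used. As an illustration only, a sketch of one plausible approach for such ordinal comparisons, a Mann-Whitney U statistic computed over hypothetical grade data (the scores below are invented, not the study's data):

```python
# Illustrative sketch, NOT the paper's actual method or data: the abstract
# does not name its statistical test, so a Mann-Whitney U comparison is
# assumed here as one common choice for ordinal 0-2 grades.

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Counts, over all pairs (xi, yj), 1 when xi > yj and 0.5 for ties.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical relevance grades for 15 questions
# (0 = not relevant, 1 = some relevance, 2 = very relevant)
chatgpt_scores = [2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2]
google_scores = [1, 2, 1, 1, 2, 0, 1, 2, 1, 1, 2, 1, 1, 2, 1]

u = mann_whitney_u(chatgpt_scores, google_scores)
print(u)
```

A larger U relative to its null expectation of len(x) * len(y) / 2 suggests one group tends to score higher; a p-value would then come from the U distribution or a normal approximation, which is omitted here.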
Journal introduction:
The journal highlights important recent developments from the world's leading clinical and research institutions. The journal publishes peer-reviewed papers on the diagnosis and treatment of pediatric orthopedic disorders.
It is the official journal of IFPOS (International Federation of Paediatric Orthopaedic Societies).
Submitted articles undergo a preliminary review by the editor. Some articles may be returned to authors without further consideration. Those being considered for publication undergo further assessment and peer review by the editors and by invited reviewers drawn from a reviewer pool.