Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?

Impact Factor: 2.8 | Q1, Orthopedics
Benjamin M Wright, Michael S Bodnar, Andrew D Moore, Meghan C Maseda, Michael P Kucharik, Connor C Diaz, Christian M Schmidt, Hassan R Mir
{"title":"ChatGPT 是全髋关节和膝关节置换术患者值得信赖的信息来源吗?","authors":"Benjamin M Wright, Michael S Bodnar, Andrew D Moore, Meghan C Maseda, Michael P Kucharik, Connor C Diaz, Christian M Schmidt, Hassan R Mir","doi":"10.1302/2633-1462.52.BJO-2023-0113.R1","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.</p><p><strong>Methods: </strong>We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, \"Please explain so it is easier to understand,\" to evaluate ChatGPT's ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a \"yes\" or \"no\" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered \"yes.\"</p><p><strong>Results: </strong>The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ<sup>2</sup> = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85).</p><p><strong>Conclusion: </strong>ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.</p>","PeriodicalId":34103,"journal":{"name":"Bone & Joint Open","volume":"5 2","pages":"139-146"},"PeriodicalIF":2.8000,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10867788/pdf/","citationCount":"0","resultStr":"{\"title\":\"Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?\",\"authors\":\"Benjamin M Wright, Michael S Bodnar, Andrew D Moore, Meghan C Maseda, Michael P Kucharik, Connor C Diaz, Christian M Schmidt, Hassan R Mir\",\"doi\":\"10.1302/2633-1462.52.BJO-2023-0113.R1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Aims: </strong>While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. 
The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.</p><p><strong>Methods: </strong>We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, \\\"Please explain so it is easier to understand,\\\" to evaluate ChatGPT's ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a \\\"yes\\\" or \\\"no\\\" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered \\\"yes.\\\"</p><p><strong>Results: </strong>The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ<sup>2</sup> = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85).</p><p><strong>Conclusion: </strong>ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.</p>\",\"PeriodicalId\":34103,\"journal\":{\"name\":\"Bone & Joint Open\",\"volume\":\"5 2\",\"pages\":\"139-146\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-02-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10867788/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bone & Joint Open\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1302/2633-1462.52.BJO-2023-0113.R1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bone & Joint Open","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1302/2633-1462.52.BJO-2023-0113.R1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Citations: 0

Abstract


Aims: While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.

Methods: We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate consistency of quality. After each response, we followed up with, "Please explain so it is easier to understand," to evaluate ChatGPT's ability to reduce the reading grade level of its responses, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1-to-5 accuracy and comprehensiveness scales. Additionally, they answered a "yes" or "no" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered "yes."
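For context, FKGL estimates the US school grade needed to read a passage from average sentence length and syllables per word: FKGL = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. Below is a minimal sketch of how a response's grade level could be scored, using the open-source textstat package (an assumption; the paper does not state which tool the authors used):

```python
# Sketch: scoring a ChatGPT response with Flesch-Kincaid Grade Level.
# Assumes the `textstat` package (pip install textstat); the paper does not
# specify which implementation was used. Example sentences are illustrative.
import textstat

original = ("Total hip arthroplasty involves resection of the femoral head "
            "and acetabular cartilage, followed by implantation of a prosthesis.")
simplified = ("A hip replacement removes the worn-out ball and socket of the "
              "hip and replaces them with artificial parts.")

# FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
print(textstat.flesch_kincaid_grade(original))    # higher grade level
print(textstat.flesch_kincaid_grade(simplified))  # should drop after "Please explain..."
```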

Results: The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ2 = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85).
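Because each question was asked twice, consistency can be checked with paired tests on the matched first and second responses: paired t-tests for the continuous ratings and FKGL, and a chi-squared test for the binary acceptability outcome. A hedged sketch of that analysis follows (the paper does not publish its code; scipy and all values below are illustrative assumptions):

```python
# Sketch: consistency tests across two askings of the same questions.
# scipy is an assumption; scores and counts are illustrative, not the study data.
from scipy import stats

# Mean rater accuracy scores per question, first vs second asking.
accuracy_first  = [4.2, 4.4, 4.0, 4.6, 4.2, 4.5]
accuracy_second = [4.3, 4.2, 4.1, 4.5, 4.4, 4.4]

# Paired t-test: no significant difference implies consistent quality.
t_stat, p_val = stats.ttest_rel(accuracy_first, accuracy_second)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

# Acceptability is binary, so the two askings are compared with chi-squared:
# rows = asking (first, second); columns = (acceptable, not acceptable).
table = [[36, 24], [35, 25]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```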

Conclusion: ChatGPT answered THA and TKA patient questions with accuracy comparable to previously reported website sources and with adequate comprehensiveness, but with limited acceptability as a sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.

Source journal: Bone & Joint Open (Orthopedics)
CiteScore: 5.10
Self-citation rate: 0.00%
Annual publications: 0
Review turnaround: 8 weeks