ChatGPT and Google Provide Mostly Excellent or Satisfactory Responses to the Most Frequently Asked Patient Questions Related to Rotator Cuff Repair

Arthroscopy, Sports Medicine, and Rehabilitation (JCR Q3, Medicine) · Volume 6, Issue 5, Article 100963 · Published 2024-10-01
Martinus Megalla M.D. , Alexander K. Hahn M.D. , Jordan A. Bauer M.D. , Jordan T. Windsor B.S. , Zachary T. Grace M.D. , Marissa A. Gedman M.D. , Robert A. Arciero M.D.
Citations: 0

Abstract


Purpose

To assess differences between Google and ChatGPT in the frequently asked questions (FAQs) and responses related to rotator cuff surgery.

Methods

Both Google and ChatGPT (version 3.5) were queried for the top 10 FAQs using the search term “rotator cuff repair.” Questions were categorized according to Rothwell’s classification. In addition to recording the questions and answers from each platform, the source each answer was drawn from was noted and assigned a category (academic, medical practice, etc.). Responses were also graded as “excellent response not requiring clarification” (1), “satisfactory requiring minimal clarification” (2), “satisfactory requiring moderate clarification” (3), or “unsatisfactory requiring substantial clarification” (4).

Results

Overall, 30% of the questions that Google and ChatGPT identified as most frequently asked were similar. For questions from the Google web search, most answers came from medical practices (40%); for ChatGPT, most answers came from academic sources (90%). For numerical questions, ChatGPT and Google provided similar responses for 30% of questions. For most questions, both Google and ChatGPT responses were rated either “excellent” or “satisfactory requiring minimal clarification.” Google had 1 response rated satisfactory requiring moderate clarification, whereas ChatGPT had 2 responses rated unsatisfactory.

Conclusions

Both Google and ChatGPT offer mostly excellent or satisfactory responses to the most frequently asked questions regarding rotator cuff repair. However, ChatGPT may provide inaccurate or even fabricated answers and associated citations.

Clinical Relevance

In general, the quality of online medical content is low. As artificial intelligence develops and becomes more widely used, it is important to assess the quality of the information patients are receiving from this technology.
Source journal: Arthroscopy, Sports Medicine, and Rehabilitation
CiteScore: 2.70 · Self-citation rate: 0.00% · Articles per year: 218 · Review time: 45 weeks