Comparison of artificial intelligence large language model chatbots in answering frequently asked questions in anaesthesia

Teresa P. Nguyen , Brendan Carvalho , Hannah Sukhdeo , Kareem Joudi , Nan Guo , Marianne Chen , Jed T. Wolpaw , Jesse J. Kiefer , Melissa Byrne , Tatiana Jamroz , Allison A. Mootz , Sharon C. Reale , James Zou , Pervez Sultan
Citations: 0

Abstract


Background

Patients are increasingly using artificial intelligence (AI) chatbots to seek answers to medical queries.

Methods

Ten frequently asked questions in anaesthesia were posed to three AI chatbots: ChatGPT4 (OpenAI), Bard (Google), and Bing Chat (Microsoft). Each chatbot's answers were evaluated in a randomised, blinded order by five residency programme directors from 15 medical institutions in the USA. Three medical content quality categories (accuracy, comprehensiveness, safety) and three communication quality categories (understandability, empathy/respect, and ethics) were scored between 1 and 5 (1 representing worst, 5 representing best).

Results

ChatGPT4 and Bard outperformed Bing Chat (median [inter-quartile range] scores: 4 [3–4], 4 [3–4], and 3 [2–4], respectively; P<0.001 with all metrics combined). All AI chatbots performed poorly in accuracy (score of ≥4 by 58%, 48%, and 36% of experts for ChatGPT4, Bard, and Bing Chat, respectively), comprehensiveness (score ≥4 by 42%, 30%, and 12% of experts for ChatGPT4, Bard, and Bing Chat, respectively), and safety (score ≥4 by 50%, 40%, and 28% of experts for ChatGPT4, Bard, and Bing Chat, respectively). Notably, answers from ChatGPT4, Bard, and Bing Chat differed statistically in comprehensiveness (ChatGPT4, 3 [2–4] vs Bing Chat, 2 [2–3], P<0.001; and Bard 3 [2–4] vs Bing Chat, 2 [2–3], P=0.002). All large language model chatbots performed well with no statistical difference for understandability (P=0.24), empathy (P=0.032), and ethics (P=0.465).
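The median [inter-quartile range] summaries reported above (e.g. 4 [3–4]) can be computed for any set of 1–5 expert ratings. A minimal sketch using hypothetical reviewer scores (not the study's data):

```python
# Summarise 1-5 expert ratings as median [inter-quartile range],
# the format used in the Results section, e.g. "4 [3-4]".
from statistics import median, quantiles

def median_iqr(scores):
    """Return (median, 25th percentile, 75th percentile) of a list of ratings."""
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    return median(scores), q1, q3

# Hypothetical ratings from five reviewers (illustrative only).
ratings = [4, 3, 4, 4, 3]
m, q1, q3 = median_iqr(ratings)
print(f"{m:g} [{q1:g}-{q3:g}]")  # -> 4 [3-4]
```

The `method="inclusive"` option interpolates quartiles from the observed data, matching the common convention for small rating samples.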

Conclusions

In answering patients' frequently asked questions about anaesthesia, the chatbots performed well on communication metrics but were suboptimal on medical content metrics. Overall, ChatGPT4 and Bard were comparable to each other, and both outperformed Bing Chat.

Source journal: BJA Open (Anesthesiology and Pain Medicine). CiteScore: 0.60; self-citation rate: 0.00%; review time: 83 days.