Performance of AI-powered chatbots in diagnosing acute pulmonary thromboembolism from given clinical vignettes.

Acute Medicine (Q3: Medicine) · Pub Date: 2024-01-01
Banu Arslan, Mehmet Necmeddin Sutasir, Ertugrul Altinbilek
{"title":"人工智能聊天机器人根据给定的临床案例诊断急性肺血栓栓塞症的性能。","authors":"Banu Arslan, Mehmet Necmeddin Sutasir, Ertugrul Altinbilek","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Chatbots hold great potential to serve as support tool in diagnosis and clinical decision process. In this study, we aimed to evaluate the accuracy of chatbots in diagnosing pulmonary embolism (PE). Furthermore, we assessed their performance in determining the PE severity.</p><p><strong>Method: </strong>65 case reports meeting our inclusion criteria were selected for this study. Two emergency medicine (EM) physicians crafted clinical vignettes and introduced them to the Bard, Bing, and ChatGPT-3.5 with asking the top 10 diagnoses. After obtaining all differential diagnoses lists, vignettes enriched with supplemental data redirected to the chatbots with asking the severity of PE.</p><p><strong>Results: </strong>ChatGPT-3.5, Bing, and Bard listed PE within the top 10 diagnoses list with accuracy rates of 92.3%, 92.3%, and 87.6%, respectively. For the top 3 diagnoses, Bard achieved 75.4% accuracy, while ChatGPT and Bing both had 67.7%. As the top diagnosis, Bard, ChatGPT-3.5, and Bing were accurate in 56.9%, 47.7% and 30.8% cases, respectively. Significant differences between Bard and both Bing (p=0.000) and ChatGPT (p=0.007) were noted in this group. Massive PEs were correctly identified with over 85% success rate. Overclassification rates for Bard, ChatGPT-3.5 and Bing at 38.5%, 23.3% and 20%, respectively. Misclassification rates were highest in submassive group.</p><p><strong>Conclusion: </strong>Although chatbots aren't intended for diagnosis, their high level of diagnostic accuracy and success rate in identifying massive PE underscore the promising potential of chatbots as clinical decision support tool. However, further research with larger patient datasets is required to validate and refine their performance in real-world clinical settings.</p>","PeriodicalId":39743,"journal":{"name":"Acute Medicine","volume":"23 2","pages":"66-74"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance of AI-powered chatbots in diagnosing acute pulmonary thromboembolism from given clinical vignettes.\",\"authors\":\"Banu Arslan, Mehmet Necmeddin Sutasir, Ertugrul Altinbilek\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Chatbots hold great potential to serve as support tool in diagnosis and clinical decision process. In this study, we aimed to evaluate the accuracy of chatbots in diagnosing pulmonary embolism (PE). Furthermore, we assessed their performance in determining the PE severity.</p><p><strong>Method: </strong>65 case reports meeting our inclusion criteria were selected for this study. Two emergency medicine (EM) physicians crafted clinical vignettes and introduced them to the Bard, Bing, and ChatGPT-3.5 with asking the top 10 diagnoses. After obtaining all differential diagnoses lists, vignettes enriched with supplemental data redirected to the chatbots with asking the severity of PE.</p><p><strong>Results: </strong>ChatGPT-3.5, Bing, and Bard listed PE within the top 10 diagnoses list with accuracy rates of 92.3%, 92.3%, and 87.6%, respectively. For the top 3 diagnoses, Bard achieved 75.4% accuracy, while ChatGPT and Bing both had 67.7%. 
As the top diagnosis, Bard, ChatGPT-3.5, and Bing were accurate in 56.9%, 47.7% and 30.8% cases, respectively. Significant differences between Bard and both Bing (p=0.000) and ChatGPT (p=0.007) were noted in this group. Massive PEs were correctly identified with over 85% success rate. Overclassification rates for Bard, ChatGPT-3.5 and Bing at 38.5%, 23.3% and 20%, respectively. Misclassification rates were highest in submassive group.</p><p><strong>Conclusion: </strong>Although chatbots aren't intended for diagnosis, their high level of diagnostic accuracy and success rate in identifying massive PE underscore the promising potential of chatbots as clinical decision support tool. However, further research with larger patient datasets is required to validate and refine their performance in real-world clinical settings.</p>\",\"PeriodicalId\":39743,\"journal\":{\"name\":\"Acute Medicine\",\"volume\":\"23 2\",\"pages\":\"66-74\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Acute Medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Acute Medicine","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
Citations: 0

Abstract


Background: Chatbots hold great potential as support tools in diagnosis and the clinical decision-making process. In this study, we aimed to evaluate the accuracy of chatbots in diagnosing pulmonary embolism (PE). Furthermore, we assessed their performance in determining PE severity.

Method: Sixty-five case reports meeting our inclusion criteria were selected for this study. Two emergency medicine (EM) physicians crafted clinical vignettes and presented them to Bard, Bing, and ChatGPT-3.5, asking each for the top 10 diagnoses. After all differential diagnosis lists had been obtained, the vignettes, enriched with supplemental data, were resubmitted to the chatbots with a question about the severity of the PE.
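The abstract does not specify how the chatbots were queried; the study used the web interfaces of Bard, Bing, and ChatGPT-3.5, not an API. Below is a minimal sketch of the two-stage protocol the Method implies, in which the `query` callables, prompt wording, and function names are hypothetical placeholders.

```python
# Hypothetical sketch of the two-stage evaluation protocol: stage 1 asks each
# chatbot for a ranked top-10 differential diagnosis list; stage 2 resubmits
# the vignette with supplemental data and asks for PE severity.
# The `chatbots` callables are placeholders -- the study used the web UIs.

from typing import Callable

def run_protocol(
    vignettes: list[str],
    supplements: list[str],
    chatbots: dict[str, Callable[[str], str]],
) -> dict[str, list[tuple[str, str]]]:
    """Return, per chatbot, a (top-10 answer, severity answer) pair per case."""
    results: dict[str, list[tuple[str, str]]] = {name: [] for name in chatbots}
    for vignette, extra in zip(vignettes, supplements):
        stage1 = f"{vignette}\n\nList the 10 most likely diagnoses, most likely first."
        stage2 = (f"{vignette}\n{extra}\n\nClassify the pulmonary embolism "
                  f"severity: massive, submassive, or nonmassive.")
        for name, ask in chatbots.items():
            results[name].append((ask(stage1), ask(stage2)))
    return results
```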

Results: ChatGPT-3.5, Bing, and Bard listed PE within their top 10 diagnoses in 92.3%, 92.3%, and 87.6% of cases, respectively. Within the top 3 diagnoses, Bard achieved 75.4% accuracy, while ChatGPT-3.5 and Bing both reached 67.7%. As the top diagnosis, Bard, ChatGPT-3.5, and Bing were accurate in 56.9%, 47.7%, and 30.8% of cases, respectively; in this group, the differences between Bard and both Bing (p<0.001) and ChatGPT-3.5 (p=0.007) were significant. Massive PEs were correctly identified with an over 85% success rate. Overclassification rates for Bard, ChatGPT-3.5, and Bing were 38.5%, 23.3%, and 20%, respectively. Misclassification rates were highest in the submassive group.
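For context, with 65 cases each percentage corresponds to a whole-number count of hits (60/65 = 92.3%, 49/65 = 75.4%, 44/65 = 67.7%, 37/65 = 56.9%, 31/65 = 47.7%, 20/65 = 30.8%). A minimal sketch of how top-k accuracy could be computed from the ranked lists follows; the substring-matching rule is an assumption, since the abstract does not describe how answers were adjudicated.

```python
# Hypothetical top-k accuracy computation over the ranked diagnosis lists.
# Matching on the substring "pulmonary embolism" is an assumed adjudication
# rule, not the paper's method.

def top_k_accuracy(ranked_lists: list[list[str]], k: int,
                   target: str = "pulmonary embolism") -> float:
    """Fraction of cases whose first k ranked diagnoses mention the target."""
    hits = sum(
        any(target in dx.lower() for dx in ranked[:k])
        for ranked in ranked_lists
    )
    return hits / len(ranked_lists)

# Example: if PE appears in 60 of the 65 top-10 lists, the accuracy is
# 60 / 65 = 0.923, i.e. the 92.3% reported for ChatGPT-3.5 and Bing.
```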

Conclusion: Although chatbots are not intended for diagnosis, their high diagnostic accuracy and their success rate in identifying massive PE underscore their promise as clinical decision support tools. However, further research with larger patient datasets is required to validate and refine their performance in real-world clinical settings.

Source Journal
Acute Medicine (Medicine: Emergency Medicine)
CiteScore: 1.50 · Self-citation rate: 0.00% · Annual articles: 32
Journal introduction: These are usually commissioned by the editorial team in accordance with a cycle running over several years. Authors wishing to submit a review relevant to Acute Medicine are advised to contact the editor before writing this. Unsolicited review articles received for consideration may be included if the subject matter is considered of interest to the readership, provided the topic has not already been covered in a recent edition. Review articles are usually 3000-5000 words and may include tables, pictures and other figures as required for the text. Include 3 or 4 'key points' summarising the main teaching messages.