Exploring ChatGPT's potential in ECG interpretation and outcome prediction in emergency department.

IF 2.7 · CAS Tier 3 (Medicine) · Q1 Emergency Medicine
Arian Zaboli, Francesco Brigo, Marta Ziller, Magdalena Massar, Marta Parodi, Gabriele Magnarelli, Gloria Brigiari, Gianni Turcato
{"title":"Exploring ChatGPT's potential in ECG interpretation and outcome prediction in emergency department.","authors":"Arian Zaboli, Francesco Brigo, Marta Ziller, Magdalena Massar, Marta Parodi, Gabriele Magnarelli, Gloria Brigiari, Gianni Turcato","doi":"10.1016/j.ajem.2024.11.023","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Approximately 20 % of emergency department (ED) visits involve cardiovascular symptoms. While ECGs are crucial for diagnosing serious conditions, interpretation accuracy varies among emergency physicians. Artificial intelligence (AI), such as ChatGPT, could assist in ECG interpretation by enhancing diagnostic precision.</p><p><strong>Methods: </strong>This single-center, retrospective observational study, conducted at Merano Hospital's ED, assessed ChatGPT's agreement with cardiologists in interpreting ECGs. The primary outcome was agreement level between ChatGPT and cardiologists. Secondary outcomes included ChatGPT's ability to identify patients at risk for Major Adverse Cardiac Events (MACE).</p><p><strong>Results: </strong>Of the 128 patients enrolled, ChatGPT showed good agreement with cardiologists on most ECG segments, excluding T wave (kappa = 0.048) and ST segment (kappa = 0.267). Significant discrepancies arose in the assessment of critical cases, as ChatGPT classified more patients as at risk for MACE than were identified by physicians.</p><p><strong>Conclusions: </strong>ChatGPT demonstrates moderate accuracy in ECG interpretation, yet its current limitations, especially in assessing critical cases, restrict its clinical utility in ED settings. Future research and technological advancements could enhance AI's reliability, potentially positioning it as a valuable support tool for emergency physicians.</p>","PeriodicalId":55536,"journal":{"name":"American Journal of Emergency Medicine","volume":"88 ","pages":"7-11"},"PeriodicalIF":2.7000,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Emergency Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.ajem.2024.11.023","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EMERGENCY MEDICINE","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Approximately 20% of emergency department (ED) visits involve cardiovascular symptoms. While ECGs are crucial for diagnosing serious conditions, interpretation accuracy varies among emergency physicians. Artificial intelligence (AI), such as ChatGPT, could assist in ECG interpretation by enhancing diagnostic precision.

Methods: This single-center, retrospective observational study, conducted at Merano Hospital's ED, assessed ChatGPT's agreement with cardiologists in interpreting ECGs. The primary outcome was agreement level between ChatGPT and cardiologists. Secondary outcomes included ChatGPT's ability to identify patients at risk for Major Adverse Cardiac Events (MACE).

Results: Of the 128 patients enrolled, ChatGPT showed good agreement with cardiologists on most ECG segments, excluding T wave (kappa = 0.048) and ST segment (kappa = 0.267). Significant discrepancies arose in the assessment of critical cases, as ChatGPT classified more patients as at risk for MACE than were identified by physicians.

Conclusions: ChatGPT demonstrates moderate accuracy in ECG interpretation, yet its current limitations, especially in assessing critical cases, restrict its clinical utility in ED settings. Future research and technological advancements could enhance AI's reliability, potentially positioning it as a valuable support tool for emergency physicians.
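
The agreement levels reported in the Results are Cohen's kappa values, which correct raw percent agreement between two raters for agreement expected by chance. Below is a minimal sketch of how such a statistic could be computed, assuming each ECG segment read is coded as one categorical label per patient; the labels and data are hypothetical illustrations, not the study's actual coding scheme.

```python
# Minimal sketch: Cohen's kappa for chance-corrected agreement between two raters.
# Hypothetical per-patient labels for a single ECG segment (e.g., the ST segment);
# "normal", "elevation", and "depression" are illustrative categories only.
from sklearn.metrics import cohen_kappa_score

cardiologist = ["normal", "elevation", "normal", "depression", "normal", "normal"]
chatgpt      = ["normal", "normal",    "normal", "depression", "elevation", "normal"]

# kappa = (p_observed - p_chance) / (1 - p_chance)
kappa = cohen_kappa_score(cardiologist, chatgpt)
print(f"Cohen's kappa: {kappa:.3f}")  # 1 = perfect agreement, 0 = chance-level agreement
```

On this scale, values near 0 (such as the reported T-wave kappa of 0.048) indicate agreement no better than chance, whereas values approaching 1 indicate increasingly substantial agreement.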

Source journal: American Journal of Emergency Medicine
CiteScore: 6.00
Self-citation rate: 5.60%
Articles per year: 730
Time to first review: 42 days
Journal description: A distinctive blend of practicality and scholarliness makes the American Journal of Emergency Medicine a key source for information on emergency medical care. Covering all activities concerned with emergency medicine, it is the journal to turn to for information to help increase the ability to understand, recognize and treat emergency conditions. Issues contain clinical articles, case reports, review articles, editorials, international notes, book reviews and more.