ChatGPT's competence in responding to urological emergencies.

Mazhar Ortaç, Rıfat Burak Ergül, Hüseyin Burak Yazılı, Muhammet Firat Özervarlı, Şenol Tonyalı, Omer Sarılar, Faruk Özgör
{"title":"ChatGPT's competence in responding to urological emergencies.","authors":"Mazhar Ortaç, Rıfat Burak Ergül, Hüseyin Burak Yazılı, Muhammet Firat Özervarlı, Şenol Tonyalı, Omer Sarılar, Faruk Özgör","doi":"10.14744/tjtes.2024.03377","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>In recent years, artificial intelligence (AI) applications have been increasingly used as sources of medical information, alongside their applications in many other fields. This study is the first to evaluate ChatGPT's performance in addressing urological emergencies (UE).</p><p><strong>Methods: </strong>The study included frequently asked questions (FAQs) by the public regarding UE, as well as UE-related questions formulated based on the European Association of Urology (EAU) guidelines. The FAQs were selected from questions posed by patients to doctors and hospital accounts on social media platforms (Facebook, Instagram, and X) and on websites. All questions were presented to ChatGPT 4 (premium version) in English, and the responses were recorded. Two urologists assessed the quality of the responses using a Global Quality Score (GQS) on a scale of 1 to 5.</p><p><strong>Results: </strong>Of the 73 total FAQs, 53 (72.6%) received a GQS score of 5, while only two (2.7%) received a GQS score of 1. The questions with a GQS score of 1 pertained to priapism and urosepsis. The topic with the highest proportion of responses receiving a GQS score of 5 was urosepsis (82.3%), whereas the lowest scores were observed in questions related to renal trauma (66.7%) and postrenal acute kidney injury (66.7%). A total of 42 questions were formulated based on the EAU guidelines, of which 23 (54.8%) received a GQS score of 5 from the physicians. The mean GQS score for FAQs was 4.38+-1.14, which was significantly higher (p=0.009) than the mean GQS score for EAU guideline-based questions (3.88+-1.47).</p><p><strong>Conclusion: </strong>This study demonstrated for the first time that nearly three out of four FAQs were answered accurately and satisfactorily by ChatGPT. However, the accuracy and proficiency of ChatGPT's responses significantly decreased when addressing guideline-based questions on UE.</p>","PeriodicalId":94263,"journal":{"name":"Ulusal travma ve acil cerrahi dergisi = Turkish journal of trauma & emergency surgery : TJTES","volume":"31 3","pages":"291-295"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894229/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ulusal travma ve acil cerrahi dergisi = Turkish journal of trauma & emergency surgery : TJTES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14744/tjtes.2024.03377","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Background: In recent years, artificial intelligence (AI) applications have increasingly been used as sources of medical information, as they have in many other fields. This study is the first to evaluate ChatGPT's performance in addressing urological emergencies (UE).

Methods: The study included frequently asked questions (FAQs) posed by the public regarding UE, as well as UE-related questions formulated from the European Association of Urology (EAU) guidelines. The FAQs were selected from questions that patients had posed to doctors' and hospital accounts on social media platforms (Facebook, Instagram, and X) and on websites. All questions were presented to ChatGPT 4 (premium version) in English, and the responses were recorded. Two urologists assessed the quality of each response using the Global Quality Score (GQS) on a scale of 1 to 5.
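
The abstract describes manual entry of questions into the ChatGPT 4 web interface rather than a programmatic pipeline. For readers who want to reproduce the question-and-record step at scale, the following is a minimal sketch using the OpenAI Python SDK; the model name and the example questions are illustrative assumptions, not taken from the study.

```python
# Minimal sketch (not the authors' method): the study used the ChatGPT 4
# web interface; an equivalent programmatic workflow might look like this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "What should I do if I have sudden, severe testicular pain?",  # hypothetical FAQ
    "How is suspected urosepsis managed initially?",               # hypothetical FAQ
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed stand-in for "ChatGPT 4 (premium version)"
        messages=[{"role": "user", "content": q}],
    )
    answer = response.choices[0].message.content
    print(f"Q: {q}\nA: {answer}\n")  # responses recorded for later GQS rating
```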

Results: Of the 73 total FAQs, 53 (72.6%) received a GQS score of 5, while only two (2.7%) received a GQS score of 1. The questions with a GQS score of 1 pertained to priapism and urosepsis. The topic with the highest proportion of responses receiving a GQS score of 5 was urosepsis (82.3%), whereas the lowest proportions were observed for questions on renal trauma (66.7%) and postrenal acute kidney injury (66.7%). A total of 42 questions were formulated from the EAU guidelines, of which 23 (54.8%) received a GQS score of 5 from the physicians. The mean GQS score for FAQs was 4.38±1.14, significantly higher than the mean GQS score for EAU guideline-based questions (3.88±1.47; p=0.009).
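
The abstract reports a significant difference between the two mean GQS scores (p=0.009) but does not name the statistical test. Since GQS values are ordinal ratings from 1 to 5, a rank-based test such as the Mann-Whitney U test is a common choice; the sketch below shows that comparison under this assumption, with placeholder score lists that are not the study's raw data.

```python
# Minimal sketch, assuming a Mann-Whitney U test for the group comparison
# (the abstract does not name the test used).
# The score lists are illustrative placeholders, NOT the study data.
import numpy as np
from scipy.stats import mannwhitneyu

faq_scores = np.array([5, 5, 4, 5, 3, 5, 5, 2, 5, 4])        # hypothetical GQS ratings
guideline_scores = np.array([5, 3, 4, 2, 5, 3, 4, 5, 2, 3])  # hypothetical GQS ratings

print(f"FAQ mean ± SD:       {faq_scores.mean():.2f} ± {faq_scores.std(ddof=1):.2f}")
print(f"Guideline mean ± SD: {guideline_scores.mean():.2f} ± {guideline_scores.std(ddof=1):.2f}")

u_stat, p_value = mannwhitneyu(faq_scores, guideline_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```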

Conclusion: This study demonstrated for the first time that nearly three out of four FAQs were answered accurately and satisfactorily by ChatGPT. However, the accuracy and proficiency of ChatGPT's responses significantly decreased when addressing guideline-based questions on UE.
