Calidad de información de ChatGPT, BARD y Copilot acerca de patología urológica en inglés y en español

IF 1.2 · CAS Tier 4 (Medicine) · Q3 UROLOGY & NEPHROLOGY
J.J. Szczesniewski, A. Ramoso Alba, P.M. Rodríguez Castro, M.F. Lorenzo Gómez, J. Sainz González, L. Llanes González

Actas Urológicas Españolas, vol. 48, no. 5, pp. 398–403, June 2024. DOI: 10.1016/j.acuro.2023.12.002
Full text: https://www.sciencedirect.com/science/article/pii/S0210480624000020

Abstract

Introduction and objective

Generative artificial intelligence makes it possible to ask about medical pathologies in dialog boxes. Our objective was to analyze the quality of information about the most common urological pathologies provided by ChatGPT (OpenAI), BARD (Google), and Copilot (Microsoft).

Methods

We analyzed information on the following pathologies and their treatments as provided by AI: prostate cancer, kidney cancer, bladder cancer, urinary lithiasis, and benign prostatic hypertrophy (BPH). Questions in English and Spanish were posed in dialog boxes; the answers were collected, analyzed with DISCERN questionnaires, and rated for overall appropriateness. Answers on surgical procedures were assessed with an informed consent questionnaire.
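The scoring workflow described above (one DISCERN questionnaire per chatbot answer, aggregated per chatbot) can be sketched as follows. This is a hypothetical illustration only: the scores, the data layout, and the `mean_discern_by_chatbot` helper are assumptions, not the study's data or code. DISCERN has 16 items scored 1–5, so totals range from 16 to 80.

```python
# Hypothetical sketch of the Methods scoring workflow: each chatbot answer
# receives a DISCERN total (16 items, 1-5 each, so 16-80), keyed by
# chatbot, language, and pathology; totals are then averaged per chatbot.
# All scores below are illustrative placeholders, not the study's results.
from statistics import mean

# (chatbot, language, pathology) -> DISCERN total assigned by a reviewer
discern_scores = {
    ("ChatGPT", "en", "prostate cancer"): 58,
    ("ChatGPT", "es", "prostate cancer"): 54,
    ("BARD", "en", "prostate cancer"): 61,
    ("BARD", "es", "prostate cancer"): 57,
    ("Copilot", "en", "prostate cancer"): 65,
    ("Copilot", "es", "prostate cancer"): 60,
}

def mean_discern_by_chatbot(scores):
    """Average DISCERN totals across languages and pathologies."""
    by_bot = {}
    for (bot, _lang, _path), total in scores.items():
        by_bot.setdefault(bot, []).append(total)
    return {bot: mean(vals) for bot, vals in by_bot.items()}

print(mean_discern_by_chatbot(discern_scores))
```

The same grouping could be keyed by language or pathology instead, which is how language- and pathology-dependent differences like those reported in the Conclusions would surface.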

Results

The responses from the three chatbots explained the pathology, detailed risk factors, and described treatments. Unlike ChatGPT, BARD and Copilot cite external sources of information. Copilot obtained the highest DISCERN scores in absolute numbers; however, on the appropriateness scale its responses were not rated the most appropriate. The best scores for surgical treatments were obtained by BARD, followed by ChatGPT, and finally Copilot.

Conclusions

The answers obtained from generative AI on urological diseases depended on the formulation of the question. The information provided had significant biases, depending on pathology, language, and above all, the dialog box consulted.

Source journal

Actas Urológicas Españolas (UROLOGY & NEPHROLOGY)
CiteScore: 1.90
Self-citation rate: 0.00%
Articles per year: 98
Review time: 46 days
Journal description: Actas Urológicas Españolas is an international journal dedicated to urological diseases and renal transplantation. It has been the official publication of the Spanish Urology Association since 1974 and of the American Urology Confederation since 2008. Its articles cover all aspects of urology. The journal uses double-blind peer review and is published online in Spanish and English; manuscripts may therefore be submitted in either language, and free translation in both directions is provided.