J.J. Szczesniewski, A. Ramoso Alba, P.M. Rodríguez Castro, M.F. Lorenzo Gómez, J. Sainz González, L. Llanes González
Actas Urológicas Españolas, 48(5), pp. 398–403, June 2024. DOI: 10.1016/j.acuro.2023.12.002
Quality of information from ChatGPT, BARD and Copilot about urological pathology in English and Spanish
Introduction and objective
Generative artificial intelligence makes it possible to ask about medical pathologies in dialog boxes. Our objective was to analyze the quality of information about the most common urological pathologies provided by ChatGPT (OpenAI), BARD (Google), and Copilot (Microsoft).
Methods
We analyzed the information provided by the AI chatbots on the following pathologies and their treatments: prostate cancer, kidney cancer, bladder cancer, urinary lithiasis, and benign prostatic hypertrophy (BPH). Questions in English and Spanish were posed in the dialog boxes; the answers were collected and analyzed with the DISCERN questionnaire and rated for overall appropriateness. Responses on surgical procedures were assessed with an informed consent questionnaire.
Results
The responses from the three chatbots explained the pathology, detailed risk factors, and described treatments. One difference is that BARD and Copilot cite external sources of information, whereas ChatGPT does not. The highest DISCERN scores, in absolute numbers, were obtained by Copilot; however, on the appropriateness scale its responses were not rated the most appropriate. The best scores for surgical treatment were obtained by BARD, followed by ChatGPT, and finally Copilot.
Conclusions
The answers obtained from generative AI on urological diseases depended on the formulation of the question. The information provided showed significant biases, varying with the pathology, the language, and above all the chatbot consulted.
About the journal
Actas Urológicas Españolas is an international journal dedicated to urological diseases and renal transplant. It has been the official publication of the Spanish Urology Association since 1974 and of the American Urology Confederation since 2008. Its articles cover all aspects related to urology.
Actas Urológicas Españolas, which uses a double-blind peer-review system, is published online in Spanish and English. Manuscripts may therefore be submitted in either Spanish or English, and free translation in both directions is provided.