Can ChatGPT help patients understand their andrological diseases?

Revista internacional de andrologia · Pub Date: 2024-06-01 · Epub Date: 2024-06-30 · DOI: 10.22514/j.androl.2024.010
İsmail Emre Ergin, Adem Sancı
{"title":"Can ChatGPT help patients understand their andrological diseases?","authors":"İsmail Emre Ergin, Adem Sancı","doi":"10.22514/j.androl.2024.010","DOIUrl":null,"url":null,"abstract":"<p><p>We aimed to assess the reliability of Chat Generative Pre-training Transformer (ChatGPT)'s andrology information and its suitability for informing patients and medical students accurately about andrology topics. We presented a series of systematically organized frequently asked questions on andrology topics and sentences containing strong recommendations from the European Association of Urology (EAU) Guideline to ChatGPT-3.5 and 4.0 as questions. These questions encompassed Male Hypogonadism, Erectile Dysfunction and Sexual Desire Disorder, Disorders of Ejaculation, Penile Curvature and Penile Size Abnormalities, Priapism, and Male Infertility. Two expert urologists independently evaluated and assigned scores ranging from 1 to 4 to each response based on its accuracy, with the following ratings: (1) Completely true, (2) Accurate but insufficient, (3) A mixture of accurate and misleading information, and (4) Completely false. A total of 120 questions were included in the study. Among these questions, 50.0% received a grade of 1 (completely correct) (55.4% for 4.0 version). The combined rate of correct answers (grades 1 and 2) was 85.2% for frequently asked questions (88.8% for 4.0 version) and 81.5% for questions obtained from the guideline. The rate of completely incorrect answers (grade 4) was 1.8% for frequently asked questions (0% for 4.0 version) and 5.2% for questions based on strong recommendations. The response rate of version 4.0 to questions created from sentences containing strong recommendations from the EAU guideline was the same as version 3.5. ChatGPT provided satisfactory answers to the questions asked, although some responses lacked completeness. It may be beneficial to utilize ChatGPT under the guidance of a urologist to enhance patients' comprehension of their andrology issues.</p>","PeriodicalId":519907,"journal":{"name":"Revista internacional de andrologia","volume":"22 2","pages":"14-20"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Revista internacional de andrologia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22514/j.androl.2024.010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/30 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We aimed to assess the reliability of the andrology information provided by Chat Generative Pre-trained Transformer (ChatGPT) and its suitability for accurately informing patients and medical students about andrology topics. We presented to ChatGPT-3.5 and 4.0, phrased as questions, a series of systematically organized frequently asked questions on andrology topics together with sentences containing strong recommendations from the European Association of Urology (EAU) guideline. These questions encompassed Male Hypogonadism, Erectile Dysfunction and Sexual Desire Disorder, Disorders of Ejaculation, Penile Curvature and Penile Size Abnormalities, Priapism, and Male Infertility. Two expert urologists independently evaluated each response and assigned it a score from 1 to 4 based on its accuracy: (1) completely true, (2) accurate but insufficient, (3) a mixture of accurate and misleading information, and (4) completely false. A total of 120 questions were included in the study. Among these questions, 50.0% received a grade of 1 (completely correct) (55.4% with version 4.0). The combined rate of correct answers (grades 1 and 2) was 85.2% for frequently asked questions (88.8% with version 4.0) and 81.5% for questions obtained from the guideline. The rate of completely incorrect answers (grade 4) was 1.8% for frequently asked questions (0% with version 4.0) and 5.2% for questions based on strong recommendations. The performance of version 4.0 on questions created from sentences containing strong recommendations from the EAU guideline was the same as that of version 3.5. ChatGPT provided satisfactory answers to the questions asked, although some responses were incomplete. It may be beneficial to use ChatGPT under the guidance of a urologist to enhance patients' comprehension of their andrology issues.
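To make the evaluation scheme concrete, the following is a minimal sketch (not taken from the paper) of how the reported summary statistics could be tabulated from the 1-4 expert grades. The grade lists and function names are hypothetical placeholders; only the grading scheme and the three reported metrics (grade-1 rate, combined grade-1-and-2 rate, grade-4 rate) follow the abstract.

```python
# Sketch of tabulating response-accuracy statistics from expert grades.
# Grades follow the abstract's scale: 1 = completely true, 2 = accurate but
# insufficient, 3 = mixed accurate/misleading, 4 = completely false.
from collections import Counter


def summarize(grades: list[int]) -> dict[str, float]:
    """Return the percentages reported in the abstract for one question set."""
    counts = Counter(grades)
    n = len(grades)
    return {
        "completely_true_pct": 100 * counts[1] / n,          # grade 1 only
        "grades_1_and_2_pct": 100 * (counts[1] + counts[2]) / n,  # "correct" combined
        "completely_false_pct": 100 * counts[4] / n,          # grade 4 only
    }


# Hypothetical grades for a handful of ChatGPT responses (placeholder data,
# not the study's 120-question dataset).
faq_grades = [1, 1, 2, 1, 3, 2, 1, 4, 1, 2]
print(summarize(faq_grades))
```

In the study itself, two urologists graded each response independently; a script like this would simply be run over the agreed grades for each question set (frequently asked questions vs. guideline-derived questions) and each model version (3.5 vs. 4.0).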
