Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology.

IF 1.5 · Medicine, Tier 4 · Q3 UROLOGY & NEPHROLOGY
Urologia Internationalis · Pub Date: 2024-01-01 · Epub Date: 2024-03-30 · DOI: 10.1159/000537854
Matthias May, Katharina Körner-Riffard, Lisa Kollitsch, Maximilian Burger, Sabine D Brookman-May, Michael Rauchenwald, Martin Marszalek, Klaus Eredics
{"title":"评估人工智能聊天机器人作为泌尿外科导师的功效:对欧洲泌尿外科委员会 2022 年在职评估答复的比较分析。","authors":"Matthias May, Katharina Körner-Riffard, Lisa Kollitsch, Maximilian Burger, Sabine D Brookman-May, Michael Rauchenwald, Martin Marszalek, Klaus Eredics","doi":"10.1159/000537854","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics.</p><p><strong>Methods: </strong>Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 h, using 100 Multiple-Choice Questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five different subtopics. The correct answer was defined as \"formal accuracy\" (FA) representing the designated single best answer (SBA) among four options. Alternative answers selected from LLMs, which may not necessarily be the SBA but are still deemed correct, were labeled as \"extended accuracy\" (EA). Their capacity to enhance the overall accuracy rate when combined with FA was examined.</p><p><strong>Results: </strong>In two rounds of testing, the FA scores were achieved as follows: ChatGPT-3.5: 58% and 62%, ChatGPT-4: 63% and 77%, and BING AI: 81% and 73%. The incorporation of EA did not yield a significant enhancement in overall performance. The achieved gains for ChatGPT-3.5, ChatGPT-4, and BING AI were as a result 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p &gt; 0.3). Within urological subtopics, LLMs showcased best performance in Pediatrics/Congenital and comparatively less effectiveness in Functional/BPS/Incontinence.</p><p><strong>Conclusion: </strong>LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. The overall accuracy did not significantly improve when combining EA to FA. The error rates remained high ranging from 16 to 35%. Proficiency levels vary substantially across subtopics. Further development of medicine-specific LLMs is required before integration into urological training programs.</p>","PeriodicalId":23414,"journal":{"name":"Urologia Internationalis","volume":" ","pages":"359-366"},"PeriodicalIF":1.5000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11305516/pdf/","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology.\",\"authors\":\"Matthias May, Katharina Körner-Riffard, Lisa Kollitsch, Maximilian Burger, Sabine D Brookman-May, Michael Rauchenwald, Martin Marszalek, Klaus Eredics\",\"doi\":\"10.1159/000537854\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics.</p><p><strong>Methods: </strong>Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 h, using 100 Multiple-Choice Questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five different subtopics. The correct answer was defined as \\\"formal accuracy\\\" (FA) representing the designated single best answer (SBA) among four options. 
Alternative answers selected from LLMs, which may not necessarily be the SBA but are still deemed correct, were labeled as \\\"extended accuracy\\\" (EA). Their capacity to enhance the overall accuracy rate when combined with FA was examined.</p><p><strong>Results: </strong>In two rounds of testing, the FA scores were achieved as follows: ChatGPT-3.5: 58% and 62%, ChatGPT-4: 63% and 77%, and BING AI: 81% and 73%. The incorporation of EA did not yield a significant enhancement in overall performance. The achieved gains for ChatGPT-3.5, ChatGPT-4, and BING AI were as a result 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p &gt; 0.3). Within urological subtopics, LLMs showcased best performance in Pediatrics/Congenital and comparatively less effectiveness in Functional/BPS/Incontinence.</p><p><strong>Conclusion: </strong>LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. The overall accuracy did not significantly improve when combining EA to FA. The error rates remained high ranging from 16 to 35%. Proficiency levels vary substantially across subtopics. Further development of medicine-specific LLMs is required before integration into urological training programs.</p>\",\"PeriodicalId\":23414,\"journal\":{\"name\":\"Urologia Internationalis\",\"volume\":\" \",\"pages\":\"359-366\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11305516/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Urologia Internationalis\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1159/000537854\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/30 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Urologia Internationalis","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1159/000537854","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/30 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Introduction: This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics.

Methods: Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 h, using 100 Multiple-Choice Questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five different subtopics. The correct answer was defined as "formal accuracy" (FA), representing the designated single best answer (SBA) among four options. Alternative answers selected by the LLMs, which may not necessarily be the SBA but are still deemed correct, were labeled as "extended accuracy" (EA). Their capacity to enhance the overall accuracy rate when combined with FA was examined.
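
The abstract does not include scoring code; the following is a minimal sketch of how FA and combined FA+EA rates could be computed under this scheme. The data layout, field names, and function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of how formal accuracy (FA) and the
# combined FA+EA rate could be scored over the 100 EBU ISA MCQs. The data
# layout, field names, and function names below are hypothetical assumptions.

from dataclasses import dataclass, field

@dataclass
class MCQResult:
    question_id: int
    subtopic: str            # e.g. "Pediatrics/Congenital"
    llm_answer: str          # option letter chosen by the LLM, e.g. "B"
    single_best: str         # designated single best answer (SBA)
    also_correct: set[str] = field(default_factory=set)  # EA pool: non-SBA options still deemed correct

def accuracy(results: list[MCQResult]) -> tuple[float, float]:
    """Return (FA rate, FA+EA rate) as fractions of all questions."""
    fa = sum(r.llm_answer == r.single_best for r in results)
    ea = sum(
        r.llm_answer != r.single_best and r.llm_answer in r.also_correct
        for r in results
    )
    n = len(results)
    return fa / n, (fa + ea) / n

# Usage: fa_rate, combined_rate = accuracy(round_one_results)
```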

Results: Across the two rounds of testing, the FA scores were as follows: ChatGPT-3.5: 58% and 62%; ChatGPT-4: 63% and 77%; and Bing AI: 81% and 73%. Incorporating EA did not significantly enhance overall performance: the resulting gains for ChatGPT-3.5, ChatGPT-4, and Bing AI were 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p > 0.3). Within urological subtopics, the LLMs performed best in Pediatrics/Congenital and comparatively worse in Functional/BPS/Incontinence.
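
The abstract reports p > 0.3 but does not name the statistical test used. As one illustrative possibility only (not the authors' method), a Fisher's exact test on the correct/incorrect counts for FA versus FA+EA over the same 100 questions yields a non-significant p-value for differences of this size; the helper name below is hypothetical.

```python
# Illustrative sketch only: the abstract does not name the test used for the
# reported p > 0.3. One plausible approach is a Fisher's exact test comparing
# FA-only accuracy with FA+EA accuracy as two proportions out of n questions.

from scipy.stats import fisher_exact

def compare_fa_vs_combined(fa_correct: int, combined_correct: int, n: int = 100) -> float:
    """p-value (Fisher's exact test) for FA vs FA+EA accuracy out of n questions."""
    table = [
        [fa_correct, n - fa_correct],              # FA only: correct / incorrect
        [combined_correct, n - combined_correct],  # FA+EA:   correct / incorrect
    ]
    _, p_value = fisher_exact(table)
    return p_value

# e.g. compare_fa_vs_combined(58, 65) for the ChatGPT-3.5 round-1 figures
# (58% FA, 58% + 7% = 65% with EA) gives a clearly non-significant p-value.
```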

Conclusion: LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. The overall accuracy did not improve significantly when combining EA with FA. Error rates remained high, ranging from 16% to 35%. Proficiency levels vary substantially across subtopics. Further development of medicine-specific LLMs is required before integration into urological training programs.

Source journal
Urologia Internationalis (Medicine - Urology & Nephrology)
CiteScore: 3.30 · Self-citation rate: 6.20% · Articles published: 94 · Review time: 3-8 weeks
Journal description: Concise but fully substantiated international reports of clinically oriented research into science and current management of urogenital disorders form the nucleus of original as well as basic research papers. These are supplemented by up-to-date reviews by international experts on the state-of-the-art of key topics of clinical urological practice. Essential topics receiving regular coverage include the introduction of new techniques and instrumentation as well as the evaluation of new functional tests and diagnostic methods. Special attention is given to advances in surgical techniques and clinical oncology. The regular publication of selected case reports represents the great variation in urological disease and illustrates treatment solutions in singular cases.