Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians.

IF 2.8 | CAS Zone 2 (Medicine) | JCR Q2 | UROLOGY & NEPHROLOGY
Eric J Robinson, Chunyuan Qiu, Stuart Sands, Mohammad Khan, Shivang Vora, Kenichiro Oshima, Khang Nguyen, L Andrew DiFronzo, David Rhew, Mark I Feng
{"title":"泌尿外科医生与人工智能生成的信息:患者和医生对准确性、完整性和偏好的评估。","authors":"Eric J Robinson, Chunyuan Qiu, Stuart Sands, Mohammad Khan, Shivang Vora, Kenichiro Oshima, Khang Nguyen, L Andrew DiFronzo, David Rhew, Mark I Feng","doi":"10.1007/s00345-024-05399-y","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common BPH questions across phases of care.</p><p><strong>Methods: </strong>Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses assessed by experts using Likert scales, and preferences and perceptions of authorship (chatbot vs. human) rated by non-medical evaluators.</p><p><strong>Results: </strong>Five non-medical volunteers independently evaluated, ranked, and inferred the source for 120 responses (n = 600 total). For volunteer evaluations, the mean (SD) score of chatbots, 3.0 (1.4) (moderately empathetic) was significantly higher than urologists, 2.1 (1.1) (slightly empathetic) (p < 0.001); mean (SD) and preference ranking for chatbots, 2.6 (1.6), was significantly higher than urologist ranking, 3.9 (1.6) (p < 0.001). Two subject matter experts (SMEs) independently evaluated 120 responses each (answers to 20 questions from 4 urologist and 2 chatbots, n = 240 total). For SME evaluations, mean (SD) accuracy score for chatbots was 4.5 (1.1) (nearly all correct) and not significantly different than urologists, 4.6 (1.2). The mean (SD) completeness score for chatbots was 2.4 (0.8) (comprehensive), significantly higher than urologists, 1.6 (0.6) (adequate) (p < 0.001).</p><p><strong>Conclusion: </strong>Answers to patient BPH messages generated by chatbots were evaluated by experts as equally accurate and more complete than urologist answers. Non-medical volunteers preferred chatbot-generated messages and considered them more empathetic compared to answers generated by urologists.</p>","PeriodicalId":23954,"journal":{"name":"World Journal of Urology","volume":"43 1","pages":"48"},"PeriodicalIF":2.8000,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11680670/pdf/","citationCount":"0","resultStr":"{\"title\":\"Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians.\",\"authors\":\"Eric J Robinson, Chunyuan Qiu, Stuart Sands, Mohammad Khan, Shivang Vora, Kenichiro Oshima, Khang Nguyen, L Andrew DiFronzo, David Rhew, Mark I Feng\",\"doi\":\"10.1007/s00345-024-05399-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common BPH questions across phases of care.</p><p><strong>Methods: </strong>Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses assessed by experts using Likert scales, and preferences and perceptions of authorship (chatbot vs. 
human) rated by non-medical evaluators.</p><p><strong>Results: </strong>Five non-medical volunteers independently evaluated, ranked, and inferred the source for 120 responses (n = 600 total). For volunteer evaluations, the mean (SD) score of chatbots, 3.0 (1.4) (moderately empathetic) was significantly higher than urologists, 2.1 (1.1) (slightly empathetic) (p < 0.001); mean (SD) and preference ranking for chatbots, 2.6 (1.6), was significantly higher than urologist ranking, 3.9 (1.6) (p < 0.001). Two subject matter experts (SMEs) independently evaluated 120 responses each (answers to 20 questions from 4 urologist and 2 chatbots, n = 240 total). For SME evaluations, mean (SD) accuracy score for chatbots was 4.5 (1.1) (nearly all correct) and not significantly different than urologists, 4.6 (1.2). The mean (SD) completeness score for chatbots was 2.4 (0.8) (comprehensive), significantly higher than urologists, 1.6 (0.6) (adequate) (p < 0.001).</p><p><strong>Conclusion: </strong>Answers to patient BPH messages generated by chatbots were evaluated by experts as equally accurate and more complete than urologist answers. Non-medical volunteers preferred chatbot-generated messages and considered them more empathetic compared to answers generated by urologists.</p>\",\"PeriodicalId\":23954,\"journal\":{\"name\":\"World Journal of Urology\",\"volume\":\"43 1\",\"pages\":\"48\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11680670/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"World Journal of Urology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s00345-024-05399-y\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Journal of Urology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00345-024-05399-y","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common benign prostatic hyperplasia (BPH) questions across phases of care.

Methods: Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses were assessed by experts using Likert scales, and preferences and perceptions of authorship (chatbot vs. human) were rated by non-medical evaluators.

Results: Five non-medical volunteers independently evaluated, ranked, and inferred the source for 120 responses (n = 600 total). For volunteer evaluations, the mean (SD) empathy score of chatbots, 3.0 (1.4) (moderately empathetic), was significantly higher than that of urologists, 2.1 (1.1) (slightly empathetic) (p < 0.001); the mean (SD) preference ranking for chatbots, 2.6 (1.6), was significantly better (i.e., lower) than the urologist ranking, 3.9 (1.6) (p < 0.001). Two subject matter experts (SMEs) independently evaluated 120 responses each (answers to 20 questions from 4 urologists and 2 chatbots, n = 240 total). For SME evaluations, the mean (SD) accuracy score for chatbots was 4.5 (1.1) (nearly all correct), not significantly different from urologists, 4.6 (1.2). The mean (SD) completeness score for chatbots was 2.4 (0.8) (comprehensive), significantly higher than urologists, 1.6 (0.6) (adequate) (p < 0.001).
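
The abstract reports Likert-scale means, SDs, and p-values but does not name the statistical test used. Below is a minimal illustrative sketch (not the authors' code) of how such ordinal ratings might be compared; the Mann-Whitney U test is assumed here because it is a common choice for Likert data, and the scores are hypothetical.

```python
# Sketch: comparing hypothetical 1-5 Likert empathy ratings for chatbot
# vs. urologist responses. The test choice (Mann-Whitney U) is an
# assumption; the paper does not specify its statistical method.
import numpy as np
from scipy.stats import mannwhitneyu

chatbot_scores = np.array([3, 4, 2, 5, 3, 1, 4, 3, 2, 5])    # hypothetical ratings
urologist_scores = np.array([2, 1, 3, 2, 2, 4, 1, 2, 3, 1])  # hypothetical ratings

print(f"Chatbot mean (SD):   {chatbot_scores.mean():.1f} ({chatbot_scores.std(ddof=1):.1f})")
print(f"Urologist mean (SD): {urologist_scores.mean():.1f} ({urologist_scores.std(ddof=1):.1f})")

# Two-sided test for a difference between the two rating distributions
stat, p = mannwhitneyu(chatbot_scores, urologist_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```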

Conclusion: Answers to patient BPH messages generated by chatbots were rated by experts as equally accurate as, and more complete than, urologist answers. Non-medical volunteers preferred chatbot-generated messages and considered them more empathetic than answers generated by urologists.

Source journal
World Journal of Urology (Medicine: Urology & Nephrology)
CiteScore: 6.80
Self-citation rate: 8.80%
Annual articles: 317
Review turnaround: 4-8 weeks
Journal introduction: The WORLD JOURNAL OF UROLOGY regularly conveys the essential results of urological research and their practical and clinical relevance to a broad audience of urologists in research and clinical practice. To guarantee a balanced program, articles are published that reflect developments in all fields of urology at an internationally advanced level. Each issue treats a main topic in review articles by invited international experts. Free papers are articles unrelated to the main topic.