Artificial intelligence improves urologic oncology patient education and counseling.

IF 1.2 · CAS Region 4 (Medicine) · Q3 Urology & Nephrology
Canadian Journal of Urology · Pub Date: 2024-10-01
Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah
{"title":"Artificial intelligence improves urologic oncology patient education and counseling.","authors":"Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6-8th grade reading level, creating confusion and driving patients towards unregulated online materials like AI chatbots. We aim to compare the readability and quality of patient education on ChatGPT against Epic and Urology Care Foundation (UCF).</p><p><strong>Materials and methods: </strong>We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.</p><p><strong>Results: </strong>Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was overall poor but particularly lowest (37%) for Epic. On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53), but only ChatGPT-a retained high quality.</p><p><strong>Conclusions: </strong>Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.</p>","PeriodicalId":56323,"journal":{"name":"Canadian Journal of Urology","volume":"31 5","pages":"12013-12018"},"PeriodicalIF":1.2000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Journal of Urology","FirstCategoryId":"3","ListUrlMain":"","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Patients facing a troubling urologic cancer diagnosis seek support from online resources. Physician-written resources exceed the recommended 6th-8th grade reading level, creating confusion and driving patients toward unregulated online materials such as AI chatbots. We aimed to compare the readability and quality of patient education content from ChatGPT against Epic and the Urology Care Foundation (UCF).

Materials and methods: We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.
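
The abstract does not name the six validated readability formulas. The Flesch-Kincaid Grade Level (FKGL) is one commonly used in studies of this kind; a minimal sketch of how such a grade level is computed, assuming FKGL is among the six and using a naive vowel-group syllable counter, follows (approximation only, not the study's actual pipeline):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups; drop a common silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def flesch_kincaid_grade(text: str) -> float:
    """FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

# Example: simple patient-education style text scores around grade 1.6
print(round(flesch_kincaid_grade(
    "The prostate is a small gland. It sits below the bladder."), 2))
```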

Results: Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer, with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was poor overall and lowest for Epic (37%). On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53), but only ChatGPT-a retained high quality.
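
For context on the actionability figure: PEMAT sub-scores are reported as the percentage of applicable items rated "agree." A sketch of that arithmetic, with illustrative item ratings that are not taken from the study:

```python
def pemat_subscore(ratings: list[int | None]) -> float:
    """PEMAT sub-score: percent of applicable items rated agree.
    1 = agree, 0 = disagree, None = not applicable."""
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Hypothetical ratings for 10 actionability items, one not applicable:
ratings = [1, 0, 0, 1, None, 0, 0, 1, 0, 1]
print(f"{pemat_subscore(ratings):.0f}%")  # 4 of 9 applicable -> 44%
```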

Conclusions: Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.

Source journal
Canadian Journal of Urology (Urology & Nephrology)
CiteScore: 1.90
Self-citation rate: 0.00%
Articles per year: 86
Review time: 6-12 weeks
Journal description: The CJU publishes articles of interest to urologists and the related specialties that treat urologic diseases.