Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah
{"title":"Artificial intelligence improves urologic oncology patient education and counseling.","authors":"Yash B Shah, Anushka Ghosh, Aaron Hochberg, James R Mark, Costas D Lallas, Mihir S Shah","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6-8th grade reading level, creating confusion and driving patients towards unregulated online materials like AI chatbots. We aim to compare the readability and quality of patient education on ChatGPT against Epic and Urology Care Foundation (UCF).</p><p><strong>Materials and methods: </strong>We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.</p><p><strong>Results: </strong>Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was overall poor but particularly lowest (37%) for Epic. On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53), but only ChatGPT-a retained high quality.</p><p><strong>Conclusions: </strong>Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.</p>","PeriodicalId":56323,"journal":{"name":"Canadian Journal of Urology","volume":"31 5","pages":"12013-12018"},"PeriodicalIF":1.2000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Journal of Urology","FirstCategoryId":"3","ListUrlMain":"","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Abstract
Introduction: Patients seek support from online resources when facing a troubling urologic cancer diagnosis. Physician-written resources exceed the recommended 6th-8th grade reading level, creating confusion and driving patients toward unregulated online materials such as AI chatbots. We aimed to compare the readability and quality of patient education materials from ChatGPT against those from Epic and the Urology Care Foundation (UCF).
Materials and methods: We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further studied readability-adjusted responses using specific AI prompting (ChatGPT-a) and Epic material designated as Easy to Read. Blinded reviewers completed descriptive textual analysis, readability analysis via six validated formulas, and quality analysis via DISCERN, PEMAT, and Likert tools.
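As an illustration of the readability analysis described above, the sketch below computes the Flesch-Kincaid Grade Level, one widely used readability formula. The abstract does not name the six validated formulas the study applied, so this choice of formula, the simple syllable-counting heuristic, and the sample text are assumptions included for demonstration only.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic (assumption): count vowel groups, subtract one for a trailing silent 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = ("Prostate cancer is a disease in which cells in the prostate gland "
          "grow out of control. Many prostate cancers grow slowly.")
print(f"Estimated grade level: {flesch_kincaid_grade(sample):.2f}")

A score near 6-8 would meet the recommended reading level cited in the Introduction; materials scoring well above that range would be flagged as too difficult for the average patient.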
Results: Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer, with more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was poor overall and lowest for Epic (37%). On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (7.50 and 3.53, respectively), but only ChatGPT-a retained high quality.
Conclusions: Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a model indicates that AI technology can improve accessibility and usefulness. With development, a healthcare-specific AI program may help providers create content that is accessible and personalized to improve shared decision-making for urology patients.