Loris Cacciatore, Antonio Minore, Pierangelo Contessa, Gianluigi Raso, Antonio Rosario Iannello, Rocco Papalia, Francesco Esperto
{"title":"The role of AI in prostate cancer care: Assessing the role of chatbots versus urologists in patient communication and empathy.","authors":"Loris Cacciatore, Antonio Minore, Pierangelo Contessa, Gianluigi Raso, Antonio Rosario Iannello, Rocco Papalia, Francesco Esperto","doi":"10.1177/03915603261446425","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To date, the integration of artificial intelligence (AI) in healthcare has expanded rapidly, offering new tools for patient education and communication. In prostate cancer (PCa), where information needs are high and emotionally sensitive, AI-driven chatbots (CB) may enhance patient engagement. This study aims to compare the performance and perceived quality of responses from CB versus urologists (URO) to common PCa-related inquiries.</p><p><strong>Methods: </strong>We conducted a cross-sectional analysis of 20 frequently asked PCa general questions. Responses were generated by two AI-based CB and four certified URO in a simulated clinical messaging setting, without direct patient interaction. Expert reviewers first assessed each response for medical accuracy and completeness. Then, five blinded non-medical evaluators rated the responses using Likert scales to evaluate completeness (1-5), empathy (using a five-item adaptation of the Jefferson Scale), and overall preference.</p><p><strong>Results: </strong>A total of 600 responses were evaluated. Accuracy and completeness scores were comparable between CB and URO responses, according to experts' evaluations (<i>p</i> = 0.45 and <i>p</i> = 0.12). However, CB responses scored significantly higher in completeness and empathy (both <i>p</i> < 0.001) for non-medical evaluators. 
Moreover, a statistically significant preference for overall CB-generated responses over those from urologists, was demonstrated (<i>p</i> < 0.001).</p><p><strong>Conclusions: </strong>While CB responses were as accurate as those from URO, they outperformed in completeness and empathy. These results suggest that AI-based CB could serve as effective tools in enhancing patient communication and satisfaction and may be a valuable complement to urologist-led care in clinical practice.</p>","PeriodicalId":23574,"journal":{"name":"Urologia Journal","volume":" ","pages":"3915603261446425"},"PeriodicalIF":0.7000,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Urologia Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/03915603261446425","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Objective: To date, the integration of artificial intelligence (AI) in healthcare has expanded rapidly, offering new tools for patient education and communication. In prostate cancer (PCa), where information needs are high and emotionally sensitive, AI-driven chatbots (CB) may enhance patient engagement. This study aims to compare the performance and perceived quality of responses from CB versus urologists (URO) to common PCa-related inquiries.
Methods: We conducted a cross-sectional analysis of 20 frequently asked general questions about PCa. Responses were generated by two AI-based CB and four certified URO in a simulated clinical messaging setting, without direct patient interaction. Expert reviewers first assessed each response for medical accuracy and completeness. Then, five blinded non-medical evaluators rated the responses using Likert scales for completeness (1-5), empathy (using a five-item adaptation of the Jefferson Scale), and overall preference.
Results: A total of 600 responses were evaluated. Accuracy and completeness scores were comparable between CB and URO responses according to the expert evaluations (p = 0.45 and p = 0.12, respectively). However, non-medical evaluators rated CB responses significantly higher in completeness and empathy (both p < 0.001). Moreover, evaluators showed a statistically significant overall preference for CB-generated responses over those from urologists (p < 0.001).
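The abstract reports p-values for the between-group comparisons but does not name the statistical tests used. For ordinal Likert ratings, a non-parametric rank test such as the Mann-Whitney U is a common choice; the sketch below is a minimal pure-Python illustration of computing the U statistic on hypothetical 1-5 empathy scores — it is not the authors' actual analysis or data.

```python
from statistics import mean

def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a vs. group_b.

    Ties receive average ranks; returns U for group_a
    (note U_a + U_b = n1 * n2).
    """
    combined = sorted(
        [(v, "a") for v in group_a] + [(v, "b") for v in group_b]
    )
    values = [v for v, _ in combined]
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        # Find the run of tied values starting at i.
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    rank_sum_a = sum(r for r, (_, g) in zip(ranks, combined) if g == "a")
    n1 = len(group_a)
    return rank_sum_a - n1 * (n1 + 1) / 2

# Hypothetical 1-5 Likert empathy ratings (NOT the study's data).
chatbot = [5, 4, 5, 4, 4, 5, 3, 5]
urologist = [3, 4, 2, 3, 4, 3, 3, 2]
print(f"mean CB={mean(chatbot):.2f}, URO={mean(urologist):.2f}, "
      f"U={mann_whitney_u(chatbot, urologist)}")
```

In practice a library routine (e.g. a statistics package's Mann-Whitney implementation) would also supply the p-value via the normal approximation or an exact distribution; the hand-rolled version above only shows where the rank-based statistic comes from.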
Conclusions: While CB responses were as accurate as those from URO, they outperformed them in completeness and empathy. These results suggest that AI-based CB could serve as effective tools for enhancing patient communication and satisfaction and may be a valuable complement to urologist-led care in clinical practice.