Enhancing Readability of Lay Abstracts and Summaries for Urologic Oncology Literature Using Generative Artificial Intelligence: BRIDGE-AI 6 Randomized Controlled Trial

IF 2.8 Q2 ONCOLOGY
JCO Clinical Cancer Informatics · Pub Date: 2025-09-01 · Epub Date: 2025-09-10 · DOI: 10.1200/CCI-25-00042
Conner Ganjavi, Ethan Layne, Francesco Cei, Karanvir Gill, Vasileios Magoulianitis, Andre Abreu, Mitchell Goldenberg, Mihir M Desai, Inderbir Gill, Giovanni E Cacciamani
{"title":"利用生成式人工智能提高泌尿外科肿瘤学文献摘要的可读性:BRIDGE-AI 6随机对照试验","authors":"Conner Ganjavi, Ethan Layne, Francesco Cei, Karanvir Gill, Vasileios Magoulianitis, Andre Abreu, Mitchell Goldenberg, Mihir M Desai, Inderbir Gill, Giovanni E Cacciamani","doi":"10.1200/CCI-25-00042","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To evaluate a generative artificial intelligence (GAI) framework for creating readable lay abstracts and summaries (LASs) of urologic oncology research, while maintaining accuracy, completeness, and clarity, for the purpose of assessing their comprehension and perception among patients and caregivers.</p><p><strong>Methods: </strong>Forty original abstracts (OAs) on prostate, bladder, kidney, and testis cancers from leading journals were selected. LASs were generated using a free GAI tool, with three versions per abstract for consistency. Readability was compared with OAs using validated metrics. Two independent reviewers assessed accuracy, completeness, and clarity and identified AI hallucinations. A pilot study was conducted with 277 patients and caregivers randomly assigned to receive either OAs or LASs and complete comprehension and perception assessments.</p><p><strong>Results: </strong>Mean GAI-generated LAS generation time was <10 seconds. Across 600 sections generated, readability and quality metrics were consistent (<i>P</i> > .05). Quality scores ranged from 85% to 100%, with hallucinations in 1% of sections. The best test showed significantly better readability (68.9 <i>v</i> 25.3; <i>P</i> < .001), grade level, and text metrics compared with the OA. Methods sections had slightly lower accuracy (85% <i>v</i> 100%; <i>P</i> = .03) and trifecta achievement (82.5% <i>v</i> 100%; <i>P</i> = .01), but other sections retained high quality (≥92.5%; <i>P</i> > .05). GAI-generated LAS recipients scored significantly better in comprehension and most perception-based questions (<i>P</i> < .001) with LAS being the only consistently significant predictor (<i>P</i> < .001).</p><p><strong>Conclusion: </strong>GAI-generated LASs for urologic oncology research are highly readable and generally preserve the quality of the OAs. Patients and caregivers demonstrated improved comprehension and more favorable perceptions of LASs compared with OAs. 
Human oversight remains essential to ensure the accurate, complete, and clear representations of the original research.</p>","PeriodicalId":51626,"journal":{"name":"JCO Clinical Cancer Informatics","volume":"9 ","pages":"e2500042"},"PeriodicalIF":2.8000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing Readability of Lay Abstracts and Summaries for Urologic Oncology Literature Using Generative Artificial Intelligence: BRIDGE-AI 6 Randomized Controlled Trial.\",\"authors\":\"Conner Ganjavi, Ethan Layne, Francesco Cei, Karanvir Gill, Vasileios Magoulianitis, Andre Abreu, Mitchell Goldenberg, Mihir M Desai, Inderbir Gill, Giovanni E Cacciamani\",\"doi\":\"10.1200/CCI-25-00042\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To evaluate a generative artificial intelligence (GAI) framework for creating readable lay abstracts and summaries (LASs) of urologic oncology research, while maintaining accuracy, completeness, and clarity, for the purpose of assessing their comprehension and perception among patients and caregivers.</p><p><strong>Methods: </strong>Forty original abstracts (OAs) on prostate, bladder, kidney, and testis cancers from leading journals were selected. LASs were generated using a free GAI tool, with three versions per abstract for consistency. Readability was compared with OAs using validated metrics. Two independent reviewers assessed accuracy, completeness, and clarity and identified AI hallucinations. A pilot study was conducted with 277 patients and caregivers randomly assigned to receive either OAs or LASs and complete comprehension and perception assessments.</p><p><strong>Results: </strong>Mean GAI-generated LAS generation time was <10 seconds. Across 600 sections generated, readability and quality metrics were consistent (<i>P</i> > .05). Quality scores ranged from 85% to 100%, with hallucinations in 1% of sections. The best test showed significantly better readability (68.9 <i>v</i> 25.3; <i>P</i> < .001), grade level, and text metrics compared with the OA. Methods sections had slightly lower accuracy (85% <i>v</i> 100%; <i>P</i> = .03) and trifecta achievement (82.5% <i>v</i> 100%; <i>P</i> = .01), but other sections retained high quality (≥92.5%; <i>P</i> > .05). GAI-generated LAS recipients scored significantly better in comprehension and most perception-based questions (<i>P</i> < .001) with LAS being the only consistently significant predictor (<i>P</i> < .001).</p><p><strong>Conclusion: </strong>GAI-generated LASs for urologic oncology research are highly readable and generally preserve the quality of the OAs. Patients and caregivers demonstrated improved comprehension and more favorable perceptions of LASs compared with OAs. 
Human oversight remains essential to ensure the accurate, complete, and clear representations of the original research.</p>\",\"PeriodicalId\":51626,\"journal\":{\"name\":\"JCO Clinical Cancer Informatics\",\"volume\":\"9 \",\"pages\":\"e2500042\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JCO Clinical Cancer Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1200/CCI-25-00042\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/9/10 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JCO Clinical Cancer Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1200/CCI-25-00042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/9/10 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: To evaluate a generative artificial intelligence (GAI) framework for creating readable lay abstracts and summaries (LASs) of urologic oncology research that maintain accuracy, completeness, and clarity, and to assess patients' and caregivers' comprehension and perception of them.

Methods: Forty original abstracts (OAs) on prostate, bladder, kidney, and testis cancers from leading journals were selected. LASs were generated using a free GAI tool, with three versions per abstract for consistency. Readability was compared with that of the OAs using validated metrics. Two independent reviewers assessed accuracy, completeness, and clarity and identified AI hallucinations. A pilot study was conducted with 277 patients and caregivers randomly assigned to receive either OAs or LASs and to complete comprehension and perception assessments.
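
The abstract does not name the free GAI tool, the prompt, or the specific validated readability metrics used, so the following is a minimal sketch under stated assumptions: it generates a lay summary with the openai Python client (the model name and prompt wording are assumptions) and scores both texts with two widely used validated metrics, Flesch Reading Ease and Flesch-Kincaid Grade Level, via the textstat package.

```python
# Illustrative sketch only: the study's free GAI tool, prompt, and exact
# readability formulas are not specified in the abstract. The model name
# and prompt below are assumptions, not the authors' method.
from openai import OpenAI  # pip install openai
import textstat            # pip install textstat

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rewrite the following urologic oncology abstract as a lay summary for "
    "patients and caregivers. Keep it accurate, complete, and clear, and "
    "aim for roughly a 6th-grade reading level:\n\n{abstract}"
)

def generate_lay_summary(original_abstract: str) -> str:
    """Ask a chat model for a lay-language version of one abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper's tool is unnamed
        messages=[{"role": "user",
                   "content": PROMPT.format(abstract=original_abstract)}],
    )
    return response.choices[0].message.content

def readability(text: str) -> dict:
    """Two common validated metrics: higher Flesch Reading Ease (FRE) means
    easier text; Flesch-Kincaid Grade Level (FKGL) approximates US grade."""
    return {"FRE": textstat.flesch_reading_ease(text),
            "FKGL": textstat.flesch_kincaid_grade(text)}

oa = "Purpose: To evaluate ..."  # one original abstract (placeholder text)
las_versions = [generate_lay_summary(oa) for _ in range(3)]  # 3 versions per OA
for las in las_versions:
    print("OA:", readability(oa), "-> LAS:", readability(las))
```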

Results: Mean GAI-generated LAS generation time was <10 seconds. Across the 600 sections generated, readability and quality metrics were consistent (P > .05). Quality scores ranged from 85% to 100%, with hallucinations in 1% of sections. LASs showed significantly better readability (68.9 v 25.3; P < .001), grade level, and text metrics than OAs. Methods sections had slightly lower accuracy (85% v 100%; P = .03) and trifecta achievement (82.5% v 100%; P = .01), but other sections retained high quality (≥92.5%; P > .05). GAI-generated LAS recipients scored significantly better on comprehension and most perception-based questions (P < .001), with LAS assignment being the only consistently significant predictor (P < .001).
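
The abstract reports P values but not the underlying statistical tests. As a minimal sketch (assuming a nonparametric two-sample comparison, which the paper does not state), per-abstract readability scores for the two groups could be compared with a Mann-Whitney U test:

```python
# Hypothetical sketch of the OA-vs-LAS readability comparison. The exact
# tests behind the reported P values are not given in the abstract; a
# Mann-Whitney U test is one common choice for two groups of scores.
from scipy.stats import mannwhitneyu

# Placeholder Flesch Reading Ease scores, one per abstract (not study data)
oa_scores = [24.1, 26.7, 25.0, 25.9]    # original abstracts
las_scores = [67.3, 70.2, 68.5, 69.4]   # GAI-generated lay summaries

stat, p_value = mannwhitneyu(las_scores, oa_scores, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.4f}")  # paper reports P < .001 for readability
```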

Conclusion: GAI-generated LASs for urologic oncology research are highly readable and generally preserve the quality of the OAs. Patients and caregivers demonstrated improved comprehension and more favorable perceptions of LASs compared with OAs. Human oversight remains essential to ensure accurate, complete, and clear representation of the original research.

Source journal: JCO Clinical Cancer Informatics. CiteScore: 6.20. Self-citation rate: 4.80%. Articles published: 190.