[Making prostate cancer research accessible: chatGPT-4 as a tool to enhance lay communication].

IF 0.5 | CAS Zone 4 (Medicine) | JCR Q4, UROLOGY & NEPHROLOGY
Maximilian Haas, Veronika Saberi, Christopher Gossler, Anna Schmelzer, Anton Kravchuk, Johannes Breyer, Johannes Bründl, Simon Engelmann, Clemens Kirschner, Christian Gilfrich, Maximilian Burger, Dominik von Winning, Christian Wülfing, Hendrik Borgmann, Severin Rodler, Axel S Merseburger, Emily Rinderknecht, Matthias May
{"title":"[使前列腺癌研究更容易获得:chatGPT-4作为加强外行交流的工具]。","authors":"Maximilian Haas, Veronika Saberi, Christopher Gossler, Anna Schmelzer, Anton Kravchuk, Johannes Breyer, Johannes Bründl, Simon Engelmann, Clemens Kirschner, Christian Gilfrich, Maximilian Burger, Dominik von Winning, Christian Wülfing, Hendrik Borgmann, Severin Rodler, Axel S Merseburger, Emily Rinderknecht, Matthias May","doi":"10.1007/s00120-025-02558-w","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and objective: </strong>The abundant potential arising from various applications of artificial intelligence is gradually influencing academic and scientific communication. This study examines the suitability of ChatGPT‑4 for generating layperson's summaries (LS) of scientific articles published in the four journals within the European Urology family and compares the quality of these newly generated LS with the original texts.</p><p><strong>Methods: </strong>A total of 327 articles on prostate cancer published between January 1, 2023, and June 30, 2024, were analyzed. ChatGPT‑4 generated patient summaries using both a basic and an advanced prompt, the latter specifically optimized for enhancing readability. Readability was assessed using established indices, while two blinded reviewers evaluated content quality on a 5-point Likert scale. Additionally, readability, content quality, and adherence to journal guidelines were combined into an overall scoring system.</p><p><strong>Results: </strong>The advanced prompt led to significantly improved readability compared to the basic prompt (p < 0.001) and the original LS (p < 0.001). Content quality was comparable between the two ChatGPT‑4 prompts (p = 0.665) but was higher than that of the original summaries (p = 0.001 and p = 0.002, respectively). Both prompts demonstrated superior adherence to journal guidelines (p < 0.001), with error-free LS rates of 29.4% (original), 76.1% (basic prompt), and 92% (advanced prompt) (p < 0.001).</p><p><strong>Conclusion: </strong>ChatGPT‑4 is a validated and effective tool for generating LS, offering superior readability and high compliance with editorial guidelines. It has the potential to assist researchers and scientific journals in enhancing the accessibility and comprehensibility of scientific content, thereby, improving patient engagement and understanding.</p>","PeriodicalId":29782,"journal":{"name":"Urologie","volume":" ","pages":""},"PeriodicalIF":0.5000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"[Making prostate cancer research accessible: chatGPT-4 as a tool to enhance lay communication].\",\"authors\":\"Maximilian Haas, Veronika Saberi, Christopher Gossler, Anna Schmelzer, Anton Kravchuk, Johannes Breyer, Johannes Bründl, Simon Engelmann, Clemens Kirschner, Christian Gilfrich, Maximilian Burger, Dominik von Winning, Christian Wülfing, Hendrik Borgmann, Severin Rodler, Axel S Merseburger, Emily Rinderknecht, Matthias May\",\"doi\":\"10.1007/s00120-025-02558-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background and objective: </strong>The abundant potential arising from various applications of artificial intelligence is gradually influencing academic and scientific communication. 
This study examines the suitability of ChatGPT‑4 for generating layperson's summaries (LS) of scientific articles published in the four journals within the European Urology family and compares the quality of these newly generated LS with the original texts.</p><p><strong>Methods: </strong>A total of 327 articles on prostate cancer published between January 1, 2023, and June 30, 2024, were analyzed. ChatGPT‑4 generated patient summaries using both a basic and an advanced prompt, the latter specifically optimized for enhancing readability. Readability was assessed using established indices, while two blinded reviewers evaluated content quality on a 5-point Likert scale. Additionally, readability, content quality, and adherence to journal guidelines were combined into an overall scoring system.</p><p><strong>Results: </strong>The advanced prompt led to significantly improved readability compared to the basic prompt (p < 0.001) and the original LS (p < 0.001). Content quality was comparable between the two ChatGPT‑4 prompts (p = 0.665) but was higher than that of the original summaries (p = 0.001 and p = 0.002, respectively). Both prompts demonstrated superior adherence to journal guidelines (p < 0.001), with error-free LS rates of 29.4% (original), 76.1% (basic prompt), and 92% (advanced prompt) (p < 0.001).</p><p><strong>Conclusion: </strong>ChatGPT‑4 is a validated and effective tool for generating LS, offering superior readability and high compliance with editorial guidelines. It has the potential to assist researchers and scientific journals in enhancing the accessibility and comprehensibility of scientific content, thereby, improving patient engagement and understanding.</p>\",\"PeriodicalId\":29782,\"journal\":{\"name\":\"Urologie\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.5000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Urologie\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00120-025-02558-w\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Urologie","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00120-025-02558-w","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

[Making prostate cancer research accessible: chatGPT-4 as a tool to enhance lay communication].

Background and objective: The abundant potential arising from various applications of artificial intelligence is gradually influencing academic and scientific communication. This study examines the suitability of ChatGPT‑4 for generating layperson's summaries (LS) of scientific articles published in the four journals within the European Urology family and compares the quality of these newly generated LS with the original texts.

Methods: A total of 327 articles on prostate cancer published between January 1, 2023, and June 30, 2024, were analyzed. ChatGPT‑4 generated patient summaries using both a basic and an advanced prompt, the latter specifically optimized for enhancing readability. Readability was assessed using established indices, while two blinded reviewers evaluated content quality on a 5-point Likert scale. Additionally, readability, content quality, and adherence to journal guidelines were combined into an overall scoring system.
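
The abstract does not disclose the study's actual prompts or the specific readability indices used. As a rough illustration of the pipeline described above, the sketch below generates a lay summary with a basic and a readability-optimized "advanced" prompt via the OpenAI Chat Completions API and scores the output with the Flesch Reading Ease index from the textstat package; the prompt wording, model ID, and choice of index are assumptions, not the study's materials.

```python
# Illustrative sketch only: the study's actual prompts and readability
# indices are not given in the abstract; everything below is an assumption.
from openai import OpenAI   # pip install openai
import textstat             # pip install textstat

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BASIC_PROMPT = "Summarize this prostate cancer article for patients:"
ADVANCED_PROMPT = (
    "Summarize this prostate cancer article for patients with no medical "
    "background. Use short sentences and everyday words, and explain any "
    "unavoidable medical term in parentheses."
)

def lay_summary(article_text: str, prompt: str) -> str:
    """Generate a layperson's summary (LS) for one article."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study used ChatGPT-4; the exact model ID is assumed
        messages=[
            {"role": "system", "content": "You write plain-language medical summaries."},
            {"role": "user", "content": f"{prompt}\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content

def readability(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    return textstat.flesch_reading_ease(text)

if __name__ == "__main__":
    article = open("example_article.txt").read()  # hypothetical input file
    for name, prompt in [("basic", BASIC_PROMPT), ("advanced", ADVANCED_PROMPT)]:
        summary = lay_summary(article, prompt)
        print(name, round(readability(summary), 1))
```

In the study, such per-article readability values were combined with blinded content-quality ratings and a guideline-adherence check into an overall score; that aggregation step is not sketched here because its weighting is not reported in the abstract.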

Results: The advanced prompt led to significantly improved readability compared to the basic prompt (p < 0.001) and the original LS (p < 0.001). Content quality was comparable between the two ChatGPT‑4 prompts (p = 0.665) but was higher than that of the original summaries (p = 0.001 and p = 0.002, respectively). Both prompts demonstrated superior adherence to journal guidelines (p < 0.001), with error-free LS rates of 29.4% (original), 76.1% (basic prompt), and 92% (advanced prompt) (p < 0.001).
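
The abstract reports paired comparisons between the prompt variants and the original summaries but does not name the statistical tests used. The minimal sketch below shows one plausible analysis under that caveat: a paired nonparametric comparison (Wilcoxon signed-rank test) of per-article readability scores; the data are placeholders and the choice of test is an assumption.

```python
# Minimal sketch, assuming paired per-article readability scores exist for the
# original LS and the two ChatGPT-4 prompt variants. The abstract does not
# state which test the authors used; the Wilcoxon signed-rank test is simply
# one common choice for paired, non-normally distributed scores.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_articles = 327  # number of prostate cancer articles analyzed in the study

# Placeholder scores; in the real analysis these would be measured values.
original = rng.normal(40, 10, n_articles)
basic_prompt = original + rng.normal(10, 5, n_articles)
advanced_prompt = original + rng.normal(20, 5, n_articles)

for label, scores in [("basic vs original", basic_prompt),
                      ("advanced vs original", advanced_prompt)]:
    stat, p = wilcoxon(scores, original)
    print(f"{label}: W = {stat:.1f}, p = {p:.3g}")
```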

Conclusion: ChatGPT‑4 is a validated and effective tool for generating LS, offering superior readability and high compliance with editorial guidelines. It has the potential to assist researchers and scientific journals in enhancing the accessibility and comprehensibility of scientific content, thereby improving patient engagement and understanding.

Source journal: Urologie (UROLOGY & NEPHROLOGY)
CiteScore: 1.00
Self-citation rate: 0.00%