Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery

IF 2.5 · CAS Tier 3 (Medicine) · Q1 OTORHINOLARYNGOLOGY
American Journal of Rhinology & Allergy · Pub Date: 2024-11-01 · Epub Date: 2024-08-21 · DOI: 10.1177/19458924241273055
Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan
{"title":"评估经人工智能修改和生成的内窥镜颅底手术患者教育材料的可读性、可靠性和质量。","authors":"Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan","doi":"10.1177/19458924241273055","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below sixth-grade literacy, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries but its utility in improving patient education materials has not been explored.</p><p><strong>Objective: </strong>To examine the current state of readability and quality of online patient education materials and determined the utility of ChatGPT for improving articles and generating patient education materials.</p><p><strong>Methods: </strong>An article search was performed utilizing 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT and iterative queries were used to generate an article <i>de novo</i>. The Flesch Reading Ease (FRE) and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.</p><p><strong>Results: </strong>Sixty-six articles were located. ChatGPT improved FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, <i>p</i> < 0.001), from university to 10th grade level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and higher quality than 94% (51.0 vs. 37.6 ± 6.1). 56.7% of the online articles had \"poor\" quality.</p><p><strong>Conclusions: </strong>ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate more reliable and higher quality patient education materials compared to most existing online articles and can be tailored to match readability of average online articles.</p>","PeriodicalId":7650,"journal":{"name":"American Journal of Rhinology & Allergy","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery.\",\"authors\":\"Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan\",\"doi\":\"10.1177/19458924241273055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below sixth-grade literacy, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. 
ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries but its utility in improving patient education materials has not been explored.</p><p><strong>Objective: </strong>To examine the current state of readability and quality of online patient education materials and determined the utility of ChatGPT for improving articles and generating patient education materials.</p><p><strong>Methods: </strong>An article search was performed utilizing 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT and iterative queries were used to generate an article <i>de novo</i>. The Flesch Reading Ease (FRE) and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.</p><p><strong>Results: </strong>Sixty-six articles were located. ChatGPT improved FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, <i>p</i> < 0.001), from university to 10th grade level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and higher quality than 94% (51.0 vs. 37.6 ± 6.1). 56.7% of the online articles had \\\"poor\\\" quality.</p><p><strong>Conclusions: </strong>ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate more reliable and higher quality patient education materials compared to most existing online articles and can be tailored to match readability of average online articles.</p>\",\"PeriodicalId\":7650,\"journal\":{\"name\":\"American Journal of Rhinology & Allergy\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American Journal of Rhinology & Allergy\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/19458924241273055\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/8/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"OTORHINOLARYNGOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Rhinology & Allergy","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/19458924241273055","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/8/21 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"OTORHINOLARYNGOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below a sixth-grade reading level, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.

Objective: To examine the current state of readability and quality of online patient education materials and to determine the utility of ChatGPT for improving existing articles and generating new patient education materials.

Methods: An article search was performed using 10 different search terms related to ESBS. The 10 least readable existing patient-facing articles were modified with ChatGPT, and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) score and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.
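For context, the Flesch Reading Ease score used above is a fixed formula over average sentence length and average syllables per word: FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words), with higher scores indicating easier text. The Python sketch below is a minimal illustration of that formula, not the scoring tool the authors used; the vowel-group syllable counter is a simplifying assumption, and validated readability calculators syllabify more carefully.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of consecutive vowels (illustrative
    # assumption; validated readability tools use more careful syllabification).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Standard FRE formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

sample = ("The surgeon removes the tumor through the nose. "
          "Most patients go home within a few days.")
print(round(flesch_reading_ease(sample), 1))  # Higher score = easier to read.
```

Off-the-shelf packages such as textstat expose the same metric; the hand-rolled version here simply makes the formula explicit.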

Results: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from a university reading level to a 10th-grade level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1). Of the online articles, 56.7% were of "poor" quality.
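The improvement reported above, from roughly 19.7 to 56.9, maps onto the conventional Flesch score bands, which is how "university" versus "10th grade" reading levels are derived. A minimal sketch of that commonly cited banding follows; the thresholds are the standard Flesch interpretation, not a scale defined in this study.

```python
def fre_to_grade_band(score: float) -> str:
    """Map a Flesch Reading Ease score to its conventional grade-level band."""
    bands = [
        (90, "5th grade (very easy)"),
        (80, "6th grade (easy)"),
        (70, "7th grade (fairly easy)"),
        (60, "8th-9th grade (plain English)"),
        (50, "10th-12th grade (fairly difficult)"),
        (30, "college (difficult)"),
    ]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "college graduate (very difficult)"

print(fre_to_grade_band(19.7))  # least readable online articles: university level
print(fre_to_grade_band(56.9))  # ChatGPT-modified articles: roughly 10th grade
```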

Conclusions: ChatGPT improves the readability of articles, though most still remain above the recommended reading level for patient education materials. With iterative queries, ChatGPT can generate patient education materials that are more reliable and of higher quality than most existing online articles, and these can be tailored to match the readability of the average online article.

Source journal: American Journal of Rhinology & Allergy
CiteScore: 5.60
Self-citation rate: 11.50%
Articles published: 82
Review time: 4-8 weeks
Journal description: The American Journal of Rhinology & Allergy is a peer-reviewed, scientific publication committed to expanding knowledge and publishing the best clinical and basic research within the fields of Rhinology & Allergy. Its focus is to publish information which contributes to improved quality of care for patients with nasal and sinus disorders. Its primary readership consists of otolaryngologists, allergists, and plastic surgeons. Published material includes peer-reviewed original research, clinical trials, and review articles.