Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery

Michael Warn, Leo L T Meller, Daniella Chan, Sina J Torabi, Benjamin F Bitner, Bobby A Tajudeen, Edward C Kuan

American Journal of Rhinology & Allergy, 2024, pp. 396-402 (Epub 2024-08-21; published 2024-11-01). DOI: 10.1177/19458924241273055
Citations: 0
Abstract
Background: Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below a sixth-grade literacy level, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.
Objective: To examine the current state of readability and quality of online patient education materials and to determine the utility of ChatGPT for improving existing articles and generating new patient education materials.
Methods: An article search was performed using 10 different search terms related to ESBS. The 10 least readable existing patient-facing articles were modified with ChatGPT, and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) score and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.
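For readers unfamiliar with the metric, the FRE score cited in the Methods is a standard formula based on sentence length and syllable density. The Python sketch below is illustrative only, assuming a naive regex-based syllable counter rather than the dictionary-backed counters used by published readability tools; it is not the authors' analysis code.

```python
import re


def count_syllables(word: str) -> int:
    """Approximate syllable count by vowel groups; real readability
    tools use dictionary-based counts, so treat this as a rough estimate."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1  # crude adjustment for a silent trailing "e"
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores indicate easier text; ~90-100 reads at a 5th-grade
    level, while 0-30 reads at a university level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (
        206.835
        - 1.015 * (len(words) / len(sentences))
        - 84.6 * (syllables / len(words))
    )
```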
Results: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from a university to a 10th-grade reading level. The generated article was more readable than 48.5% of articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1). Overall, 56.7% of the online articles were of "poor" quality.
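The grade-level interpretation in the Results follows the conventional Flesch score bands. The mapping below is a minimal sketch of those standard bands (not taken from the paper itself) and shows how the reported means of roughly 19.7 and 56.9 correspond to university and 10th- to 12th-grade reading levels, respectively.

```python
def fre_to_grade_band(score: float) -> str:
    """Map a Flesch Reading Ease score to the conventional grade-level bands."""
    bands = [
        (90, "5th grade"),
        (80, "6th grade"),
        (70, "7th grade"),
        (60, "8th-9th grade"),
        (50, "10th-12th grade"),
        (30, "college"),
    ]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "college graduate / professional"


# Pre-modification mean (~19.7) sits in the university-level band;
# post-modification mean (~56.9) sits in the 10th-12th grade band.
print(fre_to_grade_band(19.7))  # college graduate / professional
print(fre_to_grade_band(56.9))  # 10th-12th grade
```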
Conclusions: ChatGPT improves the readability of articles, though most remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate patient education materials that are more reliable and of higher quality than most existing online articles, and its output can be tailored to match the readability of the average online article.
About the journal:
The American Journal of Rhinology & Allergy is a peer-reviewed, scientific publication committed to expanding knowledge and publishing the best clinical and basic research within the fields of Rhinology & Allergy. Its focus is to publish information which contributes to improved quality of care for patients with nasal and sinus disorders. Its primary readership consists of otolaryngologists, allergists, and plastic surgeons. Published material includes peer-reviewed original research, clinical trials, and review articles.