Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties

IF 1.6 · CAS Zone 4 (Medicine) · JCR Q2 (Otorhinolaryngology)
Akshay Warrier, Rohan P. Singh, Afash Haleem, Andrew Lee, David Mothy, Aakash Patel, Jean Anderson Eloy, Brian Manzi
{"title":"医院在线患者教育材料在耳鼻喉科专科的可读性","authors":"Akshay Warrier,&nbsp;Rohan P. Singh,&nbsp;Afash Haleem,&nbsp;Andrew Lee,&nbsp;David Mothy,&nbsp;Aakash Patel,&nbsp;Jean Anderson Eloy,&nbsp;Brian Manzi","doi":"10.1002/lio2.70101","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Introduction</h3>\n \n <p>This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national otolaryngology organizations, while assessing AI alternatives.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology—Head and Neck Surgery (AAO), American Laryngological Association (ALA), Ear, Nose, and Throat United Kingdom (ENTUK), and the Canadian Society of Otolaryngology—Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 links from hospitals per procedure. Ownership (private/public), presence of respective otolaryngology fellowships, region, and median household income (zip code) were collected. Readability was assessed using seven indices and averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch–Kincaid Grade Level (FKGL), Gunning Fog Readability (GFR), Simple Measure of Gobbledygook (SMOG), Coleman–Liau Readability Index (CLRI), and Linsear Write Readability Formula (LWRF). AI-generated materials from ChatGPT were compared for readability, accuracy, content, and tone. Analyses were conducted between subspecialties, against national organizations, NIH standard, and across demographic variables.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Across 144 hospitals, OPEMs exceeded NIH readability standards, averaging at an 8th–12th grade level across subspecialties. In rhinology, facial plastics, and sleep medicine, hospital OPEMs had higher readability scores than ENTUK's materials (11.4 vs. 9.1, 10.4 vs. 7.2, 11.5 vs. 9.2, respectively; all <i>p</i> &lt; 0.05), but lower than AAO (<i>p</i> = 0.005). ChatGPT-generated materials averaged a 6.8-grade level, demonstrating improved readability, especially with specialized prompting, compared to all hospital and organization OPEMs.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>OPEMs from all sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, while ChatGPT demonstrates the feasibility of producing more readable content. Otolaryngologists might consider using ChatGPT to generate patient-friendly materials, with caution, and advocate for national-level improvements in patient education readability.</p>\n </section>\n </div>","PeriodicalId":48529,"journal":{"name":"Laryngoscope Investigative Otolaryngology","volume":"10 1","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/lio2.70101","citationCount":"0","resultStr":"{\"title\":\"Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties\",\"authors\":\"Akshay Warrier,&nbsp;Rohan P. 
Singh,&nbsp;Afash Haleem,&nbsp;Andrew Lee,&nbsp;David Mothy,&nbsp;Aakash Patel,&nbsp;Jean Anderson Eloy,&nbsp;Brian Manzi\",\"doi\":\"10.1002/lio2.70101\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Introduction</h3>\\n \\n <p>This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national otolaryngology organizations, while assessing AI alternatives.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology—Head and Neck Surgery (AAO), American Laryngological Association (ALA), Ear, Nose, and Throat United Kingdom (ENTUK), and the Canadian Society of Otolaryngology—Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 links from hospitals per procedure. Ownership (private/public), presence of respective otolaryngology fellowships, region, and median household income (zip code) were collected. Readability was assessed using seven indices and averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch–Kincaid Grade Level (FKGL), Gunning Fog Readability (GFR), Simple Measure of Gobbledygook (SMOG), Coleman–Liau Readability Index (CLRI), and Linsear Write Readability Formula (LWRF). AI-generated materials from ChatGPT were compared for readability, accuracy, content, and tone. Analyses were conducted between subspecialties, against national organizations, NIH standard, and across demographic variables.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Across 144 hospitals, OPEMs exceeded NIH readability standards, averaging at an 8th–12th grade level across subspecialties. In rhinology, facial plastics, and sleep medicine, hospital OPEMs had higher readability scores than ENTUK's materials (11.4 vs. 9.1, 10.4 vs. 7.2, 11.5 vs. 9.2, respectively; all <i>p</i> &lt; 0.05), but lower than AAO (<i>p</i> = 0.005). ChatGPT-generated materials averaged a 6.8-grade level, demonstrating improved readability, especially with specialized prompting, compared to all hospital and organization OPEMs.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>OPEMs from all sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, while ChatGPT demonstrates the feasibility of producing more readable content. 
Otolaryngologists might consider using ChatGPT to generate patient-friendly materials, with caution, and advocate for national-level improvements in patient education readability.</p>\\n </section>\\n </div>\",\"PeriodicalId\":48529,\"journal\":{\"name\":\"Laryngoscope Investigative Otolaryngology\",\"volume\":\"10 1\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2025-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/lio2.70101\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Laryngoscope Investigative Otolaryngology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/lio2.70101\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OTORHINOLARYNGOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Laryngoscope Investigative Otolaryngology","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/lio2.70101","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OTORHINOLARYNGOLOGY","Score":null,"Total":0}
Cited by: 0

Abstract

Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties

Introduction

This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national otolaryngology organizations, while assessing AI alternatives.

Methods

Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology—Head and Neck Surgery (AAO), American Laryngological Association (ALA), Ear, Nose, and Throat United Kingdom (ENTUK), and the Canadian Society of Otolaryngology—Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 hospital links per procedure. Ownership (private/public), presence of the respective otolaryngology fellowship, region, and median household income (by zip code) were collected. Readability was assessed using seven indices and averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch–Kincaid Grade Level (FKGL), Gunning Fog Readability (GFR), Simple Measure of Gobbledygook (SMOG), Coleman–Liau Readability Index (CLRI), and Linsear Write Readability Formula (LWRF). AI-generated materials from ChatGPT were compared for readability, accuracy, content, and tone. Analyses compared readability between subspecialties, against national organizations and the NIH standard, and across demographic variables.
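
The abstract does not reproduce the scoring procedure, but the grade-level indices it names follow published formulas. As a minimal sketch, assuming the standard FKGL and ARI definitions and a naive vowel-group syllable counter (illustrative assumptions, not the authors' tooling), two of the seven indices can be computed and averaged like this:

```python
import re

def count_syllables(word: str) -> int:
    # Naive vowel-group heuristic (an assumption for illustration;
    # dedicated readability tools use dictionaries or finer rules).
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a silent final 'e'
    return max(n, 1)

def grade_level_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    n_s, n_w = len(sentences), len(words)

    # Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    fkgl = 0.39 * n_w / n_s + 11.8 * syllables / n_w - 15.59
    # Automated Readability Index: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43
    ari = 4.71 * chars / n_w + 0.5 * n_w / n_s - 21.43
    # The study averaged seven indices; only two are shown here for brevity.
    return {"FKGL": fkgl, "ARI": ari, "mean_grade": (fkgl + ari) / 2}

sample = ("A septoplasty straightens the wall of cartilage and bone "
          "between your nostrils. Most people go home the same day.")
print(grade_level_scores(sample))
```

A score near 6 would mean a sixth-grade reading level, which is consistent with the NIH recommendation the study uses as its benchmark.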

Results

Across 144 hospitals, OPEMs exceeded NIH readability standards, averaging an 8th- to 12th-grade reading level across subspecialties. In rhinology, facial plastics, and sleep medicine, hospital OPEMs had higher readability scores than ENTUK's materials (11.4 vs. 9.1, 10.4 vs. 7.2, and 11.5 vs. 9.2, respectively; all p < 0.05), but lower scores than AAO's (p = 0.005). ChatGPT-generated materials averaged a 6.8 grade level, demonstrating improved readability compared with all hospital and organization OPEMs, especially with specialized prompting.
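
The abstract does not name the statistical test behind these p-values. As one plausible sketch, hospital grade-level scores could be compared against a fixed organizational benchmark with a one-sample t-test; the hospital scores below are made up for the example, and only the 9.1 ENTUK rhinology figure comes from the abstract.

```python
from scipy import stats

# Hypothetical mean grade-level scores for hospital rhinology OPEMs
hospital_scores = [11.9, 10.8, 12.3, 11.0, 11.6, 10.9, 11.8, 11.2]
entuk_benchmark = 9.1  # ENTUK rhinology score reported in the abstract

t_stat, p_value = stats.ttest_1samp(hospital_scores, entuk_benchmark)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```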

Conclusion

OPEMs from all sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, while ChatGPT demonstrates the feasibility of producing more readable content. Otolaryngologists might cautiously consider using ChatGPT to generate patient-friendly materials and might advocate for national-level improvements in patient education readability.
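
The study notes that specialized prompting improved ChatGPT's readability, but the abstract does not reproduce the prompts. Below is a hedged sketch of what readability-targeted prompting might look like via the OpenAI Python client; the model name and prompt wording are assumptions, not the authors' protocol, and any generated handout would still need clinician review for accuracy.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative readability-targeted prompt; not the study's actual wording.
prompt = (
    "Write a patient education handout about septoplasty at a 6th-grade "
    "reading level. Use short sentences and common words, and explain any "
    "medical term the first time it appears."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the study used ChatGPT itself
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```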

Source journal: Laryngoscope Investigative Otolaryngology
CiteScore: 3.00 · Self-citation rate: 0.00% · Annual publications: 245 · Review turnaround: 11 weeks