Utilization of Artificial Intelligence in the Creation of Patient Information on Laryngology Topics

Laryngoscope · Pub Date: 2024-11-06 · DOI: 10.1002/lary.31891
IF 2.2 · JCR Q3 (Medicine, Research & Experimental) · CAS Tier 3 (Medicine)
Quynh-Lam Tran, Pauline P Huynh, Bryan Le, Nancy Jiang
Citations: 0

Abstract

Utilization of Artificial Intelligence in the Creation of Patient Information on Laryngology Topics.

Objective: To evaluate and compare the readability and quality of patient information generated by Chat-Generative Pre-Trained Transformer-3.5 (ChatGPT) and the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) using validated instruments including Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease, DISCERN, and Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P).
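Both readability instruments named above are computed from the same two ratios: words per sentence and syllables per word. As an illustration only (the study does not describe its scoring software), a minimal Python sketch with a naive vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels; every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch-Kincaid Grade Level, Flesch Reading Ease) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    return fkgl, fre
```

Production work would typically use a dedicated library (e.g., the `textstat` package), whose syllable counting is dictionary-backed and more accurate than this heuristic.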

Methods: ENTHealth.org and ChatGPT-3.5 were queried for patient information on laryngology topics. ChatGPT-3.5 was queried twice for a given topic to evaluate for reliability. This generated three de-identified text documents for each topic: one from AAO-HNS and two from ChatGPT (ChatGPT Output 1, ChatGPT Output 2). Grade level and reading ease were compared between the three sources using a one-way analysis of variance and Tukey's post hoc test. Independent t-tests were used to compare DISCERN and PEMAT understandability and actionability scores between AAO-HNS and ChatGPT Output 1.
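The statistical workflow described above (one-way ANOVA with Tukey's post hoc test across three sources, plus independent t-tests for two-source comparisons) can be sketched in Python with SciPy. The scores below are fabricated for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical FKGL scores per source (illustrative values only)
aao = rng.normal(8.0, 1.0, 12)    # AAO-HNS documents
gpt1 = rng.normal(11.0, 1.0, 12)  # ChatGPT Output 1
gpt2 = rng.normal(11.2, 1.0, 12)  # ChatGPT Output 2

# One-way ANOVA across the three sources
f_stat, p_anova = stats.f_oneway(aao, gpt1, gpt2)

# Tukey's HSD post hoc test for pairwise comparisons
tukey = stats.tukey_hsd(aao, gpt1, gpt2)

# Independent t-test between two sources (e.g., DISCERN scores
# for AAO-HNS vs. ChatGPT Output 1)
t_stat, p_t = stats.ttest_ind(aao, gpt1)
```

`stats.tukey_hsd` returns a matrix of pairwise p-values, so the specific source pairs driving a significant ANOVA can be read off directly.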

Results: Material generated by ChatGPT Output 1 and ChatGPT Output 2 was at least two reading grade levels higher than material from AAO-HNS (p < 0.001). Regarding reading ease, ChatGPT Output 1 and ChatGPT Output 2 documents had significantly lower mean scores than AAO-HNS (p < 0.001). Moreover, ChatGPT Output 1 material on vocal cord paralysis had lower PEMAT-P understandability than AAO-HNS material (p > 0.05).

Conclusion: Patient information on the ENTHealth.org website for select laryngology topics was, on average, written at a lower grade level and with higher reading ease than that produced by ChatGPT, yet, interestingly, with largely no difference in the quality of the information provided.

Level of evidence: NA. Laryngoscope, 2024.

Source journal: Laryngoscope (Medicine – Otorhinolaryngology)
CiteScore: 6.50
Self-citation rate: 7.70%
Annual article output: 500
Review time: 2-4 weeks
Journal description: The Laryngoscope has been the leading source of information on advances in the diagnosis and treatment of head and neck disorders since 1890. The Laryngoscope is the first choice among otolaryngologists for publication of their important findings and techniques. Each monthly issue of The Laryngoscope features peer-reviewed medical, clinical, and research contributions in general otolaryngology, allergy/rhinology, otology/neurotology, laryngology/bronchoesophagology, head and neck surgery, sleep medicine, pediatric otolaryngology, facial plastics and reconstructive surgery, oncology, and communicative disorders. Contributions include papers and posters presented at the Annual and Section Meetings of the Triological Society, as well as independent papers, "How I Do It", "Triological Best Practice" articles, and contemporary reviews. Theses authored by the Triological Society's new Fellows, as well as papers presented at meetings of the American Laryngological Association, are published in The Laryngoscope. Coverage includes:
• Broncho-esophagology
• Communicative disorders
• Head and neck surgery
• Plastic and reconstructive facial surgery
• Oncology
• Speech and hearing defects