The quality and readability of patient information provided by ChatGPT: can AI reliably explain common ENT operations?

IF 1.9 | CAS Medicine Tier 3 | JCR Q2, Otorhinolaryngology
Michel Abou-Abdallah, Talib Dar, Yasamin Mahmudzade, Joshua Michaels, Rishi Talwar, Chrysostomos Tornari
{"title":"The quality and readability of patient information provided by ChatGPT: can AI reliably explain common ENT operations?","authors":"Michel Abou-Abdallah, Talib Dar, Yasamin Mahmudzade, Joshua Michaels, Rishi Talwar, Chrysostomos Tornari","doi":"10.1007/s00405-024-08598-w","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Access to high-quality and comprehensible patient information is crucial. However, information provided by increasingly prevalent Artificial Intelligence tools has not been thoroughly investigated. This study assesses the quality and readability of information from ChatGPT regarding three index ENT operations: tonsillectomy, adenoidectomy, and grommets.</p><p><strong>Methods: </strong>We asked ChatGPT standard and simplified questions. Readability was calculated using Flesch-Kincaid Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI) and Simple Measure of Gobbledygook (SMOG) scores. We assessed quality using the DISCERN instrument and compared these with ENT UK patient leaflets.</p><p><strong>Results: </strong>ChatGPT readability was poor, with mean FRES of 38.9 and 55.1 pre- and post-simplification, respectively. Simplified information from ChatGPT was 43.6% more readable (FRES) but scored 11.6% lower for quality. ENT UK patient information readability and quality was consistently higher.</p><p><strong>Conclusions: </strong>ChatGPT can simplify information at the expense of quality, resulting in shorter answers with important omissions. Limitations in knowledge and insight curb its reliability for healthcare information. Patients should use reputable sources from professional organisations alongside clear communication with their clinicians for well-informed consent and making decisions.</p>","PeriodicalId":11952,"journal":{"name":"European Archives of Oto-Rhino-Laryngology","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Archives of Oto-Rhino-Laryngology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00405-024-08598-w","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/26 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"OTORHINOLARYNGOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose: Access to high-quality and comprehensible patient information is crucial. However, information provided by increasingly prevalent Artificial Intelligence tools has not been thoroughly investigated. This study assesses the quality and readability of information from ChatGPT regarding three index ENT operations: tonsillectomy, adenoidectomy, and grommet insertion.

Methods: We asked ChatGPT standard and simplified questions. Readability was calculated using Flesch-Kincaid Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI) and Simple Measure of Gobbledygook (SMOG) scores. We assessed quality using the DISCERN instrument and compared these with ENT UK patient leaflets.
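For context, all four readability measures used here are closed-form formulas over sentence, word, and syllable counts. The Python sketch below applies the standard published formulas; it is not the authors' tooling, the syllable counter is a naive vowel-group heuristic (validated tools count syllables more carefully), and the GFI "complex word" count is simplified to any word of three or more syllables.

```python
import re

def count_syllables(word: str) -> int:
    """Naive vowel-group syllable counter (rough heuristic only)."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a likely silent final 'e'
    return max(n, 1)

def readability_scores(text: str) -> dict:
    """FRES, FKGL, GFI and SMOG from raw text via the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent, n_words = len(sentences), len(words)
    syllables = [count_syllables(w) for w in words]
    n_syll = sum(syllables)
    # Simplification: "complex"/polysyllabic = 3+ syllables
    # (true GFI also excludes proper nouns, compounds, common suffixes)
    n_poly = sum(1 for s in syllables if s >= 3)

    wps = n_words / n_sent   # mean words per sentence
    spw = n_syll / n_words   # mean syllables per word

    return {
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        "GFI":  0.4 * (wps + 100 * n_poly / n_words),
        "SMOG": 1.0430 * (n_poly * 30 / n_sent) ** 0.5 + 3.1291,
    }

print(readability_scores(
    "A grommet is a tiny tube placed in the eardrum. "
    "It lets air enter the middle ear and fluid drain away."
))
```

Note that higher FRES means easier text (scores of 60-70 correspond roughly to plain English), whereas FKGL, GFI and SMOG estimate the school grade level required, so lower is easier.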

Results: ChatGPT readability was poor, with mean FRES of 38.9 and 55.1 pre- and post-simplification, respectively. Simplified information from ChatGPT was 43.6% more readable (FRES) but scored 11.6% lower for quality. ENT UK patient information readability and quality were consistently higher.

Conclusions: ChatGPT can simplify information, but at the expense of quality, producing shorter answers with important omissions. Limitations in knowledge and insight curb its reliability as a source of healthcare information. Patients should combine reputable sources from professional organisations with clear communication with their clinicians for well-informed consent and decision-making.

Source journal metrics:
CiteScore: 5.30
Self-citation rate: 7.70%
Articles per year: 537
Review time: 2-4 weeks
Journal description: Official Journal of the European Union of Medical Specialists – ORL Section and Board; Official Journal of the Confederation of European Oto-Rhino-Laryngology Head and Neck Surgery. "European Archives of Oto-Rhino-Laryngology" publishes original clinical reports and clinically relevant experimental studies, as well as short communications presenting new results of special interest. With peer review by a respected international editorial board and prompt English-language publication, the journal provides rapid dissemination of information by authors from around the world. This makes it the journal of choice for readers who want to stay informed about the state of the art in the basic sciences and in the diagnosis and management of diseases of the head and neck at an international level. European Archives of Oto-Rhino-Laryngology was founded in 1864 as "Archiv für Ohrenheilkunde" by A. von Tröltsch, A. Politzer and H. Schwartze.