Parallel Corpus Analysis of Text and Audio Comprehension to Evaluate Readability Formula Effectiveness: Quantitative Analysis.

Impact Factor: 6.0 | CAS Zone 2 (Medicine) | JCR Q1, Health Care Sciences & Services
Arif Ahmed, Gondy Leroy, David Kauchak, Prosanta Barai, Philip Harber, Stephen Rains
{"title":"Parallel Corpus Analysis of Text and Audio Comprehension to Evaluate Readability Formula Effectiveness: Quantitative Analysis.","authors":"Arif Ahmed, Gondy Leroy, David Kauchak, Prosanta Barai, Philip Harber, Stephen Rains","doi":"10.2196/69772","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Health literacy, the ability to understand and act on health information, is critical for patient outcomes and health care system effectiveness. While plain language guidelines enhance text-based communication, audio-based health information remains underexplored, despite the growing use of digital assistants and smart devices in health care. Traditional readability formulas, such as Flesch-Kincaid, provide limited insights into the complexity of health-related texts and fail to address challenges specific to audio formats. Factors like syntax and semantic features significantly influence comprehension and retention across modalities.</p><p><strong>Objective: </strong>This study investigates features that affect comprehension of medical information delivered via text or audio formats. We also examine existing readability formulas and their correlation with perceived and actual difficulty of health information for both modalities.</p><p><strong>Methods: </strong>We developed a parallel corpus of health-related information that differed in delivery format: text or audio. We used text from the British Medical Journal (BMJ) Lay Summary (n=193), WebMD (n=40), Patient Instruction (n=40), Simple Wikipedia (n=243), and BMJ journal (n=200). Participants (n=487) read or listened to a health text and then completed a questionnaire evaluating perceived difficulty of the text, measured using a 5-point Likert scale, and actual difficulty measured using multiple-choice and true-false questions (comprehension) as well as free recall of information (retention). Questions were generated by generative artificial intelligence (ChatGPT-4.0). Underlying syntactic, semantic, and domain-specific features, as well as common readability formulas, were evaluated for their relation to information difficulty.</p><p><strong>Results: </strong>Text versions were perceived as easier than audio, with BMJ Lay Summary scoring 1.76 versus 2.1 and BMJ journal 2.59 versus 2.83 (lower is easier). Comprehension accuracy was higher for text across all sources (eg, BMJ journal: 76% vs 58%; Patient Instructions: 86% vs 66%). Retention was better for text, with significant differences in exact word matching for Patient Instructions and BMJ journal. Longer texts increased perceived difficulty in text but reduced free recall in both modalities (-0.23,-0.25 in audio). Higher content word frequency improved retention (0.23, 0.21) and lowered perceived difficulty (-0.20 in audio). Verb-heavy content eased comprehension (-0.29 in audio), while nouns and adjectives increased difficulty (0.20, 0.18). Readability formulas' outcomes were unrelated to comprehension or retention, but correlated with perceived difficulty in text (eg, Smog Index: 0.334 correlation).</p><p><strong>Conclusions: </strong>Text was more effective for conveying complex health information, but audio can be suitable for easier content. In addition, several textual features affect information comprehension and retention for both modalities. Finally, existing readability formulas did not explain actual difficulty. 
This study highlighted the importance of tailoring health information delivery to content complexity by using appropriate style and modality.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"27 ","pages":"e69772"},"PeriodicalIF":6.0000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490814/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Internet Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/69772","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Health literacy, the ability to understand and act on health information, is critical for patient outcomes and health care system effectiveness. While plain language guidelines enhance text-based communication, audio-based health information remains underexplored, despite the growing use of digital assistants and smart devices in health care. Traditional readability formulas, such as Flesch-Kincaid, provide limited insight into the complexity of health-related texts and fail to address challenges specific to audio formats. Syntactic and semantic features significantly influence comprehension and retention across modalities.
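For context, the Flesch-Kincaid Grade Level referenced above combines average sentence length and average syllables per word. A minimal Python sketch is shown below; the regex-based syllable counter is a rough stand-in for a proper syllable dictionary and is included only for illustration.

```python
import re

def count_syllables(word: str) -> int:
    # Rough estimate: count runs of consecutive vowels (illustrative only).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(round(flesch_kincaid_grade(
    "The patient should take one tablet twice a day with food."), 1))
```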

Objective: This study investigates features that affect comprehension of medical information delivered via text or audio formats. We also examine existing readability formulas and their correlation with perceived and actual difficulty of health information for both modalities.

Methods: We developed a parallel corpus of health-related information that differed in delivery format: text or audio. We used text from the British Medical Journal (BMJ) Lay Summary (n=193), WebMD (n=40), Patient Instruction (n=40), Simple Wikipedia (n=243), and BMJ journal (n=200). Participants (n=487) read or listened to a health text and then completed a questionnaire evaluating perceived difficulty of the text, measured using a 5-point Likert scale, and actual difficulty measured using multiple-choice and true-false questions (comprehension) as well as free recall of information (retention). Questions were generated by generative artificial intelligence (ChatGPT-4.0). Underlying syntactic, semantic, and domain-specific features, as well as common readability formulas, were evaluated for their relation to information difficulty.
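As an illustration of the kind of correlation analysis the methods describe (not the authors' actual pipeline; the column names and values below are hypothetical), one could relate a readability score to per-document perceived difficulty and comprehension accuracy like this:

```python
# Illustrative sketch only; column names and values are hypothetical and do not
# reproduce the study's data or exact analysis.
import pandas as pd
from scipy.stats import spearmanr

# One row per document: readability score, mean 5-point Likert rating,
# and proportion of multiple-choice/true-false questions answered correctly.
df = pd.DataFrame({
    "readability_score":    [8.2, 11.5, 6.9, 13.1, 9.4],
    "perceived_difficulty": [2.1, 2.8, 1.7, 3.0, 2.4],
    "comprehension_acc":    [0.86, 0.66, 0.91, 0.58, 0.79],
})

for outcome in ["perceived_difficulty", "comprehension_acc"]:
    rho, p = spearmanr(df["readability_score"], df[outcome])
    print(f"readability_score vs {outcome}: rho={rho:.2f}, p={p:.3f}")
```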

Results: Text versions were perceived as easier than audio, with BMJ Lay Summary scoring 1.76 versus 2.1 and BMJ journal 2.59 versus 2.83 (lower is easier). Comprehension accuracy was higher for text across all sources (eg, BMJ journal: 76% vs 58%; Patient Instructions: 86% vs 66%). Retention was better for text, with significant differences in exact word matching for Patient Instructions and BMJ journal. Longer texts increased perceived difficulty in text but reduced free recall in both modalities (-0.23, -0.25 in audio). Higher content word frequency improved retention (0.23, 0.21) and lowered perceived difficulty (-0.20 in audio). Verb-heavy content eased comprehension (-0.29 in audio), while nouns and adjectives increased difficulty (0.20, 0.18). Readability formula outcomes were unrelated to comprehension or retention but correlated with perceived difficulty in text (eg, SMOG Index: 0.334 correlation).
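To make the feature names in these results concrete, the sketch below derives part-of-speech proportions and mean content-word frequency for a single document. spaCy and the wordfreq package are stand-ins chosen for illustration; the abstract does not specify which tools the authors used.

```python
# Illustrative only: spaCy (en_core_web_sm model must be installed) and wordfreq
# are stand-in tools, not necessarily those used in the study.
import spacy
from wordfreq import zipf_frequency

nlp = spacy.load("en_core_web_sm")

def surface_features(text: str) -> dict:
    doc = nlp(text)
    tokens = [t for t in doc if t.is_alpha]
    content = [t for t in tokens if t.pos_ in {"NOUN", "VERB", "ADJ", "ADV"}]
    n = len(tokens) or 1
    return {
        # Shares of nouns, verbs, and adjectives, which the results relate to
        # perceived difficulty and comprehension.
        "noun_ratio": sum(t.pos_ == "NOUN" for t in tokens) / n,
        "verb_ratio": sum(t.pos_ == "VERB" for t in tokens) / n,
        "adj_ratio": sum(t.pos_ == "ADJ" for t in tokens) / n,
        # Higher Zipf frequency means more common words, which the results link
        # to better retention and lower perceived difficulty.
        "mean_content_word_zipf": (
            sum(zipf_frequency(t.text.lower(), "en") for t in content)
            / (len(content) or 1)
        ),
    }

print(surface_features("Take one tablet by mouth twice daily with food."))
```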

Conclusions: Text was more effective for conveying complex health information, but audio can be suitable for easier content. In addition, several textual features affect information comprehension and retention for both modalities. Finally, existing readability formulas did not explain actual difficulty. This study highlighted the importance of tailoring health information delivery to content complexity by using appropriate style and modality.

Source journal: Journal of Medical Internet Research
CiteScore: 14.40
Self-citation rate: 5.40%
Articles published: 654
Review time: 1 month
About the journal: The Journal of Medical Internet Research (JMIR) is a highly respected publication in health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades. The journal focuses on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research; it ranks in the first quartile (Q1) by Impact Factor and is ranked #1 on Google Scholar within the "Medical Informatics" discipline.