Impaired neural encoding of naturalistic audiovisual speech in autism

Impact Factor 4.5 · CAS Tier 2 (Medicine) · JCR Q1 (Neuroimaging)
Theo Vanneau, Michael J. Crosse, John J. Foxe, Sophie Molholm
DOI: 10.1016/j.neuroimage.2025.121397
Journal: NeuroImage, Volume 318, Article 121397
Publication date: 2025-07-30
Publication type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S1053811925004008
Citations: 0

Abstract

Visual cues from a speaker’s face can significantly improve speech comprehension in noisy environments through multisensory integration (MSI)—the process by which the brain combines auditory and visual inputs. Individuals with Autism Spectrum Disorder (ASD), however, often show atypical MSI, particularly during speech processing, which may contribute to the social communication difficulties central to the diagnosis. Understanding the neural basis of impaired MSI in ASD, especially during naturalistic speech, is critical for developing targeted interventions. Most neurophysiological studies have relied on simplified speech stimuli (e.g., isolated syllables or words), limiting their ecological validity. In this study, we used high-density EEG and linear encoding and decoding models to assess the neural processing of continuous audiovisual speech in adolescents and young adults with ASD (N = 23) and age-matched typically developing controls (N = 19). Participants watched and listened to naturalistic speech under auditory-only, visual-only, and audiovisual conditions, with varying levels of background noise, and were tasked with detecting a target word. Linear models were used to quantify cortical tracking of the speech envelope and phonetic features. In the audiovisual condition, the ASD group showed reduced behavioral performance and weaker neural tracking of both acoustic and phonetic features, relative to controls. In contrast, in the auditory-only condition, increasing background noise reduced behavioral and model performance similarly across groups. These results provide, for the first time, converging behavioral and neurophysiological evidence of impaired multisensory enhancement for continuous, natural speech in ASD.
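The "linear encoding models" referred to here are typically temporal response functions (TRFs): ridge-regularized regressions from time-lagged stimulus features (e.g., the speech envelope) to each EEG channel. The sketch below is not the authors' exact pipeline; it is a minimal, assumption-laden illustration of the technique using synthetic data, with the lag range, ridge parameter, and toy signals chosen purely for demonstration.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix from a 1-D stimulus feature.

    Column j holds the stimulus delayed by lags[j] samples, so a linear
    model over the columns is a finite-impulse-response (TRF-style)
    encoding model of the neural response.
    """
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:n + lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regression estimate of the TRF weights for one EEG channel."""
    X = lagged_design(stim, lags)
    XtX = X.T @ X
    return np.linalg.solve(XtX + alpha * np.eye(X.shape[1]), X.T @ eeg)

# Toy example: the "EEG" is the stimulus delayed by 3 samples, plus noise,
# so the recovered TRF should peak at lag 3.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
eeg = np.roll(stim, 3) + 0.1 * rng.standard_normal(5000)
w = fit_trf(stim, eeg, lags=list(range(8)), alpha=1e-3)
print(int(np.argmax(np.abs(w))))  # peak weight sits at lag 3
```

Model performance in such studies is then scored by correlating the model's predicted EEG with held-out EEG; group differences in that correlation are what "weaker neural tracking" quantifies.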

Significance Statement

In adverse hearing conditions, seeing a speaker's face and their facial movements enhances speech comprehension through a process called multisensory integration, where the brain combines visual and auditory inputs to facilitate perception and communication. However, individuals with Autism Spectrum Disorder (ASD) often struggle with this process, particularly during speech comprehension. Previous findings using simple, discrete stimuli do not fully explain how the processing of continuous natural multisensory speech is affected in ASD. In our study, we used natural, continuous speech stimuli to compare the neural processing of various speech features in individuals with ASD and typically developing (TD) controls, across auditory and audiovisual conditions with varying levels of background noise. Our findings showed no group differences in the encoding of auditory-alone speech, with both groups similarly affected by increasing levels of noise. However, for audiovisual speech, individuals with ASD displayed reduced neural encoding of both the acoustic envelope and the phonetic features, marking neural processing impairment of continuous audiovisual multisensory speech in autism.
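The "acoustic envelope" tracked in analyses like this is commonly extracted as the magnitude of the analytic signal, low-pass filtered to the slow modulation range relevant to cortical tracking. The sketch below illustrates that standard approach on a synthetic amplitude-modulated tone; the 8 Hz cutoff and filter order are illustrative assumptions, not the paper's reported parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff=8.0):
    """Broadband amplitude envelope: |analytic signal|, low-pass filtered.

    A common proxy for the acoustic envelope that EEG tracking studies
    use; cutoff and filter order here are illustrative choices.
    """
    env = np.abs(hilbert(audio))                      # instantaneous amplitude
    b, a = butter(3, cutoff / (fs / 2), btype="low")  # keep slow modulations
    return filtfilt(b, a, env)                        # zero-phase filtering

# Toy check: a 200 Hz carrier amplitude-modulated at 4 Hz; the extracted
# envelope should closely match the true modulator.
fs = 1000
t = np.arange(fs * 2) / fs
carrier = np.sin(2 * np.pi * 200 * t)
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
env = speech_envelope(modulator * carrier, fs)
r = float(np.corrcoef(env, modulator)[0, 1])
print(f"correlation with true modulator: {r:.2f}")
```

In practice the envelope (and, for phonetic-feature models, a time-aligned matrix of binary phoneme-feature indicators) is downsampled to the EEG sampling rate before model fitting.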


Source journal: NeuroImage (Medicine – Nuclear Medicine)
CiteScore: 11.30
Self-citation rate: 10.50%
Annual article output: 809
Review time: 63 days
Journal description: NeuroImage, a Journal of Brain Function provides a vehicle for communicating important advances in acquiring, analyzing, and modelling neuroimaging data and in applying these techniques to the study of structure-function and brain-behavior relationships. Though the emphasis is on the macroscopic level of human brain organization, meso- and microscopic neuroimaging across all species will be considered if informative for understanding the aforementioned relationships.