Functional brain networks predicting sustained attention are not specific to perceptual modality.

Anna Corriveau, Jin Ke, Hiroki Terashima, Hirohito M Kondo, Monica D Rosenberg
{"title":"预测持续注意的功能性脑网络并不特定于知觉模态。","authors":"Anna Corriveau, Jin Ke, Hiroki Terashima, Hirohito M Kondo, Monica D Rosenberg","doi":"10.1162/netn_a_00430","DOIUrl":null,"url":null,"abstract":"<p><p>Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study, we test whether connectome-based models predict performance on attention tasks performed in different modalities. We show first that a predefined network trained to predict adults' <i>visual</i> sustained attention performance generalizes to predict <i>auditory</i> sustained attention performance in three independent datasets (<i>N</i> <sub>1</sub> = 29, <i>N</i> <sub>2</sub> = 60, <i>N</i> <sub>3</sub> = 17). Next, we train new network models to predict performance on visual and auditory attention tasks separately. We find that functional networks are largely modality general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention.</p>","PeriodicalId":48520,"journal":{"name":"Network Neuroscience","volume":"9 1","pages":"303-325"},"PeriodicalIF":3.6000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11949588/pdf/","citationCount":"0","resultStr":"{\"title\":\"Functional brain networks predicting sustained attention are not specific to perceptual modality.\",\"authors\":\"Anna Corriveau, Jin Ke, Hiroki Terashima, Hirohito M Kondo, Monica D Rosenberg\",\"doi\":\"10.1162/netn_a_00430\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study, we test whether connectome-based models predict performance on attention tasks performed in different modalities. We show first that a predefined network trained to predict adults' <i>visual</i> sustained attention performance generalizes to predict <i>auditory</i> sustained attention performance in three independent datasets (<i>N</i> <sub>1</sub> = 29, <i>N</i> <sub>2</sub> = 60, <i>N</i> <sub>3</sub> = 17). Next, we train new network models to predict performance on visual and auditory attention tasks separately. 
We find that functional networks are largely modality general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention.</p>\",\"PeriodicalId\":48520,\"journal\":{\"name\":\"Network Neuroscience\",\"volume\":\"9 1\",\"pages\":\"303-325\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11949588/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Network Neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1162/netn_a_00430\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Network Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1162/netn_a_00430","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Abstract
Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study, we test whether connectome-based models predict performance on attention tasks performed in different modalities. We show first that a predefined network trained to predict adults' visual sustained attention performance generalizes to predict auditory sustained attention performance in three independent datasets (N1 = 29, N2 = 60, N3 = 17). Next, we train new network models to predict performance on visual and auditory attention tasks separately. We find that functional networks are largely modality general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention.
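The abstract summarizes the approach rather than specifying it, but the class of model it describes, connectome-based predictive modeling (CPM), follows a well-known recipe: correlate every functional connectivity edge with behavior across training subjects, keep the edges that pass a significance threshold, summarize each subject by the summed strength of the selected edges, and fit a linear model from that summary score to behavior. The sketch below illustrates that general recipe in Python; the function names, the p < 0.01 threshold, and the toy data are illustrative assumptions, not details taken from this paper.

```python
# Minimal CPM sketch (illustrative; not the authors' exact pipeline).
import numpy as np
from scipy import stats

def train_cpm(fc_edges, behavior, p_thresh=0.01):
    """Select behavior-correlated edges and fit a linear model.

    fc_edges : (n_subjects, n_edges) functional connectivity values
    behavior : (n_subjects,) e.g., sustained attention task scores
    """
    n_edges = fc_edges.shape[1]
    r = np.empty(n_edges)
    p = np.empty(n_edges)
    for e in range(n_edges):              # edgewise correlation with behavior
        r[e], p[e] = stats.pearsonr(fc_edges[:, e], behavior)
    pos_mask = (p < p_thresh) & (r > 0)   # edges stronger in better performers
    neg_mask = (p < p_thresh) & (r < 0)   # edges stronger in worse performers
    # Summarize each subject by network strength (sum of selected edges).
    strength = fc_edges[:, pos_mask].sum(1) - fc_edges[:, neg_mask].sum(1)
    slope, intercept = np.polyfit(strength, behavior, 1)
    return pos_mask, neg_mask, slope, intercept

def apply_cpm(fc_edges, pos_mask, neg_mask, slope, intercept):
    """Apply a trained model to an independent (e.g., auditory-task) dataset."""
    strength = fc_edges[:, pos_mask].sum(1) - fc_edges[:, neg_mask].sum(1)
    return slope * strength + intercept

# Toy usage: train on one dataset, predict scores in an independent one.
rng = np.random.default_rng(0)
train_fc, train_beh = rng.normal(size=(60, 500)), rng.normal(size=60)
test_fc = rng.normal(size=(29, 500))
model = train_cpm(train_fc, train_beh)
predicted = apply_cpm(test_fc, *model)   # compare to observed scores, e.g., via correlation
```

Cross-modality generalization of the kind the abstract reports amounts to training `train_cpm` on a visual-task dataset and evaluating `apply_cpm` predictions against observed scores in an auditory-task dataset, so the selected edge masks and linear fit are never refit to the test modality.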