{"title":"Linear-nonlinear Bernoulli modeling for quantifying temporal coding of phonemes in brain responses to continuous speech","authors":"Nathaniel J. Zuk, G. D. Liberto, E. Lalor","doi":"10.32470/ccn.2019.1192-0","DOIUrl":null,"url":null,"abstract":"The electroencephalographic (EEG) response to a sound of interest is often quantified by averaging time-locked signals over many repetitions in order to get an eventrelated potential (ERP). While this technique can identify an average response, it does not easily allow one to validate the robustness of that response nor variation of the response over repetitions of the sound. Here, we extend the ERP technique as a linear-nonlinear Bernoulli (LNB) model, inspired by neural models, in order to develop a framework for decoding the timing of stimulus events. We use this technique to analyze EEG recordings during presentations of continuous speech and examine neural responses to phonemes, which have been shown to have characteristic EEG responses. Pattern analysis of the confusion between phonemes separates phonemes into vowel and constants, indicating separate ERPs that can robustly predict these phoneme classes. We also find that vowels are decoded more accurately than consonants, and the time course of vowel predictability tracks the rhythm of vowels, while consonant predictability does not track the rhythm of consonants. Overall, we demonstrate a specific instance in which a linear-nonlinear Bernoulli modeling framework can be used to compare ERPs and quantify the ability to decode stimulus events from EEG.","PeriodicalId":281121,"journal":{"name":"2019 Conference on Cognitive Computational Neuroscience","volume":"105 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Conference on Cognitive Computational Neuroscience","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32470/ccn.2019.1192-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The electroencephalographic (EEG) response to a sound of interest is often quantified by averaging time-locked signals over many repetitions to obtain an event-related potential (ERP). While this technique can identify an average response, it does not easily allow one to validate the robustness of that response or to characterize how the response varies across repetitions of the sound. Here, we extend the ERP technique as a linear-nonlinear Bernoulli (LNB) model, inspired by neural models, in order to develop a framework for decoding the timing of stimulus events. We use this technique to analyze EEG recordings during presentations of continuous speech and examine neural responses to phonemes, which have been shown to evoke characteristic EEG responses. Pattern analysis of the confusion between phonemes separates them into vowels and consonants, indicating distinct ERPs that can robustly predict these phoneme classes. We also find that vowels are decoded more accurately than consonants, and that the time course of vowel predictability tracks the rhythm of vowels, while consonant predictability does not track the rhythm of consonants. Overall, we demonstrate a specific instance in which a linear-nonlinear Bernoulli modeling framework can be used to compare ERPs and quantify the ability to decode stimulus events from EEG.
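To make the modeling framework concrete, below is a minimal sketch of an LNB decoder for stimulus-event timing, based only on the abstract's description: a linear spatiotemporal filter applied to the EEG, a sigmoid nonlinearity, and a Bernoulli likelihood on binary event labels (here, phoneme onsets). Because the sigmoid plus Bernoulli log-likelihood is exactly the logistic-regression objective, fitting the linear filter by maximum likelihood reduces to regularized logistic regression over time-lagged EEG. The lag window, regularization strength, data shapes, and use of random data are illustrative assumptions, not the authors' actual settings.

```python
# Hypothetical sketch of a linear-nonlinear Bernoulli (LNB) decoder for
# phoneme-onset timing from EEG. All parameters (sampling rate, lag
# window, regularization) are assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of the EEG so each row at time t holds
    the preceding n_lags samples across all channels (the linear stage's
    spatiotemporal receptive field). eeg: (n_times, n_channels)."""
    n_times, n_channels = eeg.shape
    X = np.zeros((n_times, n_lags * n_channels))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_times - lag]
    return X

rng = np.random.default_rng(0)
fs = 128                                   # assumed EEG sampling rate (Hz)
n_times, n_channels = 60 * fs, 32          # one minute of 32-channel EEG
eeg = rng.standard_normal((n_times, n_channels))   # placeholder recording
onsets = rng.random(n_times) < 0.02        # placeholder binary onset train

# Linear stage: 250 ms of lags. Nonlinear + Bernoulli stages are the
# sigmoid and Bernoulli likelihood inside logistic regression.
X = lagged_design(eeg, n_lags=int(0.25 * fs))
model = LogisticRegression(C=1.0, max_iter=1000).fit(X, onsets)

# Decoding: p[t] is the predicted probability of a phoneme onset at t;
# its time course can be compared against the stimulus rhythm.
p = model.predict_proba(X)[:, 1]
```

In this reading, the learned filter weights play the role of the ERP template, while the per-sample probabilities give a likelihood-based measure of how decodable each event class is, which is what allows the framework to compare ERPs across phoneme classes rather than only average them.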