{"title":"Neural Representation of Articulable and Inarticulable Novel Sound Contrasts: The Role of the Dorsal Stream.","authors":"David I Saltzman, Emily B Myers","doi":"10.1162/nol_a_00016","DOIUrl":"https://doi.org/10.1162/nol_a_00016","url":null,"abstract":"<p><p>The extent that articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the \"dorsal stream\") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After reaching comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as participants passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions <i>outside</i> of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds compared to the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories are more likely drivers of dorsal stream activation for novel sound learning.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"339-364"},"PeriodicalIF":3.2,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40477989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linking Lysosomal Enzyme Targeting Genes and Energy Metabolism with Altered Gray Matter Volume in Children with Persistent Stuttering.","authors":"Ho Ming Chow, Emily O Garnett, Hua Li, Andrew Etchell, Jorge Sepulcre, Dennis Drayna, Diane Chugani, Soo-Eun Chang","doi":"10.1162/nol_a_00017","DOIUrl":"https://doi.org/10.1162/nol_a_00017","url":null,"abstract":"<p><p>Developmental stuttering is a childhood onset neurodevelopmental disorder with an unclear etiology. Subtle changes in brain structure and function are present in both children and adults who stutter. It is a highly heritable disorder, and 12-20% of stuttering cases may carry a mutation in one of four genes involved in intracellular trafficking. To better understand the relationship between genetics and neuroanatomical changes, we used gene expression data from the Allen Institute for Brain Science and voxel-based morphometry to investigate the spatial correspondence between gene expression patterns and differences in gray matter volume between children with persistent stuttering (<i>n</i> = 26, and 87 scans) and their fluent peers (<i>n</i> = 44, and 139 scans). We found that the expression patterns of two stuttering-related genes (<i>GNPTG</i> and <i>NAGPA</i>) from the Allen Institute data exhibited a strong positive spatial correlation with the magnitude of between-group gray matter volume differences. Additional gene set enrichment analyses revealed that genes whose expression was highly correlated with the gray matter volume differences were enriched for glycolysis and oxidative metabolism in mitochondria. Because our current study did not examine the participants' genomes, these results cannot establish the direct association between genetic mutations and gray matter volume differences in stuttering. However, our results support further study of the involvement of lysosomal enzyme targeting genes, as well as energy metabolism in stuttering. Future studies assessing variations of these genes in the participants' genomes may lead to increased understanding of the biological mechanisms of the observed spatial relationship between gene expression and gray matter volume.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"365-380"},"PeriodicalIF":3.2,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8138901/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39021457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task-Induced Functional Connectivity of Picture Naming in Healthy Aging: The Impacts of Age and Task Complexity","authors":"P. Ferré, J. Jarret, S. Brambati, Pierre Bellec, Y. Joanette","doi":"10.1162/nol_a_00007","DOIUrl":"https://doi.org/10.1162/nol_a_00007","url":null,"abstract":"The topological organization of the brain, governed by the capacity of brain regions to synchronize their activity, allows for cost-effective performance during everyday cognitive activity. Functional connectivity is an fMRI method deemed task-specific and demand-dependent. Although the brain undergoes significant changes during healthy aging, conceptual knowledge and word-production accuracy are generally preserved. The exploration of task-induced functional connectivity patterns during active picture naming may thus provide additional information about healthy functional cerebral mechanisms that are specifically adapted to the cognitive activity at hand. The goal of this study is to assess and describe age-related differences in functional connectivity during an overt picture-naming task, as well as to compare age-related differences under complex task demand, defined by lexical frequency. Results suggest both age-specific and task-specific mechanisms. In the context of preserved behavioral performance in a picture-naming task, older adults show a complex array of differences in functional connectivity architecture, including both increases and decreases. In brief, there is increased segregation and specialization of regions that are classically assigned to naming processes. Results also expand on previous word-production studies and suggest that motor regions are particularly subject to age-related differences. This study also provides the first indication that intrinsic task demand, as manipulated by lexical frequency, interacts little with the relationship between age and functional connectivity. Together, these findings confirm the value of task-induced functional connectivity analysis in revealing the brain organization that subserves task performance during healthy aging.","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":"1 1","pages":"161-184"},"PeriodicalIF":3.2,"publicationDate":"2020-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43920891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lexical Access in Naming and Reading: Spatiotemporal Localization of Semantic Facilitation and Interference Using MEG","authors":"Julien Dirani, L. Pylkkänen","doi":"10.1162/nol_a_00008","DOIUrl":"https://doi.org/10.1162/nol_a_00008","url":null,"abstract":"Naming an object involves quick retrieval of a target word from long-term memory. Research using the semantic interference paradigm has shown that objects take longer to name when they are preceded by primes in the same semantic category. This has been interpreted as reflecting either competition during lexical selection or as an interference effect at a later, postlexical level. Since the behavioral finding has been a core argument for the existence of competition during lexical selection in naming, understanding its processing level is important for models of language production. We used MEG to determine the spatiotemporal localization of the interference effect. We also compared its neural signature to the effect of semantic relatedness in reading, in which relatedness is expected to speed up behavioral responses and reduce activity in the left superior temporal cortex at around 200–300 ms. This is exactly what we found. However, in naming, we observed a more complex pattern for our semantically related targets. First, the angular gyrus showed a facilitory pattern at 300–400 ms, likely reflecting aspects of lexical access. This was followed by a broadly distributed and sustained interference pattern that lasted until articulatory stages. More transient interference effects were also observed at 395–485 ms in the left STG and at ∼100–200 ms before articulation in the parietal cortex. Thus, our findings suggest that the semantic interference effect originates from both early and late sources, which may explain its varying localizations in previous literature.","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":"1 1","pages":"185-207"},"PeriodicalIF":3.2,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47847137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition","authors":"Chad S. Rogers, Michael S. Jones, Sarah McConkey, Brent Spehar, Kristin J. Van Engen, M. Sommers, J. Peelle","doi":"10.1101/2020.03.05.977306","DOIUrl":"https://doi.org/10.1101/2020.03.05.977306","url":null,"abstract":"Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which the brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically-defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":"1 1","pages":"452 - 473"},"PeriodicalIF":3.2,"publicationDate":"2020-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42118411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain structures and cognitive abilities important for the self-monitoring of speech errors.","authors":"Ayan S Mandal, Mackenzie E Fama, Laura M Skipper-Kallal, Andrew T DeMarco, Elizabeth H Lacey, Peter E Turkeltaub","doi":"10.1162/nol_a_00015","DOIUrl":"https://doi.org/10.1162/nol_a_00015","url":null,"abstract":"<p><p>The brain structures and cognitive abilities necessary for successful monitoring of one's own speech errors remain unknown. We aimed to inform self-monitoring models by examining the neural and behavioral correlates of phonological and semantic error detection in individuals with post-stroke aphasia. First, we determined whether detection related to other abilities proposed to contribute to monitoring according to various theories, including naming ability, fluency, word-level auditory comprehension, sentence-level auditory comprehension, and executive function. Regression analyses revealed that fluency and executive scores were independent predictors of phonological error detection, while a measure of word-level comprehension related to semantic error detection. Next, we used multivariate lesion-symptom mapping to determine lesion locations associated with reduced error detection. Reduced overall error detection related to damage to a region of frontal white matter extending into dorsolateral prefrontal cortex (DLPFC). Detection of phonological errors related to damage to the same areas, but the lesion-behavior association was stronger, suggesting the localization for overall error detection was driven primarily by phonological error detection. These findings demonstrate that monitoring of different error types relies on distinct cognitive functions, and provide causal evidence for the importance of frontal white matter tracts and DLPFC for self-monitoring of speech.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"319-338"},"PeriodicalIF":3.2,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39540702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Going the Extra Mile: Effects of Discourse Context on Two Late Positivities During Language Comprehension.","authors":"Trevor Brothers, Eddie W Wlotko, Lena Warnke, Gina R Kuperberg","doi":"10.1162/nol_a_00006","DOIUrl":"https://doi.org/10.1162/nol_a_00006","url":null,"abstract":"<p><p>During language comprehension, online neural processing is strongly influenced by the constraints of the prior context. While the N400 ERP response (300-500ms) is known to be sensitive to a word's semantic predictability, less is known about a set of late positive-going ERP responses (600-1000ms) that can be elicited when an incoming word violates strong predictions about upcoming content (<i>late frontal positivity</i>) or about what is possible given the prior context (<i>late posterior positivity/P600</i>). Across three experiments, we systematically manipulated the length of the prior context and the source of lexical constraint to determine their influence on comprehenders' online neural responses to these two types of prediction violations. In Experiment 1, within minimal contexts, both lexical prediction violations and semantically anomalous words produced a larger N400 than expected continuations (<i>James unlocked the door/laptop/gardener</i>), but no late positive effects were observed. Critically, the <i>late posterior positivity/P600</i> to semantic anomalies appeared when these same sentences were embedded within longer discourse contexts (Experiment 2a), and the <i>late frontal positivity</i> appeared to lexical prediction violations when the preceding context was rich and globally constraining (Experiment 2b). We interpret these findings within a hierarchical generative framework of language comprehension. This framework highlights the role of comprehension goals and broader linguistic context, and how these factors influence both top-down prediction and the decision to update or reanalyze the prior context when these predictions are violated.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"135-160"},"PeriodicalIF":3.2,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38082825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Components of Reading Revealed by Distributed and Symbolic Computational Models.","authors":"Ryan Staples, William W Graves","doi":"10.1162/nol_a_00018","DOIUrl":"https://doi.org/10.1162/nol_a_00018","url":null,"abstract":"<p><p>Determining how the cognitive components of reading - orthographic, phonological, and semantic representations - are instantiated in the brain has been a longstanding goal of psychology and human cognitive neuroscience. The two most prominent computational models of reading instantiate different cognitive processes, implying different neural processes. Artificial neural network (ANN) models of reading posit non-symbolic, distributed representations. The dual-route cascaded (DRC) model instead suggests two routes of processing, one representing symbolic rules of spelling-sound correspondence, the other representing orthographic and phonological lexicons. These models are not adjudicated by behavioral data and have never before been directly compared in terms of neural plausibility. We used representational similarity analysis to compare the predictions of these models to neural data from participants reading aloud. Both the ANN and DRC model representations corresponded with neural activity. However, ANN model representations correlated to more reading-relevant areas of cortex. When contributions from the DRC model were statistically controlled, partial correlations revealed that the ANN model accounted for significant variance in the neural data. The opposite analysis, examining the variance explained by the DRC model with contributions from the ANN model factored out, revealed no correspondence to neural activity. Our results suggest that ANNs trained using distributed representations provide a better correspondence between cognitive and neural coding. Additionally, this framework provides a principled approach for comparing computational models of cognitive function to gain insight into neural representations.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"381-401"},"PeriodicalIF":3.2,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1162/nol_a_00018","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40668041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition.","authors":"Chad S Rogers, Michael S Jones, Sarah McConkey, Brent Spehar, Kristin J Van Engen, Mitchell S Sommers, Jonathan E Peelle","doi":"10.1162/nol_a_00021","DOIUrl":"https://doi.org/10.1162/nol_a_00021","url":null,"abstract":"<p><p>Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":" ","pages":"452-473"},"PeriodicalIF":3.2,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8318202/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39259104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}