Connectivity of fronto-temporal regions in syntactic structure building during speaking and listening
Laura Giglio, D. Sharoh, M. Ostarek, Peter Hagoort
Neurobiology of Language, published 2024-07-18. DOI: https://doi.org/10.1162/nol_a_00154

Abstract: The neural infrastructure for sentence production and comprehension has been found to be mostly shared. The same regions are engaged during speaking and listening, with some differences in how strongly they activate depending on modality. In this study, we investigated how modality affects the connectivity between regions previously found to be involved in syntactic processing across modalities. We determined how constituent size and modality affected the connectivity of the pars triangularis of the left inferior frontal gyrus (LIFG) and of the left posterior temporal lobe (LPTL) with the pars opercularis of the LIFG, the left anterior temporal lobe (LATL), and the rest of the brain. We found that constituent size reliably increased the connectivity across these frontal and temporal ROIs. Connectivity between the two LIFG regions and the LPTL was enhanced as a function of constituent size in both modalities, and it was further upregulated in production, possibly because of linearization and motor planning in the frontal cortex. The connectivity of both ROIs with the LATL was lower and only enhanced for larger constituent sizes, suggesting a contributing role of the LATL in sentence processing in both modalities. These results thus show that the connectivity among fronto-temporal regions is upregulated for syntactic structure building in both sentence production and comprehension, providing further evidence for accounts of shared neural resources for sentence-level processing across modalities.

{"title":"Small but mighty: Ten myths and misunderstandings about the cerebellum","authors":"J. Fiez, Catherine J. Stoodley","doi":"10.1162/nol_e_00152","DOIUrl":"https://doi.org/10.1162/nol_e_00152","url":null,"abstract":"\u0000 This special issue of Neurobiology of Language focuses on the role of the cerebellum in spoken and written language comprehension and production. The volume brings together behavioral and neural evidence bearing upon this question using an array of methods. As editors, we are excited by the collective impact of this work, which includes recent findings from many of the leading researchers who study the cerebellum and language. We also find ourselves pondering the term “special” as a reflection of the widespread tendency of brain researchers to comfortably relegate the cerebellum to a minor role in cognition. As a result, our 21st-century understanding of the cognitive neuroscience of the cerebellum is not yet consistently recognized by the field, leading to an under-appreciation of the cerebellar contributions to language beyond its role in the coordination of articulation. Here we offer a “top ten” list aimed at countering some of the myths and misunderstandings that keep it out of the limelight.","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141683560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can the mismatch negativity really be elicited by abstract linguistic contrasts?","authors":"Stephen Politzer-Ahles, B. Jap","doi":"10.1162/nol_a_00147","DOIUrl":"https://doi.org/10.1162/nol_a_00147","url":null,"abstract":"\u0000 The mismatch negativity (MMN) is an ERP component that reflects pre-attentive change detection in the brain. As an electrophysiological index of processing that responds to differences in incoming consecutive stimuli, the MMN can be elicited through, for example, the presentation of two different categories of sounds in an oddball paradigm where sounds from the \"standard\" category occur frequently and sounds from the \"deviant\" category occur rarely. The specificity of what can elicit the MMN is yet to be fully defined. Here we test whether the MMN can be generated by an abstract linguistic contrast with no reliable acoustic cue. Previous studies have shown that the way in which an acoustic cue is used to elicit MMN is influenced by linguistic knowledge, but have not shown that a non-acoustic, abstract linguistic contrast can itself elicit MMN. In this study, we test the strongest interpretation of the claim that the MMN can be generated through a purely linguistic contrast, by contrasting tenses in ablauting irregular English verbs (where there is no reliable acoustic cue for tense). We find that this contrast elicits a negativity, as do other linguistic contrasts previously shown to elicit MMN. The findings provide evidence that the MMN is indeed sensitive to purely abstract linguistic categories.","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141362893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subword representations successfully decode brain responses to morphologically complex written words
Tero Hakala, Tiina Lindh-Knuutila, Annika Hultén, Minna Lehtonen, R. Salmelin
Neurobiology of Language, published 2024-06-10. DOI: https://doi.org/10.1162/nol_a_00149

Abstract: This study extends the idea of decoding word-evoked brain activations using a corpus-semantic vector space to multimorphemic words in the agglutinative Finnish language. The corpus-semantic models are trained on word segments, and decoding is carried out with word vectors that are composed of these segments. We tested several alternative vector-space models using different segmentations: no segmentation (whole word), linguistic morphemes, statistical morphemes, random segmentation, and character-level 1-, 2-, and 3-grams, and paired them with recorded MEG responses to multimorphemic words in a visual word recognition task. For all variants, the decoding accuracy exceeded the standard word-label permutation-based significance thresholds at 350–500 ms after stimulus onset. However, the critical segment-label permutation test revealed that only those segmentations that were morphologically aware reached significance in the brain decoding task. The results suggest that both whole-word forms and morphemes are represented in the brain and show that neural decoding using corpus-semantic word representations derived from compositional subword segments is also applicable to multimorphemic word forms. This is especially relevant for languages with complex morphology, because a large proportion of word forms are rare and it can be difficult to find statistically reliable surface representations for them in any large corpus.

Dissociating cerebellar regions involved in formulating and articulating words and sentences
Oiwi Parker Jones, Sharon Geva, S. Prejawa, T. Hope, M. Oberhuber, Mohamed L. Seghier, David W. Green, Cathy J. Price
Neurobiology of Language, published 2024-06-10. DOI: https://doi.org/10.1162/nol_a_00148

Abstract: This fMRI study of healthy volunteers investigated which parts of the cerebellum are involved in formulating and articulating sentences, using three sentence-based tasks: (i) a sentence production task that involved describing simple events in pictures (e.g., "The goat is eating the hat"); (ii) an auditory sentence repetition task involving the same sentence articulation but not sentence formulation; and (iii) an auditory sentence-to-picture matching task that involved the same pictorial events and no overt articulation. Activation for each of these tasks was compared to the equivalent word processing tasks: noun production (object naming), verb production (naming the verb in pictorial events), auditory noun repetition, and auditory noun-to-picture matching. Auditory and visual semantic association tasks were also included, in the same within-subjects design, to control for visual and auditory working memory and semantic processing.

Three distinct cerebellar regions were activated by sentence production compared to noun and verb production. First, we associate activation in bilateral cerebellum lobule VIIb with sequencing words into sentences, as well as phonemes into words, because it increased for sentence production compared to all other conditions, including sentence repetition and sentence-to-picture matching, and was also activated by word production compared to word matching. Second, we associate a paravermal part of right cerebellar lobule VIIIb with overt motor execution of speech, because activation was higher during (i) production and repetition of sentences compared to the corresponding noun conditions, and (ii) noun and verb production compared to all matching tasks, with no activation relative to fixation during any silent (non-speaking) matching task. Third, we associate activation within right cerebellar Crus II with covert articulatory activity, because it was activated for (i) all speech production more than matching tasks, and (ii) sentences compared to nouns during silent (non-speaking) matching as well as during sentence production and sentence repetition.

As all three regions were activated during word production tasks, our study serendipitously segregated, for the first time, three distinct functional roles for the cerebellum in generic speech production, and demonstrates how sentence production enhanced the demands on these three cerebellar speech production regions.

Tracking Components of Bilingual Language Control in Speech Production: An fMRI Study Using Functional Localizers
Agata Wolna, Jakub Szewczyk, Michele Diaz, Aleksandra Domagalik, Marcin Szwed, Zofia Wodniecka
Neurobiology of Language, published online 2024-06-03 (eCollection 2024). DOI: https://doi.org/10.1162/nol_a_00128
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093400/pdf/

Abstract: When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish-English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm, manifested as the L2 after-effect, reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.

Approaches to Measuring Language Lateralisation: An Exploratory Study Comparing Two fMRI Methods and Functional Transcranial Doppler Ultrasound
Dorothy V M Bishop, Zoe V J Woodhead, Kate E Watkins
Neurobiology of Language, published online 2024-06-03 (eCollection 2024). DOI: https://doi.org/10.1162/nol_a_00136
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11192441/pdf/

Abstract: In this exploratory study we compare and contrast two methods for deriving a laterality index (LI) from functional magnetic resonance imaging (fMRI) data: the weighted bootstrapped mean from the LI Toolbox (toolbox method), and a novel method that uses subtraction of activations from homologous regions in left and right hemispheres to give an array of difference scores (mirror method). Data came from 31 individuals who had been selected to include a high proportion of people with atypical laterality when tested with functional transcranial Doppler ultrasound (fTCD). On two tasks, word generation and semantic matching, the mirror method generally gave better agreement with fTCD laterality than the toolbox method, both for individual regions of interest and for a large region corresponding to the middle cerebral artery. LI estimates from this method had much smaller confidence intervals (CIs) than those from the toolbox method; with the mirror method, most participants were reliably lateralised to left or right, whereas with the toolbox method, a higher proportion were categorised as bilateral (i.e., the CI for the LI spanned zero). Reasons for discrepancies between fMRI methods are discussed: one issue is that the toolbox method averages the LI across a wide range of thresholds. Furthermore, examination of task-related t-statistic maps from the two hemispheres showed that language lateralisation is evident in regions characterised by deactivation, and so key information may be lost by ignoring voxel activations below zero, as is done with conventional estimates of the LI.

Differences in Cortical Surface Area in Developmental Language Disorder
Nilgoun Bahar, Gabriel J Cler, Saloni Krishnan, Salomi S Asaridou, Harriet J Smith, Hanna E Willis, Máiréad P Healy, Kate E Watkins
Neurobiology of Language, published online 2024-06-03 (eCollection 2024). DOI: https://doi.org/10.1162/nol_a_00127
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093399/pdf/

Abstract: Approximately 7% of children have developmental language disorder (DLD), a neurodevelopmental condition associated with persistent language learning difficulties without a known cause. Our understanding of the neurobiological basis of DLD is limited. Here, we used FreeSurfer to investigate cortical surface area and thickness in a large cohort of 156 children and adolescents aged 10–16 years with a range of language abilities, including 54 with DLD, 28 with a history of speech-language difficulties who did not meet criteria for DLD, and 74 age-matched controls with typical language development (TD). We also examined cortical asymmetries in DLD using an automated surface-based technique. Relative to the TD group, those with DLD showed smaller surface area bilaterally in the inferior frontal gyrus extending to the anterior insula, in the posterior temporal and ventral occipito-temporal cortex, and in portions of the anterior cingulate and superior frontal cortex. Analysis of the whole cohort using a language proficiency factor revealed that language ability correlated positively with surface area in similar regions. There were no differences in cortical thickness, nor in asymmetry of these cortical metrics, between TD and DLD. This study highlights the importance of distinguishing between surface area and cortical thickness in investigating the brain basis of neurodevelopmental disorders and suggests that the development of cortical surface area is of particular relevance to DLD. Future longitudinal studies are required to understand the developmental trajectory of these cortical differences in DLD and how they relate to language maturation.

Reactive Inhibitory Control Precedes Overt Stuttering Events
Joan Orpella, Graham Flick, M Florencia Assaneo, Ravi Shroff, Liina Pylkkänen, David Poeppel, Eric S Jackson
Neurobiology of Language, published online 2024-06-03 (eCollection 2024). DOI: https://doi.org/10.1162/nol_a_00138
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11192511/pdf/

Abstract: Research points to neurofunctional differences underlying fluent speech between stutterers and non-stutterers. Considerably less work has focused on processes that underlie stuttered vs. fluent speech. Additionally, most of this research has focused on speech motor processes despite contributions from cognitive processes prior to the onset of stuttered speech. We used MEG to test the hypothesis that reactive inhibitory control is triggered prior to stuttered speech. Twenty-nine stutterers completed a delayed-response task that featured a cue (prior to a go cue) signaling the imminent requirement to produce a word that was either stuttered or fluent. Consistent with our hypothesis, we observed increased beta power, likely emanating from the right pre-supplementary motor area (R-preSMA), an area implicated in reactive inhibitory control, in response to the cue preceding stuttered vs. fluent productions. Beta power differences between stuttered and fluent trials correlated with stuttering severity, and participants' percentage of trials stuttered increased exponentially with beta power in the R-preSMA. Trial-by-trial beta power modulations in the R-preSMA following the cue predicted whether a trial would be stuttered or fluent. Stuttered trials were also associated with delayed speech onset, suggesting an overall slowing or freezing of the speech motor system that may be a consequence of inhibitory control. Post hoc analyses revealed that independently generated anticipated words were associated with greater beta power and more stuttering than researcher-assisted anticipated words, pointing to a relationship between self-perceived likelihood of stuttering (i.e., anticipation) and inhibitory control. This work offers a neurocognitive account of stuttering by characterizing cognitive processes that precede overt stuttering events.

Pars Opercularis Underlies Efferent Predictions and Successful Auditory Feedback Processing in Speech: Evidence From Left-Hemisphere Stroke
Sara D Beach, Ding-Lan Tang, Swathi Kiran, Caroline A Niziolek
Neurobiology of Language, published online 2024-06-03 (eCollection 2024). DOI: https://doi.org/10.1162/nol_a_00139
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11192514/pdf/

Abstract: Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere motor planning regions are thought to give rise to efferent predictions that can be compared to true feedback in sensory cortices, resulting in neural suppression commensurate with the degree of overlap between predicted and actual sensations. Sensory prediction errors thus serve as a possible mechanism of detection of deviant speech sounds, which can then feed back into corrective action, allowing for online control of speech acoustics. The goal of this study was to assess the integrity of this detection-correction circuit in persons with aphasia (PWA) whose left-hemisphere lesions may limit their ability to control variability in speech output. We recorded magnetoencephalography (MEG) while 15 PWA and age-matched controls spoke monosyllabic words and listened to playback of their utterances. From this, we measured speaking-induced suppression of the M100 neural response and related it to lesion profiles and speech behavior. Both speaking-induced suppression and cortical sensitivity to deviance were preserved at the group level in PWA. PWA with more spared tissue in pars opercularis had greater left-hemisphere neural suppression and greater behavioral correction of acoustically deviant pronunciations, whereas sparing of superior temporal gyrus was not related to neural suppression or acoustic behavior. In turn, PWA who made greater corrections had fewer overt speech errors in the MEG task. Thus, the motor planning regions that generate the efferent prediction are integral to performing corrections when that prediction is violated.
