{"title":"Word-object and action-object learning in a unimodal context during early childhood","authors":"Sarah Eiteljoerge, Birgit Elsner, Nivedita Mani","doi":"10.1017/langcog.2024.7","DOIUrl":"https://doi.org/10.1017/langcog.2024.7","url":null,"abstract":"Word-object and action-object learning in children aged 30 to 48 months appears to develop at a similar time scale and adheres to similar attentional constraints. However, children below 36 months show different patterns of learning word-object and action-object associations when this information is presented in a bimodal context (Eiteljoerge et al., 2019b). Here, we investigated 12- and 24-month-olds’ word-object and action-object learning when this information is presented in a unimodal context. Forty 12- and 24-month-olds were presented with two novel objects that were either first associated with a novel label (word learning task) and then later with a novel action (action learning task) or vice versa. In subsequent yoked test phases, children either heard one of the novel labels or saw a hand performing one of the actions presented with the two objects on screen while we measured their target looking. Generalized linear mixed models indicate that 12-month-olds learned action-object associations but not word-object associations and 24-month-olds learned neither word- nor action-object associations. These results extend previous findings (Eiteljoerge et al., 2019b) and, together, suggest that children appear to learn action-object associations early in development while struggling with learning word-object associations in certain contexts until 2 years of age.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"51 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140167685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language and executive function relationships in the real world: insights from deafness","authors":"Mario Figueroa, Nicola Botting, Gary Morgan","doi":"10.1017/langcog.2024.10","DOIUrl":"https://doi.org/10.1017/langcog.2024.10","url":null,"abstract":"<p>Executive functions (EFs) in both regulatory and meta-cognitive contexts are important for a wide variety of children’s daily activities, including play and learning. Despite the growing literature supporting the relationship between EF and language, few studies have focused on these links during everyday behaviours. Data were collected on 208 children from 6 to 12 years old of whom 89 were deaf children (55% female; <span>M</span> = 8;8; <span>SD</span> = 1;9) and 119 were typically hearing children (56% female, <span>M</span> = 8;9; <span>SD</span> = 1;5). Parents completed two inventories: to assess EFs and language proficiency. Parents of deaf children reported greater difficulties with EFs in daily activities than those of hearing children. Correlation analysis between EFs and language showed significant levels only in the deaf group, especially in relation to meta-cognitive EFs. The results are discussed in terms of the role of early parent–child interaction and the relevance of EFs for everyday conversational situations.</p>","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"69 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The influences of narrative perspective shift and scene detail on narrative semantic processing","authors":"Jian Jin, Siyun Liu","doi":"10.1017/langcog.2024.9","DOIUrl":"https://doi.org/10.1017/langcog.2024.9","url":null,"abstract":"The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study aims to explore this issue under the narrative context with and without perspective shift, which is an important and common linguistic factor in narratives. A sentence-picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift made the participants’ to evenly allocate their first fixation to different elements in the scene following the new perspective; (2) the internal–external perspective shift increased the participants’ total fixation count when they read the sentence with the perspective shift; (3) the scene detail depicted in the picture did not influence the process of narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of situation model and increase the cognitive load of readers during reading. Moreover, scene detail could not be constructed by readers in natural narrative reading.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"82 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of consciousness in Chinese nominal metaphor processing: a psychophysical approach","authors":"Kaiwen Cheng, Yu Chen, Hongmei Yan, Ling Wang","doi":"10.1017/langcog.2023.67","DOIUrl":"https://doi.org/10.1017/langcog.2023.67","url":null,"abstract":"Conceptual metaphor theory (CMT) holds that most conceptual metaphors are processed unconsciously. However, whether multiple words can be integrated into a holistic metaphoric sentence without consciousness remains controversial in cognitive science and psychology. This study aims to investigate the role of consciousness in processing Chinese nominal metaphoric sentences ‘<jats:italic>A是B</jats:italic>’ <jats:italic>(A is[like]</jats:italic> B) with a psychophysical experimental paradigm referred to as breaking continuous flash suppression (b-CFS). We manipulated sentence types (metaphoric, literal and anomalous) and word forms (upright, inverted) in a two-staged experiment (CFS and non-CFS). No difference was found in the breakthrough times among all three types of sentences in the CFS stage, while literal sentences were detected more slowly than either metaphoric or anomalous sentences in the non-CFS stage. The results suggest that the integration of multiple words may not succeed without the participation of consciousness, let alone metaphoric processing. These findings may redefine ‘unconscious’ in CMT as ‘preconscious’ and support the indirect access view regarding how the metaphoric meaning is processed in the brain.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"95 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prosody of focus in Turkish Sign Language","authors":"Serpil Karabüklü, Aslı Gürer","doi":"10.1017/langcog.2024.4","DOIUrl":"https://doi.org/10.1017/langcog.2024.4","url":null,"abstract":"Prosodic realization of focus has been a widely investigated topic across languages and modalities. Simultaneous focus strategies are intriguing to see how they interact regarding their functional and temporal alignment. We explored the multichannel (manual and nonmanual) realization of focus in Turkish Sign Language. We elicited data with focus type, syntactic roles and movement type variables from 20 signers. The results revealed the focus is encoded via increased duration in manual signs, and nonmanuals do not necessarily accompany focused signs. With a multichanneled structure, sign languages use two available channels or opt for one to express focushood.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"33 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140047143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrasting the semantic space of ‘shame’ and ‘guilt’ in English and Japanese","authors":"Eugenia Diegoli, Emily Öhman","doi":"10.1017/langcog.2024.6","DOIUrl":"https://doi.org/10.1017/langcog.2024.6","url":null,"abstract":"This article sheds light on the significant yet nuanced roles of shame and guilt in influencing moral behaviour, a phenomenon that became particularly prominent during the COVID-19 pandemic with the community’s heightened desire to be seen as moral. These emotions are central to human interactions, and the question of how they are conveyed linguistically is a vast and important one. Our study contributes to this area by analysing the discourses around shame and guilt in English and Japanese online forums, focusing on the terms <jats:italic>shame</jats:italic>, <jats:italic>guilt</jats:italic>, <jats:italic>haji</jats:italic> (‘shame’) and <jats:italic>zaiakukan</jats:italic> (‘guilt’). We utilise a mix of corpus-based methods and natural language processing tools, including word embeddings, to examine the contexts of these emotion terms and identify semantically similar expressions. Our findings indicate both overlaps and distinct differences in the semantic landscapes of shame and guilt within and across the two languages, highlighting nuanced ways in which these emotions are expressed and distinguished. This investigation provides insights into the complex dynamics between emotion words and the internal states they denote, suggesting avenues for further research in this linguistically rich area.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"11 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140019326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Better letter: iconicity in the manual alphabets of American Sign Language and Swedish Sign Language","authors":"Carl Börstell","doi":"10.1017/langcog.2024.5","DOIUrl":"https://doi.org/10.1017/langcog.2024.5","url":null,"abstract":"While iconicity has sometimes been defined as meaning transparency, it is better defined as a subjective phenomenon bound to an individual’s perception and influenced by their previous language experience. In this article, I investigate the subjective nature of iconicity through an experiment in which 72 deaf, hard-of-hearing and hearing (signing and non-signing) participants rate the iconicity of individual letters of the American Sign Language (ASL) and Swedish Sign Language (STS) manual alphabets. It is shown that L1 signers of ASL and STS rate their own (L1) manual alphabet as more iconic than the foreign one. Hearing L2 signers of ASL and STS exhibit the same pattern as L1 signers, showing an iconic preference for their own (L2) manual alphabet. In comparison, hearing non-signers show no general iconic preference for either manual alphabet. Across all groups, some letters are consistently rated as more iconic in one sign language than the other, illustrating general iconic preferences. Overall, the results align with earlier findings from sign language linguistics that point to language experience affecting iconicity ratings and that one’s own signs are rated as more iconic than foreign signs with the same meaning, even if similar iconic mappings are used.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"13 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140019524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Backchannel behavior is idiosyncratic","authors":"Peter Blomsma, Julija Vaitonyté, Gabriel Skantze, Marc Swerts","doi":"10.1017/langcog.2024.1","DOIUrl":"https://doi.org/10.1017/langcog.2024.1","url":null,"abstract":"<p>In spoken conversations, speakers and their addressees constantly seek and provide different forms of audiovisual feedback, also known as backchannels, which include nodding, vocalizations and facial expressions. It has previously been shown that addressees backchannel at specific points during an interaction, namely after a speaker provided a cue to elicit feedback from the addressee. However, addressees may differ in the frequency and type of feedback that they provide, and likewise, speakers may vary the type of cues they generate to signal the backchannel opportunity points (BOPs). Research on the extent to which backchanneling is idiosyncratic is scant. In this article, we quantify and analyze the variability in feedback behavior of 14 addressees who all interacted with the same speaker stimulus. We conducted this research by means of a previously developed experimental paradigm that generates spontaneous interactions in a controlled manner. Our results show that (1) backchanneling behavior varies between listeners (some addressees are more active than others) and (2) backchanneling behavior varies between BOPs (some points trigger more responses than others). We discuss the relevance of these results for models of human–human and human–machine interactions.</p>","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"19 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139921497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effects of word and beat priming on Mandarin lexical stress recognition: an event-related potential study","authors":"Wenjing Yu, Yu-Fu Chien, Bing Wang, Jianjun Zhao, Weijun Li","doi":"10.1017/langcog.2023.75","DOIUrl":"https://doi.org/10.1017/langcog.2023.75","url":null,"abstract":"Music and language are unique communication tools in human society, where stress plays a crucial role. Many studies have examined the recognition of lexical stress in Indo-European languages using beat/rhythm priming, but few studies have examined the cross-domain relationship between musical and linguistic stress in tonal languages. The current study investigates how musical stress and lexical stress influence lexical stress recognition in Mandarin. In the auditory priming experiment, disyllabic Mandarin words with initial or final stress were primed by disyllabic words or beats with either congruent or incongruent stress patterns. Results showed that the incongruent condition elicited larger P2 and the late positive component (LPC) amplitudes than the congruent condition. Moreover, the Strong-Weak primes elicited larger N400 amplitudes than the Weak-Strong primes, and the Weak-Strong primes yielded larger LPC amplitudes than the Strong-Weak primes. The findings reveal the neural correlates of the cross-domain influence between music and language during lexical stress recognition in Mandarin.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"50 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139751135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does word knowledge account for the effect of world knowledge on pronoun interpretation?","authors":"Cameron R. Jones, Benjamin Bergen","doi":"10.1017/langcog.2024.2","DOIUrl":"https://doi.org/10.1017/langcog.2024.2","url":null,"abstract":"To what extent can statistical language knowledge account for the effects of world knowledge in language comprehension? We address this question by focusing on a core aspect of language understanding: pronoun resolution. While existing studies suggest that comprehenders use world knowledge to resolve pronouns, the distributional hypothesis and its operationalization in large language models (LLMs) provide an alternative account of how purely linguistic information could drive apparent world knowledge effects. We addressed these confounds in two experiments. In Experiment 1, we found a strong effect of world knowledge plausibility (measured using a norming study) on responses to comprehension questions that probed pronoun interpretation. In experiment 2, participants were slower to read continuations that contradicted world knowledge-consistent interpretations of a pronoun, implying that comprehenders deploy world knowledge spontaneously. Both effects persisted when controlling for the predictions of GPT-3, an LLM, suggesting that pronoun interpretation is at least partly driven by knowledge about the world and not the word. We propose two potential mechanisms by which knowledge-driven pronoun resolution occurs, based on validation- and expectation-driven discourse processes. The results suggest that while distributional information may capture some aspects of world knowledge, human comprehenders likely draw on other sources unavailable to LLMs.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":"28 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139751353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}