{"title":"Facial cues to anger affect meaning interpretation of subsequent spoken prosody","authors":"Caterina Petrone, Francesca Carbone, Nicolas Audibert, Maud Champagne-Lavau","doi":"10.1017/langcog.2024.3","DOIUrl":"https://doi.org/10.1017/langcog.2024.3","url":null,"abstract":"In everyday life, visual information often precedes the auditory one, hence influencing its evaluation (e.g., seeing somebody’s angry face makes us expect them to speak to us angrily). By using the cross-modal affective paradigm, we investigated the influence of facial gestures when the subsequent acoustic signal is emotionally unclear (neutral or produced with a limited repertoire of cues to anger). Auditory stimuli spoken with angry or neutral prosody were presented in isolation or preceded by pictures showing emotionally related or unrelated facial gestures (angry or neutral faces). In two experiments, participants rated the valence and emotional intensity of the auditory stimuli only. These stimuli were created from acted speech from movies and delexicalized via speech synthesis, then manipulated by partially preserving or degrading their global spectral characteristics. All participants relied on facial cues when the auditory stimuli were acoustically impoverished; however, only a subgroup of participants used angry faces to interpret subsequent neutral prosody. Thus, listeners are sensitive to facial cues for evaluating what they are about to hear, especially when the auditory input is less reliable. These results extend findings on face perception to the auditory domain and confirm inter-individual variability in considering different sources of emotional information.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140167613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Word-object and action-object learning in a unimodal context during early childhood","authors":"Sarah Eiteljoerge, Birgit Elsner, Nivedita Mani","doi":"10.1017/langcog.2024.7","DOIUrl":"https://doi.org/10.1017/langcog.2024.7","url":null,"abstract":"Word-object and action-object learning in children aged 30 to 48 months appears to develop at a similar time scale and adheres to similar attentional constraints. However, children below 36 months show different patterns of learning word-object and action-object associations when this information is presented in a bimodal context (Eiteljoerge et al., 2019b). Here, we investigated 12- and 24-month-olds’ word-object and action-object learning when this information is presented in a unimodal context. Forty 12- and 24-month-olds were presented with two novel objects that were either first associated with a novel label (word learning task) and then later with a novel action (action learning task) or vice versa. In subsequent yoked test phases, children either heard one of the novel labels or saw a hand performing one of the actions presented with the two objects on screen while we measured their target looking. Generalized linear mixed models indicate that 12-month-olds learned action-object associations but not word-object associations, and 24-month-olds learned neither word-object nor action-object associations. These results extend previous findings (Eiteljoerge et al., 2019b) and, together, suggest that children appear to learn action-object associations early in development while struggling with learning word-object associations in certain contexts until 2 years of age.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140167685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language and executive function relationships in the real world: insights from deafness","authors":"Mario Figueroa, Nicola Botting, Gary Morgan","doi":"10.1017/langcog.2024.10","DOIUrl":"https://doi.org/10.1017/langcog.2024.10","url":null,"abstract":"<p>Executive functions (EFs) in both regulatory and meta-cognitive contexts are important for a wide variety of children’s daily activities, including play and learning. Despite the growing literature supporting the relationship between EF and language, few studies have focused on these links during everyday behaviours. Data were collected on 208 children from 6 to 12 years old, of whom 89 were deaf children (55% female; <span>M</span> = 8;8; <span>SD</span> = 1;9) and 119 were typically hearing children (56% female; <span>M</span> = 8;9; <span>SD</span> = 1;5). Parents completed two inventories, assessing EFs and language proficiency. Parents of deaf children reported greater difficulties with EFs in daily activities than those of hearing children. Correlation analysis between EFs and language showed significant levels only in the deaf group, especially in relation to meta-cognitive EFs. The results are discussed in terms of the role of early parent–child interaction and the relevance of EFs for everyday conversational situations.</p>","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The immediate integration of semantic selectional restrictions of Chinese social hierarchical verbs with extralinguistic social hierarchical information in comprehension","authors":"Yajiao Shi, Tongquan Zhou, Simin Zhao, Zhenghui Sun, Zude Zhu","doi":"10.1017/langcog.2024.11","DOIUrl":"https://doi.org/10.1017/langcog.2024.11","url":null,"abstract":"Social hierarchical information impacts language comprehension. Nevertheless, the specific process underlying the integration of linguistic and extralinguistic sources of social hierarchical information has not been identified. For example, the Chinese social hierarchical verb 赡养, /shan4yang3/, ‘support: provide for the needs and comfort of one’s elders’, only allows its Agent to have a lower social status than the Patient. Using eye-tracking, we examined the precise time course of the integration of these semantic selectional restrictions of Chinese social hierarchical verbs and extralinguistic social hierarchical information during natural reading. A 2 (Verb Type: hierarchical vs. non-hierarchical) × 2 (Social Hierarchy Sequence: match vs. mismatch) design was constructed to investigate the effect of the interaction on early and late eye-tracking measures. Thirty-two participants (15 males; age range: 18–24 years) read sentences and judged the plausibility of each sentence. The results showed that violations of semantic selectional restrictions of Chinese social hierarchical verbs induced shorter first fixation duration but longer regression path duration and longer total reading time on sentence-final nouns (NP2). These differences were absent under non-hierarchical conditions. The results suggest that a mismatch between linguistic and extralinguistic social hierarchical information is immediately detected and processed.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140231485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The influences of narrative perspective shift and scene detail on narrative semantic processing","authors":"Jian Jin, Siyun Liu","doi":"10.1017/langcog.2024.9","DOIUrl":"https://doi.org/10.1017/langcog.2024.9","url":null,"abstract":"The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study aims to explore this issue in narrative contexts with and without perspective shift, an important and common linguistic factor in narratives. A sentence-picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift led participants to allocate their first fixations evenly to different elements in the scene following the new perspective; (2) the internal–external perspective shift increased participants’ total fixation count when they read sentences containing the perspective shift; (3) the scene detail depicted in the picture did not influence narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of the situation model and increase readers’ cognitive load during reading. Moreover, scene detail could not be constructed by readers in natural narrative reading.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of consciousness in Chinese nominal metaphor processing: a psychophysical approach","authors":"Kaiwen Cheng, Yu Chen, Hongmei Yan, Ling Wang","doi":"10.1017/langcog.2023.67","DOIUrl":"https://doi.org/10.1017/langcog.2023.67","url":null,"abstract":"Conceptual metaphor theory (CMT) holds that most conceptual metaphors are processed unconsciously. However, whether multiple words can be integrated into a holistic metaphoric sentence without consciousness remains controversial in cognitive science and psychology. This study aims to investigate the role of consciousness in processing Chinese nominal metaphoric sentences ‘<jats:italic>A是B</jats:italic>’ <jats:italic>(A is[like]</jats:italic> B) with a psychophysical experimental paradigm referred to as breaking continuous flash suppression (b-CFS). We manipulated sentence types (metaphoric, literal and anomalous) and word forms (upright, inverted) in a two-staged experiment (CFS and non-CFS). No difference was found in the breakthrough times among all three types of sentences in the CFS stage, while literal sentences were detected more slowly than either metaphoric or anomalous sentences in the non-CFS stage. The results suggest that the integration of multiple words may not succeed without the participation of consciousness, let alone metaphoric processing. These findings may redefine ‘unconscious’ in CMT as ‘preconscious’ and support the indirect access view regarding how metaphoric meaning is processed in the brain.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prosody of focus in Turkish Sign Language","authors":"Serpil Karabüklü, Aslı Gürer","doi":"10.1017/langcog.2024.4","DOIUrl":"https://doi.org/10.1017/langcog.2024.4","url":null,"abstract":"The prosodic realization of focus has been a widely investigated topic across languages and modalities. When focus strategies occur simultaneously, it is intriguing to see how they interact in their functional and temporal alignment. We explored the multichannel (manual and nonmanual) realization of focus in Turkish Sign Language. We elicited data from 20 signers, varying focus type, syntactic role and movement type. The results revealed that focus is encoded via increased duration in manual signs, and that nonmanuals do not necessarily accompany focused signs. With a multichanneled structure, sign languages use two available channels or opt for one to express focushood.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140047143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrasting the semantic space of ‘shame’ and ‘guilt’ in English and Japanese","authors":"Eugenia Diegoli, Emily Öhman","doi":"10.1017/langcog.2024.6","DOIUrl":"https://doi.org/10.1017/langcog.2024.6","url":null,"abstract":"This article sheds light on the significant yet nuanced roles of shame and guilt in influencing moral behaviour, a phenomenon that became particularly prominent during the COVID-19 pandemic with the community’s heightened desire to be seen as moral. These emotions are central to human interactions, and the question of how they are conveyed linguistically is a vast and important one. Our study contributes to this area by analysing the discourses around shame and guilt in English and Japanese online forums, focusing on the terms <jats:italic>shame</jats:italic>, <jats:italic>guilt</jats:italic>, <jats:italic>haji</jats:italic> (‘shame’) and <jats:italic>zaiakukan</jats:italic> (‘guilt’). We utilise a mix of corpus-based methods and natural language processing tools, including word embeddings, to examine the contexts of these emotion terms and identify semantically similar expressions. Our findings indicate both overlaps and distinct differences in the semantic landscapes of shame and guilt within and across the two languages, highlighting nuanced ways in which these emotions are expressed and distinguished. This investigation provides insights into the complex dynamics between emotion words and the internal states they denote, suggesting avenues for further research in this linguistically rich area.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140019326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Better letter: iconicity in the manual alphabets of American Sign Language and Swedish Sign Language","authors":"Carl Börstell","doi":"10.1017/langcog.2024.5","DOIUrl":"https://doi.org/10.1017/langcog.2024.5","url":null,"abstract":"While iconicity has sometimes been defined as meaning transparency, it is better defined as a subjective phenomenon bound to an individual’s perception and influenced by their previous language experience. In this article, I investigate the subjective nature of iconicity through an experiment in which 72 deaf, hard-of-hearing and hearing (signing and non-signing) participants rate the iconicity of individual letters of the American Sign Language (ASL) and Swedish Sign Language (STS) manual alphabets. It is shown that L1 signers of ASL and STS rate their own (L1) manual alphabet as more iconic than the foreign one. Hearing L2 signers of ASL and STS exhibit the same pattern as L1 signers, showing an iconic preference for their own (L2) manual alphabet. In comparison, hearing non-signers show no general iconic preference for either manual alphabet. Across all groups, some letters are consistently rated as more iconic in one sign language than the other, illustrating general iconic preferences. Overall, the results align with earlier findings from sign language linguistics that point to language experience affecting iconicity ratings and that one’s own signs are rated as more iconic than foreign signs with the same meaning, even if similar iconic mappings are used.","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140019524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Backchannel behavior is idiosyncratic","authors":"Peter Blomsma, Julija Vaitonyté, Gabriel Skantze, Marc Swerts","doi":"10.1017/langcog.2024.1","DOIUrl":"https://doi.org/10.1017/langcog.2024.1","url":null,"abstract":"<p>In spoken conversations, speakers and their addressees constantly seek and provide different forms of audiovisual feedback, also known as backchannels, which include nodding, vocalizations and facial expressions. It has previously been shown that addressees backchannel at specific points during an interaction, namely after a speaker provides a cue to elicit feedback from the addressee. However, addressees may differ in the frequency and type of feedback that they provide, and likewise, speakers may vary the type of cues they generate to signal the backchannel opportunity points (BOPs). Research on the extent to which backchanneling is idiosyncratic is scant. In this article, we quantify and analyze the variability in feedback behavior of 14 addressees who all interacted with the same speaker stimulus. We conducted this research by means of a previously developed experimental paradigm that generates spontaneous interactions in a controlled manner. Our results show that (1) backchanneling behavior varies between listeners (some addressees are more active than others) and (2) backchanneling behavior varies between BOPs (some points trigger more responses than others). We discuss the relevance of these results for models of human–human and human–machine interactions.</p>","PeriodicalId":45880,"journal":{"name":"Language and Cognition","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139921497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}