How Network Structure Shapes Languages: Disentangling the Factors Driving Variation in Communicative Agents
Mathilde Josserand, Marc Allassonnière-Tang, François Pellegrino, Dan Dediu, Bart de Boer
Cognitive Science, 48(4), 2024-04-11. DOI: 10.1111/cogs.13439

Abstract: Languages show substantial variability between their speakers, but it is currently unclear how the structure of the communicative network contributes to the patterning of this variability. While previous studies have highlighted the role of network structure in language change, the specific aspects of network structure that shape language variability remain largely unknown. To address this gap, we developed a Bayesian agent-based model of language evolution, contrasting two distinct scenarios: language change and language emergence. By isolating the relative effects of specific global network metrics across thousands of simulations, we show that global characteristics of network structure play a critical role in shaping interindividual variation in language, while intraindividual variation is relatively unaffected. We challenge the long-held belief that size and density are the main network-structural factors influencing language variation, and show that path length and clustering coefficient are the main factors driving interindividual variation. In particular, variation is more likely to occur in populations whose individuals are not well connected to each other, and more likely to emerge in populations structured into small communities. Our study provides potentially important insights into the theoretical mechanisms underlying language variation.

The Importance of Linguistic Factors: He Likes Subject Referents
Regina Hert, Juhani Järvikivi, Anja Arnhold
Cognitive Science, 48(4), 2024-04-02. DOI: 10.1111/cogs.13436. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13436

Abstract: We report the results of one visual-world eye-tracking experiment and two referent selection tasks investigating the effects of information structure, in the form of prosody and word order manipulation, on the processing of the subject pronouns er and der in German. Factors such as subjecthood, focus, and topicality, as well as order of mention, have been linked to an increased probability of certain referents being selected as a pronoun's antecedent, and described as increasing that referent's prominence, salience, or accessibility. The goal of this study was to find out whether pronoun processing is primarily guided by linguistic factors (e.g., grammatical role) or nonlinguistic factors (e.g., first mention), and whether pronoun interpretation can be described in terms of referents' "prominence," "accessibility," or "salience." The results showed an overall subject preference for er, whereas der was affected by object role and focus marking. Although focus increases attentional load and enhances the memory representation of the focused referent, making it more available, focus ultimately did not affect the final interpretation of er, suggesting that "prominence" and related concepts do not explain referent selection preferences. Overall, the results suggest a primacy of linguistic factors in determining pronoun resolution.

Predictability and Variation in Language Are Differentially Affected by Learning and Production
Aislinn Keogh, Simon Kirby, Jennifer Culbertson
Cognitive Science, 48(4), 2024-04-02. DOI: 10.1111/cogs.13435. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13435

Abstract: General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory, the component of short-term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and regularization of linguistic variation. Regularization is a well-documented process whereby languages become less variable (on some dimension) over time. This process has been argued to be driven by the behavior of individual language users, but the specific mechanism is not agreed upon. Here, we use an artificial language learning experiment to investigate whether limitations in working memory during either language learning or language production drive regularization behavior. We find that taxing working memory during production results in the loss of all types of variation, but the process by which random variation becomes more predictable is better explained by learning biases. A computational model offers a potential explanation for the production effect using a simple self-priming mechanism.

Unraveling Temporal Dynamics of Multidimensional Statistical Learning in Implicit and Explicit Systems: An X-Way Hypothesis
Stephen Man-Kit Lee, Nicole Sin Hang Law, Shelley Xiuli Tong
Cognitive Science, 48(4), 2024-04-02. DOI: 10.1111/cogs.13437. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13437

Abstract: Statistical learning enables humans to involuntarily process and utilize different kinds of patterns from the environment. However, the cognitive mechanisms underlying the simultaneous acquisition of multiple regularities from different perceptual modalities remain unclear. A novel multidimensional serial reaction time task was developed to test 40 participants' ability to learn simple first-order and complex second-order relations between uni-modal visual and cross-modal audio-visual stimuli. Using the difference in reaction times between sequenced and random stimuli as the index of domain-general statistical learning, a significant difference and dissociation of learning occurred between the initial and final learning phases. Furthermore, using negative and positive occurrence-frequency-by-reaction-time correlations as indices of implicit and explicit learning, respectively, we found that learning simple uni-modal patterns involved an implicit-to-explicit segue, while acquiring complex cross-modal patterns required an explicit-to-implicit segue, resulting in an X-shaped crossing of regularity learning. Thus, we propose an X-way hypothesis to elucidate the dynamic interplay between the implicit and explicit systems at two distinct stages when acquiring various regularities in a multidimensional probability space.

Correction to "Introducing meta-analysis in the evaluation of computational models of infant language development"
Cognitive Science, 48(3), 2024-03-26. DOI: 10.1111/cogs.13434. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13434

Cruz Blandón, M. A., Cristia, A., & Räsänen, O. (2023). Introducing meta-analysis in the evaluation of computational models of infant language development. Cognitive Science, 47(7), e13307. https://doi.org/10.1111/cogs.13307

On page 15, a citation to Bunce et al. (2021; preprint) inaccurately attributes an estimate of 5.82 h of daily infant speech exposure to their study.

Bunce et al. (2021) did not directly report infants' daily speech exposure. Instead, our estimate of 5.82 h of speech per day was derived from their data as follows: we first calculated the average rates of target-child-directed speech (TCDS) and adult-directed speech (ADS) per hour across the five languages studied (Table 2 in Bunce et al., 2021). The sum of these average rates (3.72 min per hour for TCDS and 10.84 min per hour for ADS) was then multiplied by 24 h to estimate full-day exposure, yielding 5.82 h per day.

However, this estimate excludes speech directed at other children but overheard by the target child, which accounts for an additional 4.61 min per hour as reported in the supplementary material of Bunce et al. (2021). The estimate also assumes that the long-form recordings analyzed are representative of a full 24-h day, likely overestimating language exposure by including nighttime, when infants and their caregivers are typically asleep. Both the long-form recordings analyzed by Bunce et al. (2021) and the actual language input to infants are likely biased toward the waking hours of adults and children in the language environments studied. The estimate of 2124 h of speech heard per year presented in our paper is thus on the upper end of the likely input scale but remains within plausible bounds. For context, Hart and Risley (1995) report 45 million words heard by the age of 4 in families of the professional class, equivalent to about 937.5 h of speech per year (assuming an average word duration of 0.3 s), but this estimate covers only child-directed speech (CDS). Bunce et al. (2021) found that infants exposed to North American English hear twice as much ADS as CDS, and our simulations aimed to account for all speech a learner hears.

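The arithmetic behind these estimates is simple enough to check mechanically. The sketch below reproduces it; the figures are taken from the correction above, and the variable names are illustrative:

```python
# Daily-exposure estimate derived from Bunce et al. (2021), Table 2:
# average minutes of speech per hour across the five languages studied.
tcds_min_per_hour = 3.72   # target-child-directed speech (TCDS)
ads_min_per_hour = 10.84   # adult-directed speech (ADS)

total_min_per_hour = tcds_min_per_hour + ads_min_per_hour  # 14.56 min/h
hours_per_day = total_min_per_hour * 24 / 60               # ~5.82 h/day
print(round(hours_per_day, 2))  # 5.82

# Annualized: note the paper's 2124 h/year figure rounds the daily value
# to 5.82 before multiplying (5.82 * 365 = 2124.3); the unrounded product
# is ~2126 h/year.
hours_per_year = hours_per_day * 365
print(round(hours_per_year))  # 2126

# Hart and Risley (1995): 45 million words of child-directed speech by
# age 4, at the assumed 0.3 s per word, expressed as hours per year.
cds_hours_per_year = 45_000_000 / 4 * 0.3 / 3600
print(round(cds_hours_per_year, 1))  # 937.5
```

This makes the two caveats in the correction concrete: the 24-h multiplier includes nighttime, and the CDS-only Hart and Risley figure is not directly comparable to an all-speech estimate.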
Looking at Mental Images: Eye-Tracking Mental Simulation During Retrospective Causal Judgment
Kristina Krasich, Kevin O'Neill, Felipe De Brigard
Cognitive Science, 48(3), 2024-03-26. DOI: 10.1111/cogs.13426. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13426

Abstract: How do people evaluate causal relationships? Do they just consider what actually happened, or do they also consider what could have counterfactually happened? Using eye tracking and Gaussian process modeling, we investigated how people mentally simulated past events to judge what caused the outcomes to occur. Participants played a virtual ball-shooting game and then, while looking at a blank screen, mentally simulated (a) what actually happened, (b) what counterfactually could have happened, or (c) what caused the outcome to happen. Participants moved their eyes in patterns consistent with the actual or counterfactual events they mentally simulated. When simulating what caused the outcome to occur, they moved their eyes in patterns consistent with simulations of counterfactual possibilities. These results favor counterfactual theories of causal reasoning, demonstrate how eye movements can reflect simulation during this reasoning, and provide a novel approach for investigating retrospective causal reasoning and counterfactual thinking.

Embodying Similarity and Difference: The Effect of Listing and Contrasting Gestures During U.S. Political Speech
Icy (Yunyi) Zhang, Tina Izad, Erica A. Cartmill
Cognitive Science, 48(3), 2024-03-25. DOI: 10.1111/cogs.13428. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13428

Abstract: Public speakers like politicians carefully craft their words to maximize the clarity, impact, and persuasiveness of their messages. However, these messages can be shaped by more than words. Gestures play an important role in how spoken arguments are perceived, conceptualized, and remembered by audiences. Studies of political speech have explored the ways spoken arguments are used to persuade audiences and cue applause. Studies of politicians' gestures have explored the ways politicians illustrate different concepts with their hands, but have not focused on gesture's potential as a tool of persuasion. Our paper combines these traditions to ask, first, how politicians gesture when using spoken rhetorical devices aimed at persuading audiences, and second, whether these gestures influence the ways their arguments are perceived. Study 1 examined two rhetorical devices, contrasts and lists, used by three politicians during U.S. presidential debates and asked whether the gestures produced during contrasts and lists differ. Gestures produced during contrasts were more likely to involve changes in hand location, and gestures produced during lists were more likely to involve changes in trajectory. Study 2 used footage from the same debates in an experiment to ask whether gesture influenced the way people perceived the politicians' arguments. When participants had access to gestural information, they perceived contrasted items as more different from one another and listed items as more similar to one another than they did when they only had access to speech. This was true even when participants had access to only gesture (in muted videos). We conclude that gesture is effective at communicating concepts of similarity and difference and that politicians (and likely other speakers) take advantage of gesture's persuasive potential.

Can Infants Retain Statistically Segmented Words and Mappings Across a Delay?
Ferhat Karaman, Jill Lany, Jessica F. Hay
Cognitive Science, 48(3), 2024-03-25. DOI: 10.1111/cogs.13433

Abstract: Infants are sensitive to statistics in spoken language that aid word-form segmentation and immediate mapping to referents. However, it is not clear whether this sensitivity influences the formation and retention of word-referent mappings across a delay, two real-world challenges that learners must overcome. We tested how the timing of referent training, relative to familiarization with transitional probabilities (TPs) in speech, impacts English-learning 23-month-olds' ability to form and retain word-referent mappings. In Experiment 1, we tested infants' ability to retain TP information across a 10-min delay and use it in the service of word learning. Infants successfully mapped high-TP but not low-TP words to referents. In Experiment 2, infants readily mapped the same words even when they were unfamiliar. In Experiment 3, high- and low-TP word-referent mappings were trained immediately after familiarization, and infants readily remembered these associations 10 min later. In sum, although 23-month-old infants do not need strong statistics to map word forms to referents immediately, or to remember those mappings across a delay, infants are nevertheless sensitive to these statistics in the speech stream, and they influence mapping after a delay. These findings suggest that, by 23 months of age, sensitivity to statistics in speech may impact infants' language development by leading word forms with low coherence to be poorly mapped following even a short period of consolidation.

Computational Modeling of the Segmentation of Sentence Stimuli From an Infant Word-Finding Study
Daniel Swingley, Robin Algayres
Cognitive Science, 48(3), 2024-03-25. DOI: 10.1111/cogs.13427. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13427

Abstract: Computational models of infant word-finding typically operate over transcriptions of infant-directed speech corpora. It is now possible to test models of word segmentation on speech materials rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies measuring infants' capacity for learning from spoken sentences. Correspondence with infant outcomes in such experiments is an appropriate benchmark for models of infants. We demonstrate such an analysis by applying the DP-Parse model of Algayres and colleagues to auditory stimuli used in infant psycholinguistic experiments by Pelucchi and colleagues. The DP-Parse model takes speech as input and creates multiple overlapping embeddings from each utterance. Prospective words are identified as clusters of similar embedded segments. This allows segmentation of each utterance into possible words, using a dynamic programming method that maximizes the frequency of constituent segments. We show that DP-Parse mimics American English learners' performance in extracting words from Italian sentences, favoring the segmentation of words with high syllabic transitional probability. This kind of computational analysis over actual stimuli from infant experiments may be helpful in tuning future models to match human performance.

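The syllabic transitional probabilities invoked in this segmentation literature follow the standard definition TP(x → y) = freq(xy) / freq(x): within-word transitions tend to have high TP, transitions across word boundaries low TP. A minimal sketch of the computation; the toy syllable stream and function name are invented for illustration, not drawn from the actual stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair: freq(xy) / freq(x)."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    # Count each syllable only in non-final position, since only those
    # occurrences start a transition.
    first_counts = Counter(s for s, _ in pairs)
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Toy stream: the "word" fu-ga always occurs intact, so TP(fu -> ga) is 1.0,
# while transitions out of "ga" (word boundaries) are less predictable.
stream = ["fu", "ga", "me", "lo", "fu", "ga", "ti", "ke", "fu", "ga", "me", "lo"]
tps = transitional_probabilities(stream)
print(tps[("fu", "ga")])  # 1.0
print(tps[("ga", "me")])  # ~0.67: "ga" is followed by "me" on 2 of 3 occurrences
```

High-TP sequences like fu-ga are the ones segmentation models (and, in the behavioral work, infants) are more likely to treat as words.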
Temporal Gestures in Different Temporal Perspectives
Emir Akbuğa, Tilbe Göksun
Cognitive Science, 48(3), 2024-03-18. DOI: 10.1111/cogs.13425. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13425

Abstract: Temporal perspectives allow us to place ourselves and temporal events on a timeline, making it easier to conceptualize time. This study investigates how we take different temporal perspectives in our temporal gestures. We asked participants (n = 36) to retell temporal scenarios written in the Moving-Ego, Moving-Time, and Time-Reference-Point perspectives in spontaneous and encouraged gesture conditions. Participants took temporal perspectives mostly in similar ways regardless of the gesture condition. Perspective comparisons showed that the temporal gestures of our participants resonated better with the Ego (i.e., Moving-Ego and Moving-Time) versus Time-Reference-Point distinction than with the classical Moving-Ego versus Moving-Time contrast. Specifically, participants mostly produced more Moving-Ego and Time-Reference-Point gestures for the corresponding scenarios and speech; however, the Moving-Time perspective was not adopted more than the others in any condition. Similarly, Moving-Time gestures did not favor one axis over the others, whereas Moving-Ego gestures were mostly sagittal and Time-Reference-Point gestures were mostly lateral. These findings suggest that we incorporate temporal perspectives into our temporal gestures to a considerable extent; however, the classical Moving-Ego and Moving-Time classification may not hold for temporal gestures.