{"title":"Predictability and Variation in Language Are Differentially Affected by Learning and Production","authors":"Aislinn Keogh, Simon Kirby, Jennifer Culbertson","doi":"10.1111/cogs.13435","DOIUrl":"10.1111/cogs.13435","url":null,"abstract":"<p>General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory—the component of short-term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and regularization of linguistic variation. Regularization is a well-documented process whereby languages become less variable (on some dimension) over time. This process has been argued to be driven by the behavior of individual language users, but the specific mechanism is not agreed upon. Here, we use an artificial language learning experiment to investigate whether limitations in working memory during either language learning or language production drive regularization behavior. We find that taxing working memory during production results in the loss of all types of variation, but the process by which random variation becomes more predictable is better explained by learning biases. A computational model offers a potential explanation for the production effect using a simple self-priming mechanism.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13435","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unraveling Temporal Dynamics of Multidimensional Statistical Learning in Implicit and Explicit Systems: An X-Way Hypothesis","authors":"Stephen Man-Kit Lee, Nicole Sin Hang Law, Shelley Xiuli Tong","doi":"10.1111/cogs.13437","DOIUrl":"10.1111/cogs.13437","url":null,"abstract":"<p>Statistical learning enables humans to involuntarily process and utilize different kinds of patterns from the environment. However, the cognitive mechanisms underlying the simultaneous acquisition of multiple regularities from different perceptual modalities remain unclear. A novel multidimensional serial reaction time task was developed to test 40 participants’ ability to learn simple first-order and complex second-order relations between uni-modal visual and cross-modal audio-visual stimuli. Using the difference in reaction times between sequenced and random stimuli as the index of domain-general statistical learning, a significant difference and dissociation of learning occurred between the initial and final learning phases. Furthermore, we used a negative and positive occurrence-frequency-and-reaction-time correlation to indicate implicit and explicit learning, respectively, and found that learning simple uni-modal patterns involved an implicit-to-explicit segue, while acquiring complex cross-modal patterns required an explicit-to-implicit segue, resulting in a X-shape crossing of regularity learning. Thus, we propose an X-way hypothesis to elucidate the dynamic interplay between the implicit and explicit systems at two distinct stages when acquiring various regularities in a multidimensional probability space.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13437","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to “Introducing meta-analysis in the evaluation of computational models of infant language development”","authors":"","doi":"10.1111/cogs.13434","DOIUrl":"10.1111/cogs.13434","url":null,"abstract":"<p>Cruz Blandón, M. A., Cristia, A., Räsänen, O. (2023). Introducing meta-analysis in the evaluation of computational models of infant language development. <i>Cognitive Science</i>, <i>47</i>(7), e13307. https://doi.org/10.1111/cogs.13307</p><p>On page 15, a citation to Bunce et al. (2021; pre-print) inaccurately attributes an estimate of 5.82 h of daily infant speech exposure to their study.</p><p>Bunce et al. (2021) did not directly report on infants’ daily speech exposure. Instead, our estimate of 5.82 h of speech per day was derived from their data as follows: we first calculated the average rates of target-child-directed speech (TCDS) and adult-directed speech (ADS) per hour across the five languages studied (Table 2 in Bunce et al., 2021). The sum of these average rates—3.72 min per hour for TCDS and 10.84 min per hour for ADS—was then multiplied by 24 h to estimate full-day exposure, yielding 5.82 h per day.</p><p>However, this estimate excludes speech directed at other children but heard by the target child, accounting for an additional 4.61 min per hour as reported in the supplementary material of Bunce et al. (2021). Additionally, the estimate assumes the long-form recordings analyzed are representative of a full 24-h day, likely overestimating language exposure by including nighttime, when infants and their caregivers are typically asleep. The long-form recordings analyzed by Bunce et al. (2021) and the actual language input to infants is likely biased toward the waking hours of adults and children in the language environments studied. The estimate of 2124 h of speech heard per year presented in our paper is thus on the upper end of the likely input scale but remains within plausible bounds. For context, Hart and Risley (1995) report 45 million words heard by the age of 4 in families of the professional class, equivalent to about 937.5 h of speech (assuming an average word duration of 0.3 s), but this estimate is only for child-directed speech (CDS). Bunce et al. (2021) found that infants exposed to North-American English hear twice as much ADS as CDS, and our simulations aimed to account for all speech a learner hears.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13434","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140294932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Looking at Mental Images: Eye-Tracking Mental Simulation During Retrospective Causal Judgment","authors":"Kristina Krasich, Kevin O'Neill, Felipe De Brigard","doi":"10.1111/cogs.13426","DOIUrl":"10.1111/cogs.13426","url":null,"abstract":"<p>How do people evaluate causal relationships? Do they just consider what actually happened, or do they also consider what could have counterfactually happened? Using eye tracking and Gaussian process modeling, we investigated how people mentally simulated past events to judge what caused the outcomes to occur. Participants played a virtual ball-shooting game and then—while looking at a blank screen—mentally simulated (a) what actually happened, (b) what counterfactually could have happened, or (c) what caused the outcome to happen. Our findings showed that participants moved their eyes in patterns consistent with the actual or counterfactual events that they mentally simulated. When simulating what caused the outcome to occur, participants moved their eyes consistent with simulations of counterfactual possibilities. These results favor counterfactual theories of causal reasoning, demonstrate how eye movements can reflect simulation during this reasoning and provide a novel approach for investigating retrospective causal reasoning and counterfactual thinking.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13426","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embodying Similarity and Difference: The Effect of Listing and Contrasting Gestures During U.S. Political Speech","authors":"Icy (Yunyi) Zhang, Tina Izad, Erica A. Cartmill","doi":"10.1111/cogs.13428","DOIUrl":"10.1111/cogs.13428","url":null,"abstract":"<p>Public speakers like politicians carefully craft their words to maximize the clarity, impact, and persuasiveness of their messages. However, these messages can be shaped by more than words. Gestures play an important role in how spoken arguments are perceived, conceptualized, and remembered by audiences. Studies of political speech have explored the ways spoken arguments are used to persuade audiences and cue applause. Studies of politicians’ gestures have explored the ways politicians illustrate different concepts with their hands, but have not focused on gesture's potential as a tool of persuasion. Our paper combines these traditions to ask first, how politicians gesture when using spoken rhetorical devices aimed at persuading audiences, and second, whether these gestures influence the ways their arguments are perceived. Study 1 examined two rhetorical devices—<i>contrasts</i> and <i>lists</i>—used by three politicians during U.S. presidential debates and asked whether the gestures produced during contrasts and lists differ. Gestures produced during contrasts were more likely to involve changes in hand location, and gestures produced during lists were more likely to involve changes in trajectory. Study 2 used footage from the same debates in an experiment to ask whether gesture influenced the way people perceived the politicians’ arguments. When participants had access to gestural information, they perceived contrasted items as more different from one another and listed items as more similar to one another than they did when they only had access to speech. This was true even when participants had access to only gesture (in muted videos). We conclude that gesture is effective at communicating concepts of similarity and difference and that politicians (and likely other speakers) take advantage of gesture's persuasive potential.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13428","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Infants Retain Statistically Segmented Words and Mappings Across a Delay?","authors":"Ferhat Karaman, Jill Lany, Jessica F. Hay","doi":"10.1111/cogs.13433","DOIUrl":"10.1111/cogs.13433","url":null,"abstract":"<p>Infants are sensitive to statistics in spoken language that aid word-form segmentation and immediate mapping to referents. However, it is not clear whether this sensitivity influences the formation and retention of word-referent mappings across a delay, two real-world challenges that learners must overcome. We tested how the timing of referent training, relative to familiarization with transitional probabilities (TPs) in speech, impacts English-learning 23-month-olds’ ability to form and retain word-referent mappings. In Experiment 1, we tested infants’ ability to retain TP information across a 10-min delay and use it in the service of word learning. Infants successfully mapped high-TP but not low-TP words to referents. In Experiment 2, infants readily mapped the same words even when they were unfamiliar. In Experiment 3, high- and low-TP word-referent mappings were trained immediately after familiarization, and infants readily remembered these associations 10 min later. In sum, although 23-month-old infants do not need strong statistics to map word forms to referents immediately, or to remember those mappings across a delay, infants are nevertheless sensitive to these statistics in the speech stream, and they influence mapping after a delay. These findings suggest that, by 23 months of age, sensitivity to statistics in speech may impact infants’ language development by leading word forms with low coherence to be poorly mapped following even a short period of consolidation.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational Modeling of the Segmentation of Sentence Stimuli From an Infant Word-Finding Study","authors":"Daniel Swingley, Robin Algayres","doi":"10.1111/cogs.13427","DOIUrl":"10.1111/cogs.13427","url":null,"abstract":"<p>Computational models of infant word-finding typically operate over transcriptions of infant-directed speech corpora. It is now possible to test models of word segmentation on speech materials, rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies measuring infants' capacity for learning from spoken sentences. Correspondence with infant outcomes in such experiments is an appropriate benchmark for models of infants. We demonstrate such an analysis by applying the DP-Parser model of Algayres and colleagues to auditory stimuli used in infant psycholinguistic experiments by Pelucchi and colleagues. The DP-Parser model takes speech as input, and creates multiple overlapping embeddings from each utterance. Prospective words are identified as clusters of similar embedded segments. This allows segmentation of each utterance into possible words, using a dynamic programming method that maximizes the frequency of constituent segments. We show that DP-Parse mimics American English learners' performance in extracting words from Italian sentences, favoring the segmentation of words with high syllabic transitional probability. This kind of computational analysis over actual stimuli from infant experiments may be helpful in tuning future models to match human performance.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13427","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal Gestures in Different Temporal Perspectives","authors":"Emir Akbuğa, Tilbe Göksun","doi":"10.1111/cogs.13425","DOIUrl":"10.1111/cogs.13425","url":null,"abstract":"<p>Temporal perspectives allow us to place ourselves and temporal events on a timeline, making it easier to conceptualize time. This study investigates how we take different temporal perspectives in our temporal gestures. We asked participants (<i>n</i> = 36) to retell temporal scenarios written in the Moving-Ego, Moving-Time, and Time-Reference-Point perspectives in spontaneous and encouraged gesture conditions. Participants took temporal perspectives mostly in similar ways regardless of the gesture condition. Perspective comparisons showed that temporal gestures of our participants resonated better with the Ego- (i.e., Moving-Ego and Moving-Time) versus Time-Reference-Point distinction instead of the classical Moving-Ego versus Moving-Time contrast. Specifically, participants mostly produced more Moving-Ego and Time-Reference-Point gestures for the corresponding scenarios and speech; however, the Moving-Time perspective was not adopted more than the others in any condition. Similarly, the Moving-Time gestures did not favor an axis over the others, whereas Moving-Ego gestures were mostly sagittal and Time-Reference-Point gestures were mostly lateral. These findings suggest that we incorporate temporal perspectives into our temporal gestures to a considerable extent; however, the classical Moving-Ego and Moving-Time classification may not hold for temporal gestures.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13425","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Relative Importance of Wordhood Cues Using Statistical Learning","authors":"Elizabeth Pankratz, Simon Kirby, Jennifer Culbertson","doi":"10.1111/cogs.13429","DOIUrl":"10.1111/cogs.13429","url":null,"abstract":"<p>Identifying wordlike units in language is typically done by applying a battery of criteria, though how to weight these criteria with respect to one another is currently unknown. We address this question by investigating whether certain criteria are also used as cues for learning an artificial language—if they are, then perhaps they can be relied on more as trustworthy top-down diagnostics. The two criteria for grammatical wordhood that we consider are a unit's free mobility and its internal immutability. These criteria also map to two cognitive mechanisms that could underlie successful statistical learning: learners might orient themselves around the low transitional probabilities at unit boundaries, or they might seek chunks with high internal transitional probabilities. We find that each criterion has its own facilitatory effect, and learning is best where they both align. This supports the battery-of-criteria approach to diagnosing wordhood, and also suggests that the mechanism behind statistical learning may not be a question of either/or; perhaps the two mechanisms do not compete, but mutually reinforce one another.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13429","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140144389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive Numeral Systems Optimize the Trade-off Between Lexicon Size and Average Morphosyntactic Complexity","authors":"Milica Denić, Jakub Szymanik","doi":"10.1111/cogs.13424","DOIUrl":"10.1111/cogs.13424","url":null,"abstract":"<p>Human languages vary in terms of which meanings they lexicalize, but this variation is constrained. It has been argued that languages are under two competing pressures: the pressure to be simple (e.g., to have a small lexicon) and to allow for informative (i.e., precise) communication, and that which meanings get lexicalized may be explained by languages finding a good way to trade off between these two pressures. However, in certain semantic domains, languages can reach very high levels of informativeness even if they lexicalize very few meanings in that domain. This is due to productive morphosyntax and compositional semantics, which may allow for construction of meanings which are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize few natural number meanings as monomorphemic expressions, but can precisely convey very many natural number meanings using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages need to solve a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e, to minimize lexicon size) and the pressure to produce as morphosyntactically simple utterances as possible (i.e, to minimize average morphosyntactic complexity of utterances). To support this claim, we will present a case study of 128 natural languages' numeral systems, and show computationally that they achieve a near-optimal trade-off between lexicon size and average morphosyntactic complexity of numerals. This study in conjunction with previous work on communicative efficiency suggests that languages' lexicons are shaped by a trade-off between not two but <i>three</i> pressures: be simple, be informative, and minimize average morphosyntactic complexity of utterances.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13424","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140144390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}