{"title":"Introduction to Progress and Puzzles of Cognitive Science","authors":"Rick Dale, Ruth M. J. Byrne, Emma Cohen, Ophelia Deroy, Samuel J. Gershman, Janet H. Hsiao, Ping Li, Padraic Monaghan, David C. Noelle, Iris van Rooij, Priti Shah, Michael J. Spivey, Sashank Varma","doi":"10.1111/cogs.13480","DOIUrl":"10.1111/cogs.13480","url":null,"abstract":"","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141601921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hierarchical Bayesian Model of Adaptive Teaching","authors":"Alicia M. Chen, Andrew Palacci, Natalia Vélez, Robert D. Hawkins, Samuel J. Gershman","doi":"10.1111/cogs.13477","DOIUrl":"10.1111/cogs.13477","url":null,"abstract":"<p>How do teachers learn about what learners already know? How do learners aid teachers by providing them with information about their background knowledge and what they find confusing? We formalize this collaborative reasoning process using a hierarchical Bayesian model of pedagogy. We then evaluate this model in two online behavioral experiments (<i>N</i> = 312 adults). In Experiment 1, we show that teachers select examples that account for learners' background knowledge, and adjust their examples based on learners' feedback. In Experiment 2, we show that learners strategically provide more feedback when teachers' examples deviate from their background knowledge. These findings provide a foundation for extending computational accounts of pedagogy to richer interactive settings.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13477","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
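The core idea of a Bayesian model of pedagogy — a teacher choosing examples in light of an estimate of the learner's background knowledge — can be sketched in a few lines. This is a minimal toy illustration, not the authors' hierarchical model: the hypotheses, examples, and likelihoods below are invented for demonstration.

```python
import numpy as np

# Hypothetical concepts the learner might hold, and candidate teaching examples
hypotheses = ["rule_A", "rule_B", "rule_C"]
examples = [0, 1, 2, 3]

# likelihood[h, x]: probability that a learner holding hypothesis h
# expects to observe example x (invented numbers)
likelihood = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
])

def learner_posterior(prior, x):
    """Bayes update of the learner's beliefs after seeing example x."""
    post = prior * likelihood[:, x]
    return post / post.sum()

def best_example(prior, true_h):
    """Teacher picks the example that most raises belief in the true concept,
    given the teacher's estimate of the learner's prior."""
    gains = [learner_posterior(prior, x)[true_h] for x in examples]
    return int(np.argmax(gains))

# A teacher who estimates the learner is biased toward rule_C
prior = np.array([0.2, 0.2, 0.6])
print(best_example(prior, true_h=0))  # -> 0, the example most diagnostic of rule_A
```

Changing `prior` changes which example the teacher selects, which is the sense in which example choice "accounts for learners' background knowledge."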
{"title":"Word Forms Reflect Trade-Offs Between Speaker Effort and Robust Listener Recognition","authors":"Stephan C. Meylan, Thomas L. Griffiths","doi":"10.1111/cogs.13478","DOIUrl":"10.1111/cogs.13478","url":null,"abstract":"<p>How do cognitive pressures shape the lexicons of natural languages? Here, we reframe George Kingsley Zipf's proposed “law of abbreviation” within a more general framework that relates it to cognitive pressures that affect speakers and listeners. In this new framework, speakers' drive to reduce effort (Zipf's proposal) is counteracted by the need for low-frequency words to have word forms that are sufficiently distinctive to allow for accurate recognition by listeners. To support this framework, we replicate and extend recent work using the prevalence of subword phonemic sequences (phonotactic probability) to measure speakers' production effort in place of Zipf's measure of length. Across languages and corpora, phonotactic probability is more strongly correlated with word frequency than word length. We also show this measure of ease of speech production (phonotactic probability) is strongly correlated with a measure of perceptual difficulty that indexes the degree of competition from alternative interpretations in word recognition. This is consistent with the claim that there must be trade-offs between these two factors, and is inconsistent with a recent proposal that phonotactic probability facilitates both perception and production. To our knowledge, this is the first work to offer an explanation of why long, phonotactically improbable word forms remain in the lexicons of natural languages.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13478","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learners’ Spontaneous Gesture Before a Math Lesson Predicts the Efficacy of Seeing Versus Doing Gesture During the Lesson","authors":"Eliza L. Congdon, Elizabeth M. Wakefield, Miriam A. Novack, Naureen Hemani-Lopez, Susan Goldin-Meadow","doi":"10.1111/cogs.13479","DOIUrl":"10.1111/cogs.13479","url":null,"abstract":"<p>Gestures—hand movements that accompany speech and express ideas—can help children learn how to solve problems, flexibly generalize learning to novel problem-solving contexts, and retain what they have learned. But does it matter who is doing the gesturing? We know that producing gesture leads to better comprehension of a message than watching someone else produce gesture. But we do not know how producing versus observing gesture impacts deeper learning outcomes such as generalization and retention across time. Moreover, not all children benefit equally from gesture instruction, suggesting that there are individual differences that may play a role in who learns from gesture. Here, we consider two factors that might impact whether gesture leads to learning, generalization, and retention after mathematical instruction: (1) whether children see gesture or do gesture and (2) whether a child spontaneously gestures before instruction when explaining their problem-solving reasoning. For children who spontaneously gestured before instruction, both doing and seeing gesture led to better generalization and retention of the knowledge gained than a comparison manipulative action. For children who did not spontaneously gesture before instruction, doing gesture was less effective than the comparison action for learning, generalization, and retention. Importantly, this learning deficit was specific to gesture, as these children did benefit from doing the comparison manipulative action. Our findings are the first evidence that a child's use of a particular representational format for communication (gesture) directly predicts that child's propensity to learn from using the same representational format.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When the Ends Justify the Mean: The Endpoint Leverage Effect in Distribution Perception","authors":"Jonas Ebert, Roland Deutsch","doi":"10.1111/cogs.13455","DOIUrl":"10.1111/cogs.13455","url":null,"abstract":"<p>Previous research has described different cognitive processes by which individuals process distributional information. Building on these processes, the current research uncovered a novel phenomenon in distribution perception: the Endpoint Leverage Effect. Subjective endpoints influence distribution estimations not only locally around the endpoint but across the whole value range of the distribution. The influence is largest close to the respective endpoint and decreases in size toward the opposite end of the value range. Three experiments investigate this phenomenon: Experiment 1 provides correlational evidence for the Endpoint Leverage Effect after presenting participants with a numerical distribution. Experiment 2 demonstrates the Endpoint Leverage Effect by manipulating the subjective endpoints of a numerical distribution directly. Experiment 3 generalizes the phenomenon by investigating a general population sample and estimations regarding a real-world income distribution. In addition, quantitative model analysis examines the cognitive processes underlying the effect. Overall, the novel Endpoint Leverage Effect is found in all three experiments, inspiring further research in a wide range of contexts.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13455","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
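The qualitative pattern described — an endpoint's influence that is strongest nearby and decays toward the opposite end of the range — can be written down as a toy formalization. This is our own illustrative sketch with a linear decay and invented numbers, not the authors' quantitative model.

```python
# Toy model: shifting a subjective endpoint biases density estimates across
# the whole range, with leverage decaying linearly with distance from it.

def biased_estimate(true_density, v, lo, hi, shift, endpoint="low"):
    """Estimate at value v after the low (or high) subjective endpoint is
    shifted; leverage is 1 at that endpoint and 0 at the opposite end."""
    anchor = lo if endpoint == "low" else hi
    leverage = 1 - abs(v - anchor) / (hi - lo)
    return true_density(v) + leverage * shift

# Toy triangular density on [0, 100], peaking at 50
density = lambda v: max(0.0, 1 - abs(v - 50) / 50) / 50

# Same endpoint shift, evaluated near (v=10) and far from (v=90) the endpoint
bias_near = biased_estimate(density, 10, 0, 100, 0.005) - density(10)
bias_far = biased_estimate(density, 90, 0, 100, 0.005) - density(90)
print(bias_near > bias_far > 0)  # -> True: bias is larger near the endpoint
```

The linear decay is an assumption chosen for simplicity; the paper's model analysis is what actually adjudicates the functional form.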
{"title":"Familiar Sequences Are Processed Faster Than Unfamiliar Sequences, Even When They Do Not Match the Count-List","authors":"Declan Devlin, Korbinian Moeller, Iro Xenidou-Dervou, Bert Reynvoet, Francesco Sella","doi":"10.1111/cogs.13481","DOIUrl":"10.1111/cogs.13481","url":null,"abstract":"<p>In order processing, consecutive sequences (e.g., 1-2-3) are generally processed faster than nonconsecutive sequences (e.g., 1-3-5) (also referred to as the reverse distance effect). A common explanation for this effect is that order processing operates via a memory-based associative mechanism whereby consecutive sequences are processed faster because they are more familiar and thus more easily retrieved from memory. Conflicting with this proposal, however, is the finding that this effect is often absent. A possible explanation for these absences is that familiarity may vary both within and across sequence types; therefore, not all consecutive sequences are necessarily more familiar than all nonconsecutive sequences. Accordingly, under this familiarity perspective, familiar sequences should always be processed faster than unfamiliar sequences, but consecutive sequences may not always be processed faster than nonconsecutive sequences. To test this hypothesis in an adult population, we used a comparative judgment approach to measure familiarity at the individual sequence level. Using this measure, we found that although not all participants showed a reverse distance effect, all participants displayed a familiarity effect. Notably, this familiarity effect appeared stronger than the reverse distance effect at both the group and individual level, thus suggesting that the reverse distance effect may be better conceptualized as a specific instance of a more general familiarity effect.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 7","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
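A standard way to turn pairwise comparative judgments into per-item scale values — here, per-sequence familiarity — is a Bradley-Terry fit. The sketch below uses invented win counts and is only an illustration of the comparative-judgment idea; it is not the authors' procedure or data.

```python
import numpy as np

items = ["1-2-3", "2-3-4", "1-3-5", "2-5-8"]
# wins[i, j]: times sequence i was judged more familiar than sequence j
# (invented counts for illustration)
wins = np.array([
    [0, 6, 9, 10],
    [4, 0, 8, 9],
    [1, 2, 0, 7],
    [0, 1, 3, 0],
], dtype=float)

def bradley_terry(wins, iters=500):
    """Minorization-maximization fit of Bradley-Terry strength parameters:
    p_i <- W_i / sum_j n_ij / (p_i + p_j), renormalized each iteration."""
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T               # comparisons per pair
    w = wins.sum(axis=1)                # total wins per item
    for _ in range(iters):
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                    # fix the arbitrary scale
    return p

familiarity = bradley_terry(wins)
ranking = [items[i] for i in np.argsort(-familiarity)]
print(ranking)  # most to least familiar under the toy data
```

Note that with such a scale, a nonconsecutive but well-practiced sequence can outscore a consecutive one, which is exactly the dissociation the familiarity account predicts.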
{"title":"Realistic About Reference Production: Testing the Effects of Domain Size and Saturation","authors":"Ruud Koolen, Emiel Krahmer","doi":"10.1111/cogs.13473","DOIUrl":"10.1111/cogs.13473","url":null,"abstract":"<p>Experiments on visually grounded, definite reference production often manipulate simple visual scenes in the form of grids filled with objects, for example, to test how speakers are affected by the number of objects that are visible. Regarding the latter, it was found that speech onset times increase along with domain size, at least when speakers refer to nonsalient target objects that do not pop out of the visual domain. This finding suggests that even in the case of many distractors, speakers perform object-by-object scans of the visual scene. The current study investigates whether this systematic processing strategy can be explained by the simplified nature of the scenes that were used, and if different strategies can be identified for photo-realistic visual scenes. In doing so, we conducted a preregistered experiment that manipulated domain size and saturation; replicated the measures of speech onset times; and recorded eye movements to measure speakers’ viewing strategies more directly. Using controlled photo-realistic scenes, we find (1) that speech onset times increase linearly as more distractors are present; (2) that larger domains elicit relatively fewer fixation switches back and forth between the target and its distractors, mainly before speech onset; and (3) that speakers fixate the target relatively less often in larger domains, mainly after speech onset. We conclude that careful object-by-object scans remain the dominant strategy in our photo-realistic scenes, to a limited extent combined with low-level saliency mechanisms. A relevant direction for future research would be to employ less controlled photo-realistic stimuli that do allow for interpretation based on context.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13473","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141459937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum for “Chunking Versus Transitional Probabilities: Differentiating Between Theories of Statistical Learning”","authors":"","doi":"10.1111/cogs.13472","DOIUrl":"10.1111/cogs.13472","url":null,"abstract":"<p>Emerson, S. N. & Conway, C. M. (2023). Chunking versus transitional probabilities: Differentiating between theories of statistical learning. <i>Cognitive Science</i>, <i>47</i>(5), e13284. https://doi.org/10.1111/cogs.13284</p><p>Pre-Registration section lists an incorrect website for the project data. Data for the study can be found at https://osf.io/tnzky or through the full project page at https://osf.io/dr4ec.</p><p>We apologize for the oversight.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13472","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141459935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision Verbs Emerge First in English Acquisition but Touch, not Audition, Follows Second","authors":"Lila San Roque, Elisabeth Norcliffe, Asifa Majid","doi":"10.1111/cogs.13469","DOIUrl":"10.1111/cogs.13469","url":null,"abstract":"<p>Words that describe sensory perception give insight into how language mediates human experience, and the acquisition of these words is one way to examine how we learn to categorize and communicate sensation. We examine the differential predictions of the typological prevalence hypothesis and embodiment hypothesis regarding the acquisition of perception verbs. Studies 1 and 2 examine the acquisition trajectories of perception verbs across 12 languages using parent questionnaire responses, while Study 3 examines their relative frequencies in English corpus data. We find the vision verbs <i>see</i> and <i>look</i> are acquired first, consistent with the typological prevalence hypothesis. However, for children at 12–23 months, touch—not audition—verbs take precedence in terms of their age of acquisition, frequency in child-produced speech, and frequency in child-directed speech, consistent with the embodiment hypothesis. Later, at 24–35 months, frequency rates are observably different, and audition begins to align with what has previously been reported in adult English data. It seems the initial orientation to verbalizing touch over audition in child–caregiver interaction is especially related to the control of physically and socially appropriate behaviors. Taken together, the results indicate children's acquisition of perception verbs arises from the complex interplay of embodiment, language-specific input, and child-directed socialization routines.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13469","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141459939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aspectual Processing Shifts Visual Event Apprehension","authors":"Uğurcan Vurgun, Yue Ji, Anna Papafragou","doi":"10.1111/cogs.13476","DOIUrl":"10.1111/cogs.13476","url":null,"abstract":"<p>What is the relationship between language and event cognition? Past work has suggested that linguistic/aspectual distinctions encoding the internal temporal profile of events map onto nonlinguistic event representations. Here, we use a novel visual detection task to directly test the hypothesis that processing telic versus atelic sentences (e.g., “Ebony folded a napkin in 10 seconds” vs. “Ebony did some folding for 10 seconds”) can influence whether the very same visual event is processed as containing distinct temporal stages including a well-defined endpoint or lacking such structure, respectively. In two experiments, we show that processing (a)telicity in language shifts how people later construe the temporal structure of identical visual stimuli. We conclude that event construals are malleable representations that can align with the linguistic framing of events.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 6","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13476","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141459934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}