Open Mind. Pub Date: 2025-04-22. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00202
Sarah Brocard, Pavel V Voinov, Balthasar Bickel, Klaus Zuberbühler
{"title":"Spontaneous Encoding of Event Roles in Hominids.","authors":"Sarah Brocard, Pavel V Voinov, Balthasar Bickel, Klaus Zuberbühler","doi":"10.1162/opmi_a_00202","DOIUrl":"https://doi.org/10.1162/opmi_a_00202","url":null,"abstract":"<p><p>When observing social interactions, humans rapidly and spontaneously encode events in terms of agents, patients and causal relations. This propensity can be made visible empirically with the switch cost paradigm, a reaction time experiment and well-established tool of cognitive psychology. We adapted the paradigm for non-human primates to test whether non-linguistic animals encoded event roles in the same way. Both human and non-human participants were requested to attend to different social interactions between two artificially coloured (blue or green) actors and to target the actor masked by a specified colour (e.g., blue), regardless of her role. We found that when we switched the targeted colour mask from agents to patients (or vice versa) the processing time significantly increased in both hominid species (i.e., human and chimpanzee), suggesting that event roles were spontaneously encoded and subsequently interfered with our simplistic colour search task. We concluded that the propensity to encode social events in terms of agents and patients was a common feature of hominid cognition, as demonstrated in several human and one chimpanzee participant, pointing towards an evolutionarily old and phylogenetically shared cognitive mechanism central to language processing.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"559-575"},"PeriodicalIF":0.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058332/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143999442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-04-22. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00204
Adam Morris
{"title":"Invisible Gorillas in the Mind: Internal Inattentional Blindness and the Prospect of Introspection Training.","authors":"Adam Morris","doi":"10.1162/opmi_a_00204","DOIUrl":"10.1162/opmi_a_00204","url":null,"abstract":"<p><p>Much of high-level cognition appears inaccessible to consciousness. Countless studies have revealed mental processes-like those underlying our choices, beliefs, judgments, intuitions, etc.-which people do not notice or report, and these findings have had a widespread influence on the theory and application of psychological science. However, the interpretation of these findings is uncertain. Making an analogy to perceptual consciousness research, I argue that much of the unconsciousness of high-level cognition is plausibly due to <i>internal inattentional blindness</i>: missing an otherwise consciously-accessible internal event because your attention was elsewhere. In other words, rather than being structurally unconscious, many higher mental processes might instead be \"preconscious\", and would become conscious if a person attended to them. I synthesize existing indirect evidence for this claim, argue that it is a foundational and largely untested assumption in many applied interventions (such as therapy and mindfulness practices), and suggest that, with careful experimentation, it could form the basis for a long-sought-after science of introspection training.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"606-634"},"PeriodicalIF":0.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12136916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144226944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-04-02. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00188
Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky
{"title":"Towards Human-Like Emergent Communication via Utility, Informativeness, and Complexity.","authors":"Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky","doi":"10.1162/opmi_a_00188","DOIUrl":"https://doi.org/10.1162/opmi_a_00188","url":null,"abstract":"<p><p>Two prominent, yet contrasting, theoretical views are available to characterize the underlying drivers of language evolution: on the one hand, task-specific utility maximization; on the other hand, task-agnostic communicative efficiency. The latter has recently been grounded in an information-theoretic tradeoff between communicative complexity and informativeness, known as the Information Bottleneck (IB) principle. Here, we integrate these two views and propose an information-constrained emergent communication framework that trades off utility, informativeness, and complexity. To train agents within our framework, we develop a method, called Vector-Quantized Variational Information Bottleneck (VQ-VIB), that allows agents to interact using information-constrained discrete communication embedded in a continuous vector space. We test this approach in three domains and show that pressure for informativeness facilitates faster learning and better generalization to novel domains. At the same time, limiting complexity yields better alignment with actual human languages. Lastly, we find that VQ-VIB outperforms previously proposed emergent communication methods; we posit that this is due to the semantically-meaningful communication embedding space that VQ-VIB affords. Overall, our work demonstrates the role of cognitively-motivated optimality principles in inducing aspects of human-like communication among artificial agents.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"418-451"},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984795/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144052275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Present-Focused Behavior as a Rational Adaptation to Precarity.","authors":"Arjun Mitra, Narayanan Srinivasan, Nisheeth Srivastava","doi":"10.1162/opmi_a_00195","DOIUrl":"https://doi.org/10.1162/opmi_a_00195","url":null,"abstract":"<p><p>Inter-temporal impulsivity has been implicated in several theoretical explanations of the self-reinforcing nature of low socioeconomic status (SES). However, how exactly this interaction transpires is yet to be identified. We hypothesize that impulsivity arises from planning failures due to unpredictable resource demands, and people learn to adapt to this by being present-focused. We tested this hypothesis across three studies using a novel paradigm in which participants used a farming simulator and chose crops with different risk and time preferences. We found that participants' revealed time preferences adaptively shortened when they faced resource shocks and expanded in the absence of such shocks. We also found greater shrinkage of temporal horizons when these shocks were unpredictable rather than predictable. Our work shows that irrationality need not be invoked to explain the occurrence of present-bias in low SES individuals, and that such behavior may simply be a rational adaptation to the environmental demands of planning under precarity.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"452-474"},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984791/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-04-02. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00199
Sam Passmore, Birgit Hellwig, Rowena Garcia, Evan Kidd
{"title":"The Scientific and Cultural Cost of Convenience Sampling in the Face of Rising Language Endangerment: Highlighting the Role of Language Acquisition.","authors":"Sam Passmore, Birgit Hellwig, Rowena Garcia, Evan Kidd","doi":"10.1162/opmi_a_00199","DOIUrl":"https://doi.org/10.1162/opmi_a_00199","url":null,"abstract":"<p><p>We live in an unprecedented era of language endangerment and loss. In the midst of this crisis, it is becoming more and more evident that the psychological and cognitive sciences know very little about how most of the world's languages are acquired, represented, and processed. Therefore, the opportunity to understand our most important and defining species-specific trait is being rapidly lost. In this Perspective, we highlight the extent of this problem, focusing on a key group at the heart of language transmission and loss-child language learners. We show that, due to sampling biases, very little is known about how children learn much of the vast corners of the linguistic design space, and that our opportunity to do so this is fast running out. We end by arguing for the greater integration of the academy, government, and community in addressing this problem.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"501-514"},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984793/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144031449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-04-02. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00197
Erin E Campbell, Charles P Davis, Martin Zettersten, Molly Cooke, Derek Houston, Naomi Caselli, Elika Bergelson
{"title":"Early Production of Imperceptible Words by Infants and Toddlers Born Deaf or Blind.","authors":"Erin E Campbell, Charles P Davis, Martin Zettersten, Molly Cooke, Derek Houston, Naomi Caselli, Elika Bergelson","doi":"10.1162/opmi_a_00197","DOIUrl":"https://doi.org/10.1162/opmi_a_00197","url":null,"abstract":"<p><p>We investigate the roles of linguistic and sensory experience in the early-produced visual, auditory, and abstract words of congenitally-blind toddlers, deaf toddlers, and typically-sighted/hearing peers. We also assess the role of language access by comparing early word production in children learning English or American Sign Language (ASL) from birth, versus at a delay. Using parental report data on child word production from the MacArthur-Bates Communicative Development Inventory, we found evidence that while children produced words referring to imperceptible referents before age 2, such words were less likely to be produced relative to words with perceptible referents. For instance, blind (vs. sighted) children said fewer highly visual words like \"blue\" or \"see\"; deaf signing (vs. hearing) children produced fewer auditory signs like hear. Additionally, in spoken English and ASL, children who received delayed language access were less likely to produce words overall. These results demonstrate and begin to quantify how linguistic and sensory access may influence which words young children produce.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"475-500"},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-03-03. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00193
Céline Pozniak, Barbara Hemforth
{"title":"Interference of Implicit Causality in Relative Clause Processing.","authors":"Céline Pozniak, Barbara Hemforth","doi":"10.1162/opmi_a_00193","DOIUrl":"10.1162/opmi_a_00193","url":null,"abstract":"<p><p>Differences in the processing of subject and object relative clauses have been explained by a combination of syntactic, semantic, and pragmatic factors, such as a general subject advantage based on syntactic constraints, effects of animacy, and the discourse status of relative clause internal subjects. In this paper, we will focus on a factor related to verb meaning, the implicit causality of the verb, which biases the principal causer of the event described by the verb. Depending on whether the bias is on the subject or the object, implicit causality can conflict with the foregrounded antecedent of the relative clause, leading to increased difficulty in comprehension. We tested this hypothesis by manipulating implicit causality in subject and object relative clauses. We used both offline (acceptability judgment task) and online (self-paced reading task) methods to observe at which stage of processing implicit causality influences comprehension. Our findings from acceptability judgments showed that object relative clauses with subject-biased verbs were the least acceptable and the least understood. Conversely, object relative clauses with object-biased verbs were as acceptable and easy to understand as subject relative clauses in French. However, results from self-paced reading indicated that subject-biased verbs were more difficult to process regardless of the construction, suggesting that the integration of implicit causality occurs at a later level of processing, such as in acceptability judgments and comprehension questions. Further acceptability judgment tasks suggested that implicit causality influences relative clause acceptability beyond word order and thematic roles. We propose linking the role of implicit causality with the function of a restrictive relative clause and introduce the Aboutness Hypothesis to explain relative clause processing: a relative clause is more acceptable and easier to understand when everything contributes to making the head its optimal aboutness topic.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"364-400"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11964117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143774501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-03-03. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00194
Sami R Yousif, Lily B Goldstein, Elizabeth M Brannon
{"title":"Children's Understanding of Topological Relations.","authors":"Sami R Yousif, Lily B Goldstein, Elizabeth M Brannon","doi":"10.1162/opmi_a_00194","DOIUrl":"10.1162/opmi_a_00194","url":null,"abstract":"<p><p>A core aim of developmental cognitive science is to uncover the basic building blocks of human thought. For instance, work revealing that even young children, adults without formal education, and distant animal species are sensitive to basic Euclidean properties indicates that humans may be endowed with some primitive understanding of Euclidean geometry. But what about other forms of geometry? Here, we explore children's sensitivity to topological spatial forms. We show that children, like adults, spontaneously distinguish and match items in accordance with their topological relations. As well, we show that children's judgments about object similarity are remarkably consistent with adults', indicating stability in object concepts throughout the lifespan. Finally, we compare children's sensitivity to various topological forms with their sensitivity to geometric properties like curvature, perpendicularity, and symmetry, and find that while there is some variability in performance across all the features tested, overall performance for geometric vs. topological is comparable. Collectively, these findings suggest that even young children have an intuitive understanding of topological relations and suggest that topological relations may be among the building blocks of human visuospatial representation.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"401-417"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11964115/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143774466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-02-16. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00192
Igor Bascandziev, Patrick Shafto, Elizabeth Bonawitz
{"title":"Prosodic Cues Support Inferences About the Question's Pedagogical Intent.","authors":"Igor Bascandziev, Patrick Shafto, Elizabeth Bonawitz","doi":"10.1162/opmi_a_00192","DOIUrl":"10.1162/opmi_a_00192","url":null,"abstract":"<p><p>Questions may be asked with an intent to acquire new information from the recipient (i.e., information-seeking questions) or with the intent to teach (i.e., pedagogical questions). Understanding how the questions' recipients infer the intent of questions is important, because the recipients' inferences have important consequences for reasoning and learning. In the present series of studies, we tested the hypothesis that i) askers use prosodic cues-an ever-present signal-to encode information-seeking and pedagogical intent both in deliberate and spontaneous speech and that ii) adults and children can draw appropriate inferences about the question's intent on the basis of prosody alone. In Experiments 1 and 2, we found that naïve adult listeners and children aged 5 years and above have the capacity to explicitly identify which asker has an intention to teach on the basis of prosody alone. In Experiment 3, we found that parents' spontaneous speech in pedagogical or information-seeking contexts is appropriately recognized by naïve listeners as pedagogical or information-seeking. Thus, the intent of pedagogical and information-seeking questions is acoustically encoded by askers, and it can be appropriately decoded by recipients.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"340-363"},"PeriodicalIF":0.0,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11864796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143516872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Mind. Pub Date: 2025-02-16. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00189
Thomas P O'Connell, Tyler Bonnen, Yoni Friedman, Ayush Tewari, Vincent Sitzmann, Joshua B Tenenbaum, Nancy Kanwisher
{"title":"Approximating Human-Level 3D Visual Inferences With Deep Neural Networks.","authors":"Thomas P O'Connell, Tyler Bonnen, Yoni Friedman, Ayush Tewari, Vincent Sitzmann, Joshua B Tenenbaum, Nancy Kanwisher","doi":"10.1162/opmi_a_00189","DOIUrl":"10.1162/opmi_a_00189","url":null,"abstract":"<p><p>Humans make rich inferences about the geometry of the visual world. While deep neural networks (DNNs) achieve human-level performance on some psychophysical tasks (e.g., rapid classification of object or scene categories), they often fail in tasks requiring inferences about the underlying shape of objects or scenes. Here, we ask whether and how this gap in 3D shape representation between DNNs and humans can be closed. First, we define the problem space: after generating a stimulus set to evaluate 3D shape inferences using a match-to-sample task, we confirm that standard DNNs are unable to reach human performance. Next, we construct a set of candidate 3D-aware DNNs including 3D neural field (Light Field Network), autoencoder, and convolutional architectures. We investigate the role of the learning objective and dataset by training single-view (the model only sees one viewpoint of an object per training trial) and multi-view (the model is trained to associate multiple viewpoints of each object per training trial) versions of each architecture. When the same object categories appear in the model training and match-to-sample test sets, multi-view DNNs approach human-level performance for 3D shape matching, highlighting the importance of a learning objective that enforces a common representation across viewpoints of the same object. Furthermore, the 3D Light Field Network was the model most similar to humans across all tests, suggesting that building in 3D inductive biases increases human-model alignment. Finally, we explore the generalization performance of multi-view DNNs to out-of-distribution object categories not seen during training. Overall, our work shows that multi-view learning objectives for DNNs are necessary but not sufficient to make similar 3D shape inferences as humans and reveals limitations in capturing human-like shape inferences that may be inherent to DNN modeling approaches. We provide a methodology for understanding human 3D shape perception within a deep learning framework and highlight out-of-domain generalization as the next challenge for learning human-like 3D representations with DNNs.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"305-324"},"PeriodicalIF":0.0,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11864798/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143516871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}