Cognition | Pub Date: 2024-10-08 | DOI: 10.1016/j.cognition.2024.105965
Title: Domain-specific updating of metacognitive self-beliefs
Authors: Kelly Hoogervorst, Leah Banellis, Micah G. Allen
Abstract: Metacognitive self-monitoring is thought to be largely domain-general, with numerous prior studies providing evidence of a metacognitive g-factor. The observation of shared inter-individual variance across different measures of metacognition does not, however, preclude the possibility that some aspects may nevertheless be domain-specific. In particular, the degree to which explicit metacognitive beliefs about one's own abilities exhibit domain generality is unknown. Similarly, little is known about how such prior self-beliefs are maintained and updated in the face of new metacognitive experiences. In this study of 330 healthy individuals, we explored metacognitive belief updating across memory, visual, and general-knowledge domains spanning nutritional and socioeconomic facts. We find that across all domains, participants strongly reduced their self-belief (i.e., expressed less confidence in their abilities) after completing a multi-domain metacognition test battery. Using psychological network and cross-correlation analyses, we further found that while metacognitive confidence exhibited strong domain generality, metacognitive belief updating was highly domain-specific, such that participants shifted their confidence specifically according to their performance in each domain. Overall, our findings suggest that metacognitive experiences prompt a shift in self-priors from a more general to a more specific focus.
(Cognition, Volume 254, Article 105965)
Cognition | Pub Date: 2024-10-05 | DOI: 10.1016/j.cognition.2024.105974
Title: Emotion in action: A study on the enactment effect on emotional action sentences
Authors: Silvia Serino, Rossana Actis-Grosso, Marta Maisto, Paola Ricciardelli, Patrizia Steca
Abstract: While abundant literature suggests that both performing congruent actions and emotional stimuli can enhance memory, their combined impact on memory for action phrases remains underexplored. This study investigated the effects of enactment with emotionally charged stimuli on memory performance. Sixty participants encoded action sentences with negative, neutral, or positive emotional connotations using either enactment or verbal-reading methods. Memory performance was assessed through immediate free recall tasks and a delayed yes-no recognition task. Results demonstrated a significant memory advantage for action-enacted sentences compared to verbal reading in recall and recognition tasks. Moreover, recall accuracy was higher for negative action sentences, while recognition performance was enhanced for negative and positive sentences. No interaction was found between encoding type and emotional connotation in memory tasks. Our findings revealed that both enactment and valence independently enhance memory performance, extending the benefits of enactment to emotional stimuli. Furthermore, our results highlight the differential effects of valence on free recall and recognition tasks, suggesting task-specific processes related to memory for negative and positive stimuli.
(Cognition, Volume 254, Article 105974)
Cognition | Pub Date: 2024-10-05 | DOI: 10.1016/j.cognition.2024.105971
Title: Mapping and modeling the semantic space of math concepts
Authors: Samuel Debray, Stanislas Dehaene
Abstract: Mathematics is an underexplored domain of human cognition. While many studies have focused on subsets of math concepts such as numbers, fractions, or geometric shapes, few have ventured beyond these elementary domains. Here, we attempted to map out the full space of math concepts and to answer two specific questions: can distributed semantic models, such as GloVe, provide a satisfactory fit to human semantic judgements in mathematics? And how does this fit vary with education? We first analyzed all of the French and English Wikipedia pages with math content, and used a semi-automatic procedure to extract the 1000 most frequent math terms in both languages. In a second step, we collected extensive behavioral judgements of familiarity and semantic similarity between them. About half of the variance in human similarity judgements was explained by vector embeddings that attempt to capture latent semantic structures based on co-occurrence statistics. Participants' self-reported level of education modulated familiarity and similarity, allowing us to create a partial hierarchy among high-level math concepts. Our results converge onto the proposal of a map of math space, organized as a database of math terms with information about their frequency, familiarity, grade of acquisition, and entanglement with other concepts.
(Cognition, Volume 254, Article 105971)
Cognition | Pub Date: 2024-10-04 | DOI: 10.1016/j.cognition.2024.105969
Title: Communicated priors tune the perception of control
Authors: George Blackburne, Chris D. Frith, Daniel Yon
Abstract: Action allows us to shape the world around us. But to act effectively we need to accurately sense what we can and cannot control. Classic theories across cognitive science suppose that this ‘sense of agency’ is constructed from the sensorimotor signals we experience as we interact with our surroundings. But these sensorimotor signals are inherently ambiguous, and can provide us with a distorted picture of what we can and cannot influence. Here we investigate one way that agents like us might overcome the inherent ambiguity of these signals: by combining noisy sensorimotor evidence with prior beliefs about control acquired through explicit communication with others. Using novel tools to measure and model control decisions, we find that explicit beliefs about the controllability of the environment alter both the sensitivity and bias of agentic choices, meaning that we are both better at detecting and more biased to feel control when we are told to expect it. These seemingly paradoxical effects on agentic choices can be captured by a computational model where expecting to be in control exaggerates the sensitivity or ‘gain’ of the mechanisms we use to detect our influence over our surroundings, making us increasingly sensitised to both true and illusory signs of agency. In combination, these results reveal a cognitive and computational mechanism that allows public communication about what we can and cannot influence to reshape our private sense of control.
(Cognition, Volume 254, Article 105969)
Cognition | Pub Date: 2024-10-04 | DOI: 10.1016/j.cognition.2024.105966
Title: Social perspective-taking influences on metacognition
Authors: Lucas Battich, Elisabeth Pacherie, Julie Grèzes
Abstract: We often effortlessly take the perceptual perspective of others: we represent some aspect of the environment that others currently perceive. However, taking someone's perspective can interfere with one's perceptual processing: another person's gaze can spontaneously affect our ability to detect stimuli in a scene. But it is still unclear whether our cognitive evaluation of those perceptual judgements is also affected. In this study, we investigated whether social perspective-taking can influence participants' metacognitive judgements about their perceptual responses. Participants performed a contrast detection task with a task-irrelevant avatar oriented either congruently or incongruently to the stimulus location. By “blindfolding” the avatar, we tested the influence of social perspective-taking versus domain-general directional orienting. Participants had higher accuracy and perceptual sensitivity with a congruent avatar regardless of the blindfold, suggesting a directional cueing effect. However, their metacognitive efficiency was modulated only by the congruency of a seeing avatar. These results suggest that perceptual metacognitive ability can be socially enhanced by sharing perception of the same objects with others.
(Cognition, Volume 254, Article 105966)
Cognition | Pub Date: 2024-10-04 | DOI: 10.1016/j.cognition.2024.105967
Title: An algorithmic account for how humans efficiently learn, transfer, and compose hierarchically structured decision policies
Authors: Jing-Jing Li, Anne G.E. Collins
Abstract: Learning structures that effectively abstract decision policies is key to the flexibility of human intelligence. Previous work has shown that humans use hierarchically structured policies to efficiently navigate complex and dynamic environments. However, the computational processes that support the learning and construction of such policies remain insufficiently understood. To address this question, we tested 1026 human participants, who made over 1 million choices combined, in a decision-making task where they could learn, transfer, and recompose multiple sets of hierarchical policies. We propose a novel algorithmic account for the learning processes underlying observed human behavior. We show that humans rely on compressed policies over states in early learning, which gradually unfold into hierarchical representations via meta-learning and Bayesian inference. Our modeling evidence suggests that these hierarchical policies are structured in a temporally backward, rather than forward, fashion. Taken together, these algorithmic architectures characterize how the interplay between reinforcement learning, policy compression, meta-learning, and working memory supports structured decision-making and compositionality in a resource-rational way.
(Cognition, Volume 254, Article 105967)
Cognition | Pub Date: 2024-10-04 | DOI: 10.1016/j.cognition.2024.105970
Title: Anticipating multisensory environments: Evidence for a supra-modal predictive system
Authors: Marc Sabio-Albert, Lluís Fuentemilla, Alexis Pérez-Bellido
Abstract: Our perceptual experience is generally framed in multisensory environments abundant in predictive information. Previous research on statistical learning has shown that humans can learn regularities in different sensory modalities in parallel, but it has not yet determined whether multisensory predictions are generated through a modality-specific predictive mechanism or instead, rely on a supra-modal predictive system. Here, across two experiments, we tested these hypotheses by presenting participants with concurrent pairs of predictable auditory and visual low-level stimuli (i.e., tones and gratings). In different experimental blocks, participants had to attend the stimuli in one modality while ignoring stimuli from the other sensory modality (distractors), and perform a perceptual discrimination task on the second stimulus of the attended modality (targets). Orthogonal to the task goal, both the attended and unattended pairs followed transitional probabilities, so targets and distractors could be expected or unexpected. We found that participants performed better for expected compared to unexpected targets. This effect generalized to the distractors but only when relevant targets were expected. Such interactive effects suggest that predictions may be gated by a supra-modal system with shared resources across sensory modalities that are distributed according to their respective behavioural relevance.
(Cognition, Volume 254, Article 105970)
Cognition | Pub Date: 2024-10-02 | DOI: 10.1016/j.cognition.2024.105968
Title: Visual mental imagery of nonpredictive central social cues triggers automatic attentional orienting
Authors: Shujia Zhang, Li Wang, Yi Jiang
Abstract: Previous research has demonstrated that social cues (e.g., eye gaze, walking direction of biological motion) can automatically guide people's focus of attention, a well-known phenomenon called social attention. The current research shows that voluntarily generated social cues via visual mental imagery, without being physically presented, can produce robust attentional orienting similar to the classic social attentional orienting effect. Combining a visual imagery task with a dot-probe task, we found that imagining a non-predictive gaze cue could orient attention towards the gazed-at hemifield. This attentional effect persisted even when the imagined gaze cue was counter-predictive of the target hemifield, and could be generalized to a biological motion cue. Besides, this effect could not be simply attributed to low-level motion signals embedded in gaze cues. More importantly, an eye-tracking experiment carefully monitoring potential eye movements demonstrated that the imagery-induced attentional orienting effect was produced by social cues but not by non-social cues (i.e., arrows), suggesting that this effect is specific to visual imagery of social cues. These findings accentuate the demarcation between social and non-social attentional orienting, and may take a preliminary step in conceptualizing voluntary visual imagery as a form of internally directed attention.
(Cognition, Volume 254, Article 105968)
{"title":"People's judgments of humans and robots in a classic moral dilemma","authors":"Bertram F. Malle , Matthias Scheutz , Corey Cusimano , John Voiklis , Takanori Komatsu , Stuti Thapa , Salomi Aladia","doi":"10.1016/j.cognition.2024.105958","DOIUrl":"10.1016/j.cognition.2024.105958","url":null,"abstract":"<div><div>How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and different evaluations, and different ones in either direction. In 13 studies (<em>N</em> = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: Humans are blamed less than robots specifically for inaction decisions—here, refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work into people's moral judgment of robots and humans.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 105958"},"PeriodicalIF":2.8,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognition | Pub Date: 2024-10-01 | DOI: 10.1016/j.cognition.2024.105964
Title: The assumed motor capabilities of a partner influence motor imagery in a joint serial disc transfer task
Authors: Molly Brillinger, Xiaoye Michael Wang, Timothy N. Welsh
Abstract: Motor imagery (MI) of one's own movements is thought to involve the sub-threshold activation of one's own motor codes. Movement coordination during joint action is thought to occur because co-actors integrate a simulation of their own actions with the simulated actions of the partner. The present experiments gained insight into MI of joint action by investigating if and how the assumed motor capabilities of the imagined partner affected MI. Participants performed a serial disc transfer task alone and then imagined performing the same task alone and with an imagined partner. In the individual tasks, participants transferred all four discs. In the joint task, participants imagined themselves transferring the first two discs and a partner transferring the last two discs. The description of the imagined partner (high/low performer) was manipulated across blocks to determine whether participants adapted their MI of the joint task based on the partner's characteristics. Results revealed that imagined movement times (MTs) were shorter when the imagined partner was described as a ‘high’ performer than as a ‘low’ performer. Interestingly, participants not only adjusted the partner's portion of the task, but also adjusted their own portion: imagined MTs of the first disc transfers were shorter when imagining performing the task with a high performer than with a low performer. These findings suggest that MI is based on the simulation of one's own response codes, and that adapting MI to a partner's movements influences the MI of one's own movements.
(Cognition, Volume 254, Article 105964)