Cognition, Vol. 262, Article 106181 | Pub Date: 2025-05-15 | DOI: 10.1016/j.cognition.2025.106181

Title: Semantic audio-visual congruence modulates visual sensitivity to biological motion across awareness levels
Authors: Stefano Ioannucci, Petra Vetter

Abstract: Whether cross-modal interaction requires conscious awareness of multisensory information, or whether it can occur in the absence of awareness, is still an open question. Here, we investigated whether sounds can enhance detection sensitivity for semantically matching visual stimuli at varying levels of visual awareness. We presented biological motion stimuli of human actions (walking, rowing, sawing) during dynamic continuous flash suppression (CFS) to 80 participants and measured the effect of co-occurring, semantically matching or non-matching action sounds on visual sensitivity (d′). By individually thresholding stimulus contrast, we distinguished participants who detected motion either above or at chance level.

Participants who reliably detected visual motion above chance showed higher sensitivity to upright versus inverted biological motion across all experimental conditions. In contrast, participants detecting visual motion at chance level, i.e. during successful suppression, demonstrated this upright advantage exclusively during trials with semantically congruent sounds. Across the whole sample, the impact of sounds on visual sensitivity increased as participants' visual detection performance decreased, revealing a systematic trade-off between auditory and visual processing. Our findings suggest that semantic congruence between auditory and visual information can selectively modulate biological motion perception when visual awareness is minimal or absent, whereas more robust visual signals enable perception of biological motion independently of auditory input. Thus, semantically congruent sounds may impact visual representations as a function of the level of visual awareness.
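The abstract above reports detection performance as the signal-detection sensitivity index d′. The study's own analysis code is not part of this listing; the following is a minimal sketch of how d′ is conventionally computed from response counts (the log-linear correction and the example counts are illustrative assumptions, not values from the paper).

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the z-transform
    finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant: 45 hits, 15 misses,
# 10 false alarms, 50 correct rejections
print(round(d_prime(45, 15, 10, 50), 2))
```

A d′ of 0 corresponds to chance-level detection (hit rate equal to false-alarm rate), which is how the "at chance" suppression group above would look on this measure.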
Cognition, Vol. 262, Article 106178 | Pub Date: 2025-05-14 | DOI: 10.1016/j.cognition.2025.106178

Title: Beyond syncopation: The number of rhythmic layers shapes the pleasurable urge to move to music
Authors: Alberte B. Seeberg, Tomas E. Matthews, Andreas Højlund, Peter Vuust, Bjørn Petersen

Abstract: People experience the strongest pleasurable urge to move to music (PLUMM) with rhythms of medium complexity, showing an inverted U-shaped relationship. Rhythmic complexity is typically defined by syncopation but likely interacts with the number and instrumentation of rhythmic layers (e.g., snare only vs snare and bass drum) in affecting PLUMM. This study investigated this interaction by comparing PLUMM ratings of rhythms with varying numbers of rhythmic layers and degrees of syncopation.

Two online studies (Study I, n = 108; Study II, n = 46) asked participants to rate how much they wanted to move and the pleasure they felt while listening to rhythms. Each study used 12 rhythms in four versions: 1) snare only (SN) in Study I and bass drum only (BD) in Study II; 2) snare and hi-hat (SN + HH) in Study I and bass drum and hi-hat (BD + HH) in Study II; 3) snare and bass drum (SN + BD) and 4) the original with snare, bass drum, and hi-hat (SN + BD + HH) in both studies, totaling 48 stimuli per study. We tested for linear and quadratic effects of syncopation and rhythmic layers on PLUMM ratings.

Study I showed a significant interaction between syncopation and rhythmic layers. The SN + BD + HH versions exhibited the strongest inverted U as an effect of syncopation, followed by SN + BD and SN + HH, while SN showed a near-flat pattern of ratings across syncopation levels.

Study II yielded similar findings, but differences between versions were smaller, and the interaction was mainly driven by differences between BD and BD + HH and between SN + BD and SN + BD + HH, especially at moderate syncopation levels.

These findings suggest that the PLUMM response is shaped by the number of rhythmic layers, the roles that the different instruments play, and the way that they interact with each other and with syncopation, thus extending our understanding of the rhythmic features that drive motor and hedonic responses to music.
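The inverted U described above is exactly what a quadratic effect of syncopation captures: ratings peak at medium complexity, and a negative quadratic coefficient signals the inverted U. As a purely illustrative sketch (the rating values below are made up, not the study's data), a quadratic can be fit exactly through three (syncopation, rating) points and its peak located:

```python
def quadratic_through(p0, p1, p2):
    """Fit y = a*x^2 + b*x + c exactly through three points (Lagrange form)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a = (y0 / ((x0 - x1) * (x0 - x2))
         + y1 / ((x1 - x0) * (x1 - x2))
         + y2 / ((x2 - x0) * (x2 - x1)))
    b = (-y0 * (x1 + x2) / ((x0 - x1) * (x0 - x2))
         - y1 * (x0 + x2) / ((x1 - x0) * (x1 - x2))
         - y2 * (x0 + x1) / ((x2 - x0) * (x2 - x1)))
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

# Hypothetical mean ratings at low (x=1), medium (x=5), high (x=9) syncopation
a, b, c = quadratic_through((1, 3.0), (5, 7.0), (9, 3.5))
peak = -b / (2 * a)  # vertex of the parabola
print(a < 0, round(peak, 2))  # a < 0 indicates an inverted U
```

In the actual analyses one would fit linear and quadratic syncopation terms to many ratings by least squares; this three-point version just makes the geometry of the inverted U concrete.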
Cognition, Vol. 262, Article 106177 | Pub Date: 2025-05-13 | DOI: 10.1016/j.cognition.2025.106177

Title: Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions
Authors: Michael Laakasuo, Anton Kunnari, Kathryn Francis, Michaela Jirout Košová, Robin Kopecký, Paolo Buttazzoni, Mika Koverola, Jussi Palomäki, Marianna Drosinou, Ivar Hannikainen

Abstract: A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents are subject to greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying it. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b and 3), but, importantly, disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b and 3), and when they carried out active euthanasia (e.g., providing a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors – the level of automation and the patient's autonomy – that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help to outline the conditions under which stakeholders disfavor AI over human doctors in clinical settings.
Cognition, Vol. 262, Article 106175 | Pub Date: 2025-05-12 | DOI: 10.1016/j.cognition.2025.106175

Title: How much face identity information is required for face recognition?
Authors: Mintao Zhao, Isabelle Bülthoff

Abstract: Many studies have shown that degrading face identity information impairs face recognition; however, it remains unclear at what point such degradation reaches the limit of our face recognition ability. Here we systematically decreased face identity information by morphing an increasing number of faces together and investigated how much identity information is required to recognize a face in a morph. Our results show that participants could identify half of the faces mixed in 3-identity morphs using only their memory of these faces (Experiment 1) and, when perceptual information was available, they could recognize two of the three faces mixed in a morph (Experiment 2). When we systematically reduced the contribution of each identity to a face morph from 50 % to 6.25 % (i.e., morphing 2 to 16 faces together; Experiments 3 and 4), participants could still consistently recognize faces in a morph containing as little as 12.5 % of their identity information. Moreover, familiarity with faces enhanced participants' performance, whether they were asked to recognize all faces mixed in a morph in one go (Experiments 1 and 2) or to recognize them individually (Experiments 3 and 4). Finally, image-based similarity between the faces and morphs could predict how decreasing identity information impairs face recognition performance. Together, these results not only help quantify the minimum information required for face recognition but also offer new insights into the representational differences between familiar and unfamiliar faces.
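The percentages quoted above follow directly from equal-weight morphing: when n faces are mixed in equal proportion, each identity contributes 100/n percent of the morph. A one-line check (illustrative only) reproduces the figures in the abstract:

```python
def identity_share(n_faces):
    """Percent contribution of each identity in an equal-weight n-face morph."""
    return 100.0 / n_faces

# 2 faces -> 50 %, 8 faces -> 12.5 %, 16 faces -> 6.25 %, as in the abstract
print({n: identity_share(n) for n in (2, 3, 8, 16)})
```

So the reported recognition threshold of 12.5 % identity information corresponds to an 8-face morph, well below the 3-identity morphs of Experiments 1 and 2.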
Cognition, Vol. 262, Article 106165 | Pub Date: 2025-05-10 | DOI: 10.1016/j.cognition.2025.106165

Title: Maintaining visual stability in naturalistic scenes: The roles of trans-saccadic memory and default assumptions
Authors: Yong Min Choi, Tzu-Yao Chiu, Jake Ferreira, Julie D. Golomb

Abstract: How is visual stability maintained across saccades? One theory posits that the visual system assumes the visual world has not changed during the saccade, and that trans-saccadic memory is scrutinized only when there is strong evidence against external stability. In support of this, prior studies demonstrated a "blanking effect", whereby sensitivity to trans-saccadic change increases when a short blank is inserted immediately after saccade onset. However, there remains a considerable gap between these findings, obtained with simple visual stimuli, and an understanding of trans-saccadic stability for rich naturalistic scenes. Here we tested human observers in a blanking paradigm with naturalistic scene images, using artificial intelligence (AI)-generated "scene wheel" stimuli that varied in a continuous and quantifiably controlled manner. Psychometric modeling revealed that inserting a brief blank screen during a saccade increased sensitivity to trans-saccadic scene changes and decreased the stability bias. These effects occurred only when observers made actual eye movements, not when eye movements were simulated with retinal image shifts. These findings demonstrate that trans-saccadic memory of complex scenes and an overarching stability assumption work in tandem to achieve stable perceptual experience in natural environments.
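The abstract refers to psychometric modeling of change-detection responses. The authors' specific model is not given in this listing; as a generic sketch, a cumulative-Gaussian psychometric function can be fit by grid search, where the location parameter mu captures bias (e.g., a stability bias shifting it away from zero) and sigma is inversely related to sensitivity. The observer data below are synthetic, not from the study.

```python
from statistics import NormalDist

def fit_psychometric(levels, p_change, mus, sigmas):
    """Least-squares grid-search fit of P(report change | x) = Phi((x - mu) / sigma)."""
    best = (float("inf"), None, None)
    for mu in mus:
        for sigma in sigmas:
            err = sum((NormalDist(mu, sigma).cdf(x) - p) ** 2
                      for x, p in zip(levels, p_change))
            if err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

# Synthetic observer: true mu = 4.0 (bias), true sigma = 1.5 (inverse sensitivity)
levels = list(range(9))
p_change = [NormalDist(4.0, 1.5).cdf(x) for x in levels]
mu_hat, sigma_hat = fit_psychometric(levels, p_change,
                                     mus=[m / 10 for m in range(20, 61)],
                                     sigmas=[s / 10 for s in range(5, 31)])
print(mu_hat, sigma_hat)
```

On this scheme, the blanking effect reported above would show up as a smaller fitted sigma (higher sensitivity) and a mu closer to zero (reduced stability bias) in blank trials than in no-blank trials.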
Cognition, Vol. 262, Article 106169 | Pub Date: 2025-05-10 | DOI: 10.1016/j.cognition.2025.106169

Title: Prosody enhances learning of statistical dependencies from continuous speech streams in adults
Authors: Soila Kuuluvainen, Saara Kaskivuo, Martti Vainio, Eleonore Smalle, Riikka Möttönen

Abstract: Foreign languages sound like seamless streams of speech sounds, without pauses between words and phrases. This makes it challenging for the listener to discover the underlying structure of a new language. However, all spoken languages have a melody, and changes in pitch, syllable duration and stress can provide prosodic cues to word and phrase boundaries. It remains unclear how adults use prosodic cues to crack the structure of a new language. Here, we investigated how pitch patterns affect the ability to learn adjacent and nonadjacent statistical dependencies from novel, artificial speech streams. In a series of eight online experiments across two studies, we presented native Finnish speakers with short, two-minute speech streams with a hidden probabilistic structure that did or did not include prosodic pitch patterns. We measured learning outcomes using a forced-choice recognition task along with confidence ratings. In Study 1, we found that learning of adjacent dependencies was boosted by familiar-to-listener (i.e., typical of the Finnish language) prosodic pitch patterns but not by unfamiliar-to-listener or random prosodic pitch patterns. In Study 2, we found that more complex nonadjacent dependencies were learned only in the presence of familiar-to-listener prosodic patterns. Intriguingly, prosodic patterns also enabled concurrent learning of multiple adjacent and nonadjacent dependencies in speech. Moreover, they enhanced participants' confidence in remembering adjacent, but not nonadjacent, dependencies.

Together, the results suggest that adults use language-background-dependent prosodic patterns to acquire novel linguistic knowledge from speech streams quickly and efficiently. The findings support the idea that prosody plays an important role in language learning, making the underlying statistical structure of spoken languages more accessible and learnable for listeners.
Cognition, Vol. 262, Article 106135 | Pub Date: 2025-05-08 | DOI: 10.1016/j.cognition.2025.106135

Title: Medial temporal cortex supports object perception by integrating over visuospatial sequences
Authors: Tyler Bonnen, Anthony D. Wagner, Daniel L.K. Yamins

Abstract: Perception unfolds across multiple timescales. For humans and other primates, many object-centric visual attributes can be inferred 'at a glance' (i.e., given <200 ms of visual information), an ability supported by ventral temporal cortex (VTC). Other perceptual inferences require more time; to determine a novel object's identity, we might need to represent its unique configuration of visual features, requiring multiple 'glances.' Here we evaluate whether medial temporal cortex (MTC), downstream from VTC, supports object perception by integrating over such visuospatial sequences. We first compare human visual inferences directly to electrophysiological recordings from macaque VTC. While human performance 'at a glance' is approximated by a linear readout of VTC, participants radically outperform VTC given longer viewing times (i.e., >200 ms). Next, we leverage a stimulus set that enables us to characterize MTC involvement in these temporally extended visual inferences. We find that human visual inferences 'at a glance' resemble the deficits observed in MTC-lesioned human participants. By measuring gaze behaviors during these temporally extended viewing periods, we find that participants sequentially sample task-relevant features via multiple saccades/fixations. These patterns of visuospatial attention are both reliable across participants and necessary for MTC-dependent visual inferences. These data reveal complementary neural systems that support visual object perception: VTC provides a rich set of visual features 'at a glance', while MTC integrates over the sequential outputs of VTC to support object-level inferences.
Cognition, Vol. 262, Article 106172 | Pub Date: 2025-05-07 | DOI: 10.1016/j.cognition.2025.106172

Title: The true colors of reading: Literacy enhances lexical-semantic processing in rapid automatized and discrete object naming
Authors: Susana Araújo, Tânia Fernandes, Margarida Cipriano, Laura Mealha, Catarina Silva-Nunes, Falk Huettig

Abstract: Semantic knowledge is a defining property of human cognition, profoundly influenced by cultural experiences. In this study, we investigated whether literacy enhances lexical-semantic processing independently of schooling. Three groups of neurotypical adults - unschooled illiterates, unschooled ex-illiterates, and schooled literates - from the same residential and socioeconomic background in Portugal were tested on serial rapid automatized naming (RAN) and on discrete naming of everyday objects (concrete concepts) and basic color patches (abstract concepts). The performance of readers, whether schooled literate or unschooled ex-illiterate, was not affected by stimulus category, whereas illiterates were much slower on color than object naming, irrespective of task. This naming advantage promoted by literacy was not significantly mediated by vocabulary size. We conclude that literacy per se, regardless of schooling, contributes to faster naming of depicted concepts, particularly those of more abstract categories. Our findings provide further evidence that literacy influences cognition beyond the mere accumulation of knowledge: Literacy enhances the quality and efficiency of lexical-semantic representations and processing.
Cognition, Vol. 262, Article 106166 | Pub Date: 2025-05-06 | DOI: 10.1016/j.cognition.2025.106166

Title: Who benefits from debiasing?
Authors: Esther Boissin, Gordon Pennycook

Abstract: Reasoning errors significantly impede sound decision-making. Despite advancements in debiasing interventions designed to improve reasoning, not all individuals benefit from these approaches. This study explores the individual differences that contribute to variability in debiasing success, focusing on thinking dispositions, cognitive capacities, and pre-training conflict detection. Using the two-response paradigm, we measured intuitive and deliberative responses both before and after a base-rate neglect debiasing intervention to better understand the relationship between individual differences and training effects. Participants were categorized into three groups: consistently biased (those who did not benefit from the training), improved (those who showed better performance either intuitively or deliberately after the training), and consistently correct (those who produced correct responses without needing the training). Each group differed across the measured variables, with the improved group falling between the consistently correct and consistently biased groups. Our findings indicate that thinking dispositions, such as open-minded thinking, played a more critical role in debiasing success than cognitive capacities. Although cognitive capacity does predict overall accuracy in reasoning, once thinking dispositions were taken into account, cognitive capacity did not predict the success of the training effect. We also found that conflict detection served as a signal prompting additional cognitive effort during the intervention, suggesting that the benefit from training depended on both recognizing errors and the motivation to engage in reflective thinking during the training. These findings challenge the idea that cognitive abilities are the primary drivers of reasoning improvement and emphasize the crucial role of thinking dispositions in achieving debiasing success.
Cognition, Vol. 262, Article 106168 | Pub Date: 2025-05-06 | DOI: 10.1016/j.cognition.2025.106168

Title: Change blindness, subset segmentation, and the perceptual underestimation of subset numerosity
Authors: Katelyn Becker, Eliana Dellinger, Frank H. Durgin

Abstract: How well can humans perceptually estimate subsets of collections of differently colored dots? People can simultaneously evaluate the numerosity of at least two color-defined subsets. But when equal (largish) numbers of light gray and white dots are presented on a medium-gray background, there appear to be fewer white dots than gray dots. The present paper reports six experiments designed to test the hypothesis that these behaviors are due to figure-ground segmentation based on color similarity, which can be incomplete and thus lead to perceptual underestimation of foregrounded dots. Ironically, subset matching is most accurate for sets of dots that are difficult to segment, such as light gray among white. This is demonstrated using a color-change detection task to show that (1) accurate subset estimation is accomplished only for sets that resist foreground selection, and (2) even stereoscopically backgrounded white dots fail to be segmented (i.e., are at chance for color-change detection) when a frontal plane of gray dots is more successfully segmented. Although explicit attentional biasing is shown to shift performance between dots differing in chromatic color, it does not improve performance at selecting light gray dots among white. It is also shown that the perceptual underestimation of supersets of mixed colors may be consistent with combining an underestimated foreground with an accurately estimated background.