{"title":"Brain stimulation over dorsomedial prefrontal cortex causally affects metacognitive bias but not mentalising.","authors":"Rebekka S Mattes, Alexander Soutschek","doi":"10.3758/s13415-025-01277-1","DOIUrl":"https://doi.org/10.3758/s13415-025-01277-1","url":null,"abstract":"<p><p>Despite the importance of metacognition for everyday decision-making, its neural substrates are far from understood. Recent neuroimaging studies linked metacognitive processes to dorsomedial prefrontal cortex (dmPFC), a region known to be involved in monitoring task difficulty. dmPFC is also thought to be involved in mentalising, consistent with theoretical accounts of metacognition as a self-directed subform of mentalising. However, it is unclear whether, and if so how, dmPFC causally affects metacognitive judgements, and whether this can be explained by a more general role of dmPFC for mentalising. To test this, participants performed two tasks targeting metacognition in perceptual decisions and mentalising whilst undergoing excitatory anodal versus sham dmPFC tDCS. dmPFC tDCS significantly decreased subjective confidence reports while leaving first-level performance in accuracy and reaction times unaffected, suggesting a causal contribution of dmPFC to representing metacognitive bias. Furthermore, we found no effect of dmPFC tDCS on neither metacognitive sensitivity and efficiency nor on mentalising, providing no evidence for an overlap between perceptual metacognition and mentalising in the dmPFC. Together, our findings highlight the dmPFC's causal role in metacognition by representing subjective confidence during evaluations of cognitive performance.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approach-avoidance conflict recruits lateral frontoparietal and cinguloinsular networks in a predator-prey game setting.","authors":"Yuqian Ni, Robert F Potter, Thomas W James","doi":"10.3758/s13415-025-01278-0","DOIUrl":"https://doi.org/10.3758/s13415-025-01278-0","url":null,"abstract":"<p><p>Objects associated with both reward and threat produce approach-avoidance conflict (AAC). Although our day-to-day encounters with AAC objects are dynamic and interactive, the cognitive neuroscience literature on AAC is largely based on experiments that use static stimuli. Here, we used a dynamic, interactive, video-game environment to test neural substrates implicated in processing AAC in a more ecologically valid setting. While undergoing functional magnetic resonance imaging (fMRI), subjects (N = 31) played a predator-prey video game, guiding an avatar through a maze containing six types of aversive or appetitive agents. Of the six agent types, two were \"non-AAC\" and either always healed or always harmed the player's avatar on contact. The other four were \"AAC,\" healing or harming the avatar probabilistically. Results revealed that imminence (inverse of distance) between a player's avatar and an environmental agent was a strong predictor of activation in three brain networks: the cinguloinsular (CI), dorsal frontoparietal (DFP), and occipitotemporal (OT). Additionally, two distinct temporal patterns of heightened activation with AAC agents emerged in two networks: the CI network responded with a transient spike of activation at trial onsets, followed by rapid decay, whereas the lateral frontoparietal (LFP) network showed sustained activation across the whole trial. We conclude that, in an interactive, dynamic setting, the roles of the CI and LFP networks appear to be complimentary, with the CI involved in distinguishing between AAC and non-AAC agents when they first appeared and the LFP involved in maintaining a behavioral mode related to the level of AAC.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm.","authors":"Maximilian Reger, Oleg Vrabie, Gregor Volberg, Angelika Lingnau","doi":"10.3758/s13415-025-01272-6","DOIUrl":"https://doi.org/10.3758/s13415-025-01272-6","url":null,"abstract":"<p><p>Being able to quickly recognize other people's actions lies at the heart of our ability to efficiently interact with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, stimulus presentation times that are required to extract information about actions, objects, and scenes to our knowledge have not yet been directly compared. To address this gap in the literature, we compared the recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than objects (68 ms) and scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest thresholds for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these are presented in the same scene but accumulates faster for actions that reflect static body posture recognition than for objects and scenes.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143516419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"N200 and late components reveal text-emoji congruency effect in affective theory of mind.","authors":"Yi Zhong, Haiyu Zhong, Qiong Chen, Xiuling Liang, Feng Xiao, Fei Xin, Qingfei Chen","doi":"10.3758/s13415-025-01270-8","DOIUrl":"https://doi.org/10.3758/s13415-025-01270-8","url":null,"abstract":"<p><p>Emojis are thought to be important for online communication, affecting not only our emotional state, but also our ability to infer the sender's emotional state, i.e., the affective theory of mind (aToM). However, it is unclear the role of text-emoji valence congruency in aToM judgements. Participants were presented with positive, negative, or neutral instant messages followed by positive or negative emoji and were required to infer the sender's emotional state as making valence and arousal ratings. Participants rated that senders felt more positive when they displayed positive emojis as opposed to negative emojis, and the senders were more aroused when valence between emoji and sentence was congruent. Event-related potentials were time-locked to emojis and analyzed by robust mass-univariate statistics, finding larger N200 for positive emojis relative to negative emojis in the negative sentence but not in the positive and neutral sentences, possibly reflecting conflict detection. Furthermore, the N400 effect was found between emotional and neutral sentences, but not between congruent and incongruent conditions, which may reflect a rapid bypassing of deeper semantic analysis. Critically, larger later positivity and negativity (600-900 ms) were found for incongruent combinations relative to congruent combinations in emotional sentences, which was more pronounced for positive sentence, reflecting the cognitive efforts needed for reevaluating the emotional meaning of emotional state attribution under incongruent combinations. These results suggest that emoji valence exerts different effects on positive and negative aToM judgments, and affective processing of sentence-emoji combinations precedes semantic processing, highlighting the importance of emojis in aToM.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The face of illusory truth: Repetition of information elicits affective facial reactions predicting judgments of truth.","authors":"Annika Stump, Torsten Wüstenberg, Jeffrey N Rouder, Andreas Voss","doi":"10.3758/s13415-025-01266-4","DOIUrl":"https://doi.org/10.3758/s13415-025-01266-4","url":null,"abstract":"<p><p>People tend to judge repeated information as more likely true compared with new information. A key explanation for this phenomenon, called the illusory truth effect, is that repeated information can be processed more fluently, causing it to appear more familiar and trustworthy. To consider the function of time in investigating its underlying cognitive and affective mechanisms, our design comprised two retention intervals. Seventy-five participants rated the truth of new and repeated statements 10 min, as well as 1 week after first exposure while spontaneous facial expressions were assessed via electromyography. Our data demonstrate that repetition results not only in an increased probability of judging information as true (illusory truth effect) but also in specific facial reactions indicating increased positive affect, reduced mental effort, and increased familiarity (i.e., relaxations of musculus corrugator supercilii and frontalis) during the evaluation of information. The results moreover highlight the relevance of time: both the repetition-induced truth effect as well as EMG activities, indicating increased positive affect and reduced mental effort, decrease significantly after a longer interval.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: The influence of social status and promise levels in trust games: An Event-Related Potential (ERP) study.","authors":"Mei Li, DengFang Tang, Wenbin Pan, Yujie Zhang, Jiachen Lu, Hong Li","doi":"10.3758/s13415-025-01274-4","DOIUrl":"https://doi.org/10.3758/s13415-025-01274-4","url":null,"abstract":"","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine learning classification of active viewing of pain and non-pain images using EEG does not exceed chance in external validation samples.","authors":"Tyler Mari, S Hasan Ali, Lucrezia Pacinotti, Sarah Powsey, Nicholas Fallon","doi":"10.3758/s13415-025-01268-2","DOIUrl":"https://doi.org/10.3758/s13415-025-01268-2","url":null,"abstract":"<p><p>Previous research has demonstrated that machine learning (ML) could not effectively decode passive observation of neutral versus pain photographs by using electroencephalogram (EEG) data. Consequently, the present study explored whether active viewing, i.e., requiring participant engagement in a task, of neutral and pain stimuli improves ML performance. Random forest (RF) models were trained on cortical event-related potentials (ERPs) during a two-alternative forced choice paradigm, whereby participants determined the presence or absence of pain in photographs of facial expressions and action scenes. Sixty-two participants were recruited for the model development sample. Moreover, a within-subject temporal validation sample was collected, consisting of 27 subjects. In line with our previous research, three RF models were developed to classify images into faces and scenes, neutral and pain scenes, and neutral and pain expressions. The results demonstrated that the RF successfully classified discrete categories of visual stimuli (faces and scenes) with accuracies of 78% and 66% on cross-validation and external validation, respectively. However, despite promising cross-validation results of 61% and 67% for the classification of neutral and pain scenes and neutral and pain faces, respectively, the RF models failed to exceed chance performance on the external validation dataset on both empathy classification attempts. These results align with previous research, highlighting the challenges of classifying complex states, such as pain empathy using ERPs. Moreover, the results suggest that active observation fails to enhance ML performance beyond previous passive studies. Future research should prioritise improving model performance to obtain levels exceeding chance, which would demonstrate increased utility.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reward responses to vicarious feeding depend on body mass index.","authors":"Lili Järvinen, Severi Santavirta, Vesa Putkinen, Henry K Karlsson, Kerttu Seppälä, Lihua Sun, Matthew Hudson, Jussi Hirvonen, Pirjo Nuutila, Lauri Nummenmaa","doi":"10.3758/s13415-025-01265-5","DOIUrl":"https://doi.org/10.3758/s13415-025-01265-5","url":null,"abstract":"<p><p>Eating is inherently social for humans. Yet, most neuroimaging studies of appetite and food-induced reward have focused on studying brain responses to food intake or viewing pictures of food alone. We used functional magnetic resonance imaging (fMRI) to measure haemodynamic responses to \"vicarious\" feeding. The subjects (n = 97) viewed series of short videos representing naturalistic episodes of social eating intermixed with videos without feeding/appetite-related content. Viewing the vicarious feeding (versus control) videos activated motor and premotor cortices, thalamus, and dorsolateral prefrontal cortices, consistent with somatomotor and affective engagement. Responses to the feeding videos were negatively correlated with the participants' body mass index. Altogether these results suggest that seeing others eating engages the corresponding motor and affective programs in the viewers' brain, potentially increasing appetite and promoting mutual feeding.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mechanisms of Proactive Adaptation in a Rewarded Response Inhibition Task: Executive, Motor, or Attentional Effects?","authors":"Garance M Meyer, Maëlle Riou, Philippe Boulinguez, Guillaume Sescousse","doi":"10.3758/s13415-025-01269-1","DOIUrl":"https://doi.org/10.3758/s13415-025-01269-1","url":null,"abstract":"<p><p>A growing number of studies have demonstrated the effects of reward motivation on inhibitory control performance. However, the exact neurocognitive mechanisms supporting these effects are not fully elucidated. In this preregistered study, we test the hypothesis that changes in speed-accuracy trade-off across contexts that alternatively incentivize fast responses versus accurate inhibition rely on a modulation of proactive inhibitory control, a mechanism intended to lock movement initiation in anticipation of stimulus presentation. Thirty healthy participants performed a modified Go/NoGo task in which the motivation to prioritize Go vs. NoGo successes was manipulated using monetary rewards of different magnitudes. High-density EEG was recorded throughout the task. Source-space analyses were performed to track brain oscillatory activities consistent with proactive inhibitory control. We observed that participants adapted their behavior to the motivational context but found no evidence that this adaptation relied on a modulation of proactive inhibitory control, hence failing to provide support for our pre-registered hypothesis. Unplanned analyses of brain-behavior relationships suggested an association between faster reaction times and enhanced top-down attention to the stimuli associated with larger rewards, as well as between increased commission error rates and stronger motor activations when Go stimuli were associated with larger rewards. The latter was related to inter-individual differences in trait reward responsiveness. These results highlight the need to carefully parse the different contributing mechanisms when studying the influence of reward motivation on inhibitory performance in impulsivity disorders. Exploratory results suggest alternative mechanisms that may be directly tested in further studies.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143411422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Common sources of linguistic conflict engage domain-general conflict control mechanisms during language comprehension.","authors":"Megan A Boudewyn, Yaqi Xu, Ashley R Rosenfeld, Nathan P Caines","doi":"10.3758/s13415-025-01267-3","DOIUrl":"https://doi.org/10.3758/s13415-025-01267-3","url":null,"abstract":"<p><p>The current study tested the hypothesis that lexical ambiguity, a common source of representational conflict during language comprehension, engages domain-general cognitive control processes that are reflected by theta-band oscillations in scalp-recorded electroencephalograms (EEG). In Experiment 1, we examined the neural signature elicited by lexically ambiguous compared to unambiguous words during sentence comprehension. The results showed that midfrontal theta activity was increased in response to linguistic conflict (lexical ambiguity). In Experiment 2, we examined postconflict adaptation effects by comparing temporarily ambiguous sentences that followed previous instances of conflict (other temporarily ambiguous sentences) to those that followed a previous low-conflict (unambiguous) sentence. A midfrontal theta effect associated with linguistic conflict was again found in Experiment 2, such that theta was increased for temporarily ambiguous sentences that followed previous low-conflict (unambiguous) sentences compared with those that followed previous high-conflict (temporarily ambiguous) sentences. In both experiments, facilitated lexical semantic processing was also observed for words that came after the point of conflict, which may reflect a downstream \"benefit\" of cognitive control engagement. Overall, our results provide novel insights into the neurocognitive mechanisms underlying conflict processing in language comprehension and suggest that the same neural computations are involved in processing nonlinguistic and linguistic conflict.</p>","PeriodicalId":50672,"journal":{"name":"Cognitive Affective & Behavioral Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143411421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}