Estimating the difficulty of abstract classes of problems
Michael T. Cox, Kristen Jacobson, Paul Rademacher, Laura M. Hiatt, Mark Roberts
Cognitive Systems Research, Volume 94, December 2025, Article 101412. DOI: 10.1016/j.cogsys.2025.101412

Abstract: Learning is most effective when an artificial agent (or a human) masters easier tasks before progressing to more difficult ones. In the reinforcement learning community, this principle has led to the concept of a curriculum: a sequence of successively harder training episodes. However, creating such episodes requires significant manual effort. Some researchers have semi-automated the process by using specialized graphs to organize learning tasks and order them by increasing difficulty, but the degree to which one task is harder than another remains an open question. In this paper, we present a method for automatically determining the difficulty of an arbitrary task, and hence the difference in difficulty between associated learning problems. In support of this goal, we examine the fundamental question of what makes an activity hard, rather than seeking an incremental improvement in known algorithms or representations. Further, the scope of the research is not limited to machine learning but includes planning problems as well. We present empirical data to support our claims, and we consider the human–machine problem of choosing good representations related to a curriculum.
Theta power increases during intermodal configural learning: A possible mechanism for establishing network communication during stimulus encoding and feature binding
Boris V. Chernyshev, Larisa A. Pozniak, Kristina I. Pultsina, Andrey O. Prokofyev, Anna G. Kruychkova, Vadim L. Ushakov
Cognitive Systems Research, Volume 94, December 2025, Article 101415. DOI: 10.1016/j.cogsys.2025.101415

Abstract: Configurations are gestalt-like conjunctions of stimuli or stimulus features that lead to holistic perception. The current study in humans investigated configural threat learning with bimodal visual-auditory conjunctions. The associative learning task involved classical discriminative conditioning with elemental visual (V), elemental auditory (A), and complex bimodal audiovisual (AV) stimuli, some of which were reinforced and some not. We focused on early theta oscillations (4–7 Hz) evoked by the stimuli, and we applied a data-driven approach to magnetoencephalographic data recorded during participants' performance on the task. We observed a robust increase in theta-band power in response to reinforced configural audiovisual stimuli (AV+), compared either with non-reinforced audiovisual stimuli (AV−) or with reinforced elemental stimuli (A+ or V+). Notably, the response to the configural stimulus exhibited non-additive properties, indicating emergent integrative processing that extends beyond a simple superposition of its elements. Source localization revealed a distributed network of higher-order associative brain regions specifically engaged during configural learning, including the parahippocampal complex and the dorsolateral prefrontal cortex, areas traditionally associated with learning and memory. Significant theta power increases were also observed in the inferior parietal cortex and temporoparietal junction, as well as in the lateral and inferior temporal cortices. These regions, known for their roles in multimodal integration and higher-order cognition, are implicated in relational processing, attentional modulation, and object categorization. Together, these findings underscore the role of theta synchronization in binding complex sensory inputs into unified, higher-level representations during configural learning in humans. We interpret these results in terms of hippocampal-cortical communication and concept formation.
Humanoid artificial consciousness designed with Large Language Model based on psychoanalysis and personality theory
Sang Hun Kim, Dongkyu Park, Jongmin Lee, So Young Lee, Yosep Chong
Cognitive Systems Research, Volume 94, December 2025, Article 101392. DOI: 10.1016/j.cogsys.2025.101392

Abstract: Human consciousness remains a concept that is hard to define with current scientific understanding. Although Large Language Models (LLMs) have recently demonstrated significant advances across domains such as translation and summarization, human consciousness remains difficult to imitate with current technology, owing in part to so-called hallucination. This study therefore proposes a novel approach to these challenges by integrating psychoanalysis and the Myers–Briggs Type Indicator (MBTI) into consciousness and personality modules. We developed three artificial consciousnesses (self-awareness, unconsciousness, and preconsciousness) based on the principles of psychoanalysis. Additionally, we designed 16 characters with different personalities representing the sixteen MBTI types, each with several attributes such as needs, status, and memories. To determine whether our model's artificial consciousness exhibits human-like cognition, we created ten distinct situations covering seven attributes such as emotional understanding and logical thinking. The decision-making process of the artificial consciousness and the final action were evaluated in three ways: survey evaluation, three-tier classification via ChatGPT, and qualitative review. Both quantitative and qualitative analyses indicated a high likelihood of well-simulated consciousness, although the differences in response between characters and consciousnesses were not very large. This implies that models incorporating elements of psychoanalysis and personality theory can lead to more intuitive and adaptable AI systems with humanoid consciousness. This study thus contributes to opening new avenues for improving AI interactions in complex cognitive contexts.
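The character design described in the abstract (MBTI-typed personas carrying needs, status, and memories) can be sketched as a minimal data structure. This is an illustrative reconstruction only; the field names and methods below are assumptions, not code or terminology from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """One simulated persona: an MBTI type plus needs, status, and memories.
    Field names are illustrative, not taken from the paper."""
    name: str
    mbti: str                                      # e.g. "INTJ"
    needs: dict = field(default_factory=dict)      # need -> urgency in [0, 1]
    status: dict = field(default_factory=dict)     # e.g. {"energy": 0.8}
    memories: list = field(default_factory=list)   # episodic memory entries

    def remember(self, event: str) -> None:
        """Append an episodic memory entry."""
        self.memories.append(event)

    def most_urgent_need(self) -> str:
        """Return the need with the highest urgency score."""
        return max(self.needs, key=self.needs.get)
```

A decision-making module could then condition an LLM prompt on a character's type, most urgent need, and recent memories to produce persona-consistent actions.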
Analogical mappings of facts and counterfactuals in the human mind and Peirce's abduction: limitations in LLMs
Mariana Olezza
Cognitive Systems Research, Volume 94, December 2025, Article 101408. DOI: 10.1016/j.cogsys.2025.101408

Abstract: In this work, it is proposed that the human mind engages in an analogical mapping between facts found in "expert knowledge" and the abductive reasoning process described by Charles Sanders Peirce (1839–1914). This mapping connects the human mind with the causal world and enables the generation of hypotheses, whether scientific, artistic, or related to everyday life. Artificial Neural Networks (ANNs), including Large Language Models (LLMs) (Vaswani et al., 2017) and models incorporating Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), face two key limitations: (1) they cannot work with counterfactuals, relying only on correlational datasets; and (2) they are unable to perform true abductive reasoning. These systems may appear to "create" mappings with varying degrees of amplitude, but this impression arises from hyperparameters, such as Temperature (T) (Agarwal et al., 2024; Peeperkorn et al., 2024) and Top-K (Noarov et al., 2025), configured by system supervisors or by users via prompts. These parameters control the model's output variability: Temperature influences the distribution of the logits, while Top-K limits the prediction to the K most probable tokens, thus managing how deterministic or random the output becomes.
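The sampling controls the abstract describes (Temperature rescaling the logits, Top-K restricting the candidate tokens) can be illustrated with a minimal sketch. This is a generic implementation of those two mechanisms under stated assumptions, not code from any of the cited systems:

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_k=None, rng=None):
    """Sample one token index from raw logits using temperature and top-k.

    Temperature rescales the logits before the softmax (lower values make
    the distribution sharper, hence more deterministic); top-k masks all
    but the k most probable tokens before sampling.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / temperature
    # Keep only the top-k logits; mask the rest with -inf.
    if top_k is not None and top_k < logits.size:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    # Numerically stable softmax over the surviving logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(logits.size, p=probs)
```

With `top_k=1` the call reduces to a deterministic argmax; with a high temperature and no top-k cutoff, low-probability tokens are sampled far more often.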
Safeguarding autonomy: A focus on machine learning decision systems
Paula Subías-Beltrán, Oriol Pujol, Itziar de Lecuona
Cognitive Systems Research, Volume 94, December 2025, Article 101413. DOI: 10.1016/j.cogsys.2025.101413

Abstract: As global discourse on AI regulation gains momentum, this paper delineates the impact of ML on autonomy and aims to foster awareness of it. Respect for autonomy is a basic principle in bioethics that establishes people as decision-makers. While the concept of autonomy in the context of ML appears in several European normative publications, it remains a theoretical notion that has yet to be widely adopted in ML practice. Our contribution bridges this gap between theory and practice by encouraging respect for autonomy in ML-aided decision-making. We do this by proposing a clear framework for operationalizing autonomy and by identifying the conditioning factors that currently prevent it. We then examine the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy. To improve the framework's practical utility, we propose a guiding question for each identified impact, offering focus points for respecting ML end-users' autonomy in decision-making.
Emerging synchrony and synchrony transitions and their effects on development of affiliation in social interaction adaptivity: Comparative computational analysis of different synchrony and synchrony transition detection methods
Sophie C.F. Hendrikse, Jan Treur, Sander L. Koole
Cognitive Systems Research, Volume 94, December 2025, Article 101399. DOI: 10.1016/j.cogsys.2025.101399

Abstract: Interpersonal synchrony often emerges during social interaction and is in turn linked to better interpersonal affiliation. Transitions in synchrony, meaning switches between moving in and out of sync, also occur often. One might assume that such transitions, especially decreases in synchrony, negatively affect affiliation. Nevertheless, there is empirical evidence that time periods containing synchrony transitions can have an even stronger positive effect on affiliation or liking than periods without them, possibly indicating that the timing of synchrony episodes is as important as their extent. This paper presents multiple systematic analyses of both phenomena based on an adaptive agent model simulating how persons' affiliation might benefit from both emerging synchrony and synchrony transitions. For the detection of synchrony and of synchrony transitions, multiple methods have been proposed in the literature and applied (from an external observer viewpoint) to identify forms of emerging synchrony or synchrony transitions in given pairs of time series. We systematically evaluate through simulations the performance of multiple combinations of synchrony detection methods incorporated in our adaptive agent model; these methods model the agent's subjective detection of synchrony and synchrony transitions. We explored and compared the synchrony scores from four methods: complemental difference, Pearson correlation coefficient, signal matching, and average mutual information. For transition detection on the synchrony scores, we examined three methods: standard deviation based, average based, and maximum-minimum based. We comparatively evaluated all 12 combinations of synchrony detection and transition detection methods in our adaptive agent model, in simulation experiments for two agents encountering a number of situations across different time episodes. Moreover, the subjective synchrony and transition detections of the two agents were compared with each other and with objective detections from an external observer viewpoint.
Forms of understanding for XAI-Explanations
Hendrik Buschmeier, Heike M. Buhl, Friederike Kern, Angela Grimminger, Helen Beierling, Josephine Fisher, André Groß, Ilona Horwath, Nils Klowait, Stefan Lazarov, Michael Lenke, Vivien Lohmer, Katharina Rohlfing, Ingrid Scharlau, Amit Singh, Lutz Terfloth, Anna-Lisa Vollmer, Yu Wang, Annedore Wilmes, Britta Wrede
Cognitive Systems Research, Volume 94, December 2025, Article 101419. DOI: 10.1016/j.cogsys.2025.101419

Abstract: Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) understanding on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article presents a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy, and psychology, it explores a definition of understanding and its forms, its assessment, and its dynamics during the process of giving everyday explanations. Two types of understanding are considered as possible outcomes of explanations: enabledness ('knowing how' to do or decide something) and comprehension ('knowing that'), each in different degrees from shallow to deep. Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, increases in comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.
Integrating human–machine systems and digital twin technologies: navigating trust, interoperability, and ethical challenges
Soheil Sabri, Mahdi Aghaabbasi, Simon Reay Atkinson, Mary Jean Amon, Peter Hancock, Roger Azevedo, Megan Wiedbusch, Crystal Maraj, Sean Mondesire, Bulent Soykan, Stephen Fiore, Saeid Nahavandi, Ghaith Rabadi
Cognitive Systems Research, Volume 94, December 2025, Article 101414. DOI: 10.1016/j.cogsys.2025.101414

Abstract: This commentary highlights three problems that can emerge when integrating Digital Twin Technology (DTT) and Human–Machine Systems (HMS), drawing insights from experts in Human–Technology Interaction, Systems Engineering and Computer Science, and the Learning Sciences who participated in the IEEE SMC Society/SMST Workshop on HMS–DTT, hosted at the University of Central Florida. The paper focuses on ethics, human and data interoperability, and trust. Rather than providing a traditional literature review, it consolidates contributions from workshop discussions and highlights the need for transparent, reliable systems, standardized data protocols, and ethical frameworks to guide development and implementation. Synthesizing diverse perspectives underscores the importance of interdisciplinary approaches in realizing the benefits of HMS and DTT integration while mitigating potential risks. Overall, this work aims to inform future research agendas and foster responsible innovation by integrating viewpoints across disciplines in this rapidly evolving field.
Organizations' interpersonal activity knowledge graph (IAKG)
Serge Sonfack Sounchio, Halguieta Trawina, Baudelaire Ismael Tankeu Nguekeu, Laurent Geneste, Bernard Kamsu-Foguem
Cognitive Systems Research, Volume 94, December 2025, Article 101407. DOI: 10.1016/j.cogsys.2025.101407

Abstract: Knowledge today supports organizations' growth, lets them stay competitive, and enables them to design new products and services and make effective decisions. This knowledge takes two primary forms: explicit knowledge, which is easy to encode, store, and access, and implicit knowledge, which employees possess about products, services, and how they carry out an organization's activities. Unlike explicit knowledge, implicit knowledge, and particularly an organization's personal activity knowledge, is challenging to capture, formalize, and reuse. Moreover, the human-centered personal knowledge graph approach is unfit for personal activity knowledge representation and reasoning. This study first describes the limitations of human-centered personal knowledge graph approaches for representing personal activity knowledge within an organization. It then elaborates a personal activity ontology derived from an extension of the activity theory concept established in the social sciences. The proposed framework enables the capture, formalization, sharing, and reasoning of personal activity knowledge within an organization.
Human performance in TSP tasks: Based on symbolic cognition
Chen Chen, Ruimin Lyu, Guoying Yang, Yuan Liu
Cognitive Systems Research, Volume 94, December 2025, Article 101393. DOI: 10.1016/j.cogsys.2025.101393

Abstract: As research on human cognition deepens, understanding the heuristic mechanisms humans use in planning and problem-solving is of great significance for the design and improvement of optimization algorithms. This study explores the heuristic strategies based on symbolic features that humans employ when solving the Traveling Salesman Problem (TSP) and identifies key factors that enhance the efficiency of human problem-solving in TSP. Analyzing participants' performance in TSP tasks with line features (Line-TSP), the experiment controlled the intensity and operational modes of symbolic features and compared the results with heuristic algorithms from the existing literature. The results indicate that humans perform exceptionally well in Line-TSP tasks, with overall performance approaching that of efficient heuristic algorithms. Symbolic features help enhance human problem-solving efficiency, although this efficiency decreases slightly when the operation mode resembles handwriting. This study proposes a new heuristic mechanism for solving TSP, offering fresh insights for the design and optimization of TSP algorithms.
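For readers unfamiliar with the heuristic baselines such studies compare human performance against, a classic example is the nearest-neighbor heuristic. This sketch is a generic textbook implementation, not the symbolic-feature mechanism proposed in the paper:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor heuristic for the Euclidean TSP.

    points: list of (x, y) tuples. Returns a tour as a list of indices,
    beginning at `start`, visiting every city exactly once, always moving
    to the closest unvisited city.
    """
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returning to the starting city)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

Human Line-TSP performance approaching such baselines is notable because nearest-neighbor runs in O(n^2) time, while humans appear to exploit perceptual (symbolic) structure rather than exhaustive distance comparisons.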