{"title":"Brain Connectivity Analysis for EEG-Based Face Perception Task","authors":"Debashis Das Chakladar;Nikhil R. Pal","doi":"10.1109/TCDS.2024.3370635","DOIUrl":"10.1109/TCDS.2024.3370635","url":null,"abstract":"Face perception is considered a highly developed visual recognition skill in human beings. Most face perception studies used functional magnetic resonance imaging to identify different brain cortices related to face perception. However, studying brain connectivity networks for face perception using electroencephalography (EEG) has not yet been done. In the proposed framework, initially, a correlation-tree traversal-based channel selection algorithm is developed to identify the “optimum” EEG channels by removing the highly correlated EEG channels from the input channel set. Next, the effective brain connectivity network among those “optimum” EEG channels is developed using multivariate transfer entropy (TE) while participants watched different face stimuli (i.e., famous, unfamiliar, and scrambled). We transform EEG channels into corresponding brain regions for generalization purposes and identify the active brain regions for each face stimulus. To find the stimuluswise brain dynamics, the information transfer among the identified brain regions is estimated using several graphical measures [global efficiency (GE) and transitivity]. Our model archives the mean GE of 0.800, 0.695, and 0.581 for famous, unfamiliar, and scrambled faces, respectively. Identifying face perception-specific brain regions will enhance understanding of the EEG-based face-processing system. 
Understanding the brain networks of famous, unfamiliar, and scrambled faces can be useful in criminal investigation applications.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140002461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"D-FaST: Cognitive Signal Decoding With Disentangled Frequency–Spatial–Temporal Attention","authors":"WeiGuo Chen;Changjian Wang;Kele Xu;Yuan Yuan;Yanru Bai;Dongsong Zhang","doi":"10.1109/TCDS.2024.3370261","DOIUrl":"10.1109/TCDS.2024.3370261","url":null,"abstract":"Cognitive language processing (CLP), situated at the intersection of natural language processing (NLP) and cognitive science, plays a progressively pivotal role in the domains of artificial intelligence, cognitive intelligence, and brain science. Among the essential areas of investigation in CLP, cognitive signal decoding (CSD) has made remarkable achievements, yet there still exist challenges related to insufficient global dynamic representation capability and deficiencies in multidomain feature integration. In this article, we introduce a novel paradigm for CLP referred to as disentangled frequency–spatial–temporal attention (D-FaST). Specifically, we present a novel cognitive signal decoder that operates on disentangled frequency–space–time domain attention. This decoder encompasses three key components: frequency domain feature extraction employing multiview attention (MVA), spatial domain feature extraction utilizing dynamic brain connection graph attention, and temporal feature extraction relying on local time sliding window attention. These components are integrated within a novel disentangled framework. Additionally, to encourage advancements in this field, we have created a new CLP dataset, MNRED. Subsequently, we conducted an extensive series of experiments, evaluating D-FaST's performance on MNRED, as well as on publicly available datasets including ZuCo, BCIC IV-2A, and BCIC IV-2B. 
Our experimental results demonstrate that D-FaST significantly outperforms existing methods on both our dataset and traditional CSD datasets, establishing a state-of-the-art accuracy of 78.72% on MNRED and pushing the accuracy on ZuCo to 78.35%, on BCIC IV-2A to 74.85%, and on BCIC IV-2B to 76.81%.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139979066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DTCM: Deep Transformer Capsule Mutual Distillation for Multivariate Time Series Classification","authors":"Zhiwen Xiao;Xin Xu;Huanlai Xing;Bowen Zhao;Xinhan Wang;Fuhong Song;Rong Qu;Li Feng","doi":"10.1109/TCDS.2024.3370219","DOIUrl":"10.1109/TCDS.2024.3370219","url":null,"abstract":"This article proposes a dual-network-based feature extractor, perceptive capsule network (PCapN), for multivariate time series classification (MTSC), including a local feature network (LFN) and a global relation network (GRN). The LFN has two heads (i.e., Head_A and Head_B), each containing two squash convolutional neural network (CNN) blocks and one dynamic routing block to extract the local features from the data and mine the connections among them. The GRN consists of two capsule-based transformer blocks and one dynamic routing block to capture the global patterns of each variable and correlate the useful information of multiple variables. Unfortunately, it is difficult to directly deploy PCapN on mobile devices due to its strict requirement for computing resources. So, this article designs a lightweight capsule network (LCapN) to mimic the cumbersome PCapN. To promote knowledge transfer from PCapN to LCapN, this article proposes a deep transformer capsule mutual (DTCM) distillation method. It is targeted and offline, using one- and two-way operations to supervise the knowledge distillation (KD) process for the dual-network-based student and teacher models. 
Experimental results show that the proposed PCapN and DTCM achieve excellent top-1 accuracy on the University of East Anglia 2018 (UEA2018) datasets.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139979417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Agree to Disagree: Exploring Partial Semantic Consistency Against Visual Deviation for Compositional Zero-Shot Learning","authors":"Xiangyu Li;Xu Yang;Xi Wang;Cheng Deng","doi":"10.1109/TCDS.2024.3367957","DOIUrl":"10.1109/TCDS.2024.3367957","url":null,"abstract":"Compositional zero-shot learning (CZSL) aims to recognize novel concepts from known subconcepts. However, it is still challenging since the intricate interaction between subconcepts is entangled with their corresponding visual features, which affects the recognition accuracy of concepts. Besides, the domain gap between training and testing data leads to the model poor generalization. In this article, we tackle these problems by exploring partial semantic consistency (PSC) to eliminate visual deviation to guarantee the discrimination and generalization of representations. Considering the complicated interaction between subconcepts and their visual features, we decompose seen images into visual elements according to their labels and obtain the instance-level subdeviations from compositions, which is utilized to excavate the category-level primitives of subconcepts. Furthermore, we present a multiscale concept composition (MSCC) approach to produce virtual samples from two aspects, which augments the sufficiency and diversity of samples so that the proposed model can generalize to novel compositions. 
Extensive experiments indicate that our method significantly outperforms the state-of-the-art approaches on three benchmark datasets.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressed Video Anomaly Detection of Human Behavior Based on Abnormal Region Determination","authors":"Lijun He;Miao Zhang;Hao Liu;Liejun Wang;Fan Li","doi":"10.1109/TCDS.2024.3367493","DOIUrl":"10.1109/TCDS.2024.3367493","url":null,"abstract":"Video anomaly detection has a wide range of applications in video monitoring-related scenarios. The existing image-domain-based anomaly detection algorithms usually require completely decoding the received videos, complex information extraction, and network structure, which makes them difficult to be implemented directly. In this article, we focus on anomaly detection directly for compressed videos. The compressed videos need not be fully decoded and auxiliary information can be obtained directly, which have low computational complexity. We propose a compressed video anomaly detection algorithm based on accurate abnormal region determination (ARD-VAD), which is suitable to be deployed on edge servers. First, to ensure the overall low complexity and save storage space, we sparsely sample the prior knowledge of I-frame representing the appearance information and motion vector (MV) representing the motion information from compressed videos. Based on the sampled information, a two-branch network structure, which consists of MV reconstruction branch and future I-frame prediction branch, is designed. Specifically, the two branches are connected by an attention network based on the MV residuals to guide the prediction network to focus on the abnormal regions. Furthermore, to emphasize the abnormal regions, we develop an adaptive sensing of abnormal regions determination module based on motion intensity represented by the second derivative of MV. This module can enhance the difference of the real anomaly region between the generated frame and the current frame. 
The experiments show that our algorithm achieves a good balance between performance and complexity.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Reinforcement Learning With Multicritic TD3 for Decentralized Multirobot Path Planning","authors":"Heqing Yin;Chang Wang;Chao Yan;Xiaojia Xiang;Boliang Cai;Changyun Wei","doi":"10.1109/TCDS.2024.3368055","DOIUrl":"10.1109/TCDS.2024.3368055","url":null,"abstract":"Centralized multirobot path planning is a prevalent approach involving a global planner computing feasible paths for each robot using shared information. Nonetheless, this approach encounters limitations due to communication constraints and computational complexity. To address these challenges, we introduce a novel decentralized multirobot path planning approach that eliminates the need for sharing the states and intentions of robots. Our approach harnesses deep reinforcement learning and features an asynchronous multicritic twin delayed deep deterministic policy gradient (AMC-TD3) algorithm, which enhances the original gate recurrent unit (GRU)-attention-based TD3 algorithm by incorporating a multicritic network and employing an asynchronous training mechanism. By training each critic with a unique reward function, our learned policy enables each robot to navigate toward its long-term objective without colliding with other robots in complex environments. Furthermore, our reward function, grounded in social norms, allows the robots to naturally avoid each other in congested situations. Specifically, we train three critics to encourage each robot to achieve its long-term navigation goal, maintain its moving direction, and prevent collisions with other robots. Our model can learn an end-to-end navigation policy without relying on an accurate map or any localization information, rendering it highly adaptable to various environments. 
Simulation results reveal that our proposed approach surpasses baselines in several environments with different levels of complexity and robot populations.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial IEEE Transactions on Cognitive and Developmental Systems","authors":"Huajin Tang","doi":"10.1109/TCDS.2024.3353515","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3353515","url":null,"abstract":"As we usher into the new year of 2024, in my capacity as the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems (TCDS), I am happy to extend to you a tapestry of New Year greetings, may this year be filled with prosperity, success, and groundbreaking achievements in our shared fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editorial Special Issue on Cognitive Learning of Multiagent Systems","authors":"Yang Tang;Wei Lin;Chenguang Yang;Nicola Gatti;Gary G. Yen","doi":"10.1109/TCDS.2023.3325505","DOIUrl":"https://doi.org/10.1109/TCDS.2023.3325505","url":null,"abstract":"The development and cognition of biological and intelligent individuals shed light on the development of cognitive, autonomous, and evolutionary robotics. Take the collective behavior of birds as an example, each individual effectively communicates information and learns from multiple neighbors, facilitating cooperative decision making among them. These interactions among individuals illuminate the growth and cognition of natural groups throughout the evolutionary process, and they can be effectively modeled as multiagent systems. Multiagent systems have the ability to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve, which also improves the robustness and efficiency through collaborative learning. Multiagent learning is playing an increasingly important role in various fields, such as aerospace systems, intelligent transportation, smart grids, etc. With the environment growing increasingly intricate, characterized by factors, such as high dynamism and incomplete/imperfect observational data, the challenges associated with various tasks are escalating. These challenges encompass issues like information sharing, the definition of learning objectives, and grappling with the curse of dimensionality. Unfortunately, many of the existing methods are struggling to effectively address these multifaceted issues in the realm of cognitive intelligence. Furthermore, the field of cognitive learning in multiagent systems underscores the efficiency of distributed learning, demonstrating the capacity to acquire the skill of learning itself collectively. 
In light of this, multiagent learning, while holding substantial research significance, confronts a spectrum of learning problems that span from single to multiple agents, from simplicity to complexity, from low to high dimensionality, and from one domain to various others. Through cognitive learning, agents can autonomously and rapidly make swarm-intelligent decisions that overcome the above challenges, which is of significant importance for the advancement of various practical fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419126","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Cognitive and Developmental Systems Publication Information","authors":"","doi":"10.1109/TCDS.2024.3352771","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352771","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}