{"title":"Editorial IEEE Transactions on Cognitive and Developmental Systems","authors":"Huajin Tang","doi":"10.1109/TCDS.2024.3353515","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3353515","url":null,"abstract":"As we usher into the new year of 2024, in my capacity as the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems (TCDS), I am happy to extend to you a tapestry of New Year greetings, may this year be filled with prosperity, success, and groundbreaking achievements in our shared fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"3-3"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editorial Special Issue on Cognitive Learning of Multiagent Systems","authors":"Yang Tang;Wei Lin;Chenguang Yang;Nicola Gatti;Gary G. Yen","doi":"10.1109/TCDS.2023.3325505","DOIUrl":"https://doi.org/10.1109/TCDS.2023.3325505","url":null,"abstract":"The development and cognition of biological and intelligent individuals shed light on the development of cognitive, autonomous, and evolutionary robotics. Take the collective behavior of birds as an example, each individual effectively communicates information and learns from multiple neighbors, facilitating cooperative decision making among them. These interactions among individuals illuminate the growth and cognition of natural groups throughout the evolutionary process, and they can be effectively modeled as multiagent systems. Multiagent systems have the ability to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve, which also improves the robustness and efficiency through collaborative learning. Multiagent learning is playing an increasingly important role in various fields, such as aerospace systems, intelligent transportation, smart grids, etc. With the environment growing increasingly intricate, characterized by factors, such as high dynamism and incomplete/imperfect observational data, the challenges associated with various tasks are escalating. These challenges encompass issues like information sharing, the definition of learning objectives, and grappling with the curse of dimensionality. Unfortunately, many of the existing methods are struggling to effectively address these multifaceted issues in the realm of cognitive intelligence. Furthermore, the field of cognitive learning in multiagent systems underscores the efficiency of distributed learning, demonstrating the capacity to acquire the skill of learning itself collectively. In light of this, multiagent learning, while holding substantial research significance, confronts a spectrum of learning problems that span from single to multiple agents, simplicity to complexity, low dimensionality to high dimensionality, and one domain to various other domains. Agents autonomously and rapidly make swarm intelligent decisions through cognitive learning overcoming the above challenges, which holds significant importance for the advancement of various practical fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"4-7"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419126","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Cognitive and Developmental Systems Publication Information","authors":"","doi":"10.1109/TCDS.2024.3352771","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352771","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"C2-C2"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Cognitive and Developmental Systems Information for Authors","authors":"","doi":"10.1109/TCDS.2024.3352775","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352775","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"C4-C4"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419135","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback","authors":"Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li","doi":"10.1109/TCDS.2024.3357547","DOIUrl":"10.1109/TCDS.2024.3357547","url":null,"abstract":"An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips with positive, neutral, and negative emotions were first presented, based on which the participants were asked to regulate their emotions. The BCI system simultaneously collected the participants’ EEG signals and then assessed their emotions. Furthermore, based on the emotion recognition results, the neurofeedback was provided to participants in the form of a facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (a trial corresponded to a music clip). In comparison, these participants could not significantly improve the accuracy within a regulation trial without neurofeedback. The results demonstrated the effectiveness of our system and showed that VR neurofeedback played a key role during emotion regulation.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1405-1417"},"PeriodicalIF":5.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism","authors":"Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li","doi":"10.1109/TCDS.2024.3358022","DOIUrl":"10.1109/TCDS.2024.3358022","url":null,"abstract":"Despite previous efforts in depression detection studies, there is a scarcity of research on automatic depression detection using sleep structure, and several challenges remain: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features are proposed. In sleep staging, a temporal attention mechanism is adopted to update the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted based on these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features based on the results of sleep staging are extracted for depression detection between unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and the visualization of the channel attention mechanism illustrates the interpretability of our method. Additionally, this is the first attempt to employ sleep structure features to automatically detect UDD and BD in patients.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1418-1432"},"PeriodicalIF":5.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Husformer: A Multimodal Transformer for Multimodal Human State Recognition","authors":"Ruiqi Wang;Wonse Jo;Dezhong Zhao;Weizheng Wang;Arjun Gupte;Baijian Yang;Guohua Chen;Byung-Cheol Min","doi":"10.1109/TCDS.2024.3357618","DOIUrl":"10.1109/TCDS.2024.3357618","url":null,"abstract":"Human state recognition is a critical topic with pervasive and important applications in human–machine systems. Multimodal fusion, which entails integrating metrics from various data sources, has proven to be a potent method for boosting recognition performance. Although recent multimodal-based models have shown promising results, they often fall short in fully leveraging sophisticated fusion strategies essential for modeling adequate cross-modal dependencies in the fusion representation. Instead, they rely on costly and inconsistent feature crafting and alignment. To address this limitation, we propose an end-to-end multimodal transformer framework for multimodal human state recognition called \u0000<italic>Husformer</i>\u0000. Specifically, we propose using cross-modal transformers, which inspire one modality to reinforce itself through directly attending to latent relevance revealed in other modalities, to fuse different modalities while ensuring sufficient awareness of the cross-modal interactions introduced. Subsequently, we utilize a self-attention transformer to further prioritize contextual information in the fusion representation. Extensive experiments on two human emotion corpora (DEAP and WESAD) and two cognitive load datasets [multimodal dataset for objective cognitive workload assessment on simultaneous tasks (MOCAS) and CogLoad] demonstrate that in the recognition of the human state, our \u0000<italic>Husformer</i>\u0000 outperforms both state-of-the-art multimodal baselines and the use of a single modality by a large margin, especially when dealing with raw multimodal features. We also conducted an ablation study to show the benefits of each component in \u0000<italic>Husformer</i>\u0000. Experimental details and source code are available at \u0000<uri>https://github.com/SMARTlab-Purdue/Husformer</uri>\u0000.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1374-1390"},"PeriodicalIF":5.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PLOT: Human-Like Push-Grasping Synergy Learning in Clutter With One-Shot Target Recognition","authors":"Xiaoge Cao;Tao Lu;Liming Zheng;Yinghao Cai;Shuo Wang","doi":"10.1109/TCDS.2024.3357084","DOIUrl":"10.1109/TCDS.2024.3357084","url":null,"abstract":"In unstructured environments, robotic grasping tasks are frequently required to interactively search for and retrieve specific objects from a cluttered workspace under the condition that only partial information about the target is available, like images, text descriptions, 3-D models, etc. It is a great challenge to correctly recognize the targets with limited information and learn synergies between different action primitives to grasp the targets from densely occluding objects efficiently. In this article, we propose a novel human-like push-grasping method that could grasp unknown objects in clutter using only one target RGB with Depth (RGB-D) image, called push-grasping synergy learning in clutter with one-shot target recognition (PLOT). First, we propose a target recognition (TR) method which automatically segments the objects both from the query image and workspace image, and extract the robust features of each segmented object. Through the designed feature matching criterion, the targets could be quickly located in the workspace. Second, we introduce a self-supervised target-oriented grasping system based on synergies between push and grasp actions. In this system, we propose a salient Q (SQ)-learning framework that focuses the \u0000<italic>Q</i>\u0000 value learning in the area including targets and a coordination mechanism (CM) that selects the proper actions to search and isolate the targets from the surrounding objects, even in the condition of targets invisible. Our method is inspired by the working memory mechanism of human brain and can grasp any target object shown through the image and has good generality in application. Experimental results in simulation and real-world show that our method achieved the best performance compared with the baselines in finding the unknown target objects from the cluttered environment with only one demonstrated target RGB-D image and had the high efficiency of grasping under the synergies of push and grasp actions.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1391-1404"},"PeriodicalIF":5.0,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kernel-Ridge-Regression-Based Randomized Network for Brain Age Classification and Estimation","authors":"Raveendra Pilli;Tripti Goel;R. Murugan;M. Tanveer;P. N. Suganthan","doi":"10.1109/TCDS.2024.3349593","DOIUrl":"10.1109/TCDS.2024.3349593","url":null,"abstract":"Accelerated brain aging and abnormalities are associated with variations in brain patterns. Effective and reliable assessment methods are required to accurately classify and estimate brain age. In this study, a brain age classification and estimation framework is proposed using structural magnetic resonance imaging (sMRI) scans, a 3-D convolutional neural network (3-D-CNN), and a kernel ridge regression-based random vector functional link (KRR-RVFL) network. We used 480 brain MRI images from the publicly availabel IXI database and segmented them into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images to show age-related associations by region. Features from MRI images are extracted using 3-D-CNN and fed into the wavelet KRR-RVFL network for brain age classification and prediction. The proposed algorithm achieved high classification accuracy, 97.22%, 99.31%, and 95.83% for GM, WM, and CSF regions, respectively. Moreover, the proposed algorithm demonstrated excellent prediction accuracy with a mean absolute error (MAE) of \u0000<inline-formula><tex-math>$3.89$</tex-math></inline-formula>\u0000 years, \u0000<inline-formula><tex-math>$3.64$</tex-math></inline-formula>\u0000 years, and \u0000<inline-formula><tex-math>$4.49$</tex-math></inline-formula>\u0000 years for GM, WM, and CSF regions, confirming that changes in WM volume are significantly associated with normal brain aging. Additionally, voxel-based morphometry (VBM) examines age-related anatomical alterations in different brain regions in GM, WM, and CSF tissue volumes.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1342-1351"},"PeriodicalIF":5.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10405861","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}