{"title":"Edge-Centric Functional-Connectivity-Based Cofluctuation-Guided Subcortical Connectivity Network Construction","authors":"Qinrui Ling;Aiping Liu;Taomian Mi;Piu Chan;Xun Chen","doi":"10.1109/TCDS.2024.3462709","DOIUrl":"10.1109/TCDS.2024.3462709","url":null,"abstract":"Subcortical regions can be functionally organized into connectivity networks and are extensively communicated with the cortex via reciprocal connections. However, most current research on subcortical networks ignores these interconnections, and networks of the whole brain are of high dimensionality and computational complexity. In this article, we propose a novel cofluctuation-guided subcortical connectivity network construction model based on edge-centric functional connectivity (FC). It is capable of extracting the cofluctuations between the cortex and subcortex and constructing dynamic subcortical networks based on these interconnections. Blind source separation approaches with domain knowledge are designed for dimensionality reduction and feature extraction. Great reproducibility and reliability were achieved when applying our model to two sessions of functional magnetic resonance imaging (fMRI) data. Cortical areas having synchronous communications with the cortex were detected, which was unable to be revealed by traditional node-centric FC. Significant alterations in connectivity patterns were observed when dealing with fMRI of subjects with and without Parkinson's disease, which were further correlated to clinical scores. These validations demonstrated that our model provided a promising strategy for brain network construction, exhibiting great potential in clinical practice.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"390-399"},"PeriodicalIF":5.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unveiling Thoughts: A Review of Advancements in EEG Brain Signal Decoding Into Text","authors":"Saydul Akbar Murad;Nick Rahimi","doi":"10.1109/TCDS.2024.3462452","DOIUrl":"10.1109/TCDS.2024.3462452","url":null,"abstract":"The conversion of brain activity into text using electroencephalography (EEG) has gained significant traction in recent years. Many researchers are working to develop new models to decode EEG signals into text form. Although this area has shown promising developments, it still faces numerous challenges that necessitate further improvement. It is important to outline this area's recent developments and future research directions to provide a comprehensive understanding of the current state of technology, guide future research efforts, and enhance the effectiveness and accessibility of EEG-to-text systems. In this review article, we thoroughly summarize the progress in EEG-to-text conversion. First, we talk about how EEG-to-text technology has grown and what problems the field still faces. Second, we discuss existing techniques used in this field. This includes methods for collecting EEG data, the steps to process these signals, and the development of systems capable of translating these signals into coherent text. We conclude with potential future research directions, emphasizing the need for enhanced accuracy, reduced system constraints, and the exploration of novel applications across varied sectors. By addressing these aspects, this review aims to contribute to developing more accessible and effective brain–computer interface (BCI) technology for a broader user base.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 1","pages":"61-76"},"PeriodicalIF":5.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Methodology of Quantitative Social Intention Evaluation and Robot Gaze Behavior Control in Multiobjects Scenario","authors":"Haoyu Zhu;Xiaorui Liu;Hang Su;Wei Wang;Jinpeng Yu","doi":"10.1109/TCDS.2024.3461335","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3461335","url":null,"abstract":"This article focuses on the multiple objects selection problem for the robot in social scenarios, and proposes a novel methodology composed of quantitative social intention evaluation and gaze behavior control. For the social scenarios containing various persons and multimodal social cues, a combination of the entropy weight method (EWM) and gray correlation-order preference by similarity to the ideal solution (GC-TOPSIS) model is proposed to fuse the multimodal social cues, and evaluate the social intention of candidates. According to the quantitative evaluation of social intention, a robot can generate the interaction priority among multiple social candidates. To ensure this interaction selection mechanism in behavior level, an optimal control framework composed of model predictive controller (MPC) and online Gaussian process (GP) observer is employed to drive the eye-head coordinated gaze behavior of robot. Through the experiments conducted on the Xiaopang robot, the availability of the proposed methodology can be illustrated. This work enables robots to generate social behavior based on quantitative intention perception, which could bring the potential to explore the sensory principles and biomechanical mechanism underlying the human-robot interaction, and broaden the application of robot in the social scenario.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"400-409"},"PeriodicalIF":5.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mental Workload Assessment Using Deep Learning Models From EEG Signals: A Systematic Review","authors":"Kunjira Kingphai;Yashar Moshfeghi","doi":"10.1109/TCDS.2024.3460750","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3460750","url":null,"abstract":"Mental workload (MWL) assessment is crucial in information systems (IS), impacting task performance, user experience, and system effectiveness. Deep learning offers promising techniques for MWL classification using electroencephalography (EEG), which monitors cognitive states dynamically and unobtrusively. Our research explores deep learning's potential and challenges in EEG-based MWL classification, focusing on training inputs, cross-validation methods, and classification problem types. We identify five types of EEG-based MWL classification: within-subject, cross subject, cross session, cross task, and combined cross task and cross subject. Success depends on managing dataset uniqueness, session and task variability, and artifact removal. Despite the potential, real-world applications are limited. Enhancements are necessary for self-reporting methods, universal preprocessing standards, and MWL assessment accuracy. Specifically, inaccuracies are inflated when data are shuffled before splitting to train and test sets, disrupting EEG signals’ temporal sequence. In contrast, methods such as the time-series cross validation and leave-session-out approach better preserve temporal integrity, offering more accurate model performance evaluations. Utilizing deep learning for EEG-based MWL assessment could significantly improve IS functionality and adaptability in real time based on user cognitive states.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 1","pages":"40-60"},"PeriodicalIF":5.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fatigue State Recognition System for Miners Based on a Multimodal Feature Extraction and Fusion Framework","authors":"Hongguang Pan;Shiyu Tong;Xuqiang Wei;Bingyang Teng","doi":"10.1109/TCDS.2024.3461713","DOIUrl":"10.1109/TCDS.2024.3461713","url":null,"abstract":"The fatigue factor is widely recognized as a primary contributor to accidents in the mining industry. Proactively recognizing fatigue states in miners before starting work can effectively establish a safety boundary for both miners safety and coal mine production. Therefore, this study designs a fatigue state recognition system for miners based on a multimodal extraction and fusion framework. First, the system is equipped with various sensors, a core processor and a display to collect and process physiological data such as electrocardiogram (ECG), electrodermal activity (EDA), blood pressure (BP), blood oxygen saturation (SpO<inline-formula><tex-math>${}_{2}$</tex-math></inline-formula>), skin temperature (SKT), as well as facial data, and to present fatigue state, respectively. Second, based on the multimodal feature extraction and fusion framework, after the necessary preprocessing steps, the system extracts physiological features by time and frequency domain analysis, extracts facial features by ResNeXt-50 and gated recurrent unit (GRU), and fuses multifeatures by Transformer+. Finally, in the comprehensive laboratory for coal-related programs of Xi’an University of Science and Technology, we test the system and build a multimodal dataset, and the results demonstrate an average accuracy of 93.15%.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"410-420"},"PeriodicalIF":5.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neighborhood-Curiosity-Based Exploration in Multiagent Reinforcement Learning","authors":"Shike Yang;Ziming He;Jingchen Li;Haobin Shi;Qingbing Ji;Kao-Shing Hwang;Xianshan Li","doi":"10.1109/TCDS.2024.3460368","DOIUrl":"10.1109/TCDS.2024.3460368","url":null,"abstract":"Efficient exploration in cooperative multiagent reinforcement learning is still tricky in complex tasks. In this article, we propose a novel multiagent collaborative exploration method called neighborhood-curiosity-based exploration (NCE), by which agents can explore not only novel states but also new partnerships. Concretely, we use the attention mechanism in graph convolutional networks to perform a weighted summation of features from neighbors. The calculated attention weights can be regarded as an embodiment of the relationship among agents. Then, we use the prediction errors of the aggregated features as intrinsic rewards to facilitate exploration. When agents encounter novel states or new partnerships, NCE will produce large prediction errors, resulting in large intrinsic rewards. In addition, agents are more influenced by their neighbors and only interact directly with them in multiagent systems. Exploring partnerships between agents and their neighbors can enable agents to capture the most important cooperative relations with other agents. Therefore, NCE can effectively promote collaborative exploration even in environments with a large number of agents. Our experimental results show that NCE achieves significant performance improvements on the challenging StarCraft II micromanagement (SMAC) benchmark.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"379-389"},"PeriodicalIF":5.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive-Learning-Based Assist-as-Needed Control for Ankle Rehabilitation","authors":"Kun Qian;Zhenhong Li;Yihui Zhao;Jie Zhang;Xianwen Kong;Samit Chakrabarty;Zhiqiang Zhang;Sheng Quan Xie","doi":"10.1109/TCDS.2024.3455795","DOIUrl":"10.1109/TCDS.2024.3455795","url":null,"abstract":"This article proposes a progressive-learning-based assist-as-needed (AAN) control scheme for ankle rehabilitation. To quantify the training performance, a fuzzy logic (FL) system is established to generate a holistic metric based on multiple kinematic and dynamic indicators. Subsequently, a cost function that contains both the tracking error and robot stiffness is constructed. A novel learning scheme is then proposed to enhance subjects’ engagement, leveraging the FL metric to uphold a declining trend in the robot's stiffness. The system stability is analyzed using the Lyapunov theory, the control ultimate bounds are specified and the effects of parameter tuning are discussed. Experiments are conducted on an ankle robot and the minimal assist-as-needed (MAAN) scheme is adopted for comparison. With a training session consisting of 11 trials, the quantitative performance evaluations, individual error convergences, progressive stiffness learning and human–robot interaction are evaluated. It is shown that within eight trials under the progressive AAN and MAAN, the robot assistive torques have an average reduction of 13.45% and 20.25% while subjects’ active torques are increased by 56.53% and 58.39%, respectively. During the late stage of training, the progressive AAN further improves two criteria by 9.44% and 6.29%, while the MAAN partially loses subjects’ participation (active torques are reduced by 36.38%) due to the occurrence of motion adaption.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"328-339"},"PeriodicalIF":5.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CCANet: Cross-Modality Comprehensive Feature Aggregation Network for Indoor Scene Semantic Segmentation","authors":"Zhang Zihao;Yang Yale;Hou Huifang;Meng Fanman;Zhang Fan;Xie Kangzhan;Zhuang Chunsheng","doi":"10.1109/TCDS.2024.3455356","DOIUrl":"10.1109/TCDS.2024.3455356","url":null,"abstract":"The semantic segmentation of indoor scenes based on RGB and depth information has been a persistent and enduring research topic. However, how to fully utilize the complementarity of multimodal features and achieve efficient fusion remains a challenging research topic. To address this challenge, we proposed an innovative cross-modal comprehensive feature aggregation network (CCANet) to achieve high-precision semantic segmentation of indoor scenes. In this method, we first propose a bidirectional cross-modality feature rectification (BCFR) module to complement each other and remove noise in both channel and spatial correlations. After that, the adaptive criss-cross attention fusion (CAF) module is designed to realize multistage deep multimodal feature fusion. Finally, a multisupervision strategy is applied to accurately learn additional details of the target, guiding the gradual refinement of segmentation maps. By conducting thorough experiments on two openly accessible datasets of indoor scenes, the results demonstrate that CCANet exhibits outstanding performance and robustness in aggregating RGB and depth features.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"366-378"},"PeriodicalIF":5.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Behavioral Decision-Making Model of Learning and Memory for Mobile Robot Triggered by Curiosity","authors":"Dongshu Wang;Qi Liu;Xulin Gao;Lei Liu","doi":"10.1109/TCDS.2024.3454779","DOIUrl":"10.1109/TCDS.2024.3454779","url":null,"abstract":"Learning and memorizing behavioral decision in the process of environmental cognition to guide future decision is an important aspect of research and application in mobile robotics. Traditional rule-based behavioral decision approaches have difficulty in adapting to complex and changing environments. The offline decision-making approaches lead to poor adaptability to dynamic environments, while behavioral decision-making based on reinforcement learning relies on data acquisition, and the learned knowledge cannot guide mobile robots to quickly adapt to new environments. To address this issue, this article proposes a brain-inspired behavioral decision model that can perform incremental learning by simulating the logical structure of memory classification in the brain, as well as the memory conversion mechanisms of hippocampus, prefrontal cortex, and anterior cingulate cortex. The model interacts with the environment through semisupervised learning and learns the current decision online, simulating the memory function of humans to enable mobile robots to adapt to changing environments. In addition, an internal reward mechanism driven by curiosity is designed, simulating the reinforcement mechanism of curiosity in human memory, encoding the memory of unfamiliar behavioral decisions for mobile robots, and consolidating the memory of frequently made behavioral decisions, improving the learning and memory capacity of mobile robots in environmental cognition. The feasibility of the proposed model is verified by physical experiments in different environments.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"352-365"},"PeriodicalIF":5.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain Compensatory Mechanisms During the Prolonged Cognitive Task: fNIRS and Eye-Tracking Study","authors":"A. A. Badarin;V. M. Antipov;V. V. Grubov;A. V. Andreev;E. N. Pitsik;S. A. Kurkin;A. E. Hramov","doi":"10.1109/TCDS.2024.3453590","DOIUrl":"10.1109/TCDS.2024.3453590","url":null,"abstract":"The problem of maintaining cognitive performance under fatigue is crucial in fields requiring high concentration and efficiency to successfully complete critical tasks. In this context, the study of compensatory mechanisms that help the brain overcome fatigue is particularly important. This research investigates the correlations between physiological, behavioral, and subjective measures while considering the impact of fatigue on the performance of working memory tasks. A combined approach of functional near-infrared spectroscopy (fNIRS) and eye-tracking was used to reconstruct brain functional networks based on fNIRS data and analyze them in terms of network characteristics such as global clustering coefficient and global efficiency. Results showed a significant increase in subjective fatigue but no significant change in performance during the experiment. The study confirmed that despite fatigue, subjects can maintain performance through compensatory mechanisms, increasing mental effort, with the level of compensation depending on the task's complexity. Furthermore, the study showed that compensatory effort maintains the efficiency of the frontoparietal network, and the degree of compensatory effort is related to the difference in response times between high- and low-complexity tasks.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 2","pages":"303-314"},"PeriodicalIF":5.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}