Latest Articles in IEEE Transactions on Cognitive and Developmental Systems

BitSNNs: Revisiting Energy-Efficient Spiking Neural Networks
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-04-01, DOI: 10.1109/TCDS.2024.3383428
Yangfan Hu; Qian Zheng; Gang Pan
Abstract: To address the energy bottleneck in deep neural networks (DNNs), the research community has developed binary neural networks (BNNs) and spiking neural networks (SNNs) from different perspectives. To combine the advantages of both for better energy efficiency, this article proposes BitSNNs, which leverage binary weights, single-step inference, and activation sparsity. During the development of BitSNNs, we observed performance degradation in deep ResNets due to gradient approximation error. To mitigate this issue, we delve into the learning process and propose applying a hardtanh function before activation binarization. Additionally, this article investigates the critical role of activation sparsity in the energy efficiency of BitSNNs, a topic often overlooked in the existing literature. Our study reveals strategies for balancing accuracy and energy consumption during the training/testing stages, potentially benefiting applications in edge computing. Notably, the proposed method achieves state-of-the-art performance while significantly reducing energy consumption.
Vol. 16, No. 5, pp. 1736–1747 | Citations: 0
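The hardtanh-before-binarization fix described in the abstract can be illustrated with a minimal sketch. This is plain Python for illustration only; the paper's actual layer formulation, firing threshold, and surrogate gradient may differ.

```python
def hardtanh(x):
    """Clamp the pre-activation to [-1, 1] before binarization."""
    return max(-1.0, min(1.0, x))

def binarize_activation(x, threshold=0.0):
    """Single-step spiking activation: emit a spike (1) iff the clamped
    input exceeds the threshold; otherwise stay silent (0)."""
    return 1.0 if hardtanh(x) > threshold else 0.0

def surrogate_grad(x):
    """Straight-through-style surrogate gradient: the derivative of
    hardtanh, i.e., 1 inside [-1, 1] and 0 outside. Clamping first keeps
    the forward pass consistent with this gradient, which is the kind of
    approximation error the paper targets in deep ResNets."""
    return 1.0 if -1.0 <= x <= 1.0 else 0.0
```

The key point of the sketch is that the binarization operates on a bounded input, so the surrogate gradient used in the backward pass matches the forward clamping.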
MAT: Morphological Adaptive Transformer for Universal Morphology Policy Learning
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-04-01, DOI: 10.1109/TCDS.2024.3383158
Boyu Li; Haoran Li; Yuanheng Zhu; Dongbin Zhao
Abstract: Agent-agnostic reinforcement learning aims to learn a universal control policy that can simultaneously control a set of robots with different morphologies. Recent studies have suggested that transformer models can handle the variations in state and action spaces caused by different morphologies, and that morphology information is necessary to improve policy performance. However, existing methods exploit morphological information in limited ways and cannot guarantee that observations are integrated sensibly. We propose the morphological adaptive transformer (MAT), a transformer-based universal control algorithm that adapts to various morphologies without any modification. MAT includes two essential components: functional position encoding (FPE) and a morphological attention mechanism (MAM). The FPE provides robust, consistent positional priors for limb observations, avoiding limb confusion and implicitly capturing functional descriptions of limbs. The MAM strengthens the attribute priors of limbs, improves the correlation between observations, and directs the policy's attention to more limbs. We combine observations with this prior information to help the policy adapt to each robot's morphology, thereby improving performance on unknown morphologies. Experiments on agent-agnostic tasks in the Gym MuJoCo environment demonstrate that our algorithm assigns more reasonable morphological priors to each limb and performs comparably to the prior state-of-the-art algorithm while generalizing better.
Vol. 16, No. 4, pp. 1611–1621 | Citations: 0
Control With Style: Style Embedding-Based Variational Autoencoder for Controlled Stylized Caption Generation Framework
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-30, DOI: 10.1109/TCDS.2024.3405573
Dhruv Sharma; Chhavi Dhiman; Dinesh Kumar
Abstract: Automatic image captioning is a computationally intensive and structurally complicated task that describes the contents of an image in the form of a natural-language sentence. Methods developed in the recent past focused mainly on describing the factual content of images, thereby ignoring the different emotions and styles (romantic, humorous, angry, etc.) associated with an image. To overcome this, a few works incorporated style-based caption generation to capture the variability in generated descriptions. This article presents a style embedding-based variational autoencoder for controlled stylized caption generation (RFCG+SE-VAE-CSCG), a framework that generates controlled, text-based stylized descriptions of images. It works in two phases: 1) refined factual caption generation (RFCG); and 2) SE-VAE-CSCG. The former defines an encoder–decoder model for generating refined factual captions, while the latter presents a style embedding-based VAE for controlled stylized caption generation. The overall framework generates style-based descriptions of images by leveraging a bag of captions (BoCs). Moreover, using a controlled text generation model, the proposed work efficiently learns disentangled representations and generates realistic stylized descriptions of images. Experiments on MSCOCO, Flickr30K, and FlickrStyle10K provide state-of-the-art results for both refined and style-based caption generation, supported by an ablation study.
Vol. 16, No. 6, pp. 2032–2042 | Citations: 0
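The variational-autoencoder core of SE-VAE can be sketched with the standard reparameterization trick. The style conditioning shown here (concatenating a style embedding to the latent code before decoding) is an assumption for illustration, not the paper's exact architecture:

```python
import math
import random

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), so the sampling
    step stays differentiable with respect to mu and logvar."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def condition_on_style(z, style_embedding):
    """Concatenate the latent code with a style embedding (e.g., for
    'romantic' or 'humorous') before decoding -- one common way to steer
    the style of the generated caption."""
    return z + style_embedding
```

With a very negative log-variance the sample collapses to the mean, which is a quick way to sanity-check the implementation.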
Deep Reinforcement Learning for Autonomous Driving Based on Safety Experience Replay
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-30, DOI: 10.1109/TCDS.2024.3405896
Xiaohan Huang; Yuhu Cheng; Qiang Yu; Xuesong Wang
Abstract: In the field of autonomous driving, safety has always been a top priority, especially in recent years with the development and increasing application of deep reinforcement learning (DRL) to autonomous driving; ensuring the safety of these algorithms has become an indispensable concern. Reinforcement learning (RL), which learns by trial-and-error interaction with the environment, may produce unsafe behavior in autonomous driving if no safety constraints are imposed. Such behavior can cause the vehicle to deviate from its path or even collide, leading to catastrophic accidents. This article therefore proposes a reinforcement learning algorithm based on a safety experience replay mechanism, designed primarily to enhance the safety of reinforcement learning in autonomous driving. First, the ego vehicle conducts a preliminary exploration of the environment to collect data. Based on the observed task performance of each trajectory, safety labels of different levels are assigned to all state-action pairs, establishing a safety experience buffer. A safety-critic network is then constructed and trained by random sampling from this buffer, enabling the network to quantitatively evaluate the safety of driving actions and achieving safe driving for the ego vehicle. Experimental results indicate that the proposed method effectively reduces driving risk and improves task success rates compared with conventional reinforcement learning algorithms.
Vol. 16, No. 6, pp. 2070–2084 | Citations: 0
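The labeling-and-buffer mechanism described above can be sketched as follows. The two-level labels and the "last few steps before a collision are unsafe" heuristic are simplifying assumptions; the paper assigns safety labels of multiple levels based on observed task performance:

```python
import random

def label_trajectory(transitions, collided, horizon=3):
    """Attach a safety label to each (state, action) pair based on how the
    trajectory ended: the last `horizon` steps before a collision are
    marked unsafe (0), everything else safe (1)."""
    n = len(transitions)
    return [(s, a, 0 if collided and i >= n - horizon else 1)
            for i, (s, a) in enumerate(transitions)]

class SafetyExperienceBuffer:
    """Stores labeled state-action pairs; a safety-critic network would be
    trained on random samples drawn from this buffer."""
    def __init__(self):
        self.data = []

    def add(self, labeled_pairs):
        self.data.extend(labeled_pairs)

    def sample(self, k, rng=random):
        return rng.sample(self.data, min(k, len(self.data)))
```

The critic trained on these samples can then score candidate actions, letting the driving policy avoid those the critic deems unsafe.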
Progressive Transfer Learning for Dexterous In-Hand Manipulation With Multifingered Anthropomorphic Hand
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-29, DOI: 10.1109/TCDS.2024.3406730
Yongkang Luo; Wanyi Li; Peng Wang; Haonan Duan; Wei Wei; Jia Sun
Abstract: Dexterous in-hand manipulation poses significant challenges for a multifingered anthropomorphic hand due to the high-dimensional state and action spaces and the intricate contact patterns between the fingers and objects. Although deep reinforcement learning has made moderate progress and demonstrated strong potential for manipulation, it faces certain challenges, including large-scale data collection and high sample complexity. In particular, even scenes with slight changes necessitate recollecting vast amounts of data and numerous iterations of fine-tuning. Remarkably, humans can quickly transfer learned manipulation skills to different scenarios with minimal supervision. Inspired by this flexible transfer-learning capability, we propose progressive transfer learning (PTL), a novel framework for dexterous in-hand manipulation. The framework efficiently reuses the collected trajectories and the dynamics model trained on a source dataset: it adopts progressive neural networks for dynamics-model transfer learning on samples selected by a new method based on dynamics properties, rewards, and trajectory scores. Experimental results on contact-rich anthropomorphic hand manipulation tasks demonstrate that our method can efficiently and effectively learn in-hand manipulation skills with only a few online attempts and a small amount of adaptation in the new scene. Moreover, compared to learning from scratch, our method reduces training time costs by 85%.
Vol. 16, No. 6, pp. 2019–2031 | Citations: 0
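The sample-selection step — choosing which source trajectories are worth transferring — can be sketched with a toy scoring rule. The paper's actual score combines dynamics properties, rewards, and trajectory scores; the weighted difference used here is only an assumption for illustration:

```python
def select_transfer_samples(trajectories, top_k, w_reward=1.0, w_dyn=1.0):
    """Rank source trajectories by a combined score and keep the top-k for
    transfer learning of the dynamics model. Each trajectory is a dict
    with 'reward' (higher is better) and 'dyn_error' (dynamics-model
    prediction error in the new scene; lower is better)."""
    scored = sorted(
        trajectories,
        key=lambda t: w_reward * t["reward"] - w_dyn * t["dyn_error"],
        reverse=True,
    )
    return scored[:top_k]
```

The intuition is that trajectories with high reward and low dynamics mismatch in the new scene are the most useful to fine-tune on.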
Measuring Human Comfort in Human–Robot Collaboration via Wearable Sensing
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-29, DOI: 10.1109/TCDS.2024.3383296
Yuchen Yan; Haotian Su; Yunyi Jia
Abstract: The development of collaborative robots has enabled a safer and more efficient human–robot collaboration (HRC) manufacturing environment. Since the debut of collaborative robots, tremendous research effort has gone into improving user safety and robot working efficiency. Human comfort in HRC scenarios, however, has not been thoroughly discussed, even though it is critically important to user acceptance of collaborative robots. Previous studies mostly use subjective rating methods to evaluate how human comfort varies as a single robot factor changes, yet such methods are poorly suited to evaluating comfort online. Other studies leverage wearable sensors to collect physiological signals for detecting human emotions, but few apply this to a human comfort model in HRC scenarios. In this study, we designed an online comfort model for HRC using wearable sensing data. The model takes physiological signals acquired from wearable sensors and calculates in-situ human comfort levels using our developed algorithms. We conducted experiments on realistic HRC tasks, and the prediction results demonstrated the effectiveness of the proposed approach in identifying human comfort levels in HRC.
Vol. 16, No. 5, pp. 1748–1758 | Citations: 0
Deep Neural Networks for Automatic Sleep Stage Classification and Consciousness Assessment in Patients With Disorder of Consciousness
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-26, DOI: 10.1109/TCDS.2024.3382109
Jiahui Pan; Yangzuyi Yu; Jianhui Wu; Xinjie Zhou; Yanbin He; Yuanqing Li
Abstract: Disorders of consciousness (DOC) are often associated with serious changes in sleep structure. This article presents a sleep evaluation algorithm that scores the sleep structure of DOC patients to assist in assessing their level of consciousness. The algorithm has two parts: 1) an automatic sleep staging model, in which convolutional neural networks (CNNs) extract signal features from the electroencephalogram (EEG) and electrooculogram (EOG), and a bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism learns sequential information; and 2) consciousness assessment, in which the automated sleep staging results are used to extract consciousness-related sleep features that a support vector machine (SVM) classifier then uses to assess consciousness. In this study, the CNN-BiLSTM model with an attention sleep network (CBASleepNet) was evaluated on the Sleep-EDF and MASS datasets. The experimental results demonstrated the effectiveness of the proposed model, which outperformed similar models. Moreover, CBASleepNet was applied to sleep staging in DOC patients through transfer learning and fine-tuning. Consciousness assessments were conducted on seven minimally conscious state (MCS) patients and four vegetative state (VS)/unresponsive wakefulness syndrome (UWS) patients, achieving an overall accuracy of 81.8%. The sleep evaluation algorithm can thus be used to effectively evaluate a patient's level of consciousness.
Vol. 16, No. 4, pp. 1589–1603 | Citations: 0
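The attention mechanism placed on top of the Bi-LSTM can be sketched as attention-weighted pooling over the per-epoch hidden states. Scoring each timestep by a dot product with a single (learned) query vector is an assumption for illustration; the paper's exact attention formulation may differ:

```python
from math import exp

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)  # subtract the max to avoid overflow in exp
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Score each timestep's hidden state against a (learned) query
    vector, normalize the scores with softmax, and return the weighted
    sum -- a single feature vector summarizing the sequence for the
    downstream classifier."""
    scores = [sum(h * q for h, q in zip(h_t, query)) for h_t in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(a * h_t[d] for a, h_t in zip(alphas, hidden_states))
            for d in range(dim)]
```

With a zero query every timestep gets equal weight and the pooled vector is just the mean; a strongly aligned query concentrates almost all the weight on the matching timestep.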
Long-Term and Short-Term Opponent Intention Inference for Football Multiplayer Policy Learning
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-22, DOI: 10.1109/TCDS.2024.3404061
Shijie Wang; Zhiqiang Pu; Yi Pan; Boyin Liu; Hao Ma; Jianqiang Yi
Abstract: A highly competitive and confrontational football match is full of strategic and tactical challenges, so players' understanding of their opponents' strategies and tactics is crucial. However, the complexity of a match means that opponents' intentions change frequently. Under these circumstances, discriminating and predicting the opponents' intended future actions and tactics is an important problem for players' decision-making. Considering that opponents' cognitive processes involve both deliberative and reactive components, a long-term and short-term opponent intention inference (LS-OII) method for football multiplayer policy learning is proposed. First, to capture the opponents' deliberative process, we design an opponent tactics deduction module that infers their long-term tactical intentions from a macro perspective. An opponent decision prediction module is further designed to infer the opponents' short-term decisions, which often have rapid and direct impacts on the match. Additionally, an opponent-driven incentive module enhances the players' causal awareness of the opponents' intentions, further improving the players' exploration capabilities and helping them obtain outstanding policies. Representative results demonstrate that the LS-OII method significantly enhances the efficacy of players' strategies in the Google Research Football environment, affirming the superiority of our method.
Vol. 16, No. 6, pp. 2055–2069 | Citations: 0
EventAugment: Learning Augmentation Policies From Asynchronous Event-Based Data
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-22, DOI: 10.1109/TCDS.2024.3380907
Fuqiang Gu; Jiarui Dou; Mingyan Li; Xianlei Long; Songtao Guo; Chao Chen; Kai Liu; Xianlong Jiao; Ruiyuan Li
Abstract: Data augmentation is an effective way to overcome the overfitting problem of deep learning models. However, most existing studies on data augmentation work on frame-like data (e.g., images), and few tackle event-based data. Event-based data differ from frame-like data, rendering augmentation techniques designed for the latter unsuitable. This work addresses data augmentation for event-based object classification and semantic segmentation, which is important for self-driving and robot manipulation. Specifically, we introduce EventAugment, a new method that augments asynchronous event-based data by automatically learning augmentation policies. We first identify 13 types of operations for augmenting event-based data. Next, we formulate the problem of finding optimal augmentation policies as a hyperparameter optimization problem. To tackle this problem, we propose a random search-based framework. Finally, we evaluate the proposed method on six public datasets: N-Caltech101, N-Cars, ST-MNIST, N-MNIST, DVSGesture, and DDD17. Experimental results demonstrate that EventAugment yields substantial performance improvements for both deep neural network-based and spiking neural network-based models, with gains of up to approximately 4%. Notably, EventAugment outperforms state-of-the-art methods in terms of overall performance.
Vol. 16, No. 4, pp. 1521–1532 | Citations: 0
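The random-search framework for finding augmentation policies can be sketched as follows. The operation names and the policy shape (a few operation/probability/magnitude triples) are illustrative assumptions, not the paper's actual 13 operations:

```python
import random

# Illustrative subset of event-data augmentation operations (assumed names).
OPS = ["time_shift", "polarity_flip", "spatial_jitter", "event_drop"]

def sample_policy(rng, n_ops=2):
    """A candidate policy: n_ops (operation, probability, magnitude) triples."""
    return [(rng.choice(OPS), rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
            for _ in range(n_ops)]

def random_search(evaluate, n_trials=20, seed=0):
    """Treat policy search as hyperparameter optimization: sample random
    policies, score each one (e.g., by validation accuracy of a model
    trained with it), and keep the best."""
    rng = random.Random(seed)
    best_policy, best_score = None, float("-inf")
    for _ in range(n_trials):
        policy = sample_policy(rng)
        score = evaluate(policy)
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```

In practice the `evaluate` callback is the expensive part (it trains or fine-tunes a model per candidate policy); random search simply trades that cost for simplicity over gradient-based policy search.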
EEG Decoding Based on Normalized Mutual Information for Motor Imagery Brain–Computer Interfaces
IF 5.0 | CAS Tier 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems, Pub Date: 2024-03-20, DOI: 10.1109/TCDS.2024.3401717
Chao Tang; Dongyao Jiang; Lujuan Dang; Badong Chen
Abstract: In current research, noninvasive brain–computer interfaces (BCIs) typically rely on electroencephalogram (EEG) signals to measure brain activity, and motor imagery EEG decoding is an important research field for BCIs. Although multichannel EEG signals provide higher resolution, they contain noise and task-irrelevant redundant data that degrade BCI performance. We investigate the interactions between EEG signals through dependence analysis to improve classification accuracy. In this article, a novel channel selection method based on normalized mutual information (NMI) is first proposed to select the informative channels. A histogram of oriented gradients is then applied for feature extraction on the rearranged NMI matrices. Finally, a support vector machine with a radial basis function kernel classifies the different motor imagery tasks. Four publicly available BCI datasets are employed to evaluate the effectiveness of the proposed method. The experimental results show that the proposed decoding scheme significantly improves classification accuracy and outperforms other competing methods.
Vol. 16, No. 6, pp. 1997–2007 | Citations: 0
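The normalized-mutual-information measure underlying the channel selection can be computed directly for discretized signals. The `2*I(X;Y)/(H(X)+H(Y))` normalization shown here is one common convention and may differ from the paper's exact definition:

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (in bits) of a discrete sequence."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def normalized_mutual_information(x, y):
    """NMI(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]: 1 for
    perfectly dependent channels, 0 for independent ones. Inputs are
    discretized (e.g., binned EEG amplitudes), aligned sample-by-sample."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0.0:
        return 1.0  # both signals are constant
    hxy = entropy(list(zip(x, y)))  # joint entropy via paired symbols
    mi = hx + hy - hxy              # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 2.0 * mi / (hx + hy)
```

Ranking channels by their NMI against a reference (or aggregating pairwise NMI into a matrix, as the method does) then gives a principled basis for keeping informative channels and discarding redundant ones.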