Frontiers in Neurorobotics — Latest Articles

Graph Convolutional Networks for multi-modal robotic martial arts leg pose recognition.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-20 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1520983
Shun Yao, Yihan Ping, Xiaoyu Yue, He Chen
Abstract:
Introduction: Accurate recognition of martial arts leg poses is essential for applications in sports analytics, rehabilitation, and human-computer interaction. Traditional pose recognition models, relying on sequential or convolutional approaches, often struggle to capture the complex spatial-temporal dependencies inherent in martial arts movements. These methods lack the ability to effectively model the nuanced dynamics of joint interactions and temporal progression, leading to limited generalization in recognizing complex actions.
Methods: To address these challenges, we propose PoseGCN, a Graph Convolutional Network (GCN)-based model that integrates spatial, temporal, and contextual features through a novel framework. PoseGCN leverages spatial-temporal graph encoding to capture joint motion dynamics, an action-specific attention mechanism to assign importance to relevant joints depending on the action context, and a self-supervised pretext task to enhance temporal robustness and continuity. Experimental results on four benchmark datasets (Kinetics-700, Human3.6M, NTU RGB+D, and UTD-MHAD) demonstrate that PoseGCN outperforms existing models, achieving state-of-the-art accuracy and F1 scores.
Results and discussion: These findings highlight the model's capacity to generalize across diverse datasets and capture fine-grained pose details, showcasing its potential in advancing complex pose recognition tasks. The proposed framework offers a robust solution for precise action recognition and paves the way for future developments in multi-modal pose analysis.
Frontiers in Neurorobotics, 18:1520983. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11792168/pdf/
Citations: 0
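The spatial-temporal graph encoding that PoseGCN builds on treats skeleton joints as graph nodes and propagates features over the bone adjacency. A minimal numpy sketch of one spatial graph-convolution step over a toy leg chain (the adjacency, joint count, and feature widths are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A + I: D^(-1/2) (A + I) D^(-1/2)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def spatial_gcn_layer(X, A_norm, W):
    """One GCN layer applied frame by frame: ReLU(A_norm @ X_t @ W).
    X: (T, V, C_in) joint features over T frames and V joints."""
    out = np.einsum("vu,tuc->tvc", A_norm, X) @ W
    return np.maximum(out, 0.0)

# Toy 5-joint leg chain: hip-knee-ankle-heel-toe (illustrative skeleton)
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5, 3))   # 10 frames, 5 joints, 3-D coordinates
W = rng.normal(size=(3, 8))       # lift 3-D coordinates to 8 channels
H = spatial_gcn_layer(X, normalize_adjacency(A), W)
print(H.shape)  # (10, 5, 8)
```

Stacking such layers with temporal convolutions across the T axis gives the usual spatial-temporal GCN backbone that attention modules are then attached to.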
Improved object detection method for autonomous driving based on DETR.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-20 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1484276
Huaqi Zhao, Songnan Zhang, Xiang Peng, Zhengguang Lu, Guojing Li
Abstract: Object detection is a critical component in the development of autonomous driving technology and has demonstrated significant growth potential. To address the limitations of current techniques, this paper presents an improved object detection method for autonomous driving based on a detection transformer (DETR). First, we introduce a multi-scale feature and location information extraction method, which solves the inadequacy of the model for multi-scale object localization and detection. In addition, we developed a transformer encoder based on the group axial attention mechanism. This allows for efficient attention range control in the horizontal and vertical directions while reducing computation, ultimately enhancing the inference speed. Furthermore, we propose a novel dynamic hyperparameter tuning training method based on Pareto efficiency, which coordinates the training state of the loss functions through dynamic weights, overcoming issues associated with manually setting fixed weights and enhancing model convergence speed and accuracy. Experimental results demonstrate that the proposed method surpasses others, with improvements of 3.3%, 4.5%, and 3% in average precision on the COCO, PASCAL VOC, and KITTI datasets, respectively, and an 84% increase in FPS.
Frontiers in Neurorobotics, 18:1484276. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788285/pdf/
Citations: 0
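Axial attention of the kind this encoder uses restricts self-attention to one spatial axis at a time, so a feature map with H x W tokens costs two passes of H x H and W x W attention instead of one (HW) x (HW) pass. A minimal numpy sketch of the axial idea (without the grouping the paper adds; shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(X, axis):
    """Self-attention restricted to one axis of X with shape (H, W, C):
    axis=0 attends within each column (vertical), axis=1 within each row."""
    Xb = X.transpose(1, 0, 2) if axis == 0 else X          # (batch, seq, C)
    scores = Xb @ Xb.transpose(0, 2, 1) / np.sqrt(X.shape[-1])
    out = softmax(scores, axis=-1) @ Xb
    return out.transpose(1, 0, 2) if axis == 0 else out

H, W, C = 8, 6, 16
X = np.random.default_rng(1).normal(size=(H, W, C))
# Vertical pass then horizontal pass: each output location has mixed
# information from its whole row and column at 8x8 + 6x6 attention cost,
# versus 48x48 for full 2-D attention.
Y = axial_attention(axial_attention(X, axis=0), axis=1)
print(Y.shape)  # (8, 6, 16)
```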
Cross-modality fusion with EEG and text for enhanced emotion detection in English writing.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-17 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1529880
Jing Wang, Ci Zhang
Abstract:
Introduction: Emotion detection in written text is critical for applications in human-computer interaction, affective computing, and personalized content recommendation. Traditional approaches to emotion detection primarily leverage textual features, using natural language processing techniques such as sentiment analysis, which, while effective, may miss subtle nuances of emotions. These methods often fall short in recognizing the complex, multimodal nature of human emotions, as they ignore physiological cues that could provide richer emotional insights.
Methods: To address these limitations, this paper proposes Emotion Fusion-Transformer, a cross-modality fusion model that integrates EEG signals and textual data to enhance emotion detection in English writing. By utilizing the Transformer architecture, our model effectively captures contextual relationships within the text while concurrently processing EEG signals to extract underlying emotional states. Specifically, the Emotion Fusion-Transformer first preprocesses EEG data through signal transformation and filtering, followed by feature extraction that complements the textual embeddings. These modalities are fused within a unified Transformer framework, allowing for a holistic view of both the cognitive and physiological dimensions of emotion.
Results and discussion: Experimental results demonstrate that the proposed model significantly outperforms text-only and EEG-only approaches, with improvements in both accuracy and F1-score across diverse emotional categories. This model shows promise for enhancing affective computing applications by bridging the gap between physiological and textual emotion detection, enabling more nuanced and accurate emotion analysis in English writing.
Frontiers in Neurorobotics, 18:1529880. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11782560/pdf/
Citations: 0
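A common way to realize this kind of cross-modality fusion is to project each modality into a shared embedding space and run self-attention over the concatenated token sequence, so text tokens can attend to EEG windows and vice versa. A hedged sketch under assumed dimensions (not the Emotion Fusion-Transformer's actual architecture, which the abstract does not fully specify):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 32                                  # shared embedding width (assumed)
text_emb = rng.normal(size=(12, 300))   # 12 tokens, 300-d word vectors
eeg_feat = rng.normal(size=(6, 64))     # 6 EEG windows, 64 features each

# Project both modalities into the shared space, then fuse by concatenation
W_text = rng.normal(size=(300, d))
W_eeg = rng.normal(size=(64, d))
fused = np.vstack([text_emb @ W_text, eeg_feat @ W_eeg])   # (18, d)

# One self-attention pass over the fused sequence: every text token can
# attend to every EEG window and vice versa
scores = fused @ fused.T / np.sqrt(d)
attended = softmax(scores) @ fused
pooled = attended.mean(axis=0)          # sequence-level emotion representation
print(pooled.shape)  # (32,)
```

In a trained model the projections and attention weights are learned and the pooled vector feeds a classification head over the emotion categories.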
Editorial: Brain-inspired autonomous driving.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-16 · eCollection Date: 2025-01-01 · DOI: 10.3389/fnbot.2025.1543115
Elishai Ezra Tsur, Gianluca Di Flumeri, Hadar Cohen Duwek
{"title":"Editorial: Brain-inspired autonomous driving.","authors":"Elishai Ezra Tsur, Gianluca Di Flumeri, Hadar Cohen Duwek","doi":"10.3389/fnbot.2025.1543115","DOIUrl":"10.3389/fnbot.2025.1543115","url":null,"abstract":"","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1543115"},"PeriodicalIF":2.6,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779717/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143079357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep reinforcement learning and robust SLAM based robotic control algorithm for self-driving path optimization.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-15 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1428358
Samiullah Khan, Ashfaq Niaz, Dou Yinke, Muhammad Usman Shoukat, Saqib Ali Nawaz
Abstract: A reward-shaping deep deterministic policy gradient (RS-DDPG) and simultaneous localization and mapping (SLAM) path-tracking algorithm is proposed to address the low accuracy and poor robustness of target path tracking in robotic control during maneuvers. The RS-DDPG algorithm is based on deep reinforcement learning (DRL) and designs a reward function to optimize the parameters of DDPG to achieve the required tracking accuracy and stability. A visual SLAM algorithm based on semantic segmentation and geometric information is proposed to address the poor robustness and susceptibility to interference from dynamic objects that visual-sensor SLAM exhibits in dynamic scenes. Using the Apollo autonomous driving simulation platform, simulation experiments were conducted on the original DDPG algorithm and the improved RS-DDPG path-tracking control algorithm. The results indicate that the proposed RS-DDPG algorithm outperforms DDPG in terms of path-tracking accuracy and robustness, and that the approach effectively improves the performance of visual SLAM systems in dynamic scenarios.
Frontiers in Neurorobotics, 18:1428358. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11775903/pdf/
Citations: 0
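Reward shaping of the kind RS-DDPG applies can be illustrated with potential-based shaping, which adds gamma * Phi(s') - Phi(s) to the base reward and provably leaves the optimal policy unchanged (Ng et al., 1999). The potential below, built from cross-track and heading error, is an illustrative choice, not the paper's exact reward function:

```python
GAMMA = 0.99  # discount factor (assumed)

def potential(state):
    """Potential Phi(s): negative distance to the reference path plus a
    heading penalty. Illustrative; the paper's reward terms may differ."""
    cross_track_err, heading_err = state
    return -abs(cross_track_err) - 0.1 * abs(heading_err)

def shaped_reward(base_reward, state, next_state):
    """Potential-based shaping: r + gamma*Phi(s') - Phi(s)."""
    return base_reward + GAMMA * potential(next_state) - potential(state)

# Moving toward the path (cross-track error 2.0 -> 0.5) earns a higher
# shaped reward than drifting away (2.0 -> 3.0) for the same base reward,
# giving the DDPG critic a denser learning signal during tracking.
towards = shaped_reward(0.0, (2.0, 0.2), (0.5, 0.1))
away = shaped_reward(0.0, (2.0, 0.2), (3.0, 0.4))
print(towards > away)  # True
```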
A portable EEG signal acquisition system and a limited-electrode channel classification network for SSVEP.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-15 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1502560
Yunxiao Ma, Jinming Huang, Chuan Liu, Meiyu Shi
Abstract: Brain-computer interfaces (BCIs) have garnered significant research attention, yet their complexity has hindered widespread adoption in daily life. Most current electroencephalography (EEG) systems rely on wet electrodes and numerous electrodes to enhance signal quality, making them impractical for everyday use. Portable and wearable devices offer a promising solution, but the limited number of electrodes in specific regions can lead to missing channels and reduced BCI performance. To overcome these challenges and enable better integration of BCI systems with external devices, this study developed an EEG signal acquisition platform (Gaitech BCI) based on the Robot Operating System (ROS) using a 10-channel dry electrode EEG device. Additionally, a multi-scale channel attention selection network based on the Squeeze-and-Excitation (SE) module (SEMSCS) is proposed to improve the classification performance of portable BCI devices with limited channels. Steady-state visual evoked potential (SSVEP) data were collected using the developed BCI system to evaluate both the system and network performance. Offline data from ten subjects were analyzed using within-subject and cross-subject experiments, along with ablation studies. The results demonstrated that the SEMSCS model achieved better classification performance than the comparative reference model, even with a limited number of channels. Additionally, the implementation of online experiments offers a rational solution for controlling external devices via BCI.
Frontiers in Neurorobotics, 18:1502560. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774901/pdf/
Citations: 0
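The Squeeze-and-Excitation module at the heart of SEMSCS re-weights channels by a learned importance score: global-average-pool each channel (squeeze), pass the descriptor through a bottleneck MLP with a sigmoid (excitation), and scale the channels. A numpy sketch for a 10-channel EEG segment (weights are random here; in the network they are trained):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(X, W1, W2):
    """Squeeze-and-Excitation over EEG channels.
    X: (C, T) -- C electrode channels, T time samples."""
    z = X.mean(axis=1)                          # squeeze: (C,) descriptor
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))   # excitation: bottleneck MLP
    return X * s[:, None], s                    # re-weight each channel

rng = np.random.default_rng(0)
C, T, r = 10, 250, 2              # 10 dry electrodes, 1 s at 250 Hz (assumed)
W1 = rng.normal(size=(C // r, C)) # reduction weights
W2 = rng.normal(size=(C, C // r)) # expansion weights
X = rng.normal(size=(C, T))
X_weighted, weights = se_channel_attention(X, W1, W2)
print(X_weighted.shape)  # (10, 250)
```

Because the sigmoid scores are per-channel, such a block lets a limited-electrode network emphasize the occipital channels that carry most of the SSVEP response.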
Integrating attention mechanism and boundary detection for building segmentation from remote sensing images.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-14 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1482051
Ping Liu, Yu Gao, Xiangtian Zheng, Hesong Wang, Yimeng Zhao, Xinru Wu, Zehao Lu, Zhichuan Yue, Yuting Xie, Shufeng Hao
Abstract: Accurate building segmentation has become critical in various fields such as urban management, urban planning, mapping, and navigation. With the increasing diversity in the number, size, and shape of buildings, convolutional neural networks have been used to segment and extract buildings from such images, resulting in increased efficiency and utilization of image features. We propose a building semantic segmentation method that improves the traditional Unet convolutional neural network by integrating an attention mechanism and boundary detection. The attention mechanism module combines attention in the channel and spatial dimensions. The module captures image feature information in the channel dimension using a one-dimensional convolutional cross-channel method and automatically adjusts the cross-channel dimension using an adaptive convolutional kernel size. Additionally, a weighted boundary loss function is designed to replace the traditional semantic segmentation cross-entropy loss to detect building boundaries. The loss function optimizes the extraction of building boundaries in backpropagation, ensuring the integrity of boundary extraction in shadowed regions. Experimental results show that the proposed model AMBDNet achieves high performance, including a recall rate of 0.9046, an IoU of 0.7797, and a pixel accuracy of 0.9140 on high-resolution remote sensing images, demonstrating its robustness and effectiveness in precise building segmentation. The results further indicate that AMBDNet improves single-class building recall by 0.0322 and single-class pixel accuracy by 0.0169 on the high-resolution remote sensing recognition task.
Frontiers in Neurorobotics, 18:1482051. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11772425/pdf/
Citations: 0
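A weighted boundary loss of this kind can be realized as cross-entropy with larger weights on boundary pixels, so boundary mistakes dominate the gradient. The sketch below marks boundaries with a simple 4-neighbour test, a stand-in for the paper's boundary detection, and the weight value is an assumption:

```python
import numpy as np

def boundary_weight_map(mask, w_boundary=5.0):
    """Weight boundary pixels more: a pixel is a boundary pixel if any of
    its 4 neighbours has a different label (stand-in for dilation-erosion)."""
    pad = np.pad(mask, 1, mode="edge")
    neigh = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                      pad[1:-1, :-2], pad[1:-1, 2:]])
    boundary = (neigh != mask).any(axis=0)
    return np.where(boundary, w_boundary, 1.0)

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy averaged with per-pixel weights."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return (weights * ce).mean()

# A 2x2 "building" in a 4x4 tile: confident predictions score better than
# a blurry 0.5-everywhere map, and boundary errors are penalized 5x.
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
w = boundary_weight_map(mask)
good = weighted_bce(np.where(mask == 1, 0.9, 0.1), mask, w)
blurry = weighted_bce(np.full((4, 4), 0.5), mask, w)
print(good < blurry)  # True
```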
FusionU10: enhancing pedestrian detection in low-light complex tourist scenes through multimodal fusion.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-10 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1504070
Xuefan Zhou, Jiapeng Li, Yingzheng Li
Abstract: With the rapid development of tourism, the concentration of visitor flows poses significant challenges for public safety management, especially in low-light and highly occluded environments, where existing pedestrian detection technologies often struggle to achieve satisfactory accuracy. Although infrared images perform well under low-light conditions, they lack color and detail, making them susceptible to background noise interference, particularly in complex outdoor environments where the similarity between heat sources and pedestrian features further reduces detection accuracy. To address these issues, this paper proposes the FusionU10 model, which combines information from both infrared and visible light images. The model first incorporates an Attention Gate mechanism (AGUNet) into an improved UNet architecture to focus on key features and generate pseudo-color images, followed by pedestrian detection using YOLOv10. During the prediction phase, the model optimizes the loss function with Complete Intersection over Union (CIoU), objectness loss (obj loss), and classification loss (cls loss), thereby enhancing the performance of the detection network and improving the quality and feature extraction capabilities of the pseudo-color images through a feedback mechanism. Experimental results demonstrate that FusionU10 significantly improves detection accuracy and robustness in complex scenes on the FLIR, M3FD, and LLVIP datasets, showing great potential for application in challenging environments.
Frontiers in Neurorobotics, 18:1504070. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757253/pdf/
Citations: 0
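The CIoU term in FusionU10's loss augments plain IoU with a normalized center-distance penalty and an aspect-ratio consistency penalty (Zheng et al., 2020), which keeps the gradient informative even when predicted and ground-truth boxes do not overlap. A self-contained implementation for corner-format boxes:

```python
import numpy as np

def ciou_loss(box_a, box_b, eps=1e-9):
    """Complete-IoU loss for [x1, y1, x2, y2] boxes:
    1 - IoU + rho^2/c^2 + alpha*v (Zheng et al., 2020)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # Squared center distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + \
           ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term
    v = (4 / np.pi ** 2) * (np.arctan((ax2 - ax1) / (ay2 - ay1 + eps)) -
                            np.arctan((bx2 - bx1) / (by2 - by1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

print(round(ciou_loss([0, 0, 10, 10], [0, 0, 10, 10]), 6))  # 0.0
print(ciou_loss([0, 0, 10, 10], [20, 20, 30, 30]) > 1)      # True
```

Identical boxes give a loss of zero, while disjoint boxes still receive a distance-driven gradient, which plain IoU loss cannot provide.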
RL-QPSO net: deep reinforcement learning-enhanced QPSO for efficient mobile robot path planning.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-08 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1464572
Yang Jing, Li Weiya
Abstract:
Introduction: Path planning in complex and dynamic environments poses a significant challenge in the field of mobile robotics. Traditional path planning methods such as genetic algorithms, Dijkstra's algorithm, and Floyd's algorithm typically rely on deterministic search strategies, which can lead to local optima and lack global search capabilities in dynamic settings. These methods have high computational costs and are not efficient for real-time applications.
Methods: To address these issues, this paper presents a Quantum-behaved Particle Swarm Optimization model enhanced by deep reinforcement learning (RL-QPSO Net) aimed at improving global optimality and adaptability in path planning. The RL-QPSO Net combines quantum-inspired particle swarm optimization (QPSO) and deep reinforcement learning (DRL) modules through a dual control mechanism to achieve path optimization and environmental adaptation. The QPSO module is responsible for global path optimization, using quantum mechanics to avoid local optima, while the DRL module adjusts strategies in real time based on environmental feedback, thus enhancing decision-making capabilities in complex high-dimensional scenarios.
Results and discussion: Experiments were conducted on multiple datasets, including Cityscapes, NYU Depth V2, Mapillary Vistas, and ApolloScape, and the results showed that RL-QPSO Net outperforms traditional methods in terms of accuracy, computational efficiency, and model complexity. This method demonstrated significant improvements in accuracy and computational efficiency, providing an effective path planning solution for real-time applications in complex environments for mobile robots. In the future, this method could be further extended to resource-limited environments to achieve broader practical applications.
Frontiers in Neurorobotics, 18:1464572. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750848/pdf/
Citations: 0
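The quantum-behaved particle update at the core of QPSO samples each particle around a local attractor, with a spread proportional to its distance from the swarm's mean-best position (Sun et al., 2004). A minimal sketch minimizing a stand-in cost function; the DRL-driven adaptation the paper adds on top is omitted, and the hyperparameters are assumptions:

```python
import numpy as np

def qpso_minimize(f, dim=2, n_particles=20, iters=100, beta=0.75, seed=0):
    """Quantum-behaved PSO: sample each particle around the attractor
    p = phi*pbest + (1-phi)*gbest with spread beta*|mbest - x|*ln(1/u)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_particles, dim))
    pbest = X.copy()
    pbest_val = np.array([f(x) for x in X])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)               # mean of personal bests
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest      # local attractor per particle
        u = rng.uniform(1e-12, 1, size=(n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1, 1)
        X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
        vals = np.array([f(x) for x in X])
        improved = vals < pbest_val
        pbest[improved] = X[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in for a path cost: distance-squared to the goal at the origin
sphere = lambda x: float((x ** 2).sum())
best_x, best_val = qpso_minimize(sphere)
print(best_val)
```

The heavy-tailed ln(1/u) step occasionally throws particles far from the attractor, which is the mechanism QPSO uses to escape local optima that deterministic planners get stuck in.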
Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition.
IF 2.6 · Tier 4 · Computer Science
Frontiers in Neurorobotics · Pub Date: 2025-01-07 · eCollection Date: 2024-01-01 · DOI: 10.3389/fnbot.2024.1481746
Jiyao Liu, Lang He, Haifeng Chen, Dongmei Jiang
Abstract: Significant strides have been made in emotion recognition from electroencephalography (EEG) signals. However, effectively modeling the diverse spatial, spectral, and temporal features of multi-channel brain signals remains a challenge. This paper proposes a novel framework, the Directional Spatial and Spectral Attention Network (DSSA Net), which enhances emotion recognition accuracy by capturing critical spatial-spectral-temporal features from EEG signals. The framework consists of three modules: Positional Attention (PA), Spectral Attention (SA), and Temporal Attention (TA). The PA module includes Vertical Attention (VA) and Horizontal Attention (HA) branches, designed to detect active brain regions from different orientations. Experimental results on three benchmark EEG datasets demonstrate that DSSA Net outperforms most competitive methods. On the SEED and SEED-IV datasets, it achieves accuracies of 96.61% and 85.07% for subject-dependent emotion recognition, respectively, and 87.03% and 75.86% for subject-independent recognition. On the DEAP dataset, it attains accuracies of 94.97% for valence and 94.73% for arousal. These results showcase the framework's ability to leverage both spatial and spectral differences across brain hemispheres and regions, enhancing classification accuracy for emotion recognition.
Frontiers in Neurorobotics, 18:1481746. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747697/pdf/
Citations: 0
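EEG emotion pipelines evaluated on SEED and DEAP commonly feed per-band differential-entropy (DE) features, rather than raw signals, into spatial and spectral attention modules; whether DSSA Net uses exactly this front-end is not stated in the abstract, so the sketch below is a generic one. For a Gaussian band-limited signal, DE reduces to 0.5*ln(2*pi*e*variance):

```python
import numpy as np

def bandpower_de(signal, fs, band):
    """Differential entropy of one EEG channel restricted to a band,
    via FFT band masking. DE = 0.5*ln(2*pi*e*var) for Gaussian signals."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec = np.fft.rfft(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    filtered = np.fft.irfft(np.where(mask, spec, 0), n=len(signal))
    return 0.5 * np.log(2 * np.pi * np.e * filtered.var() + 1e-12)

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}
rng = np.random.default_rng(0)
fs, seconds, n_channels = 200, 4, 62      # SEED-style setup (assumed)
eeg = rng.normal(size=(n_channels, fs * seconds))
features = np.array([[bandpower_de(ch, fs, b) for b in BANDS.values()]
                     for ch in eeg])
print(features.shape)  # (62, 5)
```

The resulting (channels x bands) feature map is what vertical/horizontal spatial attention and spectral attention can then weight by electrode position and frequency band.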