Frontiers in Neurorobotics | Pub Date: 2025-01-15 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1428358
Samiullah Khan, Ashfaq Niaz, Dou Yinke, Muhammad Usman Shoukat, Saqib Ali Nawaz
Deep reinforcement learning and robust SLAM based robotic control algorithm for self-driving path optimization.

A reward-shaping deep deterministic policy gradient (RS-DDPG) and simultaneous localization and mapping (SLAM) path-tracking algorithm is proposed to address the low accuracy and poor robustness of target path tracking in robotic control during maneuvers. The RS-DDPG algorithm is based on deep reinforcement learning (DRL) and designs a reward function that optimizes the DDPG parameters to achieve the required tracking accuracy and stability. To address the poor robustness of visual-sensor-based SLAM and its susceptibility to interference from dynamic objects in dynamic scenes, a visual SLAM algorithm based on semantic segmentation and geometric information is also proposed. Using the Apollo autonomous driving simulation platform, simulation experiments were conducted on the original DDPG algorithm and the improved RS-DDPG path-tracking control algorithm. The results indicate that the proposed RS-DDPG algorithm outperforms DDPG in both path-tracking accuracy and robustness, and that the proposed approach effectively improves the performance of visual SLAM systems in dynamic scenarios.

Frontiers in Neurorobotics 18:1428358. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11775903/pdf/
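The abstract does not give the paper's exact reward function; as an illustration only, reward shaping for path tracking is often a weighted sum of penalty terms on lateral deviation, heading error, and speed, along these lines (all weights and terms here are hypothetical):

```python
def shaped_reward(cross_track_err, heading_err, speed,
                  w_ct=1.0, w_hd=0.5, w_sp=0.1, target_speed=5.0):
    """Illustrative shaped reward for path tracking: penalize lateral
    deviation and heading error, reward staying near a target speed.
    The weights w_ct, w_hd, w_sp are hypothetical tuning parameters."""
    r_track = -w_ct * abs(cross_track_err)       # lateral deviation penalty
    r_head = -w_hd * abs(heading_err)            # heading misalignment penalty
    r_speed = -w_sp * abs(speed - target_speed)  # speed-keeping term
    return r_track + r_head + r_speed

# Perfect tracking at target speed yields the maximum reward of 0.
print(shaped_reward(0.0, 0.0, 5.0))   # 0.0
print(shaped_reward(1.0, 0.2, 4.0))   # negative: off the path
```

In DDPG, a denser shaped reward of this kind typically speeds up convergence relative to a sparse success/failure signal, which matches the motivation stated in the abstract.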
Frontiers in Neurorobotics | Pub Date: 2025-01-15 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1502560
Yunxiao Ma, Jinming Huang, Chuan Liu, Meiyu Shi
A portable EEG signal acquisition system and a limited-electrode channel classification network for SSVEP.

Brain-computer interfaces (BCIs) have garnered significant research attention, yet their complexity has hindered widespread adoption in daily life. Most current electroencephalography (EEG) systems rely on wet electrodes and large electrode counts to enhance signal quality, making them impractical for everyday use. Portable and wearable devices offer a promising alternative, but the limited number of electrodes over specific regions can lead to missing channels and reduced BCI performance. To overcome these challenges and enable better integration of BCI systems with external devices, this study developed an EEG signal acquisition platform (Gaitech BCI) based on the Robot Operating System (ROS) using a 10-channel dry-electrode EEG device. In addition, a multi-scale channel attention selection network based on the Squeeze-and-Excitation (SE) module (SEMSCS) is proposed to improve the classification performance of portable BCI devices with limited channels. Steady-state visual evoked potential (SSVEP) data were collected with the developed system to evaluate both the platform and the network. Offline data from ten subjects were analyzed in within-subject and cross-subject experiments, along with ablation studies. The results demonstrate that the SEMSCS model achieves better classification performance than the reference models even with a limited number of channels, and the online experiments offer a practical way to control external devices via BCI.

Frontiers in Neurorobotics 18:1502560. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774901/pdf/
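A minimal pure-Python sketch of the Squeeze-and-Excitation idea that SEMSCS builds on: squeeze each channel to a scalar, pass the scalars through a small bottleneck (ReLU then sigmoid), and rescale the channels by the resulting attention scores. The weights `w1`/`w2` are stand-ins for learned parameters, not the paper's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_channel_attention(features, w1, w2):
    """Squeeze-and-Excitation over EEG channels (illustrative sketch).
    features: C channel windows, each a list of T samples.
    w1: (C_reduced x C) bottleneck weights, w2: (C x C_reduced) expansion
    weights -- hypothetical stand-ins for learned parameters."""
    # Squeeze: global average per channel
    z = [sum(ch) / len(ch) for ch in features]
    # Excitation: bottleneck MLP, ReLU then sigmoid
    h = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    s = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in w2]
    # Scale: reweight each channel by its attention score in (0, 1)
    return [[s_c * x for x in ch] for s_c, ch in zip(s, features)]
```

With few electrodes, such channel attention lets the network emphasize the channels that actually carry SSVEP content rather than treating all ten equally.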
Frontiers in Neurorobotics | Pub Date: 2025-01-14 | DOI: 10.3389/fnbot.2024.1482051
Ping Liu, Yu Gao, Xiangtian Zheng, Hesong Wang, Yimeng Zhao, Xinru Wu, Zehao Lu, Zhichuan Yue, Yuting Xie, Shufeng Hao
Integrating attention mechanism and boundary detection for building segmentation from remote sensing images.

Accurate building segmentation has become critical in fields such as urban management, urban planning, mapping, and navigation. With the increasing diversity in the number, size, and shape of buildings, convolutional neural networks have been used to segment and extract buildings from remote sensing images, improving efficiency and the utilization of image features. We propose AMBDNet, a building semantic segmentation method that improves the traditional U-Net by integrating an attention mechanism and boundary detection. The attention module combines attention in the channel and spatial dimensions: it captures feature information using one-dimensional cross-channel convolution and adapts the convolutional kernel size automatically. In addition, a weighted boundary loss function replaces the conventional cross-entropy loss for semantic segmentation to detect building boundaries; during backpropagation it optimizes boundary extraction and preserves the integrity of boundaries in shadowed regions. Experimental results show that AMBDNet achieves a recall of 0.9046, an IoU of 0.7797, and a pixel accuracy of 0.9140 on high-resolution remote sensing images, demonstrating its robustness and effectiveness for precise building segmentation. The results further indicate that AMBDNet improves single-class building recall by 0.0322 and single-class pixel accuracy by 0.0169 on the high-resolution remote sensing recognition task.

Frontiers in Neurorobotics 18:1482051. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11772425/pdf/
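The abstract does not specify the exact form of the weighted boundary loss; one common construction, shown here purely as an illustration, upweights the per-pixel cross-entropy of pixels near building edges (the weight `w_boundary` is a hypothetical hyperparameter):

```python
import math

def weighted_boundary_bce(pred, target, boundary_mask, w_boundary=5.0):
    """Per-pixel binary cross-entropy with extra weight on boundary pixels.
    All inputs are flat lists over pixels; boundary_mask marks pixels near
    a building edge (1) vs. interior/background (0)."""
    eps = 1e-7
    total = 0.0
    for p, t, b in zip(pred, target, boundary_mask):
        p = min(max(p, eps), 1 - eps)          # clamp for numerical safety
        ce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        total += (w_boundary if b else 1.0) * ce
    return total / len(pred)
```

Because boundary pixels contribute more to the loss, gradients push the network hardest exactly where building outlines are ambiguous, e.g. in shadow.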
Frontiers in Neurorobotics | Pub Date: 2025-01-10 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1504070
Xuefan Zhou, Jiapeng Li, Yingzheng Li
FusionU10: enhancing pedestrian detection in low-light complex tourist scenes through multimodal fusion.

With the rapid development of tourism, concentrated visitor flows pose significant challenges for public safety management, especially in low-light and highly occluded environments where existing pedestrian detection technologies often struggle to achieve satisfactory accuracy. Although infrared images perform well under low-light conditions, they lack color and detail, making them susceptible to background noise, particularly in complex outdoor environments where the similarity between heat sources and pedestrian features further reduces detection accuracy. To address these issues, this paper proposes FusionU10, a model that combines information from infrared and visible-light images. The model first incorporates an attention-gate mechanism (AGUNet) into an improved UNet architecture to focus on key features and generate pseudo-color images, and then performs pedestrian detection with YOLOv10. During prediction, the model optimizes a loss function combining Complete Intersection over Union (CIoU), objectness, and classification terms, improving the detection network and, through a feedback mechanism, the quality and feature-extraction capability of the pseudo-color images. Experimental results demonstrate that FusionU10 significantly improves detection accuracy and robustness in complex scenes on the FLIR, M3FD, and LLVIP datasets, showing strong potential for challenging environments.

Frontiers in Neurorobotics 18:1504070. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757253/pdf/
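CIoU, one of the loss terms named above, has a standard formulation: IoU minus a normalized center-distance term minus an aspect-ratio consistency term. A self-contained sketch of that standard metric (not code from the paper):

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared center distance, normalized by the enclosing-box diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - rho2 / c2 - alpha * v

print(ciou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 for identical boxes
```

The detection loss then uses `1 - ciou(pred, target)`, so even non-overlapping boxes receive a useful gradient from the center-distance term.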
Frontiers in Neurorobotics | Pub Date: 2025-01-08 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1464572
Yang Jing, Li Weiya
RL-QPSO net: deep reinforcement learning-enhanced QPSO for efficient mobile robot path planning.

Introduction: Path planning in complex and dynamic environments poses a significant challenge in mobile robotics. Traditional methods such as genetic algorithms, Dijkstra's algorithm, and Floyd's algorithm typically rely on fixed search strategies, which can lead to local optima and lack global search capability in dynamic settings; their high computational costs also make them inefficient for real-time applications.

Methods: To address these issues, this paper presents a quantum-behaved particle swarm optimization model enhanced by deep reinforcement learning (RL-QPSO Net) aimed at improving global optimality and adaptability in path planning. RL-QPSO Net combines quantum-behaved particle swarm optimization (QPSO) and deep reinforcement learning (DRL) modules through a dual control mechanism to achieve path optimization and environmental adaptation. The QPSO module is responsible for global path optimization, using quantum-behaved position updates to avoid local optima, while the DRL module adjusts strategies in real time based on environmental feedback, enhancing decision-making in complex high-dimensional scenarios.

Results and discussion: Experiments on multiple datasets, including Cityscapes, NYU Depth V2, Mapillary Vistas, and ApolloScape, showed that RL-QPSO Net outperforms traditional methods in accuracy, computational efficiency, and model complexity, providing an effective path-planning solution for real-time mobile robot applications in complex environments. In the future, the method could be extended to resource-limited environments for broader practical application.

Frontiers in Neurorobotics 18:1464572. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750848/pdf/
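The standard QPSO position update, which RL-QPSO Net presumably builds on (the paper's exact variant may differ), draws each particle around an attractor between its personal best and the global best, with a spread proportional to the distance from the mean best position:

```python
import math
import random

def qpso_step(positions, pbests, gbest, beta=0.75):
    """One quantum-behaved PSO update (standard QPSO rule, illustrative).
    beta is the contraction-expansion factor controlling the search spread."""
    dim = len(gbest)
    # mbest: mean of all personal-best positions
    mbest = [sum(p[d] for p in pbests) / len(pbests) for d in range(dim)]
    new_positions = []
    for x, pb in zip(positions, pbests):
        new_x = []
        for d in range(dim):
            phi = random.random()
            p = phi * pb[d] + (1 - phi) * gbest[d]   # local attractor
            u = random.random()
            spread = beta * abs(mbest[d] - x[d]) * math.log(1 / u)
            new_x.append(p + spread if random.random() < 0.5 else p - spread)
        new_positions.append(new_x)
    return new_positions
```

The heavy-tailed `log(1/u)` jump is what gives QPSO its ability to escape local optima; in RL-QPSO Net, the DRL module would additionally adapt quantities like `beta` from environmental feedback.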
Frontiers in Neurorobotics | Pub Date: 2025-01-07 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1481746
Jiyao Liu, Lang He, Haifeng Chen, Dongmei Jiang
Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition.

Significant strides have been made in emotion recognition from electroencephalography (EEG) signals. However, effectively modeling the diverse spatial, spectral, and temporal features of multi-channel brain signals remains a challenge. This paper proposes the Directional Spatial and Spectral Attention Network (DSSA Net), which improves emotion recognition accuracy by capturing critical spatial-spectral-temporal features from EEG signals. The framework consists of three modules: Positional Attention (PA), Spectral Attention (SA), and Temporal Attention (TA). The PA module includes Vertical Attention (VA) and Horizontal Attention (HA) branches designed to detect active brain regions from different orientations. Experiments on three benchmark EEG datasets demonstrate that DSSA Net outperforms most competing methods. On the SEED and SEED-IV datasets it achieves accuracies of 96.61% and 85.07% for subject-dependent emotion recognition, respectively, and 87.03% and 75.86% for subject-independent recognition; on the DEAP dataset it attains 94.97% for valence and 94.73% for arousal. These results demonstrate the framework's ability to exploit spatial and spectral differences across brain hemispheres and regions to improve classification accuracy.

Frontiers in Neurorobotics 18:1481746. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747697/pdf/
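As a toy illustration of the vertical/horizontal attention idea only (not the paper's actual layers), one can score the rows and columns of a 2D electrode grid separately and reweight the feature map by the outer product of the two score vectors:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def directional_attention(grid):
    """Reweight a rows x cols electrode grid with separate row (horizontal)
    and column (vertical) attention scores -- an illustrative sketch of the
    VA/HA branch idea."""
    rows, cols = len(grid), len(grid[0])
    row_scores = softmax([sum(r) / cols for r in grid])              # HA branch
    col_scores = softmax([sum(grid[i][j] for i in range(rows)) / rows
                          for j in range(cols)])                     # VA branch
    return [[grid[i][j] * row_scores[i] * col_scores[j]
             for j in range(cols)] for i in range(rows)]
```

Scoring the two orientations separately lets left/right hemisphere asymmetries (columns) and front/back gradients (rows) be weighted independently, which is the intuition behind the VA and HA branches.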
Frontiers in Neurorobotics | Pub Date: 2025-01-07 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1460255
Siyuan Shen, Jichen Chen, Guanfeng Yu, Zhengjun Zhai, Pujie Han
KalmanFormer: using transformer to model the Kalman Gain in Kalman Filters.

Introduction: Tracking the hidden states of dynamic systems is a fundamental task in signal processing. The recursive Kalman Filter (KF) is widely regarded as an efficient solution for linear Gaussian systems, offering low computational complexity. However, real-world applications often involve non-linear dynamics, making accurate state estimation challenging for the traditional Kalman Filter, and accurately modeling system dynamics and noise is often difficult in practice. To address these limitations, we propose KalmanFormer, a hybrid model-driven and data-driven state estimator that leverages data to improve state estimation under non-linear conditions and partial-information scenarios.

Methods: KalmanFormer integrates the classical Kalman Filter with a Transformer framework. Specifically, the Transformer learns the Kalman Gain directly from data, without prior knowledge of the noise parameters. The learned gain is then incorporated into the standard Kalman Filter workflow, enabling the system to better handle non-linearities and model mismatch. This hybrid approach combines the strengths of data-driven learning and model-driven methodology to achieve robust state estimation.

Results and discussion: To evaluate KalmanFormer, we conducted numerical experiments on both synthetic and real-world datasets. The results show that KalmanFormer outperforms the classical Extended Kalman Filter (EKF) in the same settings, achieving superior accuracy in tracking hidden states and demonstrating resilience to non-linearities and imprecise system models.

Frontiers in Neurorobotics 18:1460255. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747084/pdf/
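The key structural idea, a Kalman measurement update whose gain is supplied from outside the filter rather than computed from noise covariances, can be sketched independently of the Transformer that produces the gain:

```python
def kf_update_with_gain(x_pred, K, z, H):
    """Kalman-style measurement update with an externally supplied gain K,
    e.g. predicted by a learned model instead of derived from covariances.
    x_pred: predicted state, z: measurement, H: measurement matrix (rows
    map state to measurement), K: gain matrix (state x measurement)."""
    # Innovation: y = z - H @ x_pred
    y = [z_i - sum(h * x for h, x in zip(H_row, x_pred))
         for z_i, H_row in zip(z, H)]
    # Corrected state: x_pred + K @ y
    return [x_i + sum(k * y_j for k, y_j in zip(K_row, y))
            for x_i, K_row in zip(x_pred, K)]

# With gain 1 and direct observation, the update snaps to the measurement.
print(kf_update_with_gain([0.0], [[1.0]], [2.0], [[1.0]]))  # [2.0]
```

Because the update equation itself is unchanged, the learned gain slots into the standard predict/update loop; only the covariance-based gain computation is replaced.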
Frontiers in Neurorobotics | Pub Date: 2025-01-06 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1480055
Hua Cheng, Yang Zhang, Huangxin Xu, Dingliang Li, Zejian Zhong, Yinchuan Zhao, Zhuo Yan
MSGU-Net: a lightweight multi-scale ghost U-Net for image segmentation.

U-Net and its variants have been widely used for image segmentation. This paper proposes a lightweight multi-scale Ghost U-Net (MSGU-Net) architecture that processes segmentation tasks efficiently and quickly while generating high-quality masks for each object. A pyramid-structure (SPP-Inception) module and a ghost module are seamlessly integrated in a lightweight manner and, equipped with an efficient local attention (ELA) mechanism and an attention-gate mechanism, are designed to accurately identify regions of interest (ROI). The SPP-Inception and ghost modules work in tandem to merge multi-scale information from low-level features, high-level features, and decoder masks at each stage. Comparative experiments against state-of-the-art networks on the ISIC2017 and ISIC2018 datasets show that, relative to the baseline U-Net, MSGU-Net achieves superior segmentation performance while reducing parameter and computation costs by 96.08% and 92.59%, respectively. MSGU-Net can therefore serve as a lightweight deep neural network suitable for deployment on a range of intelligent devices and mobile platforms, offering considerable potential for widespread adoption.

Frontiers in Neurorobotics 18:1480055. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11743674/pdf/
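The parameter savings of a ghost module can be seen with simple arithmetic (GhostNet-style formulation; the ratio `s` and cheap-kernel size `d` are assumed hyperparameters, not values from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Weights in a ghost module producing c_out channels: a primary
    convolution makes c_out/s intrinsic feature maps, then cheap d x d
    depthwise ops generate the remaining 'ghost' maps."""
    m = c_out // s                 # intrinsic channels
    primary = c_in * m * k * k     # ordinary convolution
    cheap = (s - 1) * m * d * d    # depthwise linear transforms
    return primary + cheap

std = conv_params(64, 128, 3)          # 73728
ghost = ghost_params(64, 128, 3, s=2)  # 37440
print(std, ghost, std / ghost)         # roughly an s-fold reduction
```

Stacking such modules throughout an encoder-decoder is how a Ghost U-Net reaches the order-of-magnitude cost reductions reported above.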
Frontiers in Neurorobotics | Pub Date: 2025-01-03 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1517960
Zhe Zhang, Yuchun Zheng
Architectural planning robot driven by unsupervised learning for space optimization.

Introduction: Space optimization in architectural planning is crucial for maximizing functionality and improving user experience in built environments. Traditional approaches often rely on manual planning or supervised learning, which can be limited by the availability of labeled data and may not adapt well to complex spatial requirements.

Methods: To address these limitations, this paper presents an architectural planning robot driven by unsupervised learning for automatic space optimization. The proposed framework integrates spatial attention, clustering, and state refinement mechanisms to learn and optimize spatial configurations without labeled training data. The spatial attention mechanism focuses the model on key areas within the architectural space, clustering identifies functional zones, and state refinement iteratively improves the layout by adjusting it based on learned patterns. Experiments on multiple 3D datasets demonstrate that the approach achieves optimized layouts with reduced computational requirements.

Results and discussion: The results show significant improvements in layout efficiency and processing time compared with traditional methods, indicating potential for real-world application in automated architectural planning and dynamic space management. This work contributes a scalable solution for architectural space optimization that adapts to diverse spatial requirements through unsupervised learning.

Frontiers in Neurorobotics 18:1517960. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11739300/pdf/
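The abstract leaves the clustering method unspecified; plain k-means, shown here as one possible unsupervised choice rather than the paper's method, groups spatial anchor points into functional zones without any labels:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2D/3D anchor points -- one unsupervised way to
    discover functional zones in a floor plan."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as its cluster mean
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```

Each resulting cluster would correspond to a candidate functional zone, which the framework's state-refinement step could then adjust iteratively.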
Frontiers in Neurorobotics | Pub Date: 2024-12-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1499734
Qikai Sun
EEG-powered cerebral transformer for athletic performance.

Introduction: With advances in wearable devices and biosignal analysis, sports performance analysis has become an increasingly active research field, driven by the demand for real-time monitoring of athletes' condition in training and competition. Traditional approaches typically rely on video or sensor data for motion recognition, but unimodal data often fails to capture athletes' neural state, limiting accuracy and real-time performance on complex movement patterns. These methods also struggle with multimodal data fusion, making it difficult to exploit the deep information in electroencephalogram (EEG) signals.

Methods: To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. Using an adaptive attention mechanism and cross-modal fusion, the model combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. Its effectiveness was validated on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in accuracy, recall, and F1 score while maintaining high computational efficiency.

Results and discussion: This study provides a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion it improves the recognition of complex movements and supports monitoring of athletes' neural states, with important applications in sports training and medical rehabilitation.

Frontiers in Neurorobotics 18:1499734. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11695414/pdf/
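The abstract does not detail the fusion mechanism; as a minimal sketch of the general idea of adaptively weighting one modality against another, a scalar gate (with hypothetical learned weights, not the paper's) can blend an EEG embedding with a video embedding per sample:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(eeg_vec, video_vec, gate_w_eeg, gate_w_video):
    """Adaptive two-modality fusion sketch: a scalar gate decides how much
    to trust the EEG embedding vs. the video embedding for this sample.
    gate_w_eeg / gate_w_video are hypothetical learned gate weights."""
    score = (sum(w * x for w, x in zip(gate_w_eeg, eeg_vec))
             + sum(w * x for w, x in zip(gate_w_video, video_vec)))
    g = sigmoid(score)  # g -> 1 favors EEG, g -> 0 favors video
    return [g * e + (1 - g) * v for e, v in zip(eeg_vec, video_vec)]
```

A Transformer-based model would replace this scalar gate with token-level cross-attention, but the principle, letting the data decide the per-sample modality weighting, is the same.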