Frontiers in Neurorobotics | Pub Date: 2025-02-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1478758
Chuangri Zhao, Yang Yu, Zeqi Ye, Ziyang Tian, Yifan Zhang, Ling-Li Zeng
Universal slip detection of robotic hand with tactile sensing

Abstract: Slip detection recognizes whether a grasped object remains stable, and accurate detection can significantly enhance manipulation dexterity. In this study, we explore slip detection for five-finger robotic hands capable of performing various grasp types, detecting slippage across all five fingers as a whole rather than concentrating on individual fingertips. First, we constructed a dataset of more than 200k data points, collected while grasping common objects from daily life across six grasp types. Second, guided by the principle of deep double descent, we designed a lightweight universal slip-detection convolutional network for different grasp types (USDConvNet-DG) to classify grasp states (no-touch, slipping, and stable grasp). By combining frequency-domain with time-domain features, the network achieves a computation time of only 1.26 ms and an average accuracy of over 97% on both the validation and test datasets, demonstrating strong generalization. Furthermore, we validated USDConvNet-DG for real-time grasp-force adjustment in real-world scenarios, showing that it can effectively improve the stability and reliability of robotic manipulation.

Frontiers in Neurorobotics, vol. 19, article 1478758. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11843555/pdf/

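The abstract notes that USDConvNet-DG combines frequency-domain with time-domain features of the tactile signal. A minimal sketch of such a feature vector follows; the paper's exact feature set is not specified here, so the statistics and DFT bins below are illustrative assumptions only:

```python
import cmath
import math

def slip_features(window):
    """Concatenate simple time-domain statistics with low-frequency DFT
    magnitudes for one tactile-signal window. NOTE: the specific features
    chosen here are illustrative, not the paper's actual feature set."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    # time-domain features: mean, standard deviation, peak-to-peak range
    time_feats = [mean, math.sqrt(var), max(window) - min(window)]
    # frequency-domain features: magnitudes of the first few non-DC DFT bins
    freq_feats = [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(window))) / n
        for k in range(1, 4)
    ]
    return time_feats + freq_feats

# toy window: a pure tone with period 8, sampled 16 times (two full cycles)
window = [math.sin(2 * math.pi * i / 8) for i in range(16)]
feats = slip_features(window)
```

For the pure tone, essentially all spectral energy lands in bin k=2 (magnitude amplitude/2 = 0.5), while the off-frequency bins stay near zero, showing how slip-induced vibration bands would stand out in such a vector.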
Frontiers in Neurorobotics | Pub Date: 2025-02-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1550787
Jinchi Han, Duojicairang Ma
Construction of multi-robot platform based on dobot robots

Abstract: To support research on cooperative control schemes for multirobot systems, this paper presents an experimental platform based on dobot robots that can be used to verify such schemes through physical experiments. A distributed scheme is proposed to achieve cooperative control of multirobot systems, and simulation results prove its effectiveness. The experimental platform is then built to verify the proposed scheme: a computer sends data over WiFi to the microcontroller inside the host, and the host distributes the data to the slaves. Finally, physical experiments on the platform are compared with the simulations; the task is completed successfully, demonstrating both the effectiveness of the scheme and the feasibility of the platform. The platform can validate various schemes and exhibits strong expandability and practicality.

Frontiers in Neurorobotics, vol. 19, article 1550787. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835969/pdf/

Frontiers in Neurorobotics | Pub Date: 2025-02-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1546731
Yuxin Zhao, Jiahao Wu, Mianjie Zheng
Noise-immune zeroing neural dynamics for dynamic signal source localization system and robotic applications in the presence of noise

Abstract: Angle of arrival (AoA) and time difference of arrival (TDOA) are two widely used methods for solving dynamic signal source localization (DSSL) problems, in which the position of a moving target is determined by measuring the angle and the time difference of the signal's arrival, respectively. In robotic manipulator applications, accurate and real-time joint information is crucial for tasks such as trajectory tracking and visual servoing. However, signal propagation and acquisition are susceptible to noise interference, which poses challenges for real-time systems. To address this issue, a noise-immune zeroing neural dynamics (NIZND) model is proposed: a brain-inspired algorithm that adds an integral term and an activation function to the traditional zeroing neural dynamics (ZND) model to mitigate noise interference during localization tasks. Theoretical analysis confirms that the NIZND model exhibits global convergence and high precision under noisy conditions. Simulation experiments demonstrate its robustness and effectiveness compared with traditional DSSL-solving schemes, as well as in a trajectory-tracking scheme for robotic manipulators. The NIZND model thus offers a promising solution to accurate localization in noisy environments, ensuring both high precision and effective noise suppression, and the experimental results highlight its superiority in real-time applications where noise interference is prevalent.

Frontiers in Neurorobotics, vol. 19, article 1546731. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835927/pdf/

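The noise-suppression effect of adding an integral term to zeroing neural dynamics can be illustrated on a scalar error dynamic under constant measurement noise. This is a minimal sketch, not the paper's model: the gains, noise level, and linear activation below are illustrative assumptions:

```python
def simulate(gamma=10.0, lam=50.0, noise=0.5, dt=1e-3, T=5.0):
    """Euler simulation of two error dynamics driven by constant noise.
    Plain ZND:  e' = -gamma*e + noise          (steady-state error noise/gamma)
    NIZND-like: e' = -gamma*e - lam*S + noise  with S' = e (integral term)
    Gains and noise level are illustrative, not taken from the paper."""
    e_znd, e_ni, s = 1.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e_znd += dt * (-gamma * e_znd + noise)
        s += dt * e_ni                      # accumulate the integral of the error
        e_ni += dt * (-gamma * e_ni - lam * s + noise)
    return abs(e_znd), abs(e_ni)

znd_err, niznd_err = simulate()
```

The plain ZND error settles at noise/gamma (0.05 here), while the integral term drives the noise-immune variant's residual error to essentially zero, mirroring the constant-noise rejection the abstract claims.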
Frontiers in Neurorobotics | Pub Date: 2025-01-30 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1513458
Lei Jiang, Chaojie Fu, Yanhong Liang, Yongbin Jin, Hongtao Wang
Critical review on the relationship between design variables and performance of dexterous hands: a quantitative analysis

Abstract: Dexterous hands play vital roles in tasks performed by humanoid robots. For the first time, we quantify the correlation between design variables and the performance of 65 dexterous hands using Cramér's V. A comprehensive cross-correlation analysis quantitatively reveals how performance metrics such as speed, weight, fingertip force, and compactness relate to design variables including degrees of freedom (DOF), structural form, driving form, and transmission mode. The study shows how various design parameters are inherently coupled, leading to compromises among performance metrics. These findings provide a theoretical basis for the design of dexterous hands in various application scenarios and offer new insights for performance optimization.

Frontiers in Neurorobotics, vol. 18, article 1513458. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11821616/pdf/

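Cramér's V, the association measure the review uses, is derived from the chi-squared statistic of a contingency table and ranges from 0 (no association) to 1 (perfect association). A minimal pure-Python sketch, where the 2x2 table is hypothetical and not data from the paper:

```python
import math

def cramers_v(table):
    """Cramér's V for an r x k contingency table of two categorical
    variables (Pearson chi-squared, no continuity correction)."""
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    r, k = len(table), len(table[0])
    return math.sqrt(chi2 / (n * (min(r, k) - 1)))

# Hypothetical cross-tabulation: rows = one design variable (two levels),
# columns = one binned performance metric (two levels)
table = [[20, 5], [5, 20]]
v = cramers_v(table)  # chi2 = 18, n = 50, so V = 0.6
```

Because V is normalized by table size, it lets the review compare association strength across variable pairs with different numbers of categories.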
Frontiers in Neurorobotics | Pub Date: 2025-01-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1490267
Ugur Akcal, Ivan Georgiev Raikov, Ekaterina Dmitrievna Gribkova, Anwesa Choudhuri, Seung Hyun Kim, Mattia Gazzola, Rhanor Gillette, Ivan Soltesz, Girish Chowdhary
LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition

Abstract: Visual place recognition (VPR) is the ability to recognize locations in a physical environment from visual inputs alone. It is challenging due to perceptual aliasing, viewpoint and appearance variations, and the complexity of dynamic scenes. Despite promising demonstrations, many state-of-the-art (SOTA) VPR approaches based on artificial neural networks (ANNs) suffer from computational inefficiency, whereas spiking neural networks (SNNs) implemented on neuromorphic hardware are reported to offer markedly more computationally efficient solutions. Still, training SOTA SNNs for VPR is often intractable on large and diverse datasets, and they typically perform poorly in real-time operation. To address these shortcomings, we developed an end-to-end convolutional SNN model for VPR that leverages backpropagation for tractable training. Rate-based approximations of leaky integrate-and-fire (LIF) neurons are employed during training and replaced with spiking LIF neurons during inference. The proposed method significantly outperforms existing SOTA SNNs on challenging datasets such as Nordland and Oxford RobotCar, achieving 78.6% precision at 100% recall on Nordland (vs. 73.0% for the current SOTA) and 45.7% on Oxford RobotCar (vs. 20.2%). Our approach offers a simpler training pipeline while substantially improving both training and inference times compared with SOTA SNNs for VPR. Hardware-in-the-loop tests using Intel's neuromorphic USB form factor, Kapoho Bay, show that our on-chip spiking models for VPR, trained via the ANN-to-SNN conversion strategy, continue to outperform their SNN counterparts, despite a slight but noticeable performance decrease when moving from off-chip to on-chip, while offering significant energy efficiency. These results highlight the rapid-prototyping and real-world deployment capabilities of the approach, a substantial step toward more prevalent SNN-based real-world robotics solutions.

Frontiers in Neurorobotics, vol. 18, article 1490267. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11813887/pdf/

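The training trick the abstract describes, a smooth rate-based approximation of the LIF neuron swapped for a spiking LIF at inference, can be sketched for a single neuron under constant input. This is a sketch under assumed parameters, not the paper's network:

```python
import math

def lif_spikes(current, v_th=1.0, tau=20.0, dt=1.0, steps=1000):
    """Discrete leaky integrate-and-fire: leak toward the input current,
    fire and reset to 0 on crossing threshold. Returns the empirical
    firing rate in spikes per time step. Parameters are illustrative."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt / tau * (current - v)
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes / steps

def lif_rate(current, v_th=1.0, tau=20.0, dt=1.0):
    """Smooth rate approximation usable during backpropagation: the
    inverse of the analytic time-to-threshold of the leaky integrator
    (zero below threshold, saturating ramp above it)."""
    if current <= v_th:
        return 0.0
    t_isi = -tau / dt * math.log(1.0 - v_th / current)  # inter-spike interval
    return 1.0 / t_isi

r_spike = lif_spikes(2.0)   # rate measured from the actual spiking neuron
r_approx = lif_rate(2.0)    # differentiable surrogate of the same rate
```

For constant drive, the surrogate tracks the spiking neuron's rate closely, which is what makes training on the rate model and then swapping in spiking neurons for inference viable.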
Frontiers in Neurorobotics | Pub Date: 2025-01-29 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1549414
Kun Zhang, Kezhen Han, Zhijian Hu, Guoqiang Tan
Privacy-preserving ADP for secure tracking control of AVRs against unreliable communication

Abstract: In this study, we develop an encrypted guaranteed-cost tracking control scheme for autonomous vehicles or robots (AVRs) using adaptive dynamic programming (ADP). The AVR's motion is analyzed to construct the tracking dynamics under unreliable communication. To mitigate information leakage and unauthorized access in vehicular network systems, an encrypted guaranteed-cost policy iteration algorithm is developed, incorporating encryption and decryption schemes between the vehicle and the cloud based on the tracking dynamics. Building on a simplified single-network framework, the Hamilton-Jacobi-Bellman equation is solved approximately, avoiding the complexity of dual-network structures and reducing computational cost. The input-constrained issue is handled using a non-quadratic value function, and the approximate optimal control is verified to stabilize the tracking system. A case study involving an AVR system validates the effectiveness and practicality of the proposed algorithm.

Frontiers in Neurorobotics, vol. 19, article 1549414. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11813875/pdf/

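Input constraints in ADP are commonly enforced with a non-quadratic input cost of the form U(u) = 2∫₀ᵘ λ·atanh(v/λ) dv, which grows without bound as |u| approaches λ so the optimal control stays inside the bound. The abstract says a non-quadratic value function is used but does not give its form, so the construction below is the standard one from the ADP literature, shown against its closed form:

```python
import math

def nonquad_cost(u, lam=1.0, steps=10000):
    """U(u) = 2 * integral_0^u lam * atanh(v/lam) dv, via midpoint rule.
    This is the standard constrained-ADP penalty; the paper's exact
    value function is not specified here."""
    h = u / steps
    return sum(2.0 * lam * math.atanh((i + 0.5) * h / lam) * h
               for i in range(steps))

def nonquad_cost_closed(u, lam=1.0):
    """Closed form of the same integral:
    U(u) = 2*lam*u*atanh(u/lam) + lam^2 * ln(1 - (u/lam)^2)."""
    return (2.0 * lam * u * math.atanh(u / lam)
            + lam ** 2 * math.log(1.0 - (u / lam) ** 2))

c_num = nonquad_cost(0.9)         # numeric integral at u = 0.9, lam = 1
c_cl = nonquad_cost_closed(0.9)   # analytic value, should agree closely
```

Differentiating the closed form recovers 2λ·atanh(u/λ), and inverting that gradient in the Hamiltonian yields a tanh-shaped optimal control that respects |u| < λ, which is why this penalty removes the need for explicit saturation handling.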
Frontiers in Neurorobotics | Pub Date: 2025-01-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1513354
Ye Li, Li Yang, Meifang Yang, Fei Yan, Tonghua Liu, Chensi Guo, Rufeng Chen
NavBLIP: a visual-language model for enhancing unmanned aerial vehicles navigation and object detection

Introduction: In recent years, unmanned aerial vehicles (UAVs) have increasingly been deployed in applications such as autonomous navigation, surveillance, and object detection. Traditional methods for UAV navigation and object detection have often relied on either handcrafted features or unimodal deep learning approaches. While these methods have seen some success, they frequently encounter limitations in dynamic environments, where robustness and computational efficiency become critical for real-time performance, and they often fail to effectively integrate multimodal inputs, which restricts their adaptability and generalization in complex and diverse scenarios.

Methods: To address these challenges, we introduce NavBLIP, a novel visual-language model designed to enhance UAV navigation and object detection using multimodal data. NavBLIP incorporates transfer learning techniques along with a Nuisance-Invariant Multimodal Feature Extraction (NIMFE) module. The NIMFE module disentangles relevant features from intricate visual and environmental inputs, allowing UAVs to adapt swiftly to new environments and improve object detection accuracy. Furthermore, NavBLIP employs a multimodal control strategy that dynamically selects context-specific features to optimize real-time performance in high-stakes operations.

Results and discussion: Extensive experiments on benchmark datasets such as RefCOCO, CC12M, and OpenImages show that NavBLIP outperforms existing state-of-the-art models in accuracy, recall, and computational efficiency. An ablation study further underscores the significance of the NIMFE and transfer-learning components, highlighting NavBLIP's potential for real-time UAV applications where adaptability and computational efficiency are paramount.

Frontiers in Neurorobotics, vol. 18, article 1513354. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11802496/pdf/

Frontiers in Neurorobotics | Pub Date: 2025-01-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1502071
Yuening Li, Xiuhua Yang, Changkui Chen
Brain-inspired multimodal motion and fine-grained action recognition

Introduction: Traditional action recognition methods predominantly rely on a single modality, such as vision or motion, which presents significant limitations for fine-grained action recognition. These methods struggle particularly with video data containing complex combinations of actions and subtle motion variations, and they typically depend on handcrafted feature extractors or simple convolutional neural network (CNN) architectures, which makes effective multimodal fusion challenging.

Methods: This study introduces FGM-CLIP (Fine-Grained Motion CLIP), a novel architecture for fine-grained action recognition. FGM-CLIP leverages the capabilities of Contrastive Language-Image Pretraining (CLIP), integrating a fine-grained motion encoder and a multimodal fusion layer to achieve precise end-to-end action recognition. By jointly optimizing visual and motion features, the model captures subtle action variations, yielding higher classification accuracy on complex video data.

Results and discussion: Experimental results demonstrate that FGM-CLIP significantly outperforms existing methods on multiple fine-grained action recognition datasets. Its multimodal fusion strategy notably improves robustness and accuracy, particularly for videos with intricate action patterns.

Frontiers in Neurorobotics, vol. 18, article 1502071. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11802800/pdf/

Frontiers in Neurorobotics | Pub Date: 2025-01-23 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1527908
Ande Chang, Yuting Ji, Yiming Bie
Transformer-based short-term traffic forecasting model considering traffic spatiotemporal correlation

Abstract: Traffic forecasting is crucial for applications including route optimization, signal management, and travel-time estimation. However, many existing prediction models struggle to capture the spatiotemporal patterns in traffic data accurately, owing to its inherent nonlinearity, high dimensionality, and complex dependencies. To address these challenges, a short-term traffic forecasting model, Trafficformer, is proposed based on the Transformer framework. The model first uses a multilayer perceptron to extract features from historical traffic data, then enhances spatial interactions through Transformer-based encoding. By incorporating road-network topology, a spatial mask filters out noise and irrelevant interactions, improving prediction accuracy. Finally, traffic speed is predicted using another multilayer perceptron. Trafficformer is evaluated on the Seattle Loop Detector dataset against six baseline methods, using Mean Absolute Error, Mean Absolute Percentage Error, and Root Mean Square Error as metrics. The results show that Trafficformer achieves higher prediction accuracy and can effectively identify key road sections, indicating great potential for intelligent traffic control optimization and refined traffic resource allocation.

Frontiers in Neurorobotics, vol. 19, article 1527908. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799296/pdf/

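The spatial mask described for Trafficformer can be read as standard masked attention: score entries for road-segment pairs that are not connected in the network topology are excluded before the softmax, so each segment attends only to its neighbors. A minimal sketch, where the toy scores and the 3-segment topology are hypothetical:

```python
import math

def masked_attention(scores, mask):
    """Row-wise softmax over attention scores with masked entries dropped.
    mask[i][j] = 1 keeps segment j as a neighbor of segment i; 0 removes
    the pair from the softmax entirely (equivalent to a -inf score)."""
    out = []
    for row, mrow in zip(scores, mask):
        exps = [math.exp(s) if m else 0.0 for s, m in zip(row, mrow)]
        z = sum(exps)  # assumes each segment keeps at least one neighbor
        out.append([e / z for e in exps])
    return out

# 3 road segments; segments 0 and 2 are not connected in the toy topology,
# so their large mutual score (5.0) is filtered out as noise
scores = [[0.0, 1.0, 5.0], [1.0, 0.0, 1.0], [5.0, 1.0, 0.0]]
mask = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
attn = masked_attention(scores, mask)
```

Without the mask, the spurious 0-2 score would dominate row 0's attention; with it, the weight is redistributed over topologically valid neighbors, which is exactly the noise-filtering role the abstract attributes to the spatial mask.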
Frontiers in Neurorobotics | Pub Date: 2025-01-22 | DOI: 10.3389/fnbot.2025.1540033
Xuejian Wu, Yaqi Chu, Qing Li, Yang Luo, Yiwen Zhao, Xingang Zhao

AMEEGNet: attention-based multiscale EEGNet for effective motor imagery EEG decoding

Abstract: Recently, motor imagery (MI) electroencephalography (EEG) has gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI-EEG makes it difficult to decode effectively and hinders the development of BCIs. In this paper, an attention-based multiscale EEGNet (AMEEGNet) is proposed to improve MI-EEG decoding performance. First, three parallel EEGNets with a fusion-transmission method extract high-quality temporal-spatial features of the EEG data at multiple scales. Then, an efficient channel attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. Experimental results demonstrate that the proposed model achieves decoding accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, 2b, and HGD datasets, showing that AMEEGNet effectively decodes temporal-spatial features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.

Frontiers in Neurorobotics, vol. 19, article 1540033. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794809/pdf/
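The ECA module referenced above follows a well-known pattern: global average pooling produces one descriptor per channel, a small 1-D convolution across the channel dimension mixes neighboring descriptors, and a sigmoid yields per-channel weights. A pure-Python sketch; the mean kernel, kernel size, and toy inputs are illustrative, not AMEEGNet's trained weights:

```python
import math

def eca_weights(feature_maps, k=3):
    """Efficient-channel-attention-style per-channel weights.
    feature_maps: list of channels, each a 2-D list (H x W).
    Steps: global average pool -> zero-padded 1-D conv of size k across
    channels (mean kernel here; ECA's kernel is learned) -> sigmoid."""
    desc = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in feature_maps]
    pad = k // 2
    padded = [0.0] * pad + desc + [0.0] * pad
    conv = [sum(padded[i + j] for j in range(k)) / k for i in range(len(desc))]
    return [1.0 / (1.0 + math.exp(-c)) for c in conv]

# three toy 2x2 channels with mean activations 1.0, 0.0, and 2.0
channels = [[[1.0, 1.0], [1.0, 1.0]],
            [[0.0, 0.0], [0.0, 0.0]],
            [[2.0, 2.0], [2.0, 2.0]]]
w = eca_weights(channels)  # one weight in (0, 1) per channel
```

The cross-channel convolution is what keeps the module lightweight: unlike squeeze-and-excitation blocks, it needs only k parameters rather than a pair of fully connected layers, while still letting strongly activated channels boost their neighbors' weights.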