{"title":"E2VIDX: improved bridge between conventional vision and bionic vision.","authors":"Xujia Hou, Feihu Zhang, Dhiraj Gulati, Tingfeng Tan, Wei Zhang","doi":"10.3389/fnbot.2023.1277160","DOIUrl":"10.3389/fnbot.2023.1277160","url":null,"abstract":"<p><p>Common RGBD, CMOS, and CCD-based cameras produce motion blur and incorrect exposure under high-speed and improper lighting conditions. According to the bionic principle, the event camera developed has the advantages of low delay, high dynamic range, and no motion blur. However, due to its unique data representation, it encounters significant obstacles in practical applications. The image reconstruction algorithm based on an event camera solves the problem by converting a series of \"events\" into common frames to apply existing vision algorithms. Due to the rapid development of neural networks, this field has made significant breakthroughs in past few years. Based on the most popular Events-to-Video (E2VID) method, this study designs a new network called E2VIDX. The proposed network includes group convolution and sub-pixel convolution, which not only achieves better feature fusion but also the network model size is reduced by 25%. Futhermore, we propose a new loss function. The loss function is divided into two parts, first part calculates the high level features and the second part calculates the low level features of the reconstructed image. The experimental results clearly outperform against the state-of-the-art method. Compared with the original method, Structural Similarity (SSIM) increases by 1.3%, Learned Perceptual Image Patch Similarity (LPIPS) decreases by 1.7%, Mean Squared Error (MSE) decreases by 2.5%, and it runs faster on GPU and CPU. Additionally, we evaluate the results of E2VIDX with application to image classification, object detection, and instance segmentation. The experiments show that conversions using our method can help event cameras directly apply existing vision algorithms in most scenarios.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1277160"},"PeriodicalIF":3.1,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10639115/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89717906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-25 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1239581
Camilo Amaya, Axel von Arnim
{"title":"Neurorobotic reinforcement learning for domains with parametrical uncertainty.","authors":"Camilo Amaya, Axel von Arnim","doi":"10.3389/fnbot.2023.1239581","DOIUrl":"10.3389/fnbot.2023.1239581","url":null,"abstract":"<p><p>Neuromorphic hardware paired with brain-inspired learning strategies have enormous potential for robot control. Explicitly, these advantages include low energy consumption, low latency, and adaptability. Therefore, developing and improving learning strategies, algorithms, and neuromorphic hardware integration in simulation is a key to moving the state-of-the-art forward. In this study, we used the neurorobotics platform (NRP) simulation framework to implement spiking reinforcement learning control for a robotic arm. We implemented a force-torque feedback-based classic object insertion task (\"peg-in-hole\") and controlled the robot for the first time with neuromorphic hardware in the loop. We therefore provide a solution for training the system in uncertain environmental domains by using randomized simulation parameters. This leads to policies that are robust to real-world parameter variations in the target domain, filling the sim-to-real gap.To the best of our knowledge, it is the first neuromorphic implementation of the peg-in-hole task in simulation with the neuromorphic Loihi chip in the loop, and with scripted accelerated interactive training in the Neurorobotics Platform, including randomized domains.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1239581"},"PeriodicalIF":3.1,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642204/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"107591032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-20 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1305786
Emilio Trigili, Sandra Hirche
{"title":"Editorial: Wearable robotics in the rehabilitation continuum of care: assessment, treatment and home assistance.","authors":"Emilio Trigili, Sandra Hirche","doi":"10.3389/fnbot.2023.1305786","DOIUrl":"10.3389/fnbot.2023.1305786","url":null,"abstract":"","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1305786"},"PeriodicalIF":2.6,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10623441/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71480608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-19 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1304597
Jesús D Rivero-Ortega, Juan S Mosquera-Maturana, Josh Pardo-Cabrera, Julián Hurtado-López, Juan D Hernández, Victor Romero-Cano, David F Ramírez-Moreno
{"title":"Corrigendum: Ring attractor bio-inspired neural network for social robot navigation.","authors":"Jesús D Rivero-Ortega, Juan S Mosquera-Maturana, Josh Pardo-Cabrera, Julián Hurtado-López, Juan D Hernández, Victor Romero-Cano, David F Ramírez-Moreno","doi":"10.3389/fnbot.2023.1304597","DOIUrl":"10.3389/fnbot.2023.1304597","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/fnbot.2023.1211570.].</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1304597"},"PeriodicalIF":3.1,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10622753/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71480606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-17 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1270860
Ling Zheng, Chengzhi Hong, Huashan Song, Rong Chen
{"title":"An autonomous mobile robot path planning strategy using an enhanced slime mold algorithm.","authors":"Ling Zheng, Chengzhi Hong, Huashan Song, Rong Chen","doi":"10.3389/fnbot.2023.1270860","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1270860","url":null,"abstract":"<p><strong>Introduction: </strong>Autonomous mobile robot encompasses modules such as perception, path planning, decision-making, and control. Among these modules, path planning serves as a prerequisite for mobile robots to accomplish tasks. Enhancing path planning capability of mobile robots can effectively save costs, reduce energy consumption, and improve work efficiency. The primary slime mold algorithm (SMA) exhibits characteristics such as a reduced number of parameters, strong robustness, and a relatively high level of exploratory ability. SMA performs well in path planning for mobile robots. However, it is prone to local optimization and lacks dynamic obstacle avoidance, making it less effective in real-world settings.</p><p><strong>Methods: </strong>This paper presents an enhanced SMA (ESMA) path-planning algorithm for mobile robots. The ESMA algorithm incorporates adaptive techniques to enhance global search capabilities and integrates an artificial potential field to improve dynamic obstacle avoidance.</p><p><strong>Results and discussion: </strong>Compared to the SMA algorithm, the SMA-AGDE algorithm, which combines the Adaptive Guided Differential Evolution algorithm, and the Lévy Flight-Rotation SMA (LRSMA) algorithm, resulted in an average reduction in the minimum path length of (3.92%, 8.93%, 2.73%), along with corresponding reductions in path minimum values and processing times. Experiments show ESMA can find shortest collision-free paths for mobile robots in both static and dynamic environments.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1270860"},"PeriodicalIF":3.1,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10616528/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71422812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-16 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1285673
Zhuqin Han
{"title":"Multimodal intelligent logistics robot combining 3D CNN, LSTM, and visual SLAM for path planning and control.","authors":"Zhuqin Han","doi":"10.3389/fnbot.2023.1285673","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1285673","url":null,"abstract":"<p><strong>Introduction: </strong>In today's dynamic logistics landscape, the role of intelligent robots is paramount for enhancing efficiency, reducing costs, and ensuring safety. Traditional path planning methods often struggle to adapt to changing environments, resulting in issues like collisions and conflicts. This research addresses the challenge of path planning and control for logistics robots operating in complex environments. The proposed method aims to integrate information from various perception sources to enhance path planning and obstacle avoidance, thereby increasing the autonomy and reliability of logistics robots.</p><p><strong>Methods: </strong>The method presented in this paper begins by employing a 3D Convolutional Neural Network (CNN) to learn feature representations of objects within the environment, enabling object recognition. Subsequently, Long Short-Term Memory (LSTM) models are utilized to capture spatio-temporal features and predict the behavior and trajectories of dynamic obstacles. This predictive capability empowers robots to more accurately anticipate the future positions of obstacles in intricate settings, thereby mitigating potential collision risks. Finally, the Dijkstra algorithm is employed for path planning and control decisions to ensure the selection of optimal paths across diverse scenarios.</p><p><strong>Results: </strong>In a series of rigorous experiments, the proposed method outperforms traditional approaches in terms of both path planning accuracy and obstacle avoidance performance. These substantial improvements underscore the efficacy of the intelligent path planning and control scheme.</p><p><strong>Discussion: </strong>This research contributes to enhancing the practicality of logistics robots in complex environments, thereby fostering increased efficiency and safety within the logistics industry. By combining object recognition, spatio-temporal modeling, and optimized path planning, the proposed method enables logistics robots to navigate intricate scenarios with higher precision and reliability, ultimately advancing the capabilities of autonomous logistics operations.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1285673"},"PeriodicalIF":3.1,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10613672/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71422813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-13 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1274543
Jun Zhang, Dayong Tao
{"title":"Research on deep reinforcement learning basketball robot shooting skills improvement based on end to end architecture and multi-modal perception.","authors":"Jun Zhang, Dayong Tao","doi":"10.3389/fnbot.2023.1274543","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1274543","url":null,"abstract":"<p><strong>Introduction: </strong>In the realm of basketball, refining shooting skills and decision-making levels using intelligent agents has garnered significant interest. This study addresses the challenge by introducing an innovative framework that combines multi-modal perception and deep reinforcement learning. The goal is to create basketball robots capable of executing precise shots and informed choices by effectively integrating sensory inputs and learned strategies.</p><p><strong>Methods: </strong>The proposed approach consists of three main components: multi-modal perception, deep reinforcement learning, and end-to-end architecture. Multi-modal perception leverages the multi-head attention mechanism (MATT) to merge visual, motion, and distance cues for a holistic perception of the basketball scenario. The deep reinforcement learning framework utilizes the Deep Q-Network (DQN) algorithm, enabling the robots to learn optimal shooting strategies over iterative interactions with the environment. The end-to-end architecture connects these components, allowing seamless integration of perception and decision-making processes.</p><p><strong>Results: </strong>The experiments conducted demonstrate the effectiveness of the proposed approach. Basketball robots equipped with multi-modal perception and deep reinforcement learning exhibit improved shooting accuracy and enhanced decision-making abilities. The multi-head attention mechanism enhances the robots' perception of complex scenes, leading to more accurate shooting decisions. The application of the DQN algorithm results in gradual skill improvement and strategic optimization through interaction with the environment.</p><p><strong>Discussion: </strong>The integration of multi-modal perception and deep reinforcement learning within an end-to-end architecture presents a promising avenue for advancing basketball robot training and performance. The ability to fuse diverse sensory inputs and learned strategies empowers robots to make informed decisions and execute accurate shots. The research not only contributes to the field of robotics but also has potential implications for human basketball training and coaching methodologies.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1274543"},"PeriodicalIF":3.1,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10615595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71422814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-12 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1276519
Jing Xu, Wanruo Zhang, Jialun Cai, Hong Liu
{"title":"SafeCrowdNav: safety evaluation of robot crowd navigation in complex scenes.","authors":"Jing Xu, Wanruo Zhang, Jialun Cai, Hong Liu","doi":"10.3389/fnbot.2023.1276519","DOIUrl":"10.3389/fnbot.2023.1276519","url":null,"abstract":"<p><p>Navigating safely and efficiently in dense crowds remains a challenging problem for mobile robots. The interaction mechanisms involved in collision avoidance require robots to exhibit active and foresighted behaviors while understanding the crowd dynamics. Deep reinforcement learning methods have shown superior performance compared to model-based approaches. However, existing methods lack an intuitive and quantitative safety evaluation for agents, and they may potentially trap agents in local optima during training, hindering their ability to learn optimal strategies. In addition, sparse reward problems further compound these limitations. To address these challenges, we propose SafeCrowdNav, a comprehensive crowd navigation algorithm that emphasizes obstacle avoidance in complex environments. Our approach incorporates a safety evaluation function to quantitatively assess the current safety score and an intrinsic exploration reward to balance exploration and exploitation based on scene constraints. By combining prioritized experience replay and hindsight experience replay techniques, our model effectively learns the optimal navigation policy in crowded environments. Experimental outcomes reveal that our approach enables robots to improve crowd comprehension during navigation, resulting in reduced collision probabilities and shorter navigation times compared to state-of-the-art algorithms. Our code is available at https://github.com/Janet-xujing-1216/SafeCrowdNav.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1276519"},"PeriodicalIF":3.1,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10613488/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71411835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2023-10-12 | eCollection Date: 2023-01-01 | DOI: 10.3389/fnbot.2023.1244417
Binbin Su, Elena M Gutierrez-Farewik
{"title":"Simulating human walking: a model-based reinforcement learning approach with musculoskeletal modeling.","authors":"Binbin Su, Elena M Gutierrez-Farewik","doi":"10.3389/fnbot.2023.1244417","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1244417","url":null,"abstract":"<p><strong>Introduction: </strong>Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like if algorithms do not involve a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we have integrated a reinforcement learning algorithm and a musculoskeletal model including trunk, pelvis, and leg segments to develop control modes that drive the model to walk.</p><p><strong>Methods: </strong>We simulated human walking first without imposing target walking speed, in which the model was allowed to settle on a stable walking speed itself, which was 1.45 <i>m</i>/<i>s</i>. A range of other speeds were imposed for the simulation based on the previous self-developed walking speed. All simulations were generated by solving the Markov decision process problem with covariance matrix adaptation evolution strategy, without any reference motion data.</p><p><strong>Results: </strong>Simulated hip and knee kinematics agreed well with those in experimental observations, but ankle kinematics were less well-predicted.</p><p><strong>Discussion: </strong>We finally demonstrated that our reinforcement learning framework also has the potential to model and predict pathological gait that can result from muscle weakness.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1244417"},"PeriodicalIF":3.1,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10601656/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71411836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}