{"title":"MS‐SLAM: Memory‐Efficient Visual SLAM With Sliding Window Map Sparsification","authors":"Xiaoyu Zhang, Jinhu Dong, Yin Zhang, Yun‐Hui Liu","doi":"10.1002/rob.22431","DOIUrl":"https://doi.org/10.1002/rob.22431","url":null,"abstract":"<jats:label/>While most visual SLAM systems traditionally prioritize accuracy or speed, memory consumption also becomes a concern for robots working in large‐scale environments, primarily due to the perpetual preservation of an increasing number of redundant map points. Although these redundant map points are initially constructed to ensure robust frame tracking, they contribute little once the robot moves to other locations and are kept primarily for potential loop closure. After continuous optimization, these map points are accurate, and not all of them are actually essential for loop closure. Therefore, this paper proposes MS‐SLAM, a memory‐efficient visual SLAM system with map sparsification that selects only the useful map points to keep in the global map. In MS‐SLAM, all local map points are temporarily kept to ensure robust frame tracking and further optimization, while redundant nonlocal map points are removed through the proposed novel sliding window map sparsification, which is efficient and runs concurrently with the original SLAM tracking. Loop closure still operates well with the selected useful map points. Through exhaustive experiments across various scenes in both public and self‐collected data sets, MS‐SLAM has demonstrated accuracy comparable to state‐of‐the‐art visual SLAM while reducing memory consumption by over 70% in large‐scale scenes. This facilitates the scalability of visual SLAM in large‐scale environments, making it a promising solution for real‐world applications. 
We will release our code at <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"https://github.com/fishmarch/MS-SLAM\">https://github.com/fishmarch/MS-SLAM</jats:ext-link>.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"54 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data set diversity in crop row detection based on CNN models for autonomous robot navigation","authors":"Igor Ferreira da Costa, Antonio Candea Leite, Wouter Caarls","doi":"10.1002/rob.22418","DOIUrl":"https://doi.org/10.1002/rob.22418","url":null,"abstract":"Agricultural automation emerges as a vital tool to increase field efficiency, improve pest control, and reduce labor burdens. While agricultural mobile robots hold promise for automation, challenges persist, particularly in navigating a plantation environment. Accurate robot localization is already possible, but existing Global Navigation Satellite System with Real‐time Kinematic systems are costly, while also demanding careful and precise mapping. In response, onboard navigation approaches gain traction, leveraging sensors such as cameras and light detection and ranging (LiDAR). However, the machine learning methods used in camera‐based systems are highly sensitive to the training data set used. In this paper, we study the effects of data set diversity on a proposed deep learning‐based visual navigation system. Leveraging multiple data sets, we assess the model's robustness and adaptability while investigating the effects of the data diversity available during the training phase. The system is presented with a range of different camera configurations, hardware, and field structures, as well as a simulated environment. The results show that mixing images from different cameras and fields can improve not only system robustness to changing conditions but also its single‐condition performance. 
Real‐world tests show that good results can be achieved with reasonable amounts of data.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"1 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing PID control for enhanced stability of a 16‐DOF biped robot during ditch crossing","authors":"Moh Shahid Khan, Ravi Kumar Mandava, Vijay Panchore","doi":"10.1002/rob.22425","DOIUrl":"https://doi.org/10.1002/rob.22425","url":null,"abstract":"This article discusses the design of a proportional–integral–derivative (PID) controller to obtain an optimal gait planning algorithm for a 16‐degrees‐of‐freedom biped robot while crossing a ditch. The gait planning algorithm integrates the initial posture, position, and desired trajectories of the robot's wrist, hip, and foot. Cubic polynomial trajectories are assigned to the wrist, hip, and foot to generate the motion. The foot and wrist joint angles of the biped robot along the polynomial trajectory are obtained using the inverse kinematics approach. Moreover, the dynamic balance margin is estimated using the concept of the zero‐moment point. To enhance the smooth motion of the gait planner and reduce the error between two consecutive joint angles, the authors designed a PID controller for each joint of the biped robot. Designing a PID controller requires the dynamics of the biped robot, which were obtained using the Lagrange–Euler formulation. The gains of the PID controller, that is, <jats:italic>K</jats:italic><jats:sub><jats:italic>P</jats:italic></jats:sub>, <jats:italic>K</jats:italic><jats:sub><jats:italic>D</jats:italic></jats:sub>, and <jats:italic>K</jats:italic><jats:sub><jats:italic>I</jats:italic></jats:sub>, are tuned with nontraditional optimization algorithms, namely particle swarm optimization (PSO) and differential evolution (DE), and compared with the modified chaotic invasive weed optimization (MCIWO) algorithm. 
The result indicates that the MCIWO‐PID controller generates more dynamically balanced gaits when compared with the DE and PSO‐PID controllers.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"108 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiDAR‐based place recognition for mobile robots in ground/water surface multiple scenes","authors":"Yaxuan Yan, Haiyang Zhang, Changming Zhao, Xuan Liu, Siyuan Fu","doi":"10.1002/rob.22423","DOIUrl":"https://doi.org/10.1002/rob.22423","url":null,"abstract":"LiDAR‐based 3D place recognition is an essential component of simultaneous localization and mapping systems in multi‐scene robotic applications. However, extracting discriminative and generalizable global descriptors of point clouds is still an open issue due to the insufficient use of the information contained in the LiDAR scans in existing approaches. In this paper, we propose a novel spatial‐temporal point cloud encoding network for multiple scenes, dubbed STM‐Net, to fully fuse the multi‐view spatial information and temporal information of LiDAR point clouds. Specifically, we first develop a spatial feature encoding module consisting of the single‐view transformer and multi‐view transformer. The module learns the correlation both within a single view and between two views by utilizing the multi‐layer range images generated by spherical projection and multi‐layer bird's eye view images generated by top‐down projection. Then in the temporal feature encoding module, we exploit the temporal transformer to mine the temporal information in the sequential point clouds, and a NetVLAD layer is applied to aggregate features and generate sub‐descriptors. Furthermore, we use a GeM pooling layer to fuse more information along the time dimension for the final global descriptors. 
Extensive experiments conducted on unmanned ground/surface vehicles with different LiDAR configurations indicate that our method (1) achieves superior place recognition performance compared with state‐of‐the‐art algorithms, (2) generalizes well to diverse scenes, (3) is robust to viewpoint changes, and (4) can operate in real time, demonstrating the effectiveness of the proposed approach and highlighting its promising applications in multi‐scene place recognition tasks.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"80 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conductive hydrogels‐based self‐sensing soft robot state perception and trajectory tracking","authors":"Jie Ma, Zhiji Han, Mingge Li, Zhijie Liu, Wei He, Shuzhi Sam Ge","doi":"10.1002/rob.22420","DOIUrl":"https://doi.org/10.1002/rob.22420","url":null,"abstract":"Soft robots face significant challenges in proprioceptive sensing and precise control due to their highly deformable and compliant nature. This paper addresses these challenges by developing a conductive hydrogel sensor and integrating it into a soft robot for bending angle measurement and motion control. A quantitative mapping between the hydrogel resistance and the robot's bending gesture is formulated. Furthermore, a nonlinear differentiator is proposed to estimate the angular velocity for closed‐loop control, eliminating the reliance on conventional sensors. Meanwhile, a controller is designed to track both structural and nonstructural trajectories. The proposed approach integrates advanced soft sensing materials and intelligent control algorithms, significantly improving the proprioception and motion accuracy of soft robots. This work bridges the gap between novel material design and practical control applications, opening up new possibilities for soft robots to perform delicate tasks in various fields. 
The experimental results demonstrate the effectiveness of the proposed sensing and control approach in achieving precise and robust motion control of the soft robot.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"73 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D hybrid path planning for optimized coverage of agricultural fields: A novel approach for wheeled robots","authors":"Danial Pour Arab, Matthias Spisser, Caroline Essert","doi":"10.1002/rob.22422","DOIUrl":"https://doi.org/10.1002/rob.22422","url":null,"abstract":"Over the last few decades, the agricultural industry has made significant advances in autonomous systems, such as wheeled robots, with the primary objective of improving efficiency while reducing the impact on the environment. In this context, determining a path for the robot that optimizes coverage while taking into account topography, robot characteristics, and operational requirements is critical. In this paper, we present H‐CCPP, a novel hybrid method that combines the comprehensive coverage benefits of our previous approach O‐CCPP with the computational efficiency of the Fields2Cover algorithm. Besides optimizing coverage area, overlaps, and overall travel time, it significantly improves the computation process and enhances the flexibility of trajectory generation. H‐CCPP also considers terrain inclination to address soil erosion and energy consumption. To support this approach, we have also created and made available a public data set that includes both 2D and 3D representations of 30 agricultural fields. 
This resource not only allows us to illustrate the effectiveness of our approach but also provides invaluable data for future research in complete coverage path planning (CCPP) for modern agriculture.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"4 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CyberCortex.AI: An AI‐based operating system for autonomous robotics and complex automation","authors":"Sorin Grigorescu, Mihai Zaha","doi":"10.1002/rob.22426","DOIUrl":"https://doi.org/10.1002/rob.22426","url":null,"abstract":"The underlying framework for controlling autonomous robots and complex automation applications is an Operating System (OS) capable of scheduling perception‐and‐control tasks, as well as providing real‐time data communication to other robotic peers and remote cloud computers. In this paper, we introduce CyberCortex.AI, a robotics OS designed to enable heterogeneous AI‐based robotics and complex automation applications. CyberCortex.AI is a decentralized distributed OS which enables robots to talk to each other, as well as to High Performance Computers (HPC) in the cloud. Sensory and control data from the robots are streamed to HPC systems for training AI algorithms, which are afterwards deployed on the robots. Each functionality of a robot (e.g., sensory data acquisition, path planning, motion control, etc.) is executed within a so‐called DataBlock of Filters shared through the internet, where each filter is computed either locally on the robot itself or remotely on a different robotic system. The data is stored and accessed via a so‐called <jats:italic>Temporal Addressable Memory</jats:italic> (TAM), which acts as a gateway between each filter's input and output. CyberCortex.AI has two main components: (i) the CyberCortex.AI.inference system, which is a real‐time implementation of the DataBlock running on the robots' embedded hardware, and (ii) the CyberCortex.AI.dojo, which runs on an HPC computer in the cloud and is used to design, train, and deploy AI algorithms. 
We present a quantitative and qualitative performance analysis of the proposed approach using two collaborative robotics applications: (i) a forest fire prevention system based on a Unitree A1 legged robot and a Parrot Anafi 4K drone, and (ii) an autonomous driving system which uses CyberCortex.AI for collaborative perception and motion control.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"3 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic modeling and experimental analysis of a novel bionic mantis shrimp robot","authors":"Gang Chen, Yidong Xu, Chenguang Yang, Xin Yang, Huosheng Hu, Fei Dong, Jingjing Zhang, Jianwei Shi","doi":"10.1002/rob.22424","DOIUrl":"https://doi.org/10.1002/rob.22424","url":null,"abstract":"Small carnivorous marine animals have developed agile movement abilities through long‐term natural selection, resulting in excellent maneuverability and high swimming efficiency, making them ideal models for underwater robots. To meet the requirements for exploring narrow underwater zones, this paper designs an underwater robot inspired by mantis shrimp. By analyzing the body structure and swimming mode of the mantis shrimp, we designed a robot structure and hardware system and established a dynamic model for the coupled motion of multiple pleopods. A series of underwater experiments were conducted to verify the dynamic model and assess the performance of the prototype. The experimental results confirmed the accuracy of the dynamic model and demonstrated that the bionic mantis shrimp robot can perform multiangle turns and flexible velocity adjustments and exhibits good motion performance. This approach provides a novel solution for developing robots suitable for detecting complex underwater environments.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"13 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improved fuzzy‐controlled local path planning algorithm based on dynamic window approach","authors":"Aizun Liu, Chong Liu, Lei Li, Ruchao Wang, Zhiguo Lu","doi":"10.1002/rob.22419","DOIUrl":"https://doi.org/10.1002/rob.22419","url":null,"abstract":"As the operating environments of mobile robots become increasingly complex, the demands on robot intelligence continue to rise. Navigation technology is at the core of mobile robot intelligence research, and path planning is an important function of mobile robot navigation. The dynamic window approach (DWA) is one of the most popular local path planning algorithms today. However, it has some problems. Without the guidance of a global path, the DWA algorithm easily falls into local optima. The traditional solution is to use the key nodes of the global path as temporary target points. However, the guiding ability of these temporary target points is weakened in some cases, which still leads DWA into local optima, such as being trapped by a “C”‐shaped obstacle or circling around the outside of a dense obstacle area. In a complex operating environment, if the local path deviates too far from the global path, serious consequences may result. Therefore, we propose a trajectory similarity evaluation function based on the dynamic time warping method to provide better guidance. Another problem is poor adaptability to complex environments due to fixed evaluation function weights, so we designed a fuzzy controller to improve the adaptability of the DWA algorithm in complex environments. 
Experimental results show that the trajectory similarity evaluation function reduces algorithm execution time by 0.7% and mileage by 2.1%; the fuzzy controller reduces algorithm execution time by 10.8% and increases the average distance between the mobile robot and obstacles at the global path's danger points by 50%; and in a simulated complex terrain environment, the experiment completion rate improves by 25%.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"1 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the practical applicability of neural‐based point clouds registration algorithms: A comparative analysis","authors":"Simone Fontana, Federica Di Lauro, Domenico G. Sorrenti","doi":"10.1002/rob.22417","DOIUrl":"https://doi.org/10.1002/rob.22417","url":null,"abstract":"Point cloud registration is a vital task in three‐dimensional (3D) perception, with several different applications in robotics. Recent advancements have introduced neural‐based techniques that promise enhanced accuracy and robustness. In this paper, we thoroughly evaluate well‐known neural‐based point cloud registration methods using the Point Clouds Registration Benchmark, which was developed to cover a large variety of use cases. Our evaluation focuses on the performance of these techniques when applied to real, complex data, which presents a more challenging and realistic scenario than the simpler experiments typically conducted by the original authors. The results reveal considerable variability in performance across different techniques, highlighting the importance of assessing algorithms in realistic settings. Notably, 3DSmoothNet emerges as a standout solution, demonstrating good and consistent results across various data sets. Its efficacy, coupled with a relatively low graphics processing unit (GPU) memory footprint, makes it a promising choice for robotics applications, even if it is not yet suitable for real‐time applications due to its execution time. Fully Convolutional Geometric Features also performs well, albeit with greater variability among data sets. PREDATOR and GeoTransformer are promising but demand substantial GPU memory when handling large point clouds from the Point Clouds Registration Benchmark. A notable finding concerns the performance of Fast Point Feature Histograms, which exhibit results comparable to the best approaches while demanding minimal computational resources. 
Overall, this comparative analysis provides valuable insights into the strengths and limitations of neural‐based registration techniques, both in terms of the quality of the results and the computational resources required. This helps researchers to make informed decisions for robotics applications.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"9 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}