{"title":"UGV-Based Precision Spraying System for Chemical Apple Blossom Thinning on Trellis Trained Canopies","authors":"Xinyang Mu, Long He, Paul Heinemann, James Schupp, Manoj Karkee, Minghui Zhu","doi":"10.1002/rob.22435","DOIUrl":"10.1002/rob.22435","url":null,"abstract":"<p>Blossom thinning is one of the key steps in apple crop load management that improves the quality of apples, reduces stress on the trees, and reduces the likelihood of biennial bearing. Conventional chemical blossom thinning such as air-blast spraying can lead to excessive use of chemical thinner to ensure full coverage, which can also cause leaf damage and fruit russeting. In addition, a well-trained operator is required to use these chemical spraying systems. To address these challenges, a UGV-based precision spraying system was developed for automated and targeted chemical blossom thinning for apples. The system is capable of automatically driving along the tree row in the orchard environment during the blooming stage, locating apple flower clusters to be thinned using a real-time machine vision system, and precisely spraying the chemical thinner onto the targeted flower clusters. A set of field tests was conducted to evaluate the performance of the UGV-based target spraying system by comparing it to a conventional air-blast sprayer (ABS) and a previous prototype named the Cartesian target sprayer (CTS). Tests showed that the flower cluster detection reached a precision of 93.8%. The UGV-based spraying system used 2.2 L of chemical thinner to complete the chemical thinning of 30 apple trees, compared with 4.2 and 2.4 L for the ABS and CTS, respectively. The robotic system also obtained an average fruit set of 2.2 per cluster after thinning, which was comparable to that with the air-blast sprayer. 
The findings indicated that the robotic thinning system achieved a 66.7% reduction in chemical usage compared to the ABS and a 43.0% faster operational pace than the CTS, while attaining a comparable fruit set per cluster. The outcomes of the study provided guidance for developing a full-scale robotic chemical thinning system for modern apple orchards.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"1000-1011"},"PeriodicalIF":4.2,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22435","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAMFDet: Acoustic-Optical Fusion for Underwater Multi-Modal Object Detection","authors":"Haojie Chen, Zhuo Wang, Hongde Qin, Xiaokai Mu","doi":"10.1002/rob.22432","DOIUrl":"10.1002/rob.22432","url":null,"abstract":"<div><p>Underwater object detection serves as a crucial means for autonomous underwater vehicles (AUVs) to gain awareness of their surroundings. Currently, AUVs predominantly depend on underwater optical cameras or sonar sensing techniques to furnish vital information sources for subsequent tasks such as underwater rescue and mining exploration. However, underwater light attenuation or significant background noise often leads to the failure of either the acoustic or the optical sensor. Consequently, traditional single-modal object detection networks, which rely exclusively on either the optical or the acoustic modality, struggle to adapt to the varying complexities of underwater environments. To address this challenge, this paper proposes a novel acoustic-optical fusion-based underwater multi-modal object detection paradigm termed UAMFDet, which fuses highly misaligned acoustic-optical features in the spatial dimension at both the fine-grained level and the instance level. First, we propose a multi-modal deformable self-aligned feature fusion module to adaptively capture feature dependencies between multi-modal targets and perform self-aligned multi-modal fine-grained feature fusion by differential fusion. Then a multi-modal instance-level feature matching network is designed. It matches multi-modal instance features by a lightweight cross-attention mechanism and performs differential fusion to achieve instance-level feature fusion. 
In addition, we establish a data set dedicated to underwater acoustic-optical fusion object detection, called UAOF, and conduct extensive experiments on the UAOF data set to verify the effectiveness of UAMFDet.</p></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"970-983"},"PeriodicalIF":4.2,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CLVSim: A comprehensive framework for crewed lunar vehicle simulation—Modeling and applications","authors":"Qingning Lan, Liang Ding, Huaiguang Yang, Lutz Richter, Zhengyin Wang, Haibo Gao, Zongquan Deng","doi":"10.1002/rob.22421","DOIUrl":"10.1002/rob.22421","url":null,"abstract":"<p>Crewed lunar vehicles (CLVs) significantly enhance astronauts’ exploration range and efficiency on the Moon, paving the way for more comprehensive scientific research. Computer simulation offers an effective alternative to conducting experiments in low-gravity conditions if backed by appropriate model validation. This study introduces a detailed simulation framework, CLVSim (Crewed Lunar Vehicle Simulation), including subsystems for smoothed particle hydrodynamics (SPH) soft terrain, suspensions, motors, wheels, fenders, and the driver. A high-fidelity instance of CLVSim was modeled and benchmarked based on the Lunar Roving Vehicle (LRV) from the Apollo program. Each subsystem was independently modeled and benchmarked against information from the Apollo handbook. These subsystems were then integrated to benchmark the overall operation of the CLV against experiments in a simulated lunar environment, yielding a mean relative error of 8.6%. The mean relative error between simulation and experiment for all subsystems and the overall CLV test was less than 10%. Further applications of CLVSim were investigated. For instance, two fender designs were evaluated for their effectiveness in mitigating dust emission from the wheels. The vehicles’ performance was examined in four configurations: a standard CLV on flat terrain, and CLVs on rugged terrain with two suspension stiffnesses and a torque-coordination driveline strategy. Compared with CLVs with stiffer suspensions, those with passive suspension and differential drive achieved approximately 9% and 7% savings in steering, respectively. 
The high fidelity of the simulation framework and its potential for advanced research were demonstrated in areas such as CLV mechanism design, dust prevention, and control strategy design.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 2","pages":"584-603"},"PeriodicalIF":4.2,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Objective Route Outlining and Collision Avoidance of Multiple Humanoid Robots in a Cluttered Environment","authors":"Abhishek Kumar Kashyap, Dayal R. Parhi","doi":"10.1002/rob.22428","DOIUrl":"10.1002/rob.22428","url":null,"abstract":"<div><p>In robotics, navigating a humanoid robot through a cluttered environment is challenging. The present study aims to enhance footstep planning and determine optimal paths with respect to the robot's route length. An objective function for the navigation of multiple humanoid robots is presented to optimize route length and travel time. A hybrid technique combining a probabilistic roadmap (PRM) and the firefly algorithm (FA) is presented for humanoid robot navigation in a cluttered environment with static and dynamic obstacles. Sensory information, such as the barrier range in the left, right, and front directions, is fed into the PRM framework, which allows the humanoid robot to walk steadily with an initial steering angle. It finds the shortest path using the Bellman–Ford algorithm. The FA technique is used for efficient guidance and footstep modification in a cluttered environment to find a smooth and optimized path. To avoid static obstacles, the suggested hybrid technique takes the output of PRM as its input, provides optimum steering angles, and ensures the minimum route length. A 3D simulator and a real-world environment were used for simulation and experiments in a cluttered environment utilizing the developed model and standalone methods. The humanoid robot achieves the target in all scenarios, but the FA-tuned PRM technique is advantageous for this purpose, as shown by the convergence curve, route length, and travel duration. Navigation of multiple humanoid robots raises an additional risk of collisions among the robots themselves, which is eliminated by employing a dining philosophers controller as the base technique. In addition, the proposed controller is evaluated against the existing technique. 
These findings confirm the effectiveness and efficacy of the developed strategy.</p></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"952-969"},"PeriodicalIF":4.2,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Analysis of a Push Shovel-Type Hull-Cleaning Wall-Climbing Robot","authors":"Pei Yang, Jidong Jia, Lingyu Sun, Minglu Zhang, Delong Lv","doi":"10.1002/rob.22430","DOIUrl":"https://doi.org/10.1002/rob.22430","url":null,"abstract":"To address the difficulty of removing marine biofouling from ship walls of variable curvature, this study proposed a marine biofouling removal wall-climbing robot equipped with an adaptive variable-curvature wall cleaning module. The robot includes a mobile module, a cleaning module, and a magnetic module. The cleaning module uses push-shovel cleaning technology to scrape away marine biofouling. It adopts a rigid-flexible coupling mechanism design and can passively adapt to ship walls with different curvatures. A barnacle stress model was established, and the front angle of the push shovel was selected to be 60° through numerical simulation. On this basis, a robot adsorption failure model was established, and the minimum magnetic force required by the robot for a safety factor of 1.5 was determined to be 1084 N. Based on the structural size of the robot, Ansys was used to comparatively analyze the adsorption efficiency of four Halbach array magnetic circuit structures, and it was determined that the magnetic force generated by the five-magnetic-circuit structure is relatively higher. Based on this, the structural dimensions of the magnetic module were designed, and the effects of air gap and wall thickness on the magnetic force were analyzed. It was found that when the wall thickness exceeds 6 mm, its impact on the magnetic force is small, and the air gap should be kept within 10 mm. A robot prototype was built, and its performance was tested. 
The experimental results show that the robot has good motion performance: it can operate at a depth of about 5 m underwater and move stably, and it has good waterproof performance. When the robot moves circumferentially on the wall, the cleaning module can adapt to surfaces with a radius of curvature of 3 m or more, demonstrating good surface self-adaptation, and it is effective in removing marine biofouling.","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"10 1","pages":""},"PeriodicalIF":8.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MS-SLAM: Memory-Efficient Visual SLAM With Sliding Window Map Sparsification","authors":"Xiaoyu Zhang, Jinhu Dong, Yin Zhang, Yun-Hui Liu","doi":"10.1002/rob.22431","DOIUrl":"10.1002/rob.22431","url":null,"abstract":"<div><section><p>While most visual SLAM systems traditionally prioritize accuracy or speed, memory consumption also becomes a concern for robots working in large-scale environments, primarily due to the perpetual preservation of an increasing number of redundant map points. Although these redundant map points are initially constructed to ensure robust frame tracking, they contribute little once the robot moves to other locations and are primarily kept for potential loop closure. After continuous optimization, these map points are accurate, yet not all of them are essential for loop closure. Therefore, this paper proposes MS-SLAM, a memory-efficient visual SLAM system with map sparsification that keeps only a subset of useful map points in the global map. In MS-SLAM, all local map points are temporarily kept to ensure robust frame tracking and further optimization, while redundant nonlocal map points are removed through the proposed novel sliding window map sparsification, which is efficient and runs concurrently with the original SLAM tracking. Loop closure still operates well with the selected useful map points. Through exhaustive experiments across various scenes in both public and self-collected data sets, MS-SLAM has demonstrated accuracy comparable to state-of-the-art visual SLAM while reducing memory consumption by over 70% in large-scale scenes. This facilitates the scalability of visual SLAM in large-scale environments, making it a promising solution for real-world applications. 
We will release our code at https://github.com/fishmarch/MS-SLAM.</p></section></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"935-951"},"PeriodicalIF":4.2,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22431","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data set diversity in crop row detection based on CNN models for autonomous robot navigation","authors":"Igor Ferreira da Costa, Antonio Candea Leite, Wouter Caarls","doi":"10.1002/rob.22418","DOIUrl":"10.1002/rob.22418","url":null,"abstract":"<p>Agricultural automation emerges as a vital tool to increase field efficiency, improve pest control, and reduce labor burdens. While agricultural mobile robots hold promise for automation, challenges persist, particularly in navigating a plantation environment. Accurate robot localization is already possible, but existing Global Navigation Satellite System Real-Time Kinematic (GNSS-RTK) systems are costly and demand careful, precise mapping. In response, onboard navigation approaches gain traction, leveraging sensors such as cameras and light detection and ranging (LiDAR) units. However, the machine learning methods used in camera-based systems are highly sensitive to the training data set used. In this paper, we study the effects of data set diversity on a proposed deep learning-based visual navigation system. Leveraging multiple data sets, we assess the model's robustness and adaptability while investigating the effects of the data diversity available during the training phase. The system is presented with a range of different camera configurations, hardware, and field structures, as well as a simulated environment. The results show that mixing images from different cameras and fields can improve not only the system's robustness to changing conditions but also its single-condition performance. 
Real-world tests were conducted, which show that good results can be achieved with reasonable amounts of data.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 2","pages":"525-538"},"PeriodicalIF":4.2,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing PID control for enhanced stability of a 16-DOF biped robot during ditch crossing","authors":"Moh Shahid Khan, Ravi Kumar Mandava, Vijay Panchore","doi":"10.1002/rob.22425","DOIUrl":"10.1002/rob.22425","url":null,"abstract":"<p>This article discusses the design of a proportional–integral–derivative (PID) controller to obtain an optimal gait planning algorithm for a 16-degrees-of-freedom biped robot while crossing a ditch. The gait planning algorithm integrates the initial posture, position, and desired trajectories of the robot's wrist, hip, and foot. Cubic polynomial trajectories are assigned to the wrist, hip, and foot to generate the motion. The foot and wrist joint angles of the biped robot along the polynomial trajectory are obtained using the inverse kinematics approach. Moreover, the dynamic balance margin was estimated using the concept of the zero-moment point. To enhance the smooth motion of the gait planner and reduce the error between two consecutive joint angles, the authors designed a PID controller for each joint of the biped robot. Designing a PID controller requires the dynamics of the biped robot, which were obtained using the Lagrange–Euler formulation. The gains of the PID controller, that is, <i>K</i><sub><i>P</i></sub>, <i>K</i><sub><i>D</i></sub>, and <i>K</i><sub><i>I</i></sub>, are tuned with nontraditional optimization algorithms, namely particle swarm optimization (PSO) and differential evolution (DE), and compared with the modified chaotic invasive weed optimization (MCIWO) algorithm. 
The result indicates that the MCIWO-PID controller generates more dynamically balanced gaits when compared with the DE and PSO-PID controllers.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 2","pages":"559-583"},"PeriodicalIF":4.2,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiDAR-based place recognition for mobile robots in ground/water surface multiple scenes","authors":"Yaxuan Yan, Haiyang Zhang, Changming Zhao, Xuan Liu, Siyuan Fu","doi":"10.1002/rob.22423","DOIUrl":"10.1002/rob.22423","url":null,"abstract":"<p>LiDAR-based 3D place recognition is an essential component of simultaneous localization and mapping systems in multi-scene robotic applications. However, extracting discriminative and generalizable global descriptors of point clouds is still an open issue due to the insufficient use of the information contained in the LiDAR scans in existing approaches. In this paper, we propose a novel spatial-temporal point cloud encoding network for multiple scenes, dubbed STM-Net, to fully fuse the multi-view spatial information and temporal information of LiDAR point clouds. Specifically, we first develop a spatial feature encoding module consisting of the single-view transformer and multi-view transformer. The module learns the correlation both within a single view and between two views by utilizing the multi-layer range images generated by spherical projection and multi-layer bird's eye view images generated by top-down projection. Then in the temporal feature encoding module, we exploit the temporal transformer to mine the temporal information in the sequential point clouds, and a NetVLAD layer is applied to aggregate features and generate sub-descriptors. Furthermore, we use a GeM pooling layer to fuse more information along the time dimension for the final global descriptors. 
Extensive experiments conducted on unmanned ground/surface vehicles with different LiDAR configurations indicate that our method (1) achieves superior place recognition performance compared to state-of-the-art algorithms, (2) generalizes well to diverse sceneries, (3) is robust to viewpoint changes, and (4) can operate in real time, demonstrating the effectiveness and capability of the proposed approach and highlighting its promise for multi-scene place recognition tasks.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 2","pages":"539-558"},"PeriodicalIF":4.2,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142178551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conductive hydrogels-based self-sensing soft robot state perception and trajectory tracking","authors":"Jie Ma, Zhiji Han, Mingge Li, Zhijie Liu, Wei He, Shuzhi Sam Ge","doi":"10.1002/rob.22420","DOIUrl":"10.1002/rob.22420","url":null,"abstract":"<p>Soft robots face significant challenges in proprioceptive sensing and precise control due to their highly deformable and compliant nature. This paper addresses these challenges by developing a conductive hydrogel sensor and integrating it into a soft robot for bending angle measurement and motion control. A quantitative mapping between the hydrogel resistance and the robot's bending gesture is formulated. Furthermore, a nonlinear differentiator is proposed to estimate the angular velocity for closed-loop control, eliminating the reliance on conventional sensors. Meanwhile, a controller is designed to track both structural and nonstructural trajectories. The proposed approach integrates advanced soft sensing materials and intelligent control algorithms, significantly improving the proprioception and motion accuracy of soft robots. This work bridges the gap between novel material design and practical control applications, opening up new possibilities for soft robots to perform delicate tasks in various fields. 
The experimental results demonstrate the effectiveness of the proposed sensing and control approach in achieving precise and robust motion control of the soft robot.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 2","pages":"510-524"},"PeriodicalIF":4.2,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}