An Aerial Transport System in Marine GNSS-Denied Environment
Jianjun Sun, Zhenwei Niu, Yihao Dong, Fenglin Zhang, Muhayy Ud Din, Lakmal Seneviratne, Defu Lin, Irfan Hussain, Shaoming He
Journal of Field Robotics, vol. 42, no. 5, pp. 2192-2217. Published January 21, 2025. DOI: 10.1002/rob.22520.

Abstract: This paper presents an autonomous aerial system engineered for challenging marine GNSS-denied environments, aimed at transporting small cargo from a target vessel. These environments, characterized by weakly textured sea surfaces with few feature points, chaotic deck oscillations caused by waves, and significant wind gusts, often defeat conventional navigation methods. Built on the DJI M300 platform, the system autonomously navigates and transports cargo despite these environmental challenges. In particular, the paper proposes an anchor-based localization method using ultra-wideband (UWB) anchors and quick-response (QR) codes, which decouples the unmanned aerial vehicle's (UAV's) attitude from that of the moving landing platform, reducing control oscillations caused by platform movement. Additionally, a motor-driven attachment mechanism for the cargo is designed, which enlarges the UAV's field of view during descent and ensures reliable attachment to the cargo upon landing. The system's reliability and effectiveness were progressively improved through multiple outdoor experimental iterations and validated by the successful cargo transport during the Mohamed Bin Zayed International Robotics Challenge 2024 competition. Crucially, the system handles the uncertainties and interferences inherent in maritime transportation missions without prior knowledge of cargo locations on the deck and under strict limits on human intervention throughout the mission.
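The core of anchor-based UWB localization is a position fix from ranges to anchors at known positions. As a minimal illustration (not the paper's implementation, which fuses UWB with QR-code observations), the range equations can be linearized and solved by least squares:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position fix from ranges to known UWB anchors.

    Subtracting the first anchor's range equation from the others cancels
    the quadratic term, leaving a linear system A x = b in the position x.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    # 2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With four non-coplanar anchors the 3D position is unique; extra anchors over-determine the system and average out ranging noise.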
LDG-CSLAM: Multi-Robot Collaborative SLAM Based on Curve Analysis, Normal Distribution, and Factor Graph Optimization
Keyan He, Rujie Jia, Huajie Hong, Nan Wang, Yifan Hu
Journal of Field Robotics, vol. 42, no. 5, pp. 2173-2191. Published January 20, 2025. DOI: 10.1002/rob.22509.

Abstract: In complex, enclosed environments where global positioning system (GPS) failures are common, multi-robot collaborative simultaneous localization and mapping (CSLAM) faces several key challenges, including redundant communication data, low fusion efficiency, and poor system robustness. These issues arise primarily from inefficient extraction and sharing of descriptors of complex 3D environments, weak robustness of relative pose estimation from multiple information sources, and insufficient suppression of highly coupled dynamic estimation errors. Their combined effect often leads to system failure, making stable and accurate global localization and mapping difficult. To address these challenges, this paper proposes LDG-CSLAM, a novel multi-robot CSLAM method that integrates curve analysis, normal distribution, and factor graph optimization. LDG-CSLAM improves the efficiency of extracting and sharing global environment descriptors through keyframe extraction based on point-cloud curvature analysis, and further enhances performance with a distributed global mapping technique based on the normal distributions transform (NDT). Additionally, the method optimizes both self and relative odometry in real time using factor graph methods, effectively mitigating dynamic errors. This integrated design significantly reduces computational and communication overhead while improving system stability and accuracy. Experimental results focused on operational stability, communication efficiency, and trajectory accuracy demonstrate that LDG-CSLAM outperforms existing methods such as DisCo-SLAM and DCL-SLAM, providing superior performance in multi-robot SLAM for GPS-denied environments.
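Curvature-based keyframe extraction rests on a per-point smoothness measure over an ordered scan, in the spirit of LOAM. The sketch below is an illustration of that idea, not LDG-CSLAM's actual criterion; the threshold and window size are hypothetical:

```python
import numpy as np

def scan_curvature(points, k=5):
    """LOAM-style smoothness for each point of an ordered scan line:
    the norm of the summed offsets to the k neighbors on each side,
    normalized by range. High values indicate edges, low values planes."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    c = np.zeros(n)
    for i in range(k, n - k):
        diff = (pts[i - k:i + k + 1] - pts[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(pts[i]) + 1e-9)
    return c

def is_keyframe(prev_signature, scan, threshold=0.01, k=5):
    """Flag a new keyframe when the scan's mean smoothness drifts from
    the previous keyframe's signature; returns (flag, new_signature)."""
    sig = scan_curvature(scan, k).mean()
    return abs(sig - prev_signature) > threshold, sig
```

Sharing only such keyframes (rather than every scan) is what cuts the communication volume between robots.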
A Novel Multinozzle Targeting Pollination Robot for Clustered Kiwifruit Flowers Based on Air-Liquid Dual-Flow Spraying
Changqing Gao, Leilei He, Yusong Ding, Bryan Gilbert Murengami, Jinyong Chen, Chengquan Zhou, Hongbao Ye, Rui Li, Longsheng Fu
Journal of Field Robotics, vol. 42, no. 5, pp. 2136-2150. Published January 19, 2025. DOI: 10.1002/rob.22499.

Abstract: Manual pollination of kiwifruit flowers is a labor-intensive task that is highly desirable to replace with robotic operations. In this research, a pollination robot was developed to achieve precision pollination of clustered kiwifruit flowers in the orchard. The robot consists of five systems: a multinozzle end-effector, a mechanical arm, a vision system, a crawler-type chassis, and a control system. It selects preferential flowers and targets their pistils to achieve precision pollination. First, a statistical analysis of the dimensions of flower clusters and individual flowers was conducted to fit normal distribution curves, which guided the design of the spray coverage and nozzle spacing of the multinozzle end-effector. Second, optimal spray parameters were determined through a three-factor, five-level quadratic orthogonal experiment: air pressure of 70.4 kPa, flow rate of 86.0 mL/min, and spray distance of 27.8 cm. A targeted pollination strategy was then developed based on the preferential flower selection strategy and the structure of the multinozzle end-effector. Field experiments in a commercial kiwifruit orchard evaluated its feasibility and performance, achieving an average targeting success rate of 93.4% at an average speed of 1.0 s per flower. Furthermore, compared with manually assisted pollination methods, the robot improves pollen utilization, consuming an average of 0.20 g per 60 flowers with an average fruit set rate of 88.9%. These validations demonstrate that the pollination robot can efficiently pollinate kiwifruit flowers while saving pollen.
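Fitting a normal curve to measured cluster widths and reading off a quantile is one straightforward way to size a spray footprint that covers most clusters. This sketch uses Python's standard-library `statistics.NormalDist`; the sample widths in the test are invented, not the paper's measurements:

```python
import statistics

def spray_coverage_width(cluster_widths_cm, coverage=0.95):
    """Fit a normal distribution to measured flower-cluster widths and
    return the width (cm) that envelops the given fraction of clusters,
    i.e. the fitted distribution's one-sided quantile mu + z * sigma."""
    dist = statistics.NormalDist.from_samples(float(w) for w in cluster_widths_cm)
    return dist.inv_cdf(coverage)
```

Designing to the 95th percentile rather than the maximum observed width keeps the nozzle footprint compact while still covering nearly all clusters.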
A Multimodal Perception System for Precise Landing of UAVs in Offshore Environments
Rafael Marques Claro, Francisco Soares Pinto Neves, Andry Maykol Gomes Pinto
Journal of Field Robotics, vol. 42, no. 5, pp. 2151-2172. Published January 19, 2025. DOI: 10.1002/rob.22517.

Abstract: Integrating precise landing capabilities into unmanned aerial vehicles (UAVs) is crucial for enabling autonomous operations, particularly in challenging environments such as offshore scenarios. This work proposes a heterogeneous perception system that incorporates a multimodal fiducial marker, designed to improve the accuracy and robustness of autonomous UAV landing in both daytime and nighttime operations. It presents ViTAL-TAPE, a visual-transformer-based model that enhances the detection reliability of the landing target and withstands changes in illumination conditions and viewpoint where traditional methods fail. ViTAL-TAPE is an end-to-end model that combines multimodal perceptual information, including photometric and radiometric data, to detect landing targets defined by a fiducial marker with 6 degrees of freedom. Extensive experiments demonstrated the ability of ViTAL-TAPE to detect fiducial markers with an error of 0.01 m. Moreover, experiments with the RAVEN UAV, designed to endure the challenging weather conditions of offshore scenarios, demonstrated that the proposed autonomous landing technology achieves an accuracy of up to 0.1 m. This research also presents the first successful autonomous operation of a UAV in a commercial offshore wind farm with floating foundations installed in the Atlantic Ocean. These experiments showcased the system's accuracy, resilience, and robustness, yielding a precise landing technology that extends UAV mission capabilities and enables autonomous, Beyond Visual Line of Sight offshore operations.
Object Detection and Multiple Objective Optimization Manipulation Planning for Underwater Autonomous Capture in Oceanic Natural Aquatic Farm
Huang Hai, Jiang Tao, Bian Xinyu, Zhou Hao, Yang Xu, Wang Gang, Qin Hongde, Han Xinyue
Journal of Field Robotics, vol. 42, no. 5, pp. 2095-2123. Published January 16, 2025. DOI: 10.1002/rob.22507.

Abstract: Underwater autonomous capture operations offer significant potential for reducing labor and health risks in the sea-organism industry. This study presents a comprehensive solution for cross-domain underwater object detection and autonomous capture. A novel unsupervised domain-adaptive learning method is proposed, integrating multiscale domain-adaptive modules and attention mechanisms into a Faster R-CNN (Region-based Convolutional Neural Network) framework. This approach enhances feature alignment across diverse aquatic domains without parameter tuning. Additionally, an efficient, parameterless constrained multiobjective optimization algorithm is introduced for underwater autonomous mobile capture, combining parameterized trajectory planning with innovative features such as adaptive mutation strategies and constraint-violation tolerance. The proposed approaches are extensively validated through simulations, tank experiments, and real-world oceanic trials in the Natural Aquatic Farm of Zhangzidao Island. Results demonstrate the system's robustness in complex underwater environments with varying currents, with experimental outcomes validating the accuracy and reliability of its detection and capture capabilities. This research significantly advances the object detection and capture capabilities of autonomous underwater systems, addressing complex challenges in realistic organism-capture applications across diverse aquatic environments.
Safety Inspections and Gas Monitoring in Hazardous Mining Areas Shortly After Blasting Using Autonomous UAVs
Samuel Nordström, Nikolaos Stathoulopoulos, Niklas Dahlquist, Björn Lindqvist, Ilias Tevetzidis, Christoforos Kanellakis, George Nikolakopoulos
Journal of Field Robotics, vol. 42, no. 5, pp. 2076-2094. Published January 16, 2025. DOI: 10.1002/rob.22500. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22500

Abstract: This article presents the first fully autonomous UAV (unmanned aerial vehicle) mission to perform gas measurements after a real blast in an underground mine. The demonstration mission was deployed around 40 min after the blast took place, so realistic gas levels were measured. We also present multiple field-robotics experiments in different mines detailing the development process. The presented novel autonomy stack, denoted the Routine Inspection Autonomy (RIA) framework, combines the risk-aware 3D path planner D+* with 3D LiDAR-based global relocalization on a known map, and is integrated on custom hardware and a sensing stack with an onboard gas-sensing device. Within this framework, the autonomous UAV can be deployed in extremely harsh conditions (dust, significant deformations of the map) shortly after blasting to inspect lingering gases that pose a significant safety risk to workers. We also present a change-detection framework that extracts and visualizes the areas changed by the blasting procedure, a critical input for planning material extraction and for updating existing mine maps. As demonstrated, the RIA stack enables robust autonomy in harsh conditions and provides reliable, safe navigation behavior for autonomous routine inspection missions.
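At its simplest, change detection between a pre-blast and a post-blast map reduces to a cell-wise comparison of two aligned occupancy grids. The sketch below is a minimal illustration of that step only; the real pipeline first aligns the post-blast scan to the known map via the LiDAR-based global relocalization, which is omitted here:

```python
import numpy as np

def changed_regions(grid_before, grid_after, occ_threshold=0.5):
    """Cell-wise change masks between two aligned occupancy grids with
    values in [0, 1]. Returns (newly_occupied, newly_free): newly
    occupied cells suggest blasted rock (muck), newly free cells
    suggest excavated volume."""
    before = np.asarray(grid_before, dtype=float) >= occ_threshold
    after = np.asarray(grid_after, dtype=float) >= occ_threshold
    newly_occupied = after & ~before
    newly_free = before & ~after
    return newly_occupied, newly_free
```

In practice the raw masks would be filtered (e.g. by connected-component size) to suppress isolated cells caused by sensor noise and dust.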
Target Localization and Pursuit With Networked Robotic Vehicles: Theory, Simulation, and Experiments
Nguyen Hung, Eduardo Cunha, Francisco Branco, Antonio Pascoal
Journal of Field Robotics, vol. 42, no. 5, pp. 2124-2135. Published January 16, 2025. DOI: 10.1002/rob.22513.

Abstract: This paper addresses the problem of range-based simultaneous localization and pursuit (SLAP) with networked robotic vehicles from both a theoretical and a practical standpoint. The work builds upon and extends previous theoretical research by Hung, Rego, and Pascoal on range-based SLAP using multiple trackers and a cooperative distributed estimation and control (DEC) strategy. The key novel contributions are twofold:

- Event-triggered communication (ETC) mechanisms for the DEC strategy, with formal guarantees of stability for the multiple-vehicle ensemble. In this approach, each tracking vehicle communicates with its neighbors, for both target estimation and cooperative control, only when deemed necessary, reducing the cost of communications.
- Experimental results from multiple field trials conducted with three autonomous marine vehicles, assessing the effectiveness of the proposed DEC/ETC strategy in a real-world environment.

Simulation results are also included and analyzed. For completeness, source code and links to auxiliary materials are provided, enabling readers to run the simulations and implement the DEC/ETC strategy on both Matlab and ROS/Gazebo platforms.
Matlab code: https://github.com/hungrepo/slap-etc
ROS packages: http://github.com/dsor-isr/slap
Aerial view of field trials: http://youtu.be/4LR4WSJHyz8
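A common form of event-triggered communication, sketched below as an illustration rather than the paper's specific triggering law, is to broadcast the local estimate only when it drifts beyond a threshold from the last copy the neighbors received:

```python
import numpy as np

class EventTriggeredBroadcaster:
    """Transmit the local target estimate to neighbors only when it
    deviates more than `delta` from the last transmitted copy. Between
    events, neighbors keep propagating the last-received estimate.
    A generic ETC rule; the stability-guaranteed trigger in the paper
    is more elaborate."""

    def __init__(self, delta):
        self.delta = delta
        self.last_sent = None

    def step(self, estimate):
        """Return the estimate to transmit, or None to stay silent."""
        estimate = np.asarray(estimate, dtype=float)
        if (self.last_sent is None
                or np.linalg.norm(estimate - self.last_sent) > self.delta):
            self.last_sent = estimate.copy()
            return estimate
        return None
```

The threshold `delta` trades communication load against estimation error at the neighbors: a larger `delta` means fewer messages but a staler shared estimate.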
Enhancing Autonomous Vehicle Navigation in Complex Environment With Semantic Proto-Reinforcement Learning
G. Anand Kumar, Md. Khaja Mohiddin, Shashi Kant Mishra, Abhishek Verma, Mousam Sharma, A. Naresh
Journal of Field Robotics, vol. 42, no. 5, pp. 2042-2061. Published January 12, 2025. DOI: 10.1002/rob.22506.

Abstract: Despite great progress in autonomous vehicle (AV) navigation, integrating AVs into complex real-world environments remains technically challenging. To tackle these challenges, this paper presents a new semantic proto-reinforcement learning (SP-RL) method for dynamic path planning and real-time obstacle avoidance that adapts to various weather conditions in localization-deficient environments while predicting the intentions of humans on the road. The approach improves AV navigation in dynamic, unstructured environments by combining a semantic graph network (SGN) for segmentation with prototype-based reinforcement learning (PRL) for decision-making, a combination that distinguishes it from existing approaches. The dynamic SGN segments challenging 3D and free-space environments so that the AV can interpret highly unintuitive areas such as parking lots, construction sites, or off-road scenarios. In parallel, PRL supports real-time decision-making so that the AV responds quickly and precisely to unexpected obstacles or changing environments, adapting its decisions to the prevailing weather. Extensive testing in the CARLA simulation environment confirms the approach's effectiveness, showing substantial improvement in AV navigation capability. This work takes a promising step toward the fundamental problems facing autonomous vehicles and could help make future AV systems safer, more robust, and more adaptable than current ones. It is applicable to urban areas with high volumes of pedestrians and vehicles, industrial sites with unpredictable changing conditions, and challenging rural off-road areas with rough, uneven, poorly structured terrain. The model is robust to diverse weather conditions, making AV operation more reliable and safer. It was evaluated on root mean square error (RMSE), computational time, crash avoidance, obstacle avoidance, and success rate, achieving an overall success rate of 98%.
A Smart Camera With Integrated Deep Learning Processing for Disease Detection in Open Field Crops of Grape, Apple, and Carrot
Gerrit Polder, Pieter M. Blok, Tim van Daalen, Joseph Peller, Nikos Mylonas
Journal of Field Robotics, vol. 42, no. 5, pp. 2062-2075. Published January 12, 2025. DOI: 10.1002/rob.22510. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22510

Abstract: Downy mildew (Plasmopara), apple scab (Venturia inaequalis), and Alternaria leaf blight are endemic diseases that affect crops worldwide. They can cause severe losses in grapes, apples, and carrots when not detected and treated at an early stage. The European Union Horizon 2020 OPTIMA project aimed to improve disease detection in the open field with an automated detection system as part of an integrated pest management (IPM) system. In this research, we investigated the automated detection of downy mildew in grape, apple scab in apple, and Alternaria leaf blight in carrot, using a deep convolutional neural network (CNN) on RGB color images. Detections from the CNN served as input to a Decision Support System (DSS) that precisely locates and quantifies the disease so that appropriate, timely application of plant protection products can be recommended. Our study focused on a smart-camera implementation with integrated deep-learning processing in real-field conditions. The question was whether a deep learning model trained on images of disease symptoms recorded under controlled conditions can also perform on images of symptoms recorded in the field. This type of evaluation is called open-set evaluation, and it has so far received little attention in plant-disease detection research. The goal of our research was therefore to evaluate the performance of a deep learning model in an open-set scenario in commercial vineyards, orchards, and open fields, compared against the closed-set scenario of evaluating the trained model on images similar to those used for training. Our results showed that performance in the closed-set scenario, with F1 scores of 66.3% (downy mildew), 45.1% (apple scab), and 42.1% (Alternaria), was notably better than in the open-set scenario, with F1 scores of 34.8% (downy mildew), 5.5% (apple scab), and 4.2% (Alternaria). Uniform Manifold Approximation and Projection (UMAP) analysis confirmed the significant difference between the open-set and closed-set data sets. Our results should encourage other researchers to carry out similar open-set evaluations to obtain realistic impressions of their models' performance under field conditions. A subset of our image data set has been made publicly available at https://doi.org/10.5281/zenodo.6778647.
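For reference, the F1 scores compared above are the harmonic mean of precision and recall computed from detection counts. A minimal sketch (the counts in the test are hypothetical, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts:
    the harmonic mean of precision tp/(tp+fp) and recall tp/(tp+fn)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it suits detection tasks like this one, where healthy leaf area vastly outnumbers diseased spots.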
A Map Segmentation Method Based on Image Processing for Robot Complete Coverage Operation
Haojun Si, Zhonghua Miao, Wen Zhang, Teng Sun
Journal of Field Robotics, vol. 42, no. 3, pp. 916-929. Published January 7, 2025. DOI: 10.1002/rob.22504.

Abstract: Path planning is crucial for autonomous robot navigation and operation. Tasks such as cleaning, inspection, and mining all require complete coverage operation. For maps of convex regions, a reciprocating coverage method can be used; for concave maps, however, it is unsuitable. This paper therefore proposes an image-based map segmentation method for complete coverage path planning. Treating the grid map as an image, the method divides a concave map into convex subregions and generates a batch of waypoints for the robot controller in each one. The subregions are then connected to achieve complete coverage of the entire region. Combined with global path planning, local path following, and real-time obstacle avoidance, complete coverage operation is achieved. Moreover, a coverage-ratio calculation method is proposed and displayed in real time in a visual interface. Extensive experiments in simulation and real-world environments demonstrate the effectiveness of the method, achieving an average coverage ratio of 97.89%, and 92.19% in the presence of obstacles. Most importantly, the method has been successfully tested on an autonomous mining vehicle, achieving an average coverage ratio of 96% on the given maps.
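The reciprocating pattern used within each convex subregion can be sketched as parallel lanes spaced one tool width apart, with alternating direction so the lane ends join into one continuous path. This is a generic boustrophedon sweep over an axis-aligned rectangle, an illustration rather than the paper's waypoint generator:

```python
def reciprocating_waypoints(x_min, x_max, y_min, y_max, tool_width):
    """Back-and-forth (boustrophedon) sweep of an axis-aligned convex
    rectangle: lanes spaced one tool width apart, centered half a tool
    width inside the boundary, alternating direction lane to lane."""
    waypoints = []
    y = y_min + tool_width / 2.0
    forward = True
    while y <= y_max - tool_width / 2.0 + 1e-9:
        lane = [(x_min, y), (x_max, y)]
        waypoints.extend(lane if forward else lane[::-1])
        forward = not forward
        y += tool_width
    return waypoints
```

Running each convex subregion through such a sweep and then linking the subregion paths is what yields the complete-coverage route over the original concave map.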