{"title":"A New Hybrid Control Scheme for Tracking Control Problem of AUVs With System Uncertainties and External Disruptions","authors":"Km Shelly Chaudhary, Naveen Kumar","doi":"10.1002/rob.22492","DOIUrl":"https://doi.org/10.1002/rob.22492","url":null,"abstract":"<div>\u0000 \u0000 <p>Autonomous underwater vehicles (AUVs) are highly nonlinear, coupled, uncertain, and time-varying mechatronic systems that inevitably suffer from uncertainties and environmental disturbances. This study presents an intelligent hybrid fractional-order fast terminal sliding mode controller that utilizes the positive aspects of a model-free control approach, designed to enhance the tracking control of AUVs. Using a nonlinear fractional-order fast terminal sliding manifold, the proposed control approach integrates intelligent hybrid sliding mode control with fractional calculus to guarantee finite-time convergence of system states and provide explicit settling time estimates. The nonlinear dynamics of the AUVs is modeled using radial basis function neural networks, while bound on uncertainties, external disturbances, and the reconstruction errors are accommodated by the adaptive compensator. By using a fast terminal-type sliding mode reaching law, the controller exhibits enhanced transient response, resulting in robustness and finite-time convergence of tracking errors. Using fractional-order Barbalat's lemma and the Lyapunov technique, the stability of the control scheme is validated. The effectiveness of the proposed control scheme is validated by a numerical simulation study, which also shows enhanced trajectory tracking performance for AUVs over existing control schemes. 
This hybrid technique addresses the complicated nature of AUV dynamics in unpredictable circumstances by utilizing the advantages of model-free intelligent control and fractional calculus.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"716-741"},"PeriodicalIF":4.2,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143827088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
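As an illustrative aside to the abstract above: the fast terminal-type reaching law it mentions can be sketched numerically. This is a minimal integer-order simplification, not the paper's fractional-order adaptive controller; the gains `k1`, `k2` and exponent `alpha` are arbitrary illustrative choices.

```python
import math

def fast_terminal_reaching_law(s, k1=2.0, k2=1.0, alpha=0.5):
    # s_dot = -k1*s - k2*|s|^alpha * sign(s): the linear term acts far from
    # the manifold, the power term forces finite-time convergence near it.
    if s == 0.0:
        return 0.0
    return -k1 * s - k2 * abs(s) ** alpha * math.copysign(1.0, s)

def settle(s0, dt=1e-3, t_max=5.0, tol=1e-6):
    # Euler-integrate the reaching law until |s| falls below tol;
    # return (elapsed time, final sliding variable).
    s, t = s0, 0.0
    while t < t_max and abs(s) > tol:
        s += dt * fast_terminal_reaching_law(s)
        t += dt
    return t, s
```

With `s0 = 5` the sliding variable settles below the tolerance in under two seconds, whereas the purely linear law (`k2 = 0`) only converges asymptotically; this finite-time behavior is what yields the explicit settling-time estimates the abstract refers to.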
{"title":"Visual Inertial SLAM Based on Spatiotemporal Consistency Optimization in Diverse Environments","authors":"Huayan Pu, Jun Luo, Gang Wang, Tao Huang, Lang Wu, Dengyu Xiao, Hongliang Liu, Jun Luo","doi":"10.1002/rob.22487","DOIUrl":"https://doi.org/10.1002/rob.22487","url":null,"abstract":"<div>\u0000 \u0000 <p>Currently, the majority of robots equipped with visual-based simultaneous mapping and localization (SLAM) systems exhibit good performance in static environments. However, practical scenarios often present dynamic objects, rendering the environment less than entirely “static.” Diverse dynamic objects within the environment pose substantial challenges to the precision of visual SLAM system. To address this challenge, we propose a real-time visual inertial SLAM system that extensively leverages objects within the environment. First, we reject regions corresponding to dynamic objects. Following this, geometric constraints are applied within the stationary object regions to elaborate the mask of static areas, thereby facilitating the extraction of more stable feature points. Second, static landmarks are constructed based on the static regions. A spatiotemporal factor graph is then created by combining the temporal information from the Inertial Measurement Unit (IMU) with the semantic information from the static landmarks. Finally, we perform a diverse set of validation experiments on the proposed system, encompassing challenging scenarios from publicly available benchmarks and the real world. Within these experimental scenarios, we compare with state-of-the-art approaches. More specifically, our system achieved a more than 40% accuracy improvement over baseline method in these data sets. 
The results demonstrate that our proposed method exhibits outstanding robustness and accuracy not only in complex dynamic environments but also in static environments.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"679-696"},"PeriodicalIF":4.2,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multimodal Agile Land-Air Aircraft (AlAA) That Can Fly, Roll, and Stand","authors":"Qing Guo, Zihua Guo, Yujie Shi, Zhijie Zhou, Dexiao Ma","doi":"10.1002/rob.22491","DOIUrl":"https://doi.org/10.1002/rob.22491","url":null,"abstract":"<div>\u0000 \u0000 <p>The multimodal land-air aircraft combines the advantages of traditional drones and ground unmanned equipment. It can cross obstacles on the ground, such as lakes and mountains, and fly quickly in the air, reaching a wider range. It can also switch to an energy-saving mode based on the characteristics of the surrounding environment and mission requirements, reducing energy consumption and noise while increasing endurance. Based on the idea of reusing the same structure, we have designed a multi-mode agile land-air aircraft, abbreviated as ALAA. ALAA has eight actuators, and it combines propellers, wheels, and gearboxes in different ways to achieve multiple modes of locomotion on the ground and in the air: flight mode, driving mode, and upright mode. In propeller-assisted driving mode, it can climb slopes up to 50°. It can also combine driving and upright modes, demonstrating strong obstacle-crossing capabilities. In addition, ALAA reuses the same components, simplifying the transition between flight and ground movement without the need for deformation, thus enabling fast and rational mode transition suitable for complex environments. Ground modes can extend the endurance time of ALAA, and experimental results show that ALAA can operate 21 times longer than an aerial only system. 
This paper presents the overall design and mechanical architecture of ALAA, discusses the algorithm and controller design, and verifies the feasibility of the scheme and design through experiments with the physical prototype, showing its performance in different modes.</p></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"697-715"},"PeriodicalIF":4.2,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-Throughput Robotic Phenotyping for Quantifying Tomato Disease Severity Enabled by Synthetic Data and Domain-Adaptive Semantic Segmentation","authors":"Weilong He, Xingjian Li, Zhenghua Zhang, Yuxi Chen, Jianbo Zhang, Dilip R. Panthee, Inga Meadows, Lirong Xiang","doi":"10.1002/rob.22490","DOIUrl":"https://doi.org/10.1002/rob.22490","url":null,"abstract":"<p>Plant diseases cause an annual global crop loss of 20%–40%, leading to estimated economic losses of 30–50 billion dollars. Tomatoes are susceptible to more than 200 diseases. Breeding disease-resistant cultivars is more cost-effective and environmentally sustainable than the frequent use of pesticides. Traditional breeding methods for disease resistance, relying on direct visual observation to measure disease-related traits, are time-consuming, inaccurate, expensive, and require specific knowledge of tomato diseases. High-throughput disease phenotyping is essential to reduce labor costs, improve measurement accuracy, and expedite the release of new varieties, thereby more effectively identifying disease-resistant crops. Precision agriculture efforts have primarily focused on detecting diseases on individual tomato leaves under controlled laboratory conditions, neglecting the assessment of disease severity of the entire plant in the field. To address this, we created a synthetic data set using existing field and individual leaf data sets, leveraging a game engine to minimize additional data labeling. Consequently, we developed a customized unsupervised domain-adaptive tomato disease segmentation algorithm that monitors the entire tomato plant and determines disease severity based on the proportion of affected leaf areas. The system-derived disease percentages show a high correlation with manually labeled data, evidenced by a correlation coefficient of 0.91. 
Our research demonstrates the feasibility of using ground robots equipped with deep-learning algorithms to monitor tomato disease severity under field conditions, potentially accelerating the automation and standardization of whole-plant disease severity monitoring in tomatoes. This high-throughput disease phenotyping system can also be adapted to analyze diseases in other crops with similar foliar diseases, such as maize, soybeans, and cotton.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"657-678"},"PeriodicalIF":4.2,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22490","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
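The severity metric described above (proportion of affected leaf area) and the reported 0.91 correlation with manual labels can be sketched as follows; the function names and the flat binary-mask representation are assumptions for illustration, not the paper's code.

```python
def severity_percent(diseased_mask, leaf_mask):
    # Disease severity of one plant: share of leaf pixels flagged as diseased.
    # Masks are flat 0/1 sequences of equal length (one entry per pixel).
    leaf_pixels = sum(leaf_mask)
    if leaf_pixels == 0:
        return 0.0
    affected = sum(1 for d, l in zip(diseased_mask, leaf_mask) if d and l)
    return 100.0 * affected / leaf_pixels

def pearson(xs, ys):
    # Pearson correlation between system-derived and manually labeled severities.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```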
{"title":"Continuous Curvature Path Planning for Headland Coverage With Agricultural Robots","authors":"Gonzalo Mier, Rick Fennema, João Valente, Sytze de Bruin","doi":"10.1002/rob.22489","DOIUrl":"https://doi.org/10.1002/rob.22489","url":null,"abstract":"<p>We introduce a methodology for headland coverage planning for autonomous agricultural robot systems, which is a complex problem often overlooked in agricultural robotics. At the corners of the headlands, a robot faces the risk to cross the border of a field while turning. Though potentially dangerous, current papers about corner turns in headlands do not tackle this issue. Moreover, they produce paths with curvature discontinuities, which are not feasible by non-holonomic robots. This paper presents an approach to strictly adhere to field borders during the headland coverage, and three types of continuous curvature turn planners for convex and concave corners. The turning planners are evaluated in terms of path length and uncovered area to assess their effectiveness in headland corner navigation. Through empirical validation, including extensive tests on a coverage path planning benchmark as well as real-field experiments with an autonomous robot, the proposed approach demonstrates its practical applicability and effectiveness. In simulations, the mean coverage area of the fields went from 94.73%, using a constant offset around the field, to 97.29% using the proposed approach. 
Besides providing a solution to the coverage of headlands in agricultural automation, this paper also extends the covered area on the mainland, thus increasing the overall productivity of the field.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"641-656"},"PeriodicalIF":4.2,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22489","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
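A minimal sketch of why continuous-curvature turns matter for the work above: sampling a clothoid-style turn whose curvature is a continuous piecewise-linear function of arc length, so a non-holonomic robot never faces a curvature jump (unlike a straight-line/circular-arc join). This is a generic construction, not one of the paper's three planners, and all parameters are illustrative.

```python
import math

def clothoid_style_turn(kappa_max, ramp_len, arc_len, ds=0.01):
    # Curvature ramps linearly 0 -> kappa_max over ramp_len, holds for
    # arc_len, then ramps back to 0: continuous curvature everywhere.
    total = 2.0 * ramp_len + arc_len
    x = y = theta = s = 0.0
    samples = [(x, y, theta, 0.0)]
    while s < total:
        if s < ramp_len:
            kappa = kappa_max * s / ramp_len
        elif s < ramp_len + arc_len:
            kappa = kappa_max
        else:
            kappa = kappa_max * (total - s) / ramp_len
        theta += kappa * ds          # d(theta)/ds = kappa
        x += math.cos(theta) * ds    # forward Euler along the path
        y += math.sin(theta) * ds
        s += ds
        samples.append((x, y, theta, kappa))
    return samples
```

For `kappa_max = 1`, `ramp_len = 0.5`, `arc_len = 1`, the total heading change equals the area under the curvature profile (1.5 rad), and curvature starts and ends at zero.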
{"title":"Digital Twin/MARS-CycleGAN: Enhancing Sim-to-Real Crop/Row Detection for MARS Phenotyping Robot Using Synthetic Images","authors":"David Liu, Zhengkun Li, Zihao Wu, Changying Li","doi":"10.1002/rob.22473","DOIUrl":"https://doi.org/10.1002/rob.22473","url":null,"abstract":"<div>\u0000 \u0000 <p>Robotic crop phenotyping has emerged as a key technology for assessing crops' phenotypic traits at scale, which is essential for developing new crop varieties with the aim of increasing productivity and adapting to the changing climate. However, developing and deploying crop phenotyping robots faces many challenges, such as complex and variable crop shapes that complicate robotic object detection, dynamic and unstructured environments that confound robotic control, and real-time computing and managing big data that challenge robotic hardware/software. This work specifically addresses the first challenge by proposing a novel Digital Twin(DT)/MARS-CycleGAN model for image augmentation to improve our Modular Agricultural Robotic System (MARS)'s crop object detection from complex and variable backgrounds. The core idea is that in addition to the cycle consistency losses in the CycleGAN model, we designed and enforced a new DT/MARS loss in the deep learning model to penalize the inconsistency between real crop images captured by MARS and synthesized images generated by DT/MARS-CycleGAN. Therefore, the synthesized crop images closely mimic real images in terms of realism, and they are employed to fine-tune object detectors such as YOLOv8. Extensive experiments demonstrate that the new DT/MARS-CycleGAN framework significantly boosts crop/row detection performance for MARS, contributing to the field of robotic crop phenotyping. 
We release our code and data to the research community (https://github.com/UGA-BSAIL/DT-MARS-CycleGAN).</p></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 3","pages":"625-640"},"PeriodicalIF":4.2,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
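The DT/MARS loss described above penalizes inconsistency between real crop images and their synthesized counterparts on top of the usual cycle-consistency terms. A schematic sketch, assuming an L1 image distance and a scalar weight (both are assumptions; the released code defines the actual form):

```python
def l1_distance(img_a, img_b):
    # Mean absolute per-pixel difference between two flattened images.
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def dt_mars_objective(cycle_loss, real_img, synth_img, weight=1.0):
    # CycleGAN-style objective plus an extra consistency term penalizing the
    # distance between a real crop image and its synthesized counterpart.
    return cycle_loss + weight * l1_distance(real_img, synth_img)
```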
{"title":"Back Cover, Volume 42, Number 1, January 2025","authors":"Qi Shao, Qixing Xia, Zhonghan Lin, Xuguang Dong, Xin An, Haoqi Zhao, Zhangyi Li, Xin-Jun Liu, Wenqiang Dong, Huichan Zhao","doi":"10.1002/rob.22497","DOIUrl":"https://doi.org/10.1002/rob.22497","url":null,"abstract":"<p>The cover image is based on the Article <i>Unearthing the history with A-RHex: Leveraging articulated hexapod robots for archeological pre-exploration</i> by Qi Shao et al., https://doi.org/10.1002/rob.22410\u0000 \u0000 <figure>\u0000 <div><picture>\u0000 <source></source></picture><p></p>\u0000 </div>\u0000 </figure></p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 1","pages":"ii"},"PeriodicalIF":4.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22497","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142860405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cover Image, Volume 42, Number 1, January 2025","authors":"Yifan Gao, Jiangpeng Shu, Zhe Xia, Yaozhi Luo","doi":"10.1002/rob.22496","DOIUrl":"https://doi.org/10.1002/rob.22496","url":null,"abstract":"<p>The cover image is based on the Article <i>From muscular to dexterous: A systematic review to understand the robotic taxonomy in construction and effectiveness</i> by Yifan Gao et al., https://doi.org/10.1002/rob.22409\u0000 \u0000 <figure>\u0000 <div><picture>\u0000 <source></source></picture><p></p>\u0000 </div>\u0000 </figure></p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 1","pages":"i"},"PeriodicalIF":4.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22496","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142860403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Experiment of Cantilevered Lotus Seedpod Picking Device","authors":"Yuqi Zeng, Long Xue, Bizhong Tang, Chaoyang Ying, Muhua Liu, Jing Li, Keke Liao, Chengzhi Ruan","doi":"10.1002/rob.22471","DOIUrl":"https://doi.org/10.1002/rob.22471","url":null,"abstract":"<div>\u0000 \u0000 <p>Addressing the issue of the absence of relevant automated harvesting machinery for lotus seedpods, which currently relies solely on manual harvesting, this paper proposes a four-degree-of-freedom cantilever-type lotus seedpod harvesting device. This harvesting device comprises an electric mobile chassis, mobile mechanism, harvesting mechanism, transfer mechanism, image acquisition system, and control system, which make it suitable for harvesting lotus seedpod in standardized lotus fields. The cantilevered lotus seedpod picking device obtains the image of the lotus from the depth camera at an overlooking angle. The YOLOv5-trained lotus-bud recognition model is used to identify and locate the lotus seedpod. Through the calibration using active vision method, the conversion relationship between the ZED camera coordinates and the base coordinates of the picking device is calculated to realize lotus seedpod picking. The results of picking experiments conducted in different time periods show that the picking success rate is higher at night, and in the early morning and evening, with the picking success rates of 89.47%, 85.7%, and 85%, respectively, while the picking success rate at noon is only 42.86%. It can be seen that the ambient light has a great influence on the picking of the lotus seedpods, and it is reasonable to avoid the midday period with strong light for picking. 
The experimental results show that the clamping coupling method realizes the integration of automatic picking and conveying of lotus seedpods, improves the picking efficiency, and provides a new scheme for fruit picking.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"1550-1563"},"PeriodicalIF":4.2,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143950282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
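The camera-to-base conversion mentioned above is, in the standard formulation, a homogeneous transform applied to points detected in the ZED camera frame. A minimal sketch assuming the calibration yields a yaw rotation plus a translation (a real hand-eye calibration produces a full 6-DOF pose; all names and values here are illustrative):

```python
import math

def camera_to_base_transform(yaw, tx, ty, tz):
    # 4x4 homogeneous transform from camera frame to picking-device base frame.
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def to_base(T, p_cam):
    # Map a 3D point detected in camera coordinates into base coordinates.
    p = [p_cam[0], p_cam[1], p_cam[2], 1.0]
    return [sum(T[i][j] * p[j] for j in range(4)) for i in range(3)]
```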
{"title":"An Efficient 3D Point Cloud-Based Place Recognition Approach for Underground Tunnels Using Convolution and Self-Attention Mechanism","authors":"Tao Ye, Ao Liu, Xiangpeng Yan, Xiangming Yan, Yu Ouyang, Xiangpeng Deng, Xiao Cong, Fan Zhang","doi":"10.1002/rob.22451","DOIUrl":"https://doi.org/10.1002/rob.22451","url":null,"abstract":"<div>\u0000 \u0000 <p>Existing place recognition methods overly rely on effective geometric features in the data. When directly applied to underground tunnels with repetitive spatial structures and blurry texture features, these methods may result in potential misjudgments, thereby reducing positioning accuracy. Additionally, the substantial computational demands of current methods make it challenging to support real-time feedback of positioning information. To address the challenges mentioned above, we first introduced the Feature Reconstruction Convolution Module, aimed at reconstructing prevalent similar feature patterns in underground tunnels and aggregating discriminative feature descriptors, thereby enhancing environmental discrimination. Subsequently, the Sinusoidal Self-Attention Module was implemented to actively filter local descriptors, allocate weights to different descriptors, and determine the most valuable feature descriptors in the network. Finally, the network was further enhanced with the integration of the Rotation-Equivariant Downsampling Module, designed to expand the receptive field, merge features, and reduce computational complexity. According to experimental results, our algorithm achieves a maximum score of 0.996 on the SubT-Tunnel data set and 0.995 on the KITTI data set. Moreover, the method only consists of 0.78 million parameters, and the computation time for a single point cloud frame is 17.3 ms. 
These scores surpass the performance of many advanced algorithms, emphasizing the effectiveness of our approach.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 4","pages":"1537-1549"},"PeriodicalIF":4.2,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143950283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
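Place recognition with global descriptors, as in the approach above, ultimately reduces to comparing a query descriptor against a database of previously visited places. A minimal sketch using cosine similarity and a match threshold (both generic choices for illustration, not the paper's specifics):

```python
def cosine_similarity(a, b):
    # Cosine similarity between two descriptor vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def recognize_place(query, database, threshold=0.9):
    # Return (index, score) of the most similar stored descriptor, or
    # (None, score) when no candidate clears the match threshold.
    best_i, best_s = None, -1.0
    for i, descriptor in enumerate(database):
        s = cosine_similarity(query, descriptor)
        if s > best_s:
            best_i, best_s = i, s
    return (best_i, best_s) if best_s >= threshold else (None, best_s)
```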