Visual navigation and crop mapping of a phenotyping robot MARS-PhenoBot in simulation

IF 6.3 · Q1 (Agricultural Engineering)
Zhengkun Li, Rui Xu, Changying Li, Longsheng Fu
{"title":"Visual navigation and crop mapping of a phenotyping robot MARS-PhenoBot in simulation","authors":"Zhengkun Li ,&nbsp;Rui Xu ,&nbsp;Changying Li ,&nbsp;Longsheng Fu","doi":"10.1016/j.atech.2025.100910","DOIUrl":null,"url":null,"abstract":"<div><div>Cultivating high-yield and high-quality crops is important for addressing the growing demand for food and fiber from an increasing population. In selective breeding programs, autonomous robotic systems have shown great potential to replace manual phenotypic trait measurements which are time-consuming and labor-intensive. In this paper, we presented a Robot Operating System (ROS)-based phenotyping robot, MARS (Modular Agricultural Robotic System)-PhenoBot, and demonstrated its visual navigation and field mapping capacities in the Gazebo simulation environment. MARS-PhenoBot was a solar-powered modular robotic platform with a four-wheel steering and four-wheel driving configuration. We developed a navigation strategy that fuses multiple cameras to guide the robot to follow crop rows and transition between them, enabling visual navigation across the entire field without relying on global navigation satellite system (GNSS) signals. Three row-detection algorithms, including thresholding-based, detection-based, and segmentation-based methods, were compared and evaluated in simulated crop fields with discontinuous and continuous crop rows, as well as with and without the presence of weeds. The results demonstrated that the segmentation-based method achieved the lowest average cross-track errors of 2.5 cm for discontinuous scenarios and 0.8 cm for continuous scenarios in row detection. Additionally, a field mapping workflow based on RTAB-MAP (Real-Time Appearance-Based Mapping) and V-SLAM (Visual Simultaneous Localization and Mapping) was developed. The workflow produced the 2D maps identifying crop and weed locations, as well as 3D models represented as point clouds for crop shapes and structures. Using this mapping workflow, the average crop localization error was measured at 6.4 cm, primarily caused by the visual odometry drift. The generated point clouds of crops could support further phenotyping analyses, such as crop height/diameter measurements and leaf counting. The methodology developed in this study could be transferred to real-world robots that are capable of automated robotic phenotyping for in-field crops, providing an effective tool for accelerating selective breeding programs.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"11 ","pages":"Article 100910"},"PeriodicalIF":6.3000,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart agricultural technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772375525001431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}

Abstract

Cultivating high-yield, high-quality crops is essential to meeting the growing demand for food and fiber from an increasing population. In selective breeding programs, autonomous robotic systems have shown great potential to replace manual phenotypic trait measurements, which are time-consuming and labor-intensive. In this paper, we present a Robot Operating System (ROS)-based phenotyping robot, MARS (Modular Agricultural Robotic System)-PhenoBot, and demonstrate its visual navigation and field mapping capabilities in the Gazebo simulation environment. MARS-PhenoBot is a solar-powered modular robotic platform with a four-wheel-steering, four-wheel-drive configuration. We developed a navigation strategy that fuses multiple cameras to guide the robot along crop rows and through transitions between them, enabling visual navigation across the entire field without relying on global navigation satellite system (GNSS) signals. Three row-detection algorithms, including thresholding-based, detection-based, and segmentation-based methods, were compared and evaluated in simulated crop fields with discontinuous and continuous crop rows, both with and without the presence of weeds. The results showed that the segmentation-based method achieved the lowest average cross-track errors in row detection: 2.5 cm for discontinuous scenarios and 0.8 cm for continuous scenarios. Additionally, a field mapping workflow based on RTAB-MAP (Real-Time Appearance-Based Mapping), a visual simultaneous localization and mapping (V-SLAM) method, was developed. The workflow produced 2D maps identifying crop and weed locations, as well as 3D point-cloud models of crop shapes and structures. Using this mapping workflow, the average crop localization error was 6.4 cm, caused primarily by visual odometry drift. The generated crop point clouds can support further phenotyping analyses, such as crop height/diameter measurement and leaf counting. The methodology developed in this study can be transferred to real-world robots capable of automated phenotyping of in-field crops, providing an effective tool for accelerating selective breeding programs.
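
The abstract compares thresholding-, detection-, and segmentation-based row detection and reports cross-track errors against the row centerline. The sketch below is a minimal, illustrative version of the thresholding-based idea only, assuming a single forward-facing RGB camera and a hypothetical pixel-to-meter calibration constant (`METERS_PER_PIXEL`); the paper's actual pipeline fuses multiple cameras and is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical calibration constant: lateral meters per image pixel near the row.
METERS_PER_PIXEL = 0.002


def detect_row_offset(bgr_image: np.ndarray) -> float:
    """Estimate the lateral cross-track offset (m) of a crop row from the image center.

    A minimal thresholding-based sketch: segment vegetation with the
    excess-green (ExG) index, fit a line to the green pixels, and measure
    how far that line sits from the image's vertical centerline.
    """
    # Normalized RGB chromaticity, then ExG = 2g - r - b.
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    total = b + g + r + 1e-6
    exg = 2.0 * (g / total) - (r / total) - (b / total)

    # Otsu threshold on the ExG index rescaled to 8 bit.
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    ys, xs = np.nonzero(mask)
    if xs.size < 50:  # not enough vegetation pixels to trust a fit
        return 0.0

    # Fit a line x = a*y + c through the vegetation pixels (least squares),
    # then evaluate it at the bottom of the image (closest to the robot).
    a, c = np.polyfit(ys.astype(np.float32), xs.astype(np.float32), 1)
    x_at_bottom = a * (bgr_image.shape[0] - 1) + c

    # Signed pixel offset from the image centerline, converted to meters.
    offset_px = x_at_bottom - bgr_image.shape[1] / 2.0
    return float(offset_px) * METERS_PER_PIXEL
```

A proportional steering controller could consume this signed offset directly; the 2.5 cm and 0.8 cm figures in the abstract are averages of such cross-track offsets, presumably measured against the ground-truth row centerlines available in simulation.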
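
As a rough illustration of the downstream analyses the abstract mentions (crop height/diameter from the reconstructed point clouds, and an average crop localization error such as the reported 6.4 cm), here is a minimal NumPy sketch. The per-plant point array, the assumed ground elevation, and the example position arrays are illustrative assumptions, not the paper's data structures or its exact evaluation protocol.

```python
import numpy as np


def plant_height_and_diameter(points: np.ndarray, ground_z: float = 0.0) -> tuple:
    """Estimate height and canopy diameter (m) from one plant's point cloud.

    `points` is an (N, 3) array of x, y, z coordinates for a single,
    already-segmented plant; `ground_z` is the assumed ground elevation.
    Percentiles are used instead of min/max to reduce the influence of
    stray points from V-SLAM reconstruction noise.
    """
    height = float(np.percentile(points[:, 2], 99) - ground_z)
    x_extent = np.percentile(points[:, 0], 99) - np.percentile(points[:, 0], 1)
    y_extent = np.percentile(points[:, 1], 99) - np.percentile(points[:, 1], 1)
    diameter = float(max(x_extent, y_extent))
    return height, diameter


def mean_localization_error(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Average planar Euclidean distance between mapped and true crop positions.

    Both arrays are (M, 2) and row-aligned, so estimated[i] corresponds to
    ground_truth[i]; this matches the usual definition of an average crop
    localization error such as the one reported in the abstract.
    """
    return float(np.linalg.norm(estimated - ground_truth, axis=1).mean())


if __name__ == "__main__":
    # Tiny usage example with synthetic numbers, purely to show the call pattern.
    rng = np.random.default_rng(0)
    fake_plant = rng.normal(loc=[0.0, 0.0, 0.3], scale=[0.1, 0.1, 0.15], size=(500, 3))
    print(plant_height_and_diameter(fake_plant))
    est = np.array([[1.02, 0.05], [2.01, -0.03]])
    gt = np.array([[1.00, 0.00], [2.00, 0.00]])
    print(f"mean localization error: {mean_localization_error(est, gt):.3f} m")
```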