Joint Semantic-geometric Mapping of Unstructured Environment for Autonomous Mobile Robotic Sprayers

Journal of Field Robotics · Impact Factor 5.2 · JCR Q2 (Robotics) · CAS Tier 2 (Computer Science)
Xubin Lin, Zerong Su, Zhihan Zhu, Pengfei Yuan, Haifei Zhu, Xuefeng Zhou
DOI: 10.1002/rob.22553
Journal of Field Robotics, vol. 42, no. 6, pp. 2952-2967. Published 2025-04-21.
Full text: https://onlinelibrary.wiley.com/doi/10.1002/rob.22553
Citations: 0

Abstract

Mobile robotic sprayers are expected to be employed in outdoor insecticide applications for mosquito control, epidemic prevention, and disinfection. To achieve this, a comprehensive 3D environmental model integrating both semantic and geometric information is indispensable for supporting mobile robotic sprayers in autonomous navigation, task planning, and adaptive spraying control. However, outdoor environments for insecticide spraying, such as public parks and gardens, are typically unstructured, dynamic, and prone to sensor degradation, posing significant challenges to both LiDAR-only and camera-only perception and mapping approaches. In this paper, a visual-LiDAR fusion-based joint semantic-geometric mapping framework is proposed, featuring a novel 2D-3D semantic perception module that is robust against complex segmentation conditions and sensor extrinsic drift. To this end, a Multi-scale Vague Boundary Augmented Dual Attention Network (MDANet), incorporating multi-scale 3D attention modules and vague boundary augmented attention modules, is proposed to tackle the image segmentation task involving dense vegetation with overlapping foliage and ambiguous boundaries. Additionally, a seed-growth-based visual-LiDAR semantic data association method is proposed to resolve the issue of inaccurate pixel-to-point association in the presence of extrinsic drift, yielding more precise 3D semantic perception results. Furthermore, a semantic-aware SLAM system accounting for dynamic interference and pose estimation drift is presented. Extensive experimental evaluations on public datasets and self-recorded data are conducted. The segmentation results show that MDANet achieves a mean pixel accuracy (mPA) of 90.17%, outperforming competing methods in the vegetation-involved segmentation task. The proposed visual-LiDAR semantic data association method can tolerate a translational disturbance of up to 40 mm and a rotational disturbance of 0.18 rad without compromising 3D segmentation accuracy. Moreover, the evaluation of trajectory error, alongside ablation studies, validates the effectiveness and feasibility of the proposed mapping framework.
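The abstract does not detail how the seed-growth-based association works internally, but the underlying problem it addresses is concrete: under extrinsic drift, a LiDAR point projected through slightly wrong extrinsics lands a few pixels away from its true location, so reading the semantic label at the exact projected pixel mislabels the point. A minimal, generic sketch of drift-tolerant pixel-to-point association is shown below; all function names, the windowed majority vote, and the camera model here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the image plane.
    points_lidar: (N, 3); T_cam_lidar: (4, 4) extrinsics; K: (3, 3) intrinsics."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # keep only points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    return uv, in_front

def associate_labels(points_lidar, seg_mask, T_cam_lidar, K, window=5):
    """Assign each LiDAR point the majority semantic label found in a small
    window around its projection, rather than the single pixel it lands on.
    The windowed vote tolerates a few pixels of extrinsic drift; points that
    project outside the image (or behind the camera) stay unassociated (-1)."""
    H, W = seg_mask.shape
    labels = np.full(len(points_lidar), -1, dtype=int)
    uv, in_front = project_points(points_lidar, T_cam_lidar, K)
    idx = np.flatnonzero(in_front)
    r = window // 2
    for i, (u, v) in zip(idx, uv):
        if not (0 <= u < W and 0 <= v < H):
            continue
        patch = seg_mask[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1]
        vals, counts = np.unique(patch, return_counts=True)
        labels[i] = vals[np.argmax(counts)]
    return labels
```

A seed-growth scheme, as named in the paper, would go further than this fixed-window vote: high-confidence associations serve as seeds from which labels propagate to neighboring points, which is what allows the reported tolerance of 40 mm translational and 0.18 rad rotational disturbance.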

Source journal: Journal of Field Robotics (Engineering & Technology, Robotics)
CiteScore: 15.00
Self-citation rate: 3.60%
Annual publications: 80
Review time: 6 months
Journal description: The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments. The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.