Recognition of field roads based on improved U-Net++ Network

IF 2.2 | CAS Zone 2, Agricultural and Forestry Sciences | JCR Q2, Agricultural Engineering
Lili Yang, Yuanbo Li, Mengshuai Chang, Yuanyuan Xu, Bingbing Hu, Xinxin Wang, Caicong Wu
DOI: 10.25165/j.ijabe.20231602.7941 | Published: 2023 | Citations: 0

Abstract

Unmanned driving of agricultural machinery has garnered significant attention in recent years, especially with the development of precision farming and sensor technologies. To achieve high performance at low cost, perception tasks are of great importance. In this study, a low-cost, high-safety method was proposed for field road recognition in unmanned agricultural machinery. The approach took low-resolution lidar point clouds as inputs and generated high-resolution point clouds and Bird's Eye View (BEV) images encoded with several basic statistics. Using the BEV representation, road detection was reduced to a single-scale problem that could be addressed with an improved U-Net++ neural network. Three enhancements were proposed for U-Net++: 1) replacing the convolutional kernel in the original U-Net++ with an Asymmetric Convolution Block (ACBlock); 2) adding a multi-branch Asymmetric Dilated Convolutional Block (MADC) in the highest semantic information layer; 3) adding an Attention Gate (AG) module to the long skip connections in the decoding stage. Experimental results showed that the algorithm achieved a Mean Intersection over Union of 96.54% on the 16-channel point clouds, 7.35 percentage points higher than U-Net++. Furthermore, the average processing time of the model was about 70 ms, meeting the timing requirements of unmanned driving of agricultural machinery. The proposed method can be applied to enhance the perception ability of unmanned agricultural machinery, thereby increasing the safety of field road driving.
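The abstract describes rasterizing lidar point clouds into a BEV image whose channels encode "several basic statistics", but does not list the exact statistics, grid extents, or resolution. The sketch below is a minimal illustration of that idea only, assuming maximum height, mean height, and clipped point density as the three channels; the window size and 0.1 m cell size are placeholders, not values from the paper.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.1):
    """Rasterize an (N, 3) lidar point cloud into a 3-channel BEV image.

    Channels (assumed for illustration): max height, mean height,
    clipped point density per grid cell.
    """
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    max_h = np.full((h, w), -np.inf, dtype=np.float32)
    sum_h = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    # Keep only points that fall inside the grid.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep].astype(np.float32)

    # Map metric x/y coordinates to integer cell indices.
    rows = ((pts[:, 0] - x_range[0]) / cell).astype(np.int64)
    cols = ((pts[:, 1] - y_range[0]) / cell).astype(np.int64)

    # Accumulate per-cell statistics.
    np.maximum.at(max_h, (rows, cols), pts[:, 2])
    np.add.at(sum_h, (rows, cols), pts[:, 2])
    np.add.at(counts, (rows, cols), 1.0)

    occupied = counts > 0
    mean_h = np.where(occupied, sum_h / np.maximum(counts, 1.0), 0.0)
    max_h[~occupied] = 0.0
    density = np.minimum(counts / 16.0, 1.0)  # clip so values stay in [0, 1]
    return np.stack([max_h, mean_h, density]).astype(np.float32)
```

With the assumed cell size, the window above yields a fixed-size 400x400 image, which illustrates how the BEV representation turns road detection into a single-scale 2D segmentation problem.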
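The three architectural changes are only named in the abstract. As a rough sketch of what the first and third could look like, the PyTorch modules below implement an asymmetric convolution block in the spirit of ACNet (parallel 3x3, 1x3, and 3x1 branches summed) and an additive attention gate as used in Attention U-Net on skip connections. Channel counts, normalization choices, and the exact placement inside the paper's U-Net++ are not given in the source and are assumed here; the MADC block is omitted because its dilation rates are unspecified.

```python
import torch
import torch.nn as nn

class ACBlock(nn.Module):
    """Asymmetric convolution block: parallel 3x3, 1x3, 3x1 branches summed."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.square = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
                                    nn.BatchNorm2d(out_ch))
        self.hor = nn.Sequential(nn.Conv2d(in_ch, out_ch, (1, 3), padding=(0, 1), bias=False),
                                 nn.BatchNorm2d(out_ch))
        self.ver = nn.Sequential(nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0), bias=False),
                                 nn.BatchNorm2d(out_ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.square(x) + self.hor(x) + self.ver(x))


class AttentionGate(nn.Module):
    """Additive attention gate: weights encoder skip features by a decoder gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1, bias=False)
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1, bias=False)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())
        self.act = nn.ReLU(inplace=True)

    def forward(self, skip, gate):
        # skip and gate are assumed to share spatial size (upsample gate first if not).
        alpha = self.psi(self.act(self.w_x(skip) + self.w_g(gate)))
        return skip * alpha


if __name__ == "__main__":
    x = torch.randn(1, 64, 100, 100)    # encoder feature map
    g = torch.randn(1, 128, 100, 100)   # decoder gating signal
    feats = ACBlock(64, 64)(x)
    gated = AttentionGate(64, 128, 32)(feats, g)
    print(feats.shape, gated.shape)
```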
Source Journal
CiteScore: 4.30 | Self-citation rate: 12.50% | Articles published: 88 | Review time: 24 weeks
About the journal: International Journal of Agricultural and Biological Engineering (IJABE, https://www.ijabe.org) is a peer-reviewed, open-access international journal. Started in 2008, IJABE is a joint publication co-sponsored by the US-based Association of Overseas Chinese Agricultural, Biological and Food Engineers (AOCABFE) and the China-based Chinese Society of Agricultural Engineering (CSAE). The ISSN 1934-6344 and eISSN 1934-6352 numbers for the print and online versions of IJABE are registered in the US. IJABE is published in both online and print versions by the Chinese Academy of Agricultural Engineering.