Weed detection based on deep learning from UAV imagery: A review

IF 5.7 Q1 AGRICULTURAL ENGINEERING
Lucía Sandoval-Pillajo, Iván García-Santillán, Marco Pusdá-Chulde, Adriana Giret
{"title":"基于无人机图像深度学习的杂草检测研究进展","authors":"Lucía Sandoval-Pillajo ,&nbsp;Iván García-Santillán ,&nbsp;Marco Pusdá-Chulde ,&nbsp;Adriana Giret","doi":"10.1016/j.atech.2025.101147","DOIUrl":null,"url":null,"abstract":"<div><div>Weeds are undesirable plants that compete with crops for essential resources such as light, soil, water, and nutrients. Additionally, they can harbor pests that reduce crop yields. In traditional agriculture, weed control is based on applying pesticides throughout the agricultural field, resulting in soil damage, environmental contamination, damage to farm products, and risks to human health. Precision agriculture (PA) has evolved in recent years thanks to sensors, hardware, software, and innovations in unmanned aerial vehicle (UAV) systems. These systems aim to improve the localized application of chemicals in weed control by using advanced image analysis techniques, computer vision, deep learning (DL), and geo-positioning (GPS) to detect and recognize weeds. This subsequently facilitates the implementation of specific control mechanisms in real environments. Recently, automatic weed detection techniques have been developed using UAV imagery. However, these face a significant challenge due to the morphological similarities between weeds and crops, such as color, shape, and texture, which makes their practical and effective differentiation and implementation difficult. This paper presents a systematic literature review (SLR) based on 77 recent and relevant studies on weed detection and classification in UAV imagery using DL architectures. The analysis focuses on key aspects such as using UAVs and sensors, image acquisition and processing, DL architecture, and evaluation metrics. The review covers publications from 2017 to June 2024 from WoS, Scopus, ScienceDirect, SpringerLink, and IEEE Xplore databases. The results allowed the identification of various limitations, trends, gaps, and opportunities for future research. In general, there is a predominant use of multirotor UAVs, particularly the DJI Phantom with RGB sensors, showing a trend towards the integration of multiple sensors (multispectral, LiDAR) operating at heights of around 10 meters, providing good spatial coverage in data acquisition. Likewise, the rapid development of deep learning architectures has driven CNN models such as ResNet for classification, YOLO for detection, U-Net for semantic segmentation, and Mask R-CNN for weed instance segmentation, with a tendency towards new Transformer-based and hybrid architectures. The most common metrics used to evaluate these models include precision, recall, F1-Score, and mAP.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101147"},"PeriodicalIF":5.7000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Weed detection based on deep learning from UAV imagery: A review\",\"authors\":\"Lucía Sandoval-Pillajo ,&nbsp;Iván García-Santillán ,&nbsp;Marco Pusdá-Chulde ,&nbsp;Adriana Giret\",\"doi\":\"10.1016/j.atech.2025.101147\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Weeds are undesirable plants that compete with crops for essential resources such as light, soil, water, and nutrients. Additionally, they can harbor pests that reduce crop yields. 
In traditional agriculture, weed control is based on applying pesticides throughout the agricultural field, resulting in soil damage, environmental contamination, damage to farm products, and risks to human health. Precision agriculture (PA) has evolved in recent years thanks to sensors, hardware, software, and innovations in unmanned aerial vehicle (UAV) systems. These systems aim to improve the localized application of chemicals in weed control by using advanced image analysis techniques, computer vision, deep learning (DL), and geo-positioning (GPS) to detect and recognize weeds. This subsequently facilitates the implementation of specific control mechanisms in real environments. Recently, automatic weed detection techniques have been developed using UAV imagery. However, these face a significant challenge due to the morphological similarities between weeds and crops, such as color, shape, and texture, which makes their practical and effective differentiation and implementation difficult. This paper presents a systematic literature review (SLR) based on 77 recent and relevant studies on weed detection and classification in UAV imagery using DL architectures. The analysis focuses on key aspects such as using UAVs and sensors, image acquisition and processing, DL architecture, and evaluation metrics. The review covers publications from 2017 to June 2024 from WoS, Scopus, ScienceDirect, SpringerLink, and IEEE Xplore databases. The results allowed the identification of various limitations, trends, gaps, and opportunities for future research. In general, there is a predominant use of multirotor UAVs, particularly the DJI Phantom with RGB sensors, showing a trend towards the integration of multiple sensors (multispectral, LiDAR) operating at heights of around 10 meters, providing good spatial coverage in data acquisition. Likewise, the rapid development of deep learning architectures has driven CNN models such as ResNet for classification, YOLO for detection, U-Net for semantic segmentation, and Mask R-CNN for weed instance segmentation, with a tendency towards new Transformer-based and hybrid architectures. The most common metrics used to evaluate these models include precision, recall, F1-Score, and mAP.</div></div>\",\"PeriodicalId\":74813,\"journal\":{\"name\":\"Smart agricultural technology\",\"volume\":\"12 \",\"pages\":\"Article 101147\"},\"PeriodicalIF\":5.7000,\"publicationDate\":\"2025-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart agricultural technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S277237552500379X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURAL ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart agricultural technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S277237552500379X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Weeds are undesirable plants that compete with crops for essential resources such as light, soil, water, and nutrients. Additionally, they can harbor pests that reduce crop yields. In traditional agriculture, weed control relies on applying pesticides across the entire field, resulting in soil damage, environmental contamination, damage to farm products, and risks to human health. Precision agriculture (PA) has evolved in recent years thanks to sensors, hardware, software, and innovations in unmanned aerial vehicle (UAV) systems. These systems aim to improve the localized application of chemicals for weed control by using advanced image analysis techniques, computer vision, deep learning (DL), and geo-positioning (GPS) to detect and recognize weeds, which in turn facilitates the implementation of targeted control mechanisms in real environments. Recently, automatic weed detection techniques based on UAV imagery have been developed. However, these techniques face a significant challenge: weeds and crops are morphologically similar in color, shape, and texture, which makes practical and effective differentiation difficult. This paper presents a systematic literature review (SLR) of 77 recent and relevant studies on weed detection and classification in UAV imagery using DL architectures. The analysis focuses on key aspects such as the use of UAVs and sensors, image acquisition and processing, DL architectures, and evaluation metrics. The review covers publications from 2017 to June 2024 indexed in the WoS, Scopus, ScienceDirect, SpringerLink, and IEEE Xplore databases. The results allowed the identification of various limitations, trends, gaps, and opportunities for future research. In general, multirotor UAVs predominate, particularly the DJI Phantom with RGB sensors, with a trend towards integrating additional sensors (multispectral, LiDAR) and operating at altitudes of around 10 meters, which provides good spatial coverage during data acquisition. Likewise, the rapid development of deep learning architectures has driven CNN models such as ResNet for classification, YOLO for detection, U-Net for semantic segmentation, and Mask R-CNN for weed instance segmentation, with a tendency towards new Transformer-based and hybrid architectures. The most common metrics used to evaluate these models include precision, recall, F1-score, and mAP.
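
The evaluation metrics listed above (precision, recall, F1-score, and mAP) all derive from counting true positives, false positives, and false negatives among detections matched to ground truth by Intersection over Union (IoU). The following Python sketch illustrates this computation for a single image; the bounding-box format, the 0.5 IoU threshold, and the greedy matching strategy are illustrative assumptions rather than a protocol taken from the reviewed studies, and mAP (which additionally averages precision over recall levels and classes) is omitted for brevity.

```python
# Minimal sketch of per-image detection metrics for weed bounding boxes.
# Assumptions (not from the review): boxes as (x1, y1, x2, y2), IoU >= 0.5
# counts as a correct detection, and predictions are matched greedily.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_metrics(predictions, ground_truths, iou_threshold=0.5):
    """Precision, recall, and F1-score from greedy one-to-one box matching."""
    matched, tp = set(), 0
    for pred in predictions:
        best_iou, best_idx = 0.0, None
        for idx, gt in enumerate(ground_truths):
            if idx in matched:
                continue
            score = iou(pred, gt)
            if score > best_iou:
                best_iou, best_idx = score, idx
        if best_idx is not None and best_iou >= iou_threshold:
            matched.add(best_idx)
            tp += 1
    fp = len(predictions) - tp      # predicted boxes with no matching weed
    fn = len(ground_truths) - tp    # weeds the model missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: two predicted boxes, two annotated weeds (pixel coordinates).
preds = [(10, 10, 50, 50), (60, 60, 100, 100)]
gts = [(12, 8, 48, 52), (200, 200, 240, 240)]
print(detection_metrics(preds, gts))  # -> (0.5, 0.5, 0.5)
```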