Toward resource-efficient UAV systems: Deep learning model compression for onboard-ready weed detection in UAV imagery

Impact Factor: 5.7 · Q1 (Agricultural Engineering)
Alwaseela Abdalla, Masara M.A. Mohammed, Oluwatola Adedeji, Peter Dotray, Wenxuan Guo
{"title":"Toward resource-efficient UAV systems: Deep learning model compression for onboard-ready weed detection in UAV imagery","authors":"Alwaseela Abdalla ,&nbsp;Masara M.A. Mohammed ,&nbsp;Oluwatola Adedeji ,&nbsp;Peter Dotray ,&nbsp;Wenxuan Guo","doi":"10.1016/j.atech.2025.101086","DOIUrl":null,"url":null,"abstract":"<div><div>Convolutional neural networks (CNNs) have emerged as a powerful tool for detecting weeds in unmanned aerial vehicle (UAV) imagery. However, the deployment of deep learning models in UAVs intended for onboard processing is hindered by their large size, which demands significant computational resources. To address these challenges, we applied two compression techniques—pruning and quantization—both independently and in combination, to assess their effectiveness in reducing model size with minimal accuracy loss. Using the DeepLab v3+ model with various backbones, including ResNet-18, ResNet-50, MobileNet-v2, and Xception, the study systematically investigates these techniques in the context of in-field weed detection. We developed a pruning technique that gives less important parameters to be reinitialized iteratively before final pruning. The importance of each parameter was evaluated using a Taylor expansion-based criterion. We fine-tuned the pruned model on the UAV dataset to mitigate any performance loss resulting from pruning. We then applied quantization to reduce the precision of numerical parameters and improve computational efficiency. Pruning alone reduced model size by ∼55–65 % with only a 1–3 % accuracy drop, while quantization alone achieved ∼35–50 % reduction with slightly higher degradation. Combined, they yielded up to 75 % model size reduction while maintaining over 90 % accuracy, particularly for ResNet-50 and Xception, which were more resilient than MobileNet-v2. Compressed models were tested on NVIDIA Jetson AGX Xavier and Jetson AGX Orin, achieving 40.7 % and 52.3 % latency reduction respectively. These results confirm the models' efficiency and readiness for edge deployment. These results support future deployment of efficient, site-specific weed detection on UAVs. Future research will focus on deploying the compressed models in actual field operations to evaluate their real-time performance and practical effectiveness.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101086"},"PeriodicalIF":5.7000,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart agricultural technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772375525003193","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Convolutional neural networks (CNNs) have emerged as a powerful tool for detecting weeds in unmanned aerial vehicle (UAV) imagery. However, deploying deep learning models onboard UAVs is hindered by their large size, which demands significant computational resources. To address these challenges, we applied two compression techniques, pruning and quantization, both independently and in combination, to assess their effectiveness in reducing model size with minimal accuracy loss. Using the DeepLab v3+ model with various backbones, including ResNet-18, ResNet-50, MobileNet-v2, and Xception, the study systematically investigates these techniques in the context of in-field weed detection. We developed a pruning technique in which less important parameters are iteratively reinitialized before final pruning, with the importance of each parameter evaluated using a Taylor expansion-based criterion. We fine-tuned the pruned model on the UAV dataset to mitigate any performance loss resulting from pruning. We then applied quantization to reduce the precision of numerical parameters and improve computational efficiency. Pruning alone reduced model size by ~55–65% with only a 1–3% accuracy drop, while quantization alone achieved a ~35–50% reduction with slightly higher degradation. Combined, they yielded up to a 75% model size reduction while maintaining over 90% accuracy, particularly for ResNet-50 and Xception, which were more resilient than MobileNet-v2. Compressed models were tested on the NVIDIA Jetson AGX Xavier and Jetson AGX Orin, achieving 40.7% and 52.3% latency reductions, respectively. These results confirm the models' efficiency and readiness for edge deployment and support future use of efficient, site-specific weed detection on UAVs. Future research will focus on deploying the compressed models in actual field operations to evaluate their real-time performance and practical effectiveness.
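To make the two compression steps concrete, the sketch below illustrates a first-order Taylor importance score (importance proportional to |w · dL/dw|), reinitialization of the least important parameters before pruning, and post-training quantization. It is a minimal sketch assuming a PyTorch workflow: the toy model, helper names, and the choice of dynamic int8 quantization are illustrative assumptions and do not reproduce the paper's DeepLab v3+ backbones, pruning schedule, or deployment pipeline.

```python
# Minimal sketch of Taylor-criterion scoring, reinitialize-before-prune, and
# post-training quantization, assuming a PyTorch workflow. The toy model and
# helper functions are hypothetical stand-ins for the paper's pipeline.
import torch
import torch.nn as nn


def taylor_importance(model: nn.Module, loss: torch.Tensor) -> dict:
    """First-order Taylor criterion: importance ~ |w * dL/dw|, summed per parameter tensor."""
    loss.backward()
    return {
        name: (p.detach() * p.grad.detach()).abs().sum().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }


def reinit_least_important(model: nn.Module, scores: dict, fraction: float = 0.2) -> None:
    """Reinitialize the lowest-scoring tensors -- a stand-in for the iterative
    'reinitialize less important parameters before final pruning' step."""
    params = dict(model.named_parameters())
    ranked = sorted(scores, key=scores.get)
    for name in ranked[: max(1, int(fraction * len(ranked)))]:
        nn.init.normal_(params[name], mean=0.0, std=0.01)


# --- usage sketch on a toy classifier (stand-in for a segmentation backbone) ---
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
x, y = torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)

scores = taylor_importance(model, loss)
reinit_least_important(model, scores, fraction=0.2)
# ... fine-tune on the UAV dataset here to recover accuracy lost to pruning ...

# Post-training dynamic quantization (int8 weights) as one possible quantization
# step; it only covers Linear layers, so a convolutional backbone would need
# static quantization or quantization-aware training, which this sketch omits.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```

In practice the scoring and reinitialization would be repeated over several training iterations before the final pruning pass, and the compressed model would then be exported for the target edge device (e.g., a Jetson-class board) to measure latency.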