Alwaseela Abdalla, Masara M.A. Mohammed, Oluwatola Adedeji, Peter Dotray, Wenxuan Guo

Smart Agricultural Technology, Volume 12, Article 101086. DOI: 10.1016/j.atech.2025.101086. Published 2025-06-07.
https://www.sciencedirect.com/science/article/pii/S2772375525003193
Toward resource-efficient UAV systems: Deep learning model compression for onboard-ready weed detection in UAV imagery
Convolutional neural networks (CNNs) have emerged as a powerful tool for detecting weeds in unmanned aerial vehicle (UAV) imagery. However, deploying deep learning models on UAVs for onboard processing is hindered by their large size, which demands significant computational resources. To address this challenge, we applied two compression techniques, pruning and quantization, both independently and in combination, and assessed their effectiveness in reducing model size with minimal accuracy loss. Using the DeepLab v3+ model with several backbones (ResNet-18, ResNet-50, MobileNet-v2, and Xception), the study systematically investigates these techniques in the context of in-field weed detection. We developed a pruning technique in which less important parameters are iteratively reinitialized before final pruning, with the importance of each parameter evaluated using a Taylor expansion-based criterion. We fine-tuned the pruned model on the UAV dataset to mitigate performance loss from pruning, then applied quantization to reduce the numerical precision of parameters and improve computational efficiency. Pruning alone reduced model size by ∼55–65 % with only a 1–3 % accuracy drop, while quantization alone achieved a ∼35–50 % reduction with slightly higher degradation. Combined, they yielded up to a 75 % reduction in model size while maintaining over 90 % accuracy, particularly for ResNet-50 and Xception, which were more resilient to compression than MobileNet-v2. Compressed models tested on the NVIDIA Jetson AGX Xavier and Jetson AGX Orin achieved 40.7 % and 52.3 % latency reductions, respectively. These results confirm the models' efficiency and readiness for edge deployment and support future deployment of efficient, site-specific weed detection on UAVs. Future research will focus on deploying the compressed models in actual field operations to evaluate their real-time performance and practical effectiveness.
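The two compression steps described above can be illustrated with a minimal, self-contained sketch: a first-order Taylor criterion scores each parameter by |w · ∂L/∂w| (an estimate of the loss change if the weight were removed), the lowest-scoring fraction is pruned, and the survivors are mapped to 8-bit integers by affine quantization. This is a toy illustration of the general techniques, not the authors' implementation; the helper names (`taylor_importance`, `prune`, `quantize_int8`, `dequantize`) and the per-weight, whole-tensor treatment are our assumptions for the sketch.

```python
# Illustrative sketch only (hypothetical helpers, not the paper's code):
# Taylor-expansion importance scoring, fraction-based pruning, and
# simple affine int8 quantization of the remaining weights.

def taylor_importance(weights, grads):
    # First-order Taylor criterion: |w * dL/dw| approximates the
    # increase in loss if the weight were set to zero.
    return [abs(w * g) for w, g in zip(weights, grads)]

def prune(weights, grads, fraction):
    # Zero out the `fraction` of weights with the lowest importance.
    scores = taylor_importance(weights, grads)
    k = int(len(weights) * fraction)
    threshold = sorted(scores)[k - 1] if k > 0 else float("-inf")
    return [0.0 if s <= threshold else w
            for w, s in zip(weights, scores)]

def quantize_int8(values):
    # Asymmetric affine quantization: map the float range [lo, hi]
    # onto the int8 range [-128, 127] via a scale and zero point.
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point))
         for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate floats; the error is bounded by the scale.
    return [(qi - zero_point) * scale for qi in q]
```

In a real pipeline the importance scores would be accumulated over mini-batches during training, pruning would be applied per layer or per channel, and quantization would use calibrated per-tensor or per-channel scales (as frameworks such as PyTorch and TensorRT do); the sketch only shows the arithmetic at the core of both steps.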