{"title":"Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques","authors":"Zhen Chen , Weiguang Zhai , Qian Cheng","doi":"10.1016/j.aiia.2025.04.008","DOIUrl":null,"url":null,"abstract":"<div><div>The leaf area index (LAI) is crucial for precision agriculture management. UAV remote sensing technology has been widely applied for LAI estimation. Although spectral features are widely used for LAI estimation, their performance is often constrained in complex agricultural scenarios due to interference from soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. Therefore, this study evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) in estimating maize LAI. To achieve this goal, field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Subsequently, spectral features, texture features, and crop height were extracted from the multi-spectral remote sensing data to construct a multi-source feature dataset. Then, maize LAI estimation models were developed using multiple linear regression, gradient boosting decision tree, and CNN. The results showed that: (1) Multi-source feature fusion, which integrates spectral features, texture features, and crop height, demonstrated the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.70 to 0.83, the RMSE ranging from 0.44 to 0.60, and the rRMSE ranging from 10.79 % to 14.57 %. In addition, the multi-source feature fusion demonstrates strong adaptability across different growth environments. In Xinxiang, the R<sup>2</sup> ranges from 0.76 to 0.88, the RMSE ranges from 0.35 to 0.50, and the rRMSE ranges from 8.73 % to 12.40 %. In Xuzhou, the R<sup>2</sup> ranges from 0.60 to 0.83, the RMSE ranges from 0.46 to 0.71, and the rRMSE ranges from 10.96 % to 17.11 %. (2) The CNN model outperformed traditional machine learning algorithms in most cases. Moreover, the combination of spectral features, texture features, and crop height using the CNN model achieved the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.83 to 0.88, the RMSE ranging from 0.35 to 0.46, and the rRMSE ranging from 8.73 % to 10.96 %.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 482-495"},"PeriodicalIF":8.2000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Agriculture","FirstCategoryId":"1087","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589721725000480","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
The leaf area index (LAI) is crucial for precision agriculture management, and UAV remote sensing technology has been widely applied to LAI estimation. Although spectral features are widely used for this purpose, their performance is often constrained in complex agricultural scenarios by interference from soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. This study therefore evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) for estimating maize LAI. Field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Spectral features, texture features, and crop height were extracted from the multi-spectral remote sensing data to construct a multi-source feature dataset, and maize LAI estimation models were then developed using multiple linear regression, gradient boosting decision tree, and CNN. The results showed that: (1) Multi-source feature fusion, which integrates spectral features, texture features, and crop height, achieved the highest LAI estimation accuracy among the tested feature combinations, with R² of 0.70–0.83, RMSE of 0.44–0.60, and rRMSE of 10.79%–14.57%. The fused features also demonstrated strong adaptability across different growth environments: in Xinxiang, R² ranged from 0.76 to 0.88, RMSE from 0.35 to 0.50, and rRMSE from 8.73% to 12.40%; in Xuzhou, R² ranged from 0.60 to 0.83, RMSE from 0.46 to 0.71, and rRMSE from 10.96% to 17.11%. (2) The CNN model outperformed the traditional machine learning algorithms in most cases, and the combination of spectral features, texture features, and crop height with the CNN model achieved the highest overall accuracy, with R² of 0.83–0.88, RMSE of 0.35–0.46, and rRMSE of 8.73%–10.96%.
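The abstract evaluates fused spectral, texture, and height features with three regression models using R², RMSE, and rRMSE. The sketch below is a minimal, hypothetical illustration of that evaluation pipeline (column-wise feature fusion, one of the named models, gradient boosting, and the three accuracy metrics); the synthetic data, feature dimensions, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fuse spectral, texture, and height
# features, fit a gradient boosting regressor for maize LAI, and report the
# R2, RMSE, and rRMSE metrics used in the abstract. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
spectral = rng.normal(size=(n, 5))            # e.g. band reflectances / vegetation indices
texture = rng.normal(size=(n, 8))             # e.g. GLCM texture statistics
height = rng.uniform(0.5, 3.0, size=(n, 1))   # crop height (m)
lai = 2.0 + 0.8 * height[:, 0] + 0.3 * spectral[:, 0] + rng.normal(0.0, 0.3, n)

# Multi-source feature fusion: simple column-wise concatenation
X = np.hstack([spectral, texture, height])
X_train, X_test, y_train, y_test = train_test_split(X, lai, test_size=0.3, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Accuracy metrics as reported in the abstract
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
rrmse = 100.0 * rmse / y_test.mean()          # relative RMSE, % of mean observed LAI
print(f"R2={r2:.2f}  RMSE={rmse:.2f}  rRMSE={rrmse:.1f}%")
```

The CNN variant reported in the abstract would replace the gradient boosting step with a small convolutional regression network over the same fused inputs; the metric definitions are unchanged.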