{"title":"Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques","authors":"Zhen Chen , Weiguang Zhai , Qian Cheng","doi":"10.1016/j.aiia.2025.04.008","DOIUrl":"10.1016/j.aiia.2025.04.008","url":null,"abstract":"<div><div>The leaf area index (LAI) is crucial for precision agriculture management. UAV remote sensing technology has been widely applied for LAI estimation. Although spectral features are widely used for LAI estimation, their performance is often constrained in complex agricultural scenarios due to interference from soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. Therefore, this study evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) in estimating maize LAI. To achieve this goal, field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Subsequently, spectral features, texture features, and crop height were extracted from the multi-spectral remote sensing data to construct a multi-source feature dataset. Then, maize LAI estimation models were developed using multiple linear regression, gradient boosting decision tree, and CNN. The results showed that: (1) Multi-source feature fusion, which integrates spectral features, texture features, and crop height, demonstrated the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.70 to 0.83, the RMSE ranging from 0.44 to 0.60, and the rRMSE ranging from 10.79 % to 14.57 %. In addition, the multi-source feature fusion demonstrates strong adaptability across different growth environments. In Xinxiang, the R<sup>2</sup> ranges from 0.76 to 0.88, the RMSE ranges from 0.35 to 0.50, and the rRMSE ranges from 8.73 % to 12.40 %. In Xuzhou, the R<sup>2</sup> ranges from 0.60 to 0.83, the RMSE ranges from 0.46 to 0.71, and the rRMSE ranges from 10.96 % to 17.11 %. (2) The CNN model outperformed traditional machine learning algorithms in most cases. Moreover, the combination of spectral features, texture features, and crop height using the CNN model achieved the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.83 to 0.88, the RMSE ranging from 0.35 to 0.46, and the rRMSE ranging from 8.73 % to 10.96 %.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 482-495"},"PeriodicalIF":8.2,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143898806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping of soil sampling sites using terrain and hydrological attributes","authors":"Tan-Hanh Pham , Kristopher Osterloh , Kim-Doang Nguyen","doi":"10.1016/j.aiia.2025.04.007","DOIUrl":"10.1016/j.aiia.2025.04.007","url":null,"abstract":"<div><div>Efficient soil sampling is essential for effective soil management and research on soil health. Traditional site selection methods are labor-intensive and fail to capture soil variability comprehensively. This study introduces a deep learning-based tool that automates soil sampling site selection using spectral images. The proposed framework consists of two key components: an extractor and a predictor. The extractor, based on a convolutional neural network (CNN), derives features from spectral images, while the predictor employs self-attention mechanisms to assess feature importance and generate prediction maps. The model is designed to process multiple spectral images and address the class imbalance in soil segmentation.</div><div>The model was trained on a soil dataset from 20 fields in eastern South Dakota, collected via drone-mounted LiDAR with high-precision GPS. Evaluation on a test set achieved a mean intersection over union (mIoU) of 69.46 % and a mean Dice coefficient (mDc) of 80.35 %, demonstrating strong segmentation performance. The results highlight the model's effectiveness in automating soil sampling site selection, providing an advanced tool for producers and soil scientists. Compared to existing state-of-the-art methods, the proposed approach improves accuracy and efficiency, optimizing soil sampling processes and enhancing soil research.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 470-481"},"PeriodicalIF":8.2,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143887644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal behavior recognition for dairy cow digital twin construction under incomplete modalities: A modality mapping completion network approach","authors":"Yi Zhang , Yu Zhang , Meng Gao , Xinjie Wang , Baisheng Dai , Weizheng Shen","doi":"10.1016/j.aiia.2025.04.005","DOIUrl":"10.1016/j.aiia.2025.04.005","url":null,"abstract":"<div><div>The recognition of dairy cow behavior is essential for enhancing health management, reproductive efficiency, production performance, and animal welfare. This paper addresses the challenge of modality loss in multimodal dairy cow behavior recognition algorithms, which can be caused by sensor or video signal disturbances arising from interference, harsh environmental conditions, extreme weather, network fluctuations, and other complexities inherent in farm environments. This study introduces a modality mapping completion network that maps incomplete sensor and video data to improve multimodal dairy cow behavior recognition under conditions of modality loss. By mapping incomplete sensor or video data, the method applies a multimodal behavior recognition algorithm to identify five specific behaviors: drinking, feeding, lying, standing, and walking. The results indicate that, under various comprehensive missing coefficients (λ), the method achieves an average accuracy of 97.87 % ± 0.15 %, an average precision of 95.19 % ± 0.4 %, and an average F1 score of 94.685 % ± 0.375 %, with an overall accuracy of 94.67 % ± 0.37 %. This approach enhances the robustness and applicability of cow behavior recognition based on multimodal data in situations of modality loss, resolving practical issues in the development of digital twins for cow behavior and providing comprehensive support for the intelligent and precise management of farms.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 459-469"},"PeriodicalIF":8.2,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143873553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint optimization of AI large and small models for surface temperature and emissivity retrieval using knowledge distillation","authors":"Wang Dai , Kebiao Mao , Zhonghua Guo , Zhihao Qin , Jiancheng Shi , Sayed M. Bateni , Liurui Xiao","doi":"10.1016/j.aiia.2025.03.009","DOIUrl":"10.1016/j.aiia.2025.03.009","url":null,"abstract":"<div><div>The rapid advancement of artificial intelligence in domains such as natural language processing has catalyzed AI research across various fields. This study introduces a novel strategy, the AutoKeras-Knowledge Distillation (AK-KD), which integrates knowledge distillation technology for joint optimization of large and small models in the retrieval of surface temperature and emissivity using thermal infrared remote sensing. The approach addresses the challenges of limited accuracy in surface temperature retrieval by employing a high-performance large model developed through AutoKeras as the teacher model, which subsequently enhances a less accurate small model through knowledge distillation. The resultant student model is interactively integrated with the large model to further improve specificity and generalization capabilities. Theoretical derivations and practical applications validate that the AK-KD strategy significantly enhances the accuracy of temperature and emissivity retrieval. For instance, a large model trained with simulated ASTER data achieved a Pearson Correlation Coefficient (PCC) of 0.999 and a Mean Absolute Error (MAE) of 0.348 K in surface temperature retrieval. In practical applications, this model demonstrated a PCC of 0.967 and an MAE of 0.685 K. Although the large model exhibits high average accuracy, its precision in complex terrains is comparatively lower. To ameliorate this, the large model, serving as a teacher, enhances the small model's local accuracy. Specifically, in surface temperature retrieval, the small model's PCC improved from an average of 0.978 to 0.979, and the MAE decreased from 1.065 K to 0.724 K. In emissivity retrieval, the PCC rose from an average of 0.827 to 0.898, and the MAE reduced from 0.0076 to 0.0054. This research not only provides robust technological support for further development of thermal infrared remote sensing in temperature and emissivity retrieval but also offers important references and key technological insights for the universal model construction of other geophysical parameter retrievals.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 407-425"},"PeriodicalIF":8.2,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143837982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective methods for mitigate the impact of light occlusion on the accuracy of online cabbage recognition in open fields","authors":"Hao Fu , Xueguan Zhao , Haoran Tan , Shengyu Zheng , Changyuan Zhai , Liping Chen","doi":"10.1016/j.aiia.2025.04.002","DOIUrl":"10.1016/j.aiia.2025.04.002","url":null,"abstract":"<div><div>To address the low recognition accuracy of open-field vegetables under light occlusion, this study focused on cabbage and developed an online target recognition model based on deep learning. Using Yolov8n as the base network, a method was proposed to mitigate the impact of light occlusion on the accuracy of online cabbage recognition. A combination of cabbage image filters was designed to eliminate the effects of light occlusion. A filter parameter adaptive learning module for cabbage image filter parameters was constructed. The image filter combination and adaptive learning module were embedded into the Yolov8n object detection network. This integration enabled precise real-time recognition of cabbage under light occlusion conditions. Experimental results showed recognition accuracies of 97.5 % on the normal lighting dataset, 93.1 % on the light occlusion dataset, and 95.0 % on the mixed dataset. For images with a light occlusion degree greater than 0.4, the recognition accuracy improved by 9.9 % and 13.7 % compared to Yolov5n and Yolov8n models. The model achieved recognition accuracies of 99.3 % on the Chinese cabbage dataset and 98.3 % on the broccoli dataset. The model was deployed on an Nvidia Jetson Orin NX edge computing device, achieving an image processing speed of 26.32 frames per second. Field trials showed recognition accuracies of 96.0 % under normal lighting conditions and 91.2 % under light occlusion. The proposed online cabbage recognition model enables real-time recognition and localization of cabbage in complex open-field environments, offering technical support for target-oriented spraying.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 449-458"},"PeriodicalIF":8.2,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture","authors":"Shi Yinyan, Zhu Yangxu, Wang Xiaochan, Zhang Xiaolei, Zheng Enlai, Zhang Yongnian","doi":"10.1016/j.aiia.2025.04.003","DOIUrl":"10.1016/j.aiia.2025.04.003","url":null,"abstract":"<div><div>Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite the advantages of a simple structure, low cost and high efficiency, uneven fertilizer-spreading uniformity is becoming a key factor restricting the application of centrifugal fertilizer spreaders. Accordingly, the particle application characteristics and variation laws for centrifugal VRF spreaders with multi-pass overlapped spreading needs to be urgently explored, in order to improve their distribution uniformity and working accuracy. In this study, the working performance of a self-developed centrifugal VRF spreader, based on real-time growth information of rice and wheat, was investigated and tested through the test methods of using the collection trays prescribed in ISO 5690 and ASAE S341.2. The coefficient of variation (CV) was calculated by weighing the fertilizer mass in standard pans, in order to evaluate the distribution uniformity of spreading patterns. The results showed that the effective application widths were 21.05, 22.58 and 23.67 m for application rates of 225, 300 and 375 kg/ha, respectively. The actual fertilizer application rates of multi-pass overlapped spreading were generally higher than the target rates, as well as the particle distribution CVs within the effective spreading widths were 11.51, 9.25 and 11.28 % for the respective target rates. Field test results for multi-pass overlapped spreading showed that the average difference between the actual and target application was 4.54 %, as well as the average particle distribution CV within the operating width was 11.94 %, which met the operation requirements of particle transverse distribution for centrifugal fertilizer spreaders. The results and findings of this study provide a theoretical reference for technical innovation and development of centrifugal VRF spreaders and are of great practical and social significance for accelerating their application in implementing precision agriculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 395-406"},"PeriodicalIF":8.2,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour","authors":"Yuqing Yang , Chengguo Xu , Wenhao Hou , Alan G. McElligott , Kai Liu , Yueju Xue","doi":"10.1016/j.aiia.2025.03.006","DOIUrl":"10.1016/j.aiia.2025.03.006","url":null,"abstract":"<div><div>Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status. However, accurately identifying these behaviours for individual sows in complex indoor pig housing is challenging due to factors such as variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion, which integrates audio and visual data, has proven to be an effective approach for improving accuracy and robustness in complex scenarios. In this study, we designed an audio-visual data acquisition system that includes a camera for synchronised audio and video capture, along with a custom-developed sound source localisation system that leverages a sound sensor to track sound direction. Specifically, we proposed a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. Initially, a unimodal self-attention enhancement (USE) module was employed to augment video and audio features with global contextual information. Subsequently, we developed an audio-visual interaction enhancement (AVIE) module to compress relevant information and reduce noise using the information bottleneck principle. Moreover, we presented an adaptive dynamic decision fusion strategy to optimise the model's performance by focusing on the most relevant features in each modality. Finally, we comprehensively identified fine-grained nursing behaviours by integrating audio and fused information, while incorporating angle information from the real-time sound source localisation system to accurately determine whether the sound cues originate from the target sow. Our results demonstrate that the proposed method achieves an accuracy of 98.42 % for general sow nursing behaviour and 94.37 % for fine-grained nursing behaviour, including nursing with and without the calling-to-nurse sound, and non-nursing behaviours. This fine-grained nursing information can provide a more nuanced understanding of the sow's health and lactation willingness, thereby enhancing management practices in pig farming.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 363-376"},"PeriodicalIF":8.2,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques","authors":"Josué Kpodo , A. Pouyan Nejadhashemi","doi":"10.1016/j.aiia.2025.04.001","DOIUrl":"10.1016/j.aiia.2025.04.001","url":null,"abstract":"<div><div>Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 426-448"},"PeriodicalIF":8.2,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques","authors":"Rabiu Aminu , Samantha M. Cook , David Ljungberg , Oliver Hensel , Abozar Nasirahmadi","doi":"10.1016/j.aiia.2025.03.008","DOIUrl":"10.1016/j.aiia.2025.03.008","url":null,"abstract":"<div><div>To reduce damage caused by insect pests, farmers use insecticides to protect produce from crop pests. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is through the selective application of insecticides to only those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step to achieve this is the identification of insects on plants and discrimination between pests and beneficial non-targets. However, detecting small-sized individual insects is challenging using image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects—the Colorado potato beetle (CPB, <em>Leptinotarsa decemlineata</em>) and green peach aphid (<em>Myzus persicae</em>)—and the beneficial seven-spot ladybird (<em>Coccinella septempunctata</em>). The specialist herbivore CPB was imaged only on potato plants (<em>Solanum tuberosum</em>) while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (<em>Vicia faba)</em>, and sugar beet (<em>Beta vulgaris</em> subsp. <em>vulgaris</em>). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture, and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive explanations values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable artificial intelligence feature selection method was compared to conventional feature selection techniques, including mutual information, chi-square coefficient, maximal information coefficient, Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours, and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 <em>×</em> 10<sup>7</sup> Random forest, 6.23 <em>×</em> 10<sup>3</sup> Support vector machine, 3.64 <em>×</em> 10<sup>4</sup> K-nearest neighbours and 1.88 <em>×</em> 10<sup>2</sup> Naïve Bayes) compared to using all features. 
Prediction and training times were also reduced by approxima","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 377-394"},"PeriodicalIF":8.2,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
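Of the feature-selection tools named above, permutation importance is straightforward to sketch with scikit-learn (SHAP values would feed the same select-then-retrain pattern). The feature matrix below is synthetic stand-in data and the 0.01 importance threshold is an illustrative assumption, not the paper's criterion.

```python
# Permutation-importance feature selection followed by retraining a slimmer
# model on only the retained features, mirroring the select-then-retrain idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                  # 20 shape/texture/colour features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
keep = np.where(imp.importances_mean > 0.01)[0]  # retain informative features
print("selected features:", keep)

slim = RandomForestClassifier(n_estimators=100, random_state=0)
slim.fit(X_tr[:, keep], y_tr)                    # smaller, faster model
print("accuracy on reduced set:", slim.score(X_te[:, keep], y_te))
```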
{"title":"Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning","authors":"Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra","doi":"10.1016/j.aiia.2025.03.001","DOIUrl":"10.1016/j.aiia.2025.03.001","url":null,"abstract":"<div><div>The use of image based and, recently, deep learning-based systems have provided good results in several applications. Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the response of the species to different products and doses in a controlled way. The assessment of the damage in the plant is daily done in all trials by visual evaluation by experts. This entails time consuming process and lack of repeatability. Greenhouse trials require new digital tools to reduce time consuming process and to endow the experts with more objective and repetitive methods for establishing the damage in the plants.</div><div>To this end, a novel method is proposed composed by an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level. In this way, we overcome the need for costly and unaffordable pixelwise manual segmentation for damage symptoms and we make use of global damage estimation values provided by the experts.</div><div>The algorithm has been deployed under real greenhouse trials conditions in a pilot study located in BASF in Germany and tested over four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean average error (MAE) values ranging from 5.20 for AMARE and 8.07 for ECHCG for the estimation of PDCU value, with correlation values (R<sup>2</sup>) higher than 0.85 in all situations, and up to 0.92 in AMARE. These results surpass the inter-rater variability of human experts demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 280-295"},"PeriodicalIF":8.2,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143684582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}