{"title":"Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques","authors":"Josué Kpodo , A. Pouyan Nejadhashemi","doi":"10.1016/j.aiia.2025.04.001","DOIUrl":"10.1016/j.aiia.2025.04.001","url":null,"abstract":"<div><div>Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 426-448"},"PeriodicalIF":8.2,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques","authors":"Rabiu Aminu , Samantha M. Cook , David Ljungberg , Oliver Hensel , Abozar Nasirahmadi","doi":"10.1016/j.aiia.2025.03.008","DOIUrl":"10.1016/j.aiia.2025.03.008","url":null,"abstract":"<div><div>To reduce damage caused by insect pests, farmers use insecticides to protect produce from crop pests. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is through the selective application of insecticides to only those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step to achieve this is the identification of insects on plants and discrimination between pests and beneficial non-targets. However, detecting small-sized individual insects is challenging using image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects—the Colorado potato beetle (CPB, <em>Leptinotarsa decemlineata</em>) and green peach aphid (<em>Myzus persicae</em>)—and the beneficial seven-spot ladybird (<em>Coccinella septempunctata</em>). The specialist herbivore CPB was imaged only on potato plants (<em>Solanum tuberosum</em>) while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (<em>Vicia faba)</em>, and sugar beet (<em>Beta vulgaris</em> subsp. <em>vulgaris</em>). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture, and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive explanations values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable artificial intelligence feature selection method was compared to conventional feature selection techniques, including mutual information, chi-square coefficient, maximal information coefficient, Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours, and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 <em>×</em> 10<sup>7</sup> Random forest, 6.23 <em>×</em> 10<sup>3</sup> Support vector machine, 3.64 <em>×</em> 10<sup>4</sup> K-nearest neighbours and 1.88 <em>×</em> 10<sup>2</sup> Naïve Bayes) compared to using all features. 
Prediction and training times were also reduced by approxima","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 377-394"},"PeriodicalIF":8.2,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
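The abstract above describes ranking shape, texture, and colour features with permutation feature importance (and SHAP values) and then retraining the classifiers on the reduced feature set. The sketch below illustrates only the permutation-importance step with scikit-learn on synthetic data; the 15-feature cutoff, the model settings, and the data are illustrative assumptions, not the published pipeline.

```python
# Hedged sketch: permutation-importance-based feature selection before classification.
# Synthetic data stands in for the insect shape/texture/colour features described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=60, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit on all features, then rank them by permutation importance on held-out data.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
keep = np.argsort(imp.importances_mean)[::-1][:15]   # keep the 15 highest-ranked features (illustrative)

# Retrain on the reduced feature set and compare accuracy.
rf_small = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, keep], y_tr)
print("all features :", accuracy_score(y_te, rf.predict(X_te)))
print("selected set :", accuracy_score(y_te, rf_small.predict(X_te[:, keep])))
```

SHAP-based ranking would follow the same pattern, with the importance scores coming from a tree explainer instead of permutation shuffling.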
{"title":"Fast extraction of navigation line and crop position based on LiDAR for cabbage crops","authors":"Jiang Pin , Tingfeng Guo , Minzi Xv , Xiangjun Zou , Wenwu Hu","doi":"10.1016/j.aiia.2025.03.007","DOIUrl":"10.1016/j.aiia.2025.03.007","url":null,"abstract":"<div><div>This paper describes the design, algorithm development, and experimental verification of a precise spray perception system based on LiDAR were presented to address the issue that the navigation line extraction accuracy of self-propelled sprayers during field operations is low, resulting in wheels rolling over the ridges and excessive pesticide waste. A data processing framework was established for the precision spray perception system. Through data preprocessing, adaptive segmentation of crops and ditches, extraction of navigation lines and crop positioning, which were derived from the original LiDAR point cloud species. Data collection and analysis of the field environment of cabbages in different growth cycles were conducted to verify the stability of the precision spraying system. A controllable constant-speed experimental setup was established to compare the performance of LiDAR and depth camera in the same field environment. The experimental results show that at the self-propelled sprayer of speeds of 0.5 and 1 ms<sup>−1</sup>, the maximum lateral error is 0.112 m in a cabbage ridge environment with inter-row weeds, with an mean absolute lateral error of 0.059 m. The processing speed per frame does not exceed 43 ms. Compared to the machine vision algorithm, this method reduces the average processing time by 122 ms. The proposed system demonstrates superior accuracy, processing time, and robustness in crop identification and navigation line extraction compared to the machine vision system.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 686-695"},"PeriodicalIF":8.2,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unveiling the drivers contributing to global wheat yield shocks through quantile regression","authors":"Srishti Vishwakarma , Xin Zhang , Vyacheslav Lyubchich","doi":"10.1016/j.aiia.2025.03.004","DOIUrl":"10.1016/j.aiia.2025.03.004","url":null,"abstract":"<div><div>Sudden reductions in crop yield (i.e., yield shocks) severely disrupt the food supply, intensify food insecurity, depress farmers' welfare, and worsen a country's economic conditions. Here, we study the spatiotemporal patterns of wheat yield shocks, quantified by the lower quantiles of yield fluctuations, in 86 countries over 30 years. Furthermore, we assess the relationships between shocks and their key ecological and socioeconomic drivers using quantile regression based on statistical (linear quantile mixed model) and machine learning (quantile random forest) models. Using a panel dataset that captures spatiotemporal patterns of yield shocks and possible drivers in 86 countries, we find that the severity of yield shocks has been increasing globally since 1997. Moreover, our cross-validation exercise shows that quantile random forest outperforms the linear quantile regression model. Despite this performance difference, both models consistently reveal that the severity of shocks is associated with higher weather stress, nitrogen fertilizer application rate, and gross domestic product (GDP) per capita (a typical indicator for economic and technological advancement in a country). While the unexpected negative association between more severe wheat yield shocks and higher fertilizer application rate and GDP per capita does not imply a direct causal effect, they indicate that the advancement in wheat production has been primarily on achieving higher yields and less on lowering the possibility and magnitude of sharp yield reductions. Hence, in the context of growing extreme weather stress, there is a critical need to enhance the technology and management practices that mitigate yield shocks to improve the resilience of the world food systems.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 564-572"},"PeriodicalIF":8.2,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new tool to improve the computation of animal kinetic activity indices in precision poultry farming","authors":"Alberto Carraro , Mattia Pravato , Francesco Marinello , Francesco Bordignon , Angela Trocino , Gerolamo Xiccato , Andrea Pezzuolo","doi":"10.1016/j.aiia.2025.03.005","DOIUrl":"10.1016/j.aiia.2025.03.005","url":null,"abstract":"<div><div>Precision Livestock Farming (PLF) emerges as a promising solution for revolutionising farming by enabling real-time automated monitoring of animals through smart technologies. PLF provides farmers with precise data to enhance farm management, increasing productivity and profitability. For instance, it allows for non-intrusive health assessments, contributing to maintaining a healthy herd while reducing stress associated with handling. In the poultry sector, image analysis can be utilised to monitor and analyse the behaviour of each hen in real time. Researchers have recently used machine learning algorithms to monitor the behaviour, health, and positioning of hens through computer vision techniques. Convolutional neural networks, a type of deep learning algorithm, have been utilised for image analysis to identify and categorise various hen behaviours and track specific activities like feeding and drinking. This research presents an automated system for analysing laying hen movement using video footage from surveillance cameras. With a customised implementation of object tracking, the system can efficiently process hundreds of hours of videos while maintaining high measurement precision. Its modular implementation adapts well to optimally exploit the GPU computing capabilities of the hardware platform it is running on. The use of this system is beneficial for both real-time monitoring and post-processing, contributing to improved monitoring capabilities in precision livestock farming.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 659-670"},"PeriodicalIF":8.2,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144253342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning","authors":"Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra","doi":"10.1016/j.aiia.2025.03.001","DOIUrl":"10.1016/j.aiia.2025.03.001","url":null,"abstract":"<div><div>The use of image based and, recently, deep learning-based systems have provided good results in several applications. Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the response of the species to different products and doses in a controlled way. The assessment of the damage in the plant is daily done in all trials by visual evaluation by experts. This entails time consuming process and lack of repeatability. Greenhouse trials require new digital tools to reduce time consuming process and to endow the experts with more objective and repetitive methods for establishing the damage in the plants.</div><div>To this end, a novel method is proposed composed by an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level. In this way, we overcome the need for costly and unaffordable pixelwise manual segmentation for damage symptoms and we make use of global damage estimation values provided by the experts.</div><div>The algorithm has been deployed under real greenhouse trials conditions in a pilot study located in BASF in Germany and tested over four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean average error (MAE) values ranging from 5.20 for AMARE and 8.07 for ECHCG for the estimation of PDCU value, with correlation values (R<sup>2</sup>) higher than 0.85 in all situations, and up to 0.92 in AMARE. These results surpass the inter-rater variability of human experts demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 280-295"},"PeriodicalIF":8.2,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143684582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting grapevine phenological stages prediction based on climatic data by pseudo-labeling approach","authors":"Mehdi Fasihi , Mirko Sodini , Alex Falcon , Francesco Degano , Paolo Sivilotti , Giuseppe Serra","doi":"10.1016/j.aiia.2025.03.003","DOIUrl":"10.1016/j.aiia.2025.03.003","url":null,"abstract":"<div><div>Predicting grapevine phenological stages (GPHS) is critical for precisely managing vineyard operations, including plant disease treatments, pruning, and harvest. Solutions commonly used to address viticulture challenges rely on image processing techniques, which have achieved significant results. However, they require the installation of dedicated hardware in the vineyard, making it invasive and difficult to maintain. Moreover, accurate prediction is influenced by the interplay of climatic factors, especially temperature, and the impact of global warming, which are difficult to model using images. Another problem frequently found in GPHS prediction is the persistent issue of missing values in viticultural datasets, particularly in phenological stages. This paper proposes a semi-supervised approach that begins with a small set of labeled phenological stage examples and automatically generates new annotations for large volumes of unlabeled climatic data. This approach aims to address key challenges in phenological analysis. This novel climatic data-based approach offers advantages over common image processing methods, as it is non-intrusive, cost-effective, and adaptable for vineyards of various sizes and technological levels. To ensure the robustness of the proposed Pseudo-labelling strategy, we integrated it into eight machine-learning algorithms. We evaluated its performance across seven diverse datasets, each exhibiting varying percentages of missing values. Performance metrics, including the coefficient of determination (R<sup>2</sup>) and root-mean-square error (RMSE), are employed to assess the effectiveness of the models. The study demonstrates that integrating the proposed Pseudo-labeling strategy with supervised learning approaches significantly improves predictive accuracy. Moreover, the study shows that the proposed methodology can also be integrated with explainable artificial intelligence techniques to determine the importance of the input features. In particular, the investigation highlights that growing degree days are crucial for improved GPHS prediction.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 550-563"},"PeriodicalIF":8.2,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis","authors":"Gan Yang , Qifeng Li , Chunjiang Zhao , Chaoyuan Wang , Hua Yan , Rui Meng , Yu Liu , Ligen Yu","doi":"10.1016/j.aiia.2025.03.002","DOIUrl":"10.1016/j.aiia.2025.03.002","url":null,"abstract":"<div><div>China is the world's largest producer of pigs, but traditional manual prevention, treatment, and diagnosis methods cannot satisfy the demands of the current intensive production environment. Existing computer-aided diagnosis (CAD) systems for pigs are dominated by expert systems, which cannot be widely applied because the collection and maintenance of knowledge is difficult, and most of them ignore the effect of multimodal information. A swine disease diagnosis model was proposed in this study, the Text-Guided Fusion Network-Swine Diagnosis (TGFN-SD) model, which integrated text case reports and disease images. The model integrated the differences and complementary information in the multimodal representation of diseases through the text-guided transformer module such that text case reports could carry the semantic information of disease images for disease identification. Moreover, it alleviated the phenotypic overlap problem caused by similar diseases in combination with supervised learning and self-supervised learning. Experimental results revealed that TGFN-SD achieved satisfactory performance on a constructed swine disease image and text dataset (SDT6K) that covered six disease classification datasets with accuracy and F1-score of 94.48 % and 94.4 % respectively. The accuracies and F1-scores increased by 8.35 % and 7.24 % compared with those under the unimodal situation and by 2.02 % and 1.63 % compared with those of the optimal baseline model under the multimodal fusion. Additionally, interpretability analysis revealed that the model focus area was consistent with the habits and rules of the veterinary clinical diagnosis of pigs, indicating the effectiveness of the proposed model and providing new ideas and perspectives for the study of swine disease CAD.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 266-279"},"PeriodicalIF":8.2,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143684646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture","authors":"Jihao Wang , Xiaochan Wang , Yinyan Shi , Haihui Yang , Bo Jia , Xiaolei Zhang , Lebin Lin","doi":"10.1016/j.aiia.2025.02.008","DOIUrl":"10.1016/j.aiia.2025.02.008","url":null,"abstract":"<div><div>This paper reviews the application and potential of cloud-edge-end collaborative (CEEC) technology in the field of freshwater aquaculture, a rapidly developing sector driven by the growing global demand for aquatic products. The sustainable development of freshwater aquaculture has become a critical challenge due to issues such as water pollution and inefficient resource utilization in traditional farming methods. In response to these challenges, the integration of smart technologies has emerged as a promising solution to improve both efficiency and sustainability. Cloud computing and edge computing, when combined, form the backbone of CEEC technology, offering an innovative approach that can significantly enhance aquaculture practices. By leveraging the strengths of both technologies, CEEC enables efficient data processing through cloud infrastructure and real-time responsiveness via edge computing, making it a compelling solution for modern aquaculture. This review explores the key applications of CEEC in areas such as environmental monitoring, intelligent feeding systems, health management, and product traceability. The ability of CEEC technology to optimize the aquaculture environment, enhance product quality, and boost overall farming efficiency highlights its potential to become a mainstream solution in the industry. Furthermore, the paper discusses the limitations and challenges that need to be addressed in order to fully realize the potential of CEEC in freshwater aquaculture. In conclusion, this paper provides researchers and practitioners with valuable insights into the current state of CEEC technology in aquaculture, offering suggestions for future development and optimization to further enhance its contributions to the sustainable growth of freshwater aquaculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 232-251"},"PeriodicalIF":8.2,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors","authors":"Qing Wang , Ke Shao , Zhibo Cai , Yingpu Che , Haochong Chen , Shunfu Xiao , Ruili Wang , Yaling Liu , Baoguo Li , Yuntao Ma","doi":"10.1016/j.aiia.2025.02.004","DOIUrl":"10.1016/j.aiia.2025.02.004","url":null,"abstract":"<div><div>Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research that often focuses on regional scale predictions relied on multi-year history datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a developed stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Optimal performance in prediction was observed when utilizing data from all three growth periods, with <em>R</em><sup>2</sup> values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods shows promising results for making the predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of using UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 252-265"},"PeriodicalIF":8.2,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143643996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}