{"title":"Proximal sensors fusion and machine learning algorithm combined to improve soil compaction prediction","authors":"Victor Enmanuel Rodas Arano , Lara Mota Corinto , Jean Marcos Pereira dos Santos Reis , Milson Evaldo Serafim , Sérgio Henrique Godinho Silva , Samara Martins Barbosa , Bruno Montoani Silva","doi":"10.1016/j.compag.2025.110609","DOIUrl":"10.1016/j.compag.2025.110609","url":null,"abstract":"<div><div>Predicting adverse factors in agricultural production, like excessive soil compaction, is crucial for taking preventive measures that reduce costs, drying time, environmental contamination from chemical analyses, and the need for destructive sampling methods. Therefore, our objective was to predict soil compaction by evaluating regression and classification models using Random Forest algorithms based on the integration of a wide range of proximal sensors. A total of 56 undisturbed soil samples were collected in PVC cylinders from two soil types: Anionic Acrudox (LVdf) and Typical Hapludox (LVAd), and subjected to five compaction levels (70 %, 80 %, 90 %, 100 %, 110 %) under laboratory conditions. During the experiment, 475 measurements were performed using one X-ray emission sensor and three electrical property sensors, while volumetric water content was estimated from saturation to drying. This process generated 9,025 observations across 19 sensor-derived variables. By integrating the sensors, robust and accurate regression models were developed using Random Forest algorithms to predict compaction degree, with R<sup>2</sup> = 0.93 when combining both soils, and for the individual soils LVdf (R<sup>2</sup> = 0.79; RMSE = 7.18) and LVAd (RMSE = 6.35). Excluding water content did not significantly reduce model accuracy but altered the importance of certain variables such as Fe, Si, Ti, and Zn. The pXRF outperformed the electrical sensors in predicting compaction, achieving an R<sup>2</sup> = 0.78 for LVdf and LVAd. 
Classification models also performed well, reaching an overall accuracy of 0.92 (Kappa = 0.89), and Kappa values of 0.86 for LVdf and 0.74 for LVAd. Sensor fusion allowed variable analysis without disturbing soil structure, supporting potential large-scale spatial modeling for broader applications.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110609"},"PeriodicalIF":7.7,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144280887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
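The workflow in the record above (fused sensor readings feeding a Random Forest regressor that predicts compaction degree) can be sketched as follows; the data, feature meanings, and split below are synthetic stand-ins for illustration, not the study's measurements.

```python
# Sketch: Random Forest regression of compaction degree from fused
# proximal-sensor features. All data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# 19 hypothetical sensor-derived predictors (e.g. pXRF elemental counts,
# electrical properties, volumetric water content).
X = rng.normal(size=(n, 19))
# Synthetic target: compaction degree (%) as a noisy function of features.
y = 90.0 + 10.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
```

A classification variant would swap in `RandomForestClassifier` and report overall accuracy and Kappa, mirroring the abstract's second set of results.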
{"title":"Holstein-Friesian re-identification using multiple cameras and self-supervision on a working farm","authors":"Phoenix Yu , Tilo Burghardt , Andrew W. Dowsey , Neill W. Campbell","doi":"10.1016/j.compag.2025.110568","DOIUrl":"10.1016/j.compag.2025.110568","url":null,"abstract":"<div><div>We present MultiCamCows2024, a farm-scale image dataset filmed across multiple cameras for the biometric identification of individual Holstein-Friesian cattle exploiting their unique black and white coat-patterns. Captured by three ceiling-mounted visual sensors covering adjacent barn areas over seven days on a working dairy farm, the dataset comprises 101,329 images of 90 cows, plus underlying original CCTV footage. The dataset is provided with full computer vision recognition baselines, that is, both a supervised and a self-supervised learning framework for individual cow identification trained on cattle tracklets. We report single-image identification accuracy above <strong>96%</strong> on the dataset and demonstrate that combining data from multiple cameras during learning enhances self-supervised identification. We show that our framework enables automatic cattle identification, barring only the simple human verification of tracklet integrity during data collection. Crucially, our study highlights that multi-camera, supervised, and self-supervised components in tandem not only deliver highly accurate individual cow identification, but also achieve this efficiently with no labelling of cattle identities by humans. We argue that this improvement in efficacy has practical implications for livestock management, behaviour analysis, and agricultural monitoring. 
For reproducibility and practical ease of use, we publish all key software and code, including re-identification components and the species detector, with this paper, available at <span><span>https://tinyurl.com/MultiCamCows2024</span></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110568"},"PeriodicalIF":7.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144270759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
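At inference time, re-identification of this kind typically reduces to matching a query embedding against per-identity references. A minimal sketch using cosine similarity on toy vectors follows; the centroid-matching scheme and the vectors are assumed simplifications, not the MultiCamCows2024 pipeline, which learns coat-pattern embeddings from tracklets.

```python
# Sketch: nearest-identity matching of a query coat-pattern embedding by
# cosine similarity to per-cow reference embeddings. Vectors are toys; a
# real system would use embeddings produced by the learned network.
import numpy as np

def l2_normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def identify(query, gallery):
    """Return (best id, scores) for a query against reference embeddings."""
    q = l2_normalize(query)
    scores = {cow_id: float(q @ l2_normalize(ref))
              for cow_id, ref in gallery.items()}
    return max(scores, key=scores.get), scores

gallery = {
    "cow_01": [1.0, 0.1, 0.0],
    "cow_02": [0.0, 1.0, 0.2],
}
best, scores = identify([0.9, 0.2, 0.0], gallery)
```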
{"title":"A comprehensive review of advances in sensing and monitoring technologies for precision hydroponic cultivation","authors":"Md Shamim Ahamed , Milon Chowdhury , A.K.M. Sarwar Inam , Krishna Aindrila Kar , Md Najmul Islam , Saeed Karimzadeh , Shawana Tabassum , Md Sazzadul Kabir , Nazmin Akter , Abdul Momin","doi":"10.1016/j.compag.2025.110601","DOIUrl":"10.1016/j.compag.2025.110601","url":null,"abstract":"<div><div>Hydroponic crop cultivation systems are a key component of controlled environment agriculture (CEA), where precision nutrient management is essential for sustainable plant growth and optimal yields, particularly in recycled hydroponic systems. Traditional methods, such as visual diagnosis of nutrient deficiencies or toxicities, are often delayed and prone to misinterpretation due to overlapping symptoms. Moreover, similar symptoms caused by different nutrient deficiencies can lead to confusion and result in incorrect nutrient replenishment. Although electrical conductivity (EC) based nutrient management techniques can be applied for online nutrient management, they only provide information about the overall ion concentration, limiting individual ion identification and quantification. Moreover, fluctuations in pH levels affect the availability of several ions by inducing precipitation or dissolution reactions. Ion-specific sensing techniques can play a vital role in overcoming these limitations. This article aims to provide a comprehensive overview of various sensing/monitoring technologies for precision nutrient management from an application perspective. Nowadays, ion-selective electrodes (ISEs) are widely investigated in hydroponic applications due to their sensing capabilities, real-time functionality, robustness, low cost, and calibration needs. This study discusses the factors affecting the sensing performance of different sensors, especially ion-based sensing, and commercial tools available in hydroponic operations. 
The review identifies future research priorities to enhance nutrient monitoring and decision-support systems for precision hydroponic nutrient management. This work aims to serve as a valuable resource for researchers and practitioners in advancing hydroponic sensing technologies.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110601"},"PeriodicalIF":7.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144270761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
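Ion-selective electrodes of the kind surveyed above are conventionally calibrated through the Nernst relation, E = E0 + S·log10(a), with an ideal slope of roughly 59.16/z mV per decade at 25 °C. A minimal calibration-and-inversion sketch, with illustrative standards and an assumed 100 mV offset:

```python
# Sketch: ISE calibration via the Nernst relation E = E0 + S*log10(a),
# then inversion of a measured potential back to ion activity.
# Calibration standards and the 100 mV offset are illustrative.
import math

def fit_nernst(activities, potentials_mv):
    """Least-squares fit of E = E0 + S*log10(a) to calibration standards."""
    xs = [math.log10(a) for a in activities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(potentials_mv) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, potentials_mv))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x, slope   # (E0, S)

def activity_from_potential(e_mv, e0, slope):
    return 10.0 ** ((e_mv - e0) / slope)

# Standards for a monovalent ion; ideal slope is ~59.16 mV/decade at 25 degC.
activities = [1e-4, 1e-3, 1e-2, 1e-1]
potentials = [100.0 + 59.16 * math.log10(a) for a in activities]
e0, slope = fit_nernst(activities, potentials)
estimate = activity_from_potential(100.0 + 59.16 * math.log10(5e-3), e0, slope)
```

A sub-Nernstian fitted slope is one practical symptom of the sensing-performance degradation the review discusses.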
{"title":"SenNet: A dual-branch image semantic segmentation network for wheat senescence evaluation and high-yielding variety screening","authors":"Jiaqi Yao , Shichao Jin , Jingrong Zang , Ruinan Zhang , Yu Wang , Yanjun Su , Qinghua Guo , Yanfeng Ding , Dong Jiang","doi":"10.1016/j.compag.2025.110632","DOIUrl":"10.1016/j.compag.2025.110632","url":null,"abstract":"<div><div>Wheat is one of the three primary staple crops globally, with the senescence of its leaves having a direct effect on yield. However, conventional senescence evaluation methods are mainly based on visual scoring, which is subjective, time-consuming, and hampers the investigation of the mechanisms linking the senescence process to yield formation. High-throughput image-based plant phenotyping techniques offer a promising approach. However, extracting senescence-related semantic information from images presents challenges, including blurred edge segmentation, inadequate characterization of senescence features, and interference from complex field environments. Therefore, this study proposes a dual-branch image senescence segmentation model (<em>SenNet</em>), which integrates edge priors and local–global attention mechanisms, including local–global hierarchical attention mechanisms, gated convolution, and positional encoding modules. First, a wheat senescence dynamics image dataset (19,530 images) was constructed, comprising 509 wheat varieties from a two-year, two-replicate field experiment. Then, the <em>SenNet</em> model achieved senescence image segmentation for various wheat varieties, enabling senescence dynamics analysis and high-yielding variety screening. The results showed that: 1) The mean Intersection over Union (mIoU) of the <em>SenNet</em> model was 95.41 %, which represented a 4.01 % improvement over the average mIoU of seven state-of-the-art models. 
2) The contributions of the local–global hierarchical attention mechanism, gated convolution, and positional encoding module to the accuracy improvement of <em>SenNet</em> were 3.15 %, 1.62 %, and 1.03 %, respectively. 3) <em>SenNet</em> can be transferred across years and locations, achieving a cross-location mIoU of 96.01 %. Furthermore, the model trained in 2023 can be transferred to 2022 and 2024, achieving mIoU accuracies of 93.75 % and 93.27 %, respectively. 4) High-yielding varieties typically experience a later onset of senescence and faster senescence in later stages. Based on these senescence dynamics, this study further constructed new dynamic senescence traits (e.g., <em>AreaUnderCurve</em>). Leveraging the random forest-based yield prediction (R<sup>2</sup> = 0.68) from the dynamic traits, high-yielding varieties were screened with an average precision, recall, F1 score, and accuracy of 81 %, 79 %, 80 %, and 87 %, respectively. This study provides an efficient method for monitoring senescence dynamics and predicting yield, offering new insights into the screening of high-yielding varieties.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110632"},"PeriodicalIF":7.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144280888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
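The mIoU figures quoted for SenNet follow the standard per-class intersection-over-union average, which can be computed as below; the label maps are toy examples, not wheat imagery.

```python
# Sketch: per-class IoU and mean IoU between predicted and ground-truth
# label maps, the segmentation metric reported for SenNet. Toy arrays.
import numpy as np

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
target = np.array([[0, 0, 1], [0, 1, 2], [2, 2, 2]])
miou = mean_iou(pred, target, num_classes=3)
```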
{"title":"Tabular reinforcement learning for reward robust, explainable crop rotation policies matching deep reinforcement learning performance","authors":"Georg Goldenits , Thomas Neubauer , Sebastian Raubitzek , Kevin Mallinger , Edgar Weippl","doi":"10.1016/j.compag.2025.110571","DOIUrl":"10.1016/j.compag.2025.110571","url":null,"abstract":"<div><div>Digital Twins are often intertwined with machine learning and, more recently, deep reinforcement learning methods in their architecture to process data and predict future outcomes based on input data. However, concerns about the trustworthiness of the output from deep learning models persist due to neural networks generally being regarded as a black box model. In our work, we developed crop rotation policies using explainable tabular reinforcement learning techniques. We compared these policies to those generated by a deep Q-learning approach by generating five-step rotations, i.e. producing a series of five consecutive crops. The aim of the rotations is to maximise crop yields while maintaining a healthy nitrogen level in the soil and adhering to established planting rules. Crop yields may vary due to external factors such as weather patterns or changes in market prices, so perturbations have been added to the reward signal to account for those influences. Across 100 crop rotation plans starting from a randomly chosen crop, the deployed explainable tabular reinforcement learning methods collect, on average, at least as much reward as the deep learning model. In the perturbed case, robust tabular reinforcement learning methods collect similar amounts of reward across 100 crop rotation plans compared to the non-random reward setting, whereas the deep reinforcement learning agent collects even less reward than when learning on non-perturbed rewards. 
Thus, we contribute a novel random rewards approach and a corresponding robustification to increase the resilience of the proposed crop rotation planning methodology. By consulting with farmers and crop rotation experts, we demonstrate that the derived policies are reasonable to use and more resilient towards external perturbations. Furthermore, the use of interpretable and explainable reinforcement learning techniques increases confidence in resulting policies, thereby increasing the likelihood that farmers will adopt the suggested policies.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110571"},"PeriodicalIF":7.7,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144270756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
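A tabular Q-learning loop of the kind this work builds on can be sketched on a toy crop-rotation MDP; the crop list, reward shaping (a nitrogen bonus for a legume after a cereal), and hyperparameters below are illustrative assumptions, not the paper's environment.

```python
# Sketch: tabular Q-learning on a toy crop-rotation MDP. The state is the
# previously planted crop; rewards (rotation benefit, nitrogen bonus for a
# legume after a cereal) are illustrative, not the paper's environment.
import random

CROPS = ["wheat", "maize", "legume"]

def reward(prev, nxt):
    r = 0.0 if prev == nxt else 1.0         # penalise monoculture
    if nxt == "legume" and prev in ("wheat", "maize"):
        r += 0.5                            # nitrogen-fixing bonus
    return r

random.seed(0)
alpha, gamma = 0.1, 0.9
Q = {(s, a): 0.0 for s in CROPS for a in CROPS}

# Off-policy Q-learning updates under a uniform behaviour policy.
for _ in range(30000):
    state = random.choice(CROPS)
    action = random.choice(CROPS)
    target = reward(state, action) + gamma * max(Q[(action, a)] for a in CROPS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# Greedy policy: which crop to plant after each crop.
policy = {s: max(CROPS, key=lambda a: Q[(s, a)]) for s in CROPS}
```

Because the Q-table is just nine numbers, the learned policy can be inspected directly, which is the explainability argument made against deep Q-networks.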
{"title":"Advancing agroecology and sustainability with agricultural robots at field level: A scoping review","authors":"M. Naim , D. Rizzo , L. Sauvée , M. Medici","doi":"10.1016/j.compag.2025.110650","DOIUrl":"10.1016/j.compag.2025.110650","url":null,"abstract":"<div><div>Agricultural robots show a growing potential to improve resource management and reduce the environmental impacts of farming. However, the evaluation of robots’ contribution to support sustainable farming is still lacking. This study specifically reviewed the operationalization of four agroecological principles at the field level: recycling, soil health, biodiversity and synergy. To this aim, a scoping review was conducted on the Scopus database, with a query within titles, abstracts, and author keywords mentioning robots, and agroecology or sustainability. The body of literature was screened to include only open field robots. The resulting 78 documents were coded inductively on three macro areas: (1) academic background, (2) robot operations, (3) contribution to agroecology principles, whether explicitly or implicitly mentioned. The results highlight that robots operationalize agroecology principles through non-chemical and selective weeding to preserve diversity and soil health, lighter designs that reduce soil compaction, and advanced data collection systems to optimize resource use and synergy. Solar-powered robots represent early steps toward recycling, but this principle remains understudied. The discussion expands on the potential of robotics in other innovative approaches for sustainable agriculture, such as agroforestry, conservation agriculture, and novel farming system design. Key challenges include ensuring farmers are enabled to master data collection and management, as well as integrating high-tech robotics with low-tech solutions. 
These efforts are critical for leveraging agricultural robotics to advance agroecology and sustainability across diverse farming systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110650"},"PeriodicalIF":7.7,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144270760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven modeling of reproductive performance: a cohort study for elevated sow efficiency and sustainability in livestock farming","authors":"Jiayi Su , Qian Xie , Yuankun Deng , Chengming Wang , Shuai Xie , Ning Gao , Xiaokang Ma , Sung Woo Kim , Charles Martin Nyachoti , Yulong Yin , Bie Tan , Jing Wang","doi":"10.1016/j.compag.2025.110641","DOIUrl":"10.1016/j.compag.2025.110641","url":null,"abstract":"<div><div>The slow development of the Internet of Things (IoT) in pig production, due to the lack of high-quality data, limited large-scale models, and low hardware coverage, has hindered the widespread adoption of precision feeding practices. This study aimed to address these challenges by providing a standardized dataset as a foundation for IoT development and constructing predictive models focused on birth litter weight (BLW) and weaned litter weight (WLW). To achieve these objectives, two comprehensive datasets consisting of 10,089 sow characteristics were collected. After comparing eight different algorithms, GBDT was selected as the optimal algorithm for modeling BLW and WLW. The datasets were divided into a 90 % sample for model derivation, with the remaining 10 % used for model validation. The models for both BLW and WLW datasets exhibited consistent performance between the main and validation cohorts, with low error magnitudes and high relative accuracy (MAE: 1.8–2.5, MAPE: 2.55 %–18.41 %, R > 60 %), indicating robustness and generalizability to unseen data. Delving deeper, the SHAP summary plots illustrated that in the model for BLW, G.ADFIp2, G.ADFIp3, G.ADFIp4, G.ADF and parity had a significant impact on the prediction. In the WLW model, the key influencing factors were weaned litter size, duration of lactation, parity, and birth litter weight. SHAP force and dependence plots uncovered intricate effects of various features on the model’s outcomes. 
To enhance accessibility, we developed a user-friendly visualization and prediction website using the Streamlit Python framework. These critical research findings provide decision-makers with invaluable insights, fostering advancements in precision feeding models and IoT technologies in the swine industry. Ultimately, this contributes to the overarching goal of enhancing the comprehensive sustainability of livestock farming.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110641"},"PeriodicalIF":7.7,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
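The modeling recipe above, GBDT regression on sow traits with a 90/10 derivation/validation split followed by feature attribution, can be sketched as below. Synthetic data and sklearn's impurity-based importances stand in for the study's dataset and its SHAP analysis; the trait names are hypothetical.

```python
# Sketch: GBDT regression of birth litter weight (BLW) from sow traits with
# a 90/10 derivation/validation split. Data are synthetic; impurity-based
# importances stand in for the paper's SHAP analysis.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 1000
parity = rng.integers(1, 9, size=n).astype(float)
feed_intake = rng.normal(6.0, 1.0, size=n)       # hypothetical ADFI-like trait
litter_size = rng.integers(8, 18, size=n).astype(float)
X = np.column_stack([parity, feed_intake, litter_size])
# Synthetic BLW (kg): litter size dominates, feed intake helps, parity drags.
y = 1.3 * litter_size + 0.8 * feed_intake - 0.1 * parity + rng.normal(0.0, 0.5, n)

split = int(0.9 * n)                             # 90 % derivation, 10 % validation
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
importances = model.feature_importances_         # proxy for a SHAP ranking
```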
{"title":"Enhanced geometric properties prediction for carrots in motion using a Multi-task YOLO-based Linked Network (YL-Net)","authors":"Yi-Liang Wu , Sze-Teng Liong , Gen-Bing Liong , Jun-Hui Liang , Y.S. Gan","doi":"10.1016/j.compag.2025.110583","DOIUrl":"10.1016/j.compag.2025.110583","url":null,"abstract":"<div><div>With a growing shortage of manual labor in agriculture, there is an urgent need for efficient and automated solutions to address the complex challenges of inspecting and classifying agricultural products, such as carrots, which exhibit irregular shapes, occlusions, and high variability in geometric properties. This study introduces a novel Multi-task YOLO-based Linked Network (YL-Net) designed to estimate the geometric properties of carrots in motion, including width, length, volume, and mass. The proposed network integrates RGB-D input with decoupled multi-task learning to simultaneously perform instance segmentation and regression. Building upon our previous work, the enhanced framework presented herein achieves outstanding performance, with MAPE values below 2.5% for all estimated properties. When aggregating multi-view data from a rolling conveyor system, the accuracy further improves, yielding MAPE values below 2%. In terms of detection, the model demonstrates excellent performance, achieving a mean F1-score of 98.78% and an instance segmentation IoU of 89.25%. To evaluate its scalability, the system was deployed on an NVIDIA Jetson Orin Nano, where it achieved a real-time processing speed of 80 FPS. Beyond carrots, the proposed approach can be extended to inspect other agricultural products, such as potatoes and sweet potatoes, where geometric properties are essential for sorting and grading. 
This work provides a scalable and transferable solution for automated agricultural inspection, laying a robust foundation for broader applications in smart farming, industrial automation, and food quality control.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110583"},"PeriodicalIF":7.7,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144270757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
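The MAPE metric used to report the geometric-property accuracy, and the gain from aggregating multiple conveyor views of the same carrot, can be illustrated with toy numbers:

```python
# Sketch: MAPE, the reported accuracy metric, and the effect of aggregating
# multi-view estimates from the rolling conveyor. Numbers are illustrative.
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# Three single-view mass estimates (g) of one carrot whose true mass is 100 g.
views = [101.0, 98.5, 100.5]
true_mass = 100.0

single_view_err = mape([true_mass] * len(views), views)   # per-view error
fused = sum(views) / len(views)                           # multi-view aggregate
fused_err = mape([true_mass], [fused])
```

Averaging the views cancels part of the per-view error, consistent with the abstract's observation that multi-view aggregation lowers MAPE.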
{"title":"UAVO-NeRF: 3D reconstruction of orchards and semantic segmentation of fruit trees based on neural radiance field in UAV images","authors":"Hongxing Peng , Shangkun Guo , Xiangjun Zou , Hongjun Wang , Juntao Xiong , Qijun Liang","doi":"10.1016/j.compag.2025.110631","DOIUrl":"10.1016/j.compag.2025.110631","url":null,"abstract":"<div><div>In precision agriculture, accurate 3D reconstruction of orchard environments is essential for crop health monitoring and automating agricultural tasks. This paper introduces UAVO-NeRF, a novel method using Unmanned Aerial Vehicles (UAVs) for high-fidelity 3D reconstruction and semantic segmentation of orchard scenes. To address inefficiencies in large-scale outdoor environments, we employ a nonlinear scene parameterization that compresses the unbounded scene into a cubic space, enabling denser sampling of distant points. We implement multi-resolution hash encoding to capture both global context and local details, significantly enhancing reconstruction speed and quality. To handle lighting variability, we incorporate appearance embeddings that adaptively encode lighting conditions, increasing the model’s robustness under diverse illumination. Our network’s output layer includes a 3D semantic segmentation module that distinguishes fruit trees from background elements, using a cross-entropy loss function to measure the difference between predicted and actual semantic labels. Depth prediction accuracy is improved using depth maps generated by a pre-trained monocular depth estimation model, refined through a composite loss function that combines reconstruction, depth, semantic, visibility, and interlevel losses to minimize artifacts and enhance geometric representation. 
Experimental results demonstrate that UAVO-NeRF achieves a Peak Signal-to-Noise Ratio (PSNR) of 23.82, outperforming state-of-the-art models like Instant-NGP and Mip-NeRF 360 across metrics such as PSNR, Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). Additionally, UAVO-NeRF achieves a mean Intersection over Union (mIoU) of 0.891 for fruit tree semantic segmentation from novel viewpoints, exceeding traditional 2D models by over 5%. This approach offers a robust technological solution for digital twin applications in agriculture.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110631"},"PeriodicalIF":7.7,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
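The PSNR figure reported for UAVO-NeRF is the usual log-scale ratio of peak signal to mean squared error; a minimal sketch on toy images:

```python
# Sketch: Peak Signal-to-Noise Ratio between a rendered and a reference
# image, the headline reconstruction metric. Images here are random toys.
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """PSNR in dB; higher is better, infinite for identical images."""
    mse = float(np.mean((np.asarray(img_a, float) - np.asarray(img_b, float)) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
reference = rng.random((32, 32, 3))            # stand-in for a held-out view
rendered = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)
score = psnr(rendered, reference)
```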
{"title":"Strawberry fruit yield forecasting using image-based time-series plant phenological stages sequences","authors":"Andres Montes de Oca , Troy Magney , Stavros G. Vougioukas , Dario Racano , Alejandro Torrez-Orozco , Steven A. Fennimore , Frank N. Martin , Mason Earles","doi":"10.1016/j.compag.2025.110516","DOIUrl":"10.1016/j.compag.2025.110516","url":null,"abstract":"<div><div>Yield forecasting is crucial for growers, enabling efficient resource management and informed decision-making. Such decisions impact storage, product processing, and logistics, leading to increased productivity and cost savings. However, this heavily relies on accurate yield forecasts. This work addresses such a need by presenting the development and testing of a reliable method for yield forecasting. The proposed methodology combines high-resolution object detection with a multi-variate input forecasting model that accurately computes the yield for incoming harvests. The forecasting approach incorporates a physically-constrained model based on a Long Short-Term Memory (LSTM) network. This model dynamically applies weights to the time-series data composed of counts for the phenological stages: flower, green, small white, large white, pink, and red (ripe fruit). These counts are obtained from detections made by a YOLOv10s, achieving an mAP@50 of 0.74 for all classes. As a result, the forecasting model’s capacity to interpret input data is enhanced, translating it into a valid ripe count forecast. To validate the proposed approach, the forecasting model was trained and evaluated using (a) untreated count sequences and (b) weighted count sequences. 
The results indicate that phenologically-weighted input sequences outperform untreated sequences, with the following evaluation metrics: R<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> = 0.74, Root Mean Square Error (RMSE) = 12.67, Mean Absolute Error (MAE) = 10.95, and Mean Absolute Percentage Error (MAPE) = 39.4, improving 15%, 19.26%, 17.13%, and 11.3%, respectively.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110516"},"PeriodicalIF":7.7,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
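The phenologically-weighted input sequences can be pictured as a per-stage reweighting of the count time series before they reach the forecaster; the stage weights and weekly counts below are illustrative assumptions, not the study's values.

```python
# Sketch: weighting phenological-stage count sequences before forecasting.
# Stage weights and weekly counts are illustrative assumptions.
import numpy as np

STAGES = ["flower", "green", "small_white", "large_white", "pink", "red"]
# Hypothetical weights: stages nearer ripeness influence the forecast more.
weights = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.0])

# counts[t, s]: fruits observed at stage s in week t.
counts = np.array([
    [50, 30, 10,  5,  2,  1],
    [40, 35, 20, 10,  5,  3],
    [30, 38, 28, 18, 10,  8],
], dtype=float)

weighted = counts * weights                  # broadcast per-stage weighting
# Late-stage signal a forecaster (e.g. an LSTM) would lean on most heavily.
late_signal = weighted[:, -2:].sum(axis=1)
```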