{"title":"PKL-Track: A Keypoint-Optimized approach for piglet tracking and activity measurement","authors":"Jinxin Chen , Luo Liu , Peng Li , Wen Yao , Mingxia Shen , Qi-an Ding , Longshen Liu","doi":"10.1016/j.compag.2025.110578","DOIUrl":"10.1016/j.compag.2025.110578","url":null,"abstract":"<div><div>This study proposes an efficient and accurate multi-object tracking method for piglets (Piglet Keypoints and L2 Distance Tracking, PKL-Track) to achieve piglet state monitoring and activity quantification. The proposed method employs the improved YOLOv11s-Pose model for target and keypoint detection, utilizing the relative positions of piglet bounding boxes to refine keypoint regression while optimizing the detection head to enhance model efficiency. To address challenges such as occlusion and target crowding, the BoT-SORT algorithm was improved by incorporating keypoint and bounding box information to refine matching distances, supplemented by normalized Euclidean distance to expand the matching range. Experiments were conducted using video data from 31 piglet pens, constructing a dataset containing targets, keypoints, and tracking annotations for testing. Results demonstrated that the improved YOLOv11s-Pose model achieved an average precision of 98.5 % for object detection and 98.0 % for keypoint detection, with a detection time of 5.0 ms per frame. For multi-object tracking tasks, short frame intervals (5 frames) achieved 84.3 % HOTA, 99.1 % MOTA, and 91.5 % IDF1, significantly reducing ID switches. Activity quantification experiments based on tracking results revealed a relative error of only 2.36 % in group activity measurement, accurately reflecting piglet activity levels. 
The proposed method demonstrates excellent performance in multi-object tracking and activity quantification, providing key technological support for behavior monitoring and piglet health assessment in precision livestock farming.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110578"},"PeriodicalIF":7.7,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144146909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
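The abstract above does not spell out how the keypoint information refines the BoT-SORT matching distance. A minimal sketch of one plausible interpretation follows; the function name and the choice of normalizing by the bounding-box diagonal are assumptions for illustration, not the paper's exact formulation:

```python
import math

def normalized_keypoint_distance(kps_a, kps_b, box_a):
    """Mean Euclidean distance between two keypoint sets, normalized by
    the bounding-box diagonal so the score is scale-invariant.
    kps_*: lists of (x, y) tuples; box_a: (x1, y1, x2, y2)."""
    diag = math.hypot(box_a[2] - box_a[0], box_a[3] - box_a[1])
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(kps_a, kps_b)]
    return sum(dists) / (len(dists) * diag)
```

In a BoT-SORT-style tracker, a score like this (0 for a perfect keypoint match, growing with displacement) could supplement the usual IoU-based costs in the track-detection association matrix.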
{"title":"Utilizing UAV-based high-throughput phenotyping and machine learning to evaluate drought resistance in wheat germplasm","authors":"Xiaojing Zhu , Xin Liu , Qian Wu , Mengshi Liu , Xueli Hu , Hui Deng , Yun Zhang , Yunfeng Qu , Baoqi Wang , Xiaoman Gou , Qiongge Li , Changsheng Han , Junhao Tu , Xiaolong Qiu , Ge Hu , Jian Zhang , Lin Hu , Yun Zhou , Zhen Zhang","doi":"10.1016/j.compag.2025.110602","DOIUrl":"10.1016/j.compag.2025.110602","url":null,"abstract":"<div><div>Wheat is a staple crop that suffers significant yield reductions under drought conditions, especially during the critical reproductive stages. Traditional methods for assessing drought resistance in wheat are often destructive, labor-intensive, and fail to capture the multi-faceted nature of drought tolerance. Vegetation indices serve as effective non-destructive indicators of physiological and biochemical traits. However, the potential of high-throughput spectral indices for quantifying drought resistance traits in wheat has not yet been thoroughly investigated. In this study, we employed an unmanned aerial vehicle (UAV) platform combined with machine learning to assess 206 spectral indices across 52 wheat genotypes at various growth stages under both well-watered and drought conditions. We also evaluated 11 traditional traits to examine their correlations with UAV-based traits. Our study identified 127 spectral indices as drought-related traits and revealed significant correlations between traditional and UAV-based traits. We identified three novel drought-related traits, the Color Index of Vegetation (CIVE), Red-Green-Blue Index (RGBI), and Excess Green Minus Excess Red Index (ExG_ExR), derived from RGB images and correlated with chlorophyll content, showing strong associations with kernel-related traits. Additionally, we developed an advanced prediction model for yield stability under drought conditions using 17 spectral indices selected through machine learning. 
A comprehensive evaluation value (D) based on these 17 indices enabled the identification of one highly drought-resistant genotype and 13 drought-resistant genotypes, further validated through field experiments. Our study not only confirms the effectiveness of UAV-based traits in indicating drought tolerance but also provides valuable germplasm for the genetic improvement of drought-resistant wheat.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110602"},"PeriodicalIF":7.7,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144169898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
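Of the RGB-derived indices named in the record above, ExG_ExR has a widely used standard formulation in the color-index literature (ExG = 2g − r − b and ExR = 1.4r − g over normalized chromaticities). The sketch below uses that common definition, which may differ in detail from the paper's implementation:

```python
def exg_exr(R, G, B):
    """Excess Green minus Excess Red (ExG_ExR) for one pixel, using the
    common normalized-chromaticity forms ExG = 2g - r - b and
    ExR = 1.4r - g. The paper's exact variant is not given in the abstract."""
    s = R + G + B
    if s == 0:
        return 0.0
    r, g, b = R / s, G / s, B / s
    return (2 * g - r - b) - (1.4 * r - g)
```

Strongly vegetated (green) pixels score high, while reddish soil or senescent tissue scores low, which is why such indices track chlorophyll-related traits.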
{"title":"Estimating daily reference evapotranspiration with reduced data input using ensemble learning models in arid and humid regions of China","authors":"Qi Wei , Qi Wei , Junzeng Xu , Peng Chen , Shengyu Chen , Zihao Liu , Wenhao Qian , Zhiheng Huang , Jingyi Ren , Haoxuan Wang , Yimin Ding , Chao Lei , Zhiming Qi","doi":"10.1016/j.compag.2025.110548","DOIUrl":"10.1016/j.compag.2025.110548","url":null,"abstract":"<div><div>Accurate estimation of reference evapotranspiration (ET<sub>o</sub>) is key to irrigation system design and agricultural water management. Utilizing meteorological data (1960–2019) from 20 stations in China’s humid and arid regions, a reference ET<sub>o</sub> value was calculated using the FAO56-Penman-Monteith (PM) method. The accuracy of 6 ensemble learning models [<em>e.g</em>., Adaptive boosting (AdaBoost), Gradient Boosting Decision Tree (GBDT), Categorical boosting (CatBoost), Extreme gradient boosting (XGBoost), Extra trees, and Light Gradient Boosting Method (LightGBM)] in estimating daily ET<sub>o</sub> using all available inputs was investigated. The performance of the best three models (CatBoost, GBDT and XGBoost) was then evaluated under 7 input combinations [<em>i.e</em>., complete and incomplete combinations of maximum and minimum temperature (T<sub>max</sub> and T<sub>min</sub>), relative humidity (RH), wind speed (U<sub>2</sub>), total and extra-terrestrial solar radiation (R<sub>s</sub> and R<sub>a</sub>)], and 4 dataset sizes (20, 30, 40 and 60 years). CatBoost showed the highest estimation accuracy (average R<sup>2</sup> = 0.93), stability, and robustness. Using incomplete combinations based on temperature and other indicators to estimate daily ET<sub>o</sub> also achieved satisfactory results (R<sup>2</sup> > 0.91), and the key indicators contributing to a difference in ET<sub>o</sub> prediction accuracy between humid and arid regions were RH and R<sub>a</sub>. 
The models’ accuracy in estimating daily ET<sub>o</sub> was not affected by dataset size (RMSE differences < 0.025), but their stability improved as the dataset grew. This study evaluated the models’ performance under different data constraints and regional applications, providing a methodological reference for ET<sub>o</sub> simulation across global climatic zones that balances accuracy and practicality.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110548"},"PeriodicalIF":7.7,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144146842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
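The FAO56 Penman-Monteith method used above as the ET<sub>o</sub> reference has a standard daily form. A compact sketch follows; the default psychrometric constant assumes roughly sea-level pressure:

```python
import math

def fao56_pm_eto(t_mean, rn, g, u2, es, ea, gamma=0.067):
    """Daily reference evapotranspiration (mm/day), FAO56 Penman-Monteith.
    t_mean: mean air temperature (deg C); rn, g: net radiation and soil heat
    flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s); es, ea: saturation and
    actual vapour pressure (kPa); gamma: psychrometric constant (kPa/deg C),
    ~0.067 near sea level."""
    # Slope of the saturation vapour pressure curve (kPa/deg C)
    delta = (4098 * (0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3)))
             / (t_mean + 237.3) ** 2)
    num = (0.408 * delta * (rn - g)
           + gamma * (900 / (t_mean + 273)) * u2 * (es - ea))
    return num / (delta + gamma * (1 + 0.34 * u2))
```

With, for example, t_mean = 25 °C, rn = 13.28 MJ m⁻² day⁻¹, g = 0, u2 = 2 m/s, es = 3.168 kPa, and ea = 2.0 kPa, this yields roughly 5 mm/day, a plausible warm-season value; the paper's machine learning models approximate this reference from fewer inputs.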
{"title":"A novel residual learning of multi-scale feature extraction model for the classification of rice grain varieties","authors":"Xudong Li , Yutong Wang , Happy Nkanta Monday , Grace Ugochi Nneji","doi":"10.1016/j.compag.2025.110491","DOIUrl":"10.1016/j.compag.2025.110491","url":null,"abstract":"<div><div>Rice serves as a fundamental food source for 50% of the world’s population, highlighting its crucial role in ensuring food security. Deep learning has become a crucial tool for automating the labor-intensive task of rice grain classification, using digital image processing to evaluate quality and grain variety. This work utilized a large dataset consisting of 75,000 images of five different rice grain varieties. There are 15,000 images for each type, which capture distinct texture, form, and color features. Image augmentation approaches, such as normalization and transformations, are utilized to enhance model robustness and mitigate overfitting. The study presented a novel ensemble model that combined a customized attention mechanism with modified residual learning and multi-scale feature learning using parallel-filter networks to improve feature extraction and the classification of rice grain varieties. A wide range of performance criteria is employed to assess the effectiveness of the model. The ensemble model demonstrated outstanding competence in classification tasks, achieving accuracy values close to 99%. The Grad-CAM visualization validates the model’s attention towards pertinent characteristics among different rice grain varieties. The ensemble model outperformed pre-trained models and other works in terms of loss, accuracy, and F1-score, as shown by comparative analysis. 
This study enhances the field of agricultural informatics by boosting the accuracy of rice grain classification and food quality in general.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110491"},"PeriodicalIF":7.7,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144146910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating future climate change impacts on wheat yield and water demand in Xinjiang, Northwest China using the DSSAT-CERES-Wheat model","authors":"Xuehui Gao, Jian Liu, Yue Wen, Haixia Lin, Yonghui Liang, Mengjie Liu, Zhenpeng Zhou, Jinzhu Zhang, Zhenhua Wang","doi":"10.1016/j.compag.2025.110604","DOIUrl":"10.1016/j.compag.2025.110604","url":null,"abstract":"<div><div>Climate change makes it challenging to maintain and increase crop production in environmentally sensitive regions. The assessment of climate change’s impact on Chinese wheat production is needed for irrigated farming to maintain wheat self-sufficiency and meet future food demand. We assessed future trends in wheat yield, biomass, and crop evapotranspiration (ET<sub>c</sub>) in arid northwest China using the calibrated DSSAT-CERES-Wheat model and daily climate data based on projections made by six global climate models under two greenhouse gas emission scenarios (SSP245 and SSP585). Forecasts indicated a gradual increase in both temperature and precipitation for the region, depicting a discernible shift towards a warmer and wetter climate. Subsequent findings suggested that, in comparison with the baseline period (1991–2020), climate change was anticipated to shorten the winter wheat growing season. The anthesis date was expected to occur earlier by an average of 1–20 days under SSP245 and 2–34 days under SSP585. Similarly, the date of physiological maturity under SSP245 and SSP585 was expected to occur earlier by an average of 1–13 days and 2–23 days, respectively. Irrigated winter wheat grain yield and aboveground biomass were projected to increase over time, with increases ranging from 12 % to 32 % and from 14 % to 25 %, respectively. 
The modeling results further suggested that the optimum irrigation amount for the study area would be 329 mm during the baseline period, and that irrigation demand in the future could be reduced by 18.9–27.7 % compared with the baseline period. Our findings will help policymakers and agricultural stakeholders adapt to climate change, ensuring optimal wheat production from this region’s irrigated cropping systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110604"},"PeriodicalIF":7.7,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144169893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous prediction of multiple soil components using Mid-Infrared Spectroscopy and the GADF-Swin Transformer model","authors":"Wenqi Guo , Shichen Gao , Yaohui Ding , Daming Dong","doi":"10.1016/j.compag.2025.110507","DOIUrl":"10.1016/j.compag.2025.110507","url":null,"abstract":"<div><div>Accurate characterization and monitoring of soil components are essential for optimizing agricultural practices and enhancing soil management strategies. Mid-infrared (MIR) spectroscopy has shown unique value in soil analysis due to its ability to provide rich molecular information. However, past research typically focuses on single-component prediction and struggles with the high dimensionality of MIR spectral data. This paper presents a novel approach for the simultaneous prediction of multiple soil components using MIR spectroscopy, leveraging Gramian Angular Difference Fields (GADF) and the Swin Transformer model. By transforming high-dimensional MIR spectral data into two-dimensional images and utilizing the Swin Transformer for multi-scale feature extraction and fusion, we achieve superior accuracy in simultaneous multi-component prediction. The experimental results indicate that the Swin Transformer model significantly improves overall predictive performance by effectively capturing intricate interdependencies among different soil components. 
This approach provides valuable insights into the application of advanced data transformation and deep learning techniques in soil analysis, particularly for simultaneous multi-component prediction, and supports more informed decisions in environmental management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110507"},"PeriodicalIF":7.7,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144146911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
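The GADF transform named in the record above has a standard definition (Wang and Oates): rescale the series to [−1, 1], map each value to a polar angle, and take pairwise angle differences. A minimal sketch, assuming a non-constant input series, is:

```python
import numpy as np

def gadf(x):
    """Gramian Angular Difference Field of a 1-D series (e.g. an MIR spectrum).
    Rescales to [-1, 1], maps values to polar angles phi = arccos(x), then
    returns the matrix sin(phi_i - phi_j). Assumes x is not constant."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.sin(phi[:, None] - phi[None, :])
```

The resulting 2-D image is antisymmetric with a zero diagonal, and it is this image (rather than the raw spectrum) that a vision backbone such as the Swin Transformer then consumes.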
{"title":"DCDNet: A deep neural network for dead chicken detection in layer farms","authors":"Dihua Wu, Yibin Ying, Mingchuan Zhou, Jinming Pan, Di Cui","doi":"10.1016/j.compag.2025.110492","DOIUrl":"10.1016/j.compag.2025.110492","url":null,"abstract":"<div><div>Detecting deceased layers is vital in farm inspections. Manual inspections are inefficient and pose bio-security risks. Deep learning excels on large datasets, yet accurate dead chicken detection is challenging due to data scarcity, imbalance, visual similarity, and irregular morphology. To achieve desirable performance in distinguishing dead and normal chickens, a novel deep neural network, DCDNet, was proposed in this study. The pipeline consisted of the following three modules: the Poisson fusion-based data augmentation (PFDA) module seamlessly integrated the area of the deceased layer into a new background, generating more realistic images that alleviate sample scarcity; the designed DCDNet was utilized to accurately identify dead and normal layers by extracting and fusing features more efficiently, thus better suiting their irregular body shapes; and the non-monotonic dynamic focusing (NDF) sliding weight loss function was proposed to flexibly enhance the contribution of difficult samples during model training, reducing bias caused by unbalanced data. Extensive experiments were conducted on our dead-chicken dataset constructed on a commercial farm. The results revealed that the proposed method achieved a mean average precision (mAP) of 97.5%, outperforming the state-of-the-art methods reported thus far. Moreover, the average precision (AP) difference between dead and normal chickens is only 0.1%. The proposed dead chicken detection approach, based on DCDNet, was effective in dealing with sample scarcity and dataset imbalance. 
This may provide some reference for other researchers on other similar tasks.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110492"},"PeriodicalIF":7.7,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient feature selection and fusion for real-time beef cattle detection","authors":"Yang Sun , Yamin Han , Xilong Feng , Hongming Zhang , Jie Wu , Yu Zhang , PeiJie Qin , Taoping Zhang","doi":"10.1016/j.compag.2025.110510","DOIUrl":"10.1016/j.compag.2025.110510","url":null,"abstract":"<div><div>Real-time and accurate beef cattle detection is essential for effective livestock management. Traditional manual observation methods are labor-intensive and inefficient. Recent studies have shown that deep learning has significantly improved beef cattle detection accuracy. However, achieving robust beef cattle detection remains challenging due to single farming scenarios, occlusions, and dense cattle groups. As an effective solution, this paper proposes a novel method for efficient feature selection and fusion for real-time beef cattle detection (EFSF-RBCD). Specifically, we begin by developing a feature extraction network based on multipath cooperative and poly kernel inception (MPCPKI), which is designed to optimize the feature extraction capabilities. The network includes an efficient P4 feature-layer selection module based on the multipath cooperative gating mechanism (EP4MCGM), which integrates low-level features from shallow layers and enhances fine detail detection. Additionally, the P5 feature layer selection module, based on the cross-stage partial poly kernel inception network (CSPPKINetP5), enables efficient target feature extraction while reducing the computational load. Furthermore, we propose a frequency-domain context feature fusion network (FDCFN), a novel framework that integrates the frequency-domain branch (FDB) and context feature fusion branch (CFFB) to capture local and global contextual information better. 
Additionally, to enhance detection accuracy, a novel bounding box regression loss function, SIoU, was introduced, which improves bounding box position and size estimation by incorporating orientation information between the ground truth and predicted boxes. Experimental results show that EFSF-RBCD achieves an mAP@0.5 of 90.3% and an mAP@0.5–0.95 of 59.6%, with 26.4M parameters, a computational cost of 50.8 GFLOPs, and a processing speed of 100.3 FPS. The proposed method outperforms existing state-of-the-art methods in terms of mAP@0.5 and mAP@0.5–0.95 while maintaining a low parameter count and computational load. Additionally, it demonstrated competitive performance in terms of FPS. This study provides a new approach for beef cattle detection in complex environments and lays a theoretical foundation for the development of technologies related to smart-farm deployment.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110510"},"PeriodicalIF":7.7,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
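The SIoU loss mentioned in the record above extends plain IoU regression with angle, distance, and shape penalties; those extra terms are involved, but the base overlap term they all start from can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
    SIoU-style losses start from this overlap term and add angle, distance,
    and shape penalties between predicted and ground-truth boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

The mAP@0.5 and mAP@0.5–0.95 metrics reported above are averages of precision over exactly these IoU thresholds (0.5, or 0.5 through 0.95 in steps of 0.05).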
{"title":"Integration and evaluation of a low-cost intelligent system and its parameters for monitoring three-dimensional features of broiler chickens","authors":"Ehsan Asali , Guoming Li , Chongxiao Chen , Oluyinka A Olukosi , Iyabo Oluseyifunmi , Nicolas Mejia Abaunza , Tongshuai Liu , Mahtab Saeidifar , Venkat Umesh Chandra Bodempudi , Aravind Mandiga , Sai Akshitha Reddy Kota , Ahmad Banakar","doi":"10.1016/j.compag.2025.110553","DOIUrl":"10.1016/j.compag.2025.110553","url":null,"abstract":"<div><div>Effective monitoring systems are crucial for improving poultry management and welfare. However, despite the enhanced analytics provided by 3D systems over 2D, affordable options remain limited due to unresolved design and algorithm challenges. The objective of this study was to develop a low-cost intelligent system to monitor the 3D features of poultry. The system consisted of data storage, a mini-computer, and electronics housed in a plastic box, with an RGB-D camera externally connected via USB for flexible installation. Python scripts on a Linux-based Robot Operating System (ROS Noetic) were developed to automatically capture 3D data, transfer it to storage, and notify a manager when storage is full. Various 3D cameras, installation heights (2.25, 2.50, 2.75, and 3.00 m), image resolutions, and data compression settings were tested using a robotic vehicle in a 1.2 m × 3.0 m pen to simulate broiler movement in controlled environments. Optimal configurations, based on the quality of 3D point clouds, were tested in several broiler trials including one containing 1,776 Cobb 500 male broiler chickens. Results showed that the integrated L515 camera provided clearer features and superior 3D point cloud quality at 2.25 m, capturing an average of 1641 points per frame. Additionally, data compression reduced RGB frame storage by 75%, enabling efficient long-term storage without compromising data quality. 
During broiler house testing with 1,776 Cobb 500 male broilers, the system demonstrated stable and reliable operation, recording 1.65 TB of data daily at 15 FPS with a 20 TB hard drive, allowing for 12 consecutive days of uninterrupted monitoring. Among object detection models tested, YOLOv8m (a medium-sized version of the YOLO version 8 model) outperformed other models by achieving a precision of 89.2% and an accuracy of 84.8%. Depth-enhanced modalities significantly improved detection and tracking performance, especially under challenging conditions. YOLOv8m achieved 88.2% detection accuracy in darkness compared to 0% with RGB-only data, highlighting the advantage of integrating depth information in low-light environments. Further evaluations showed that incorporating depth modalities also improved object detection in extreme lighting scenarios, such as overexposure and noisy color channels, enhancing the system's robustness to environmental variations. These results demonstrated that the system was well-suited for accurately capturing 3D data across diverse conditions, providing reliable detection, tracking, and trajectory extraction. The system effectively extracted 3D walking trajectories of individual chickens, enabling detailed behavioral analysis to monitor health and welfare indicators. The system, costing approximately $1,221, integrates cost-effective hardware with a scalable software architecture, enabling precision monitoring in large-scale operations. 
By reducing storage costs to $28 per day and compressing data without losing ","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110553"},"PeriodicalIF":7.7,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
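The storage figures quoted in the record above (1.65 TB recorded per day onto a 20 TB drive, for 12 consecutive days) can be sanity-checked with a one-liner:

```python
def full_days_of_storage(drive_tb: float, daily_tb: float) -> int:
    """Number of complete recording days a drive of drive_tb terabytes
    can hold at daily_tb terabytes per day."""
    return int(drive_tb / daily_tb)

# 20 / 1.65 is about 12.1, so 12 full days, matching the reported figure.
print(full_days_of_storage(20, 1.65))
```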
{"title":"Determination of flowering stage based on artificial intelligence and the daily weight of bee hives","authors":"Andrés Gersnoviez , Francisco J. Rodriguez-Lozano , María Brox , José Moreno-Carbonell , Manuel Ortiz-Lopez , José M. Flores","doi":"10.1016/j.compag.2025.110508","DOIUrl":"10.1016/j.compag.2025.110508","url":null,"abstract":"<div><div>Honey bees play a vital role in pollination and are essential to the balance of terrestrial ecosystems and the productivity of important crops. The success of honey bee hives and beekeeping depends on the flowering period, and good hive management during this period is essential for beekeepers. The use of new technologies can greatly benefit this farming activity. Based on a monitoring system installed on several hives in the south of Spain, this work studies the recorded data to determine whether they relate to the flowering stage of the hives. The study finds that the evolution of hive weight throughout the day is crucial for determining the flowering stage. By testing the behavior of several machine learning algorithms, a highly efficient classifier is obtained, capable of determining which stage of flowering the hives are in. It not only determines whether the hives are before, during, or after flowering, but also distinguishes between the initial and final stages of flowering. 
This is important because it can enable beekeepers to effectively plan apiary visits, hive maintenance work and honey harvesting, making beekeeping more profitable.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110508"},"PeriodicalIF":7.7,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144134185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
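The abstract above reports that intra-day hive-weight evolution drives the classifier but does not give its features. A toy illustration of the kind of weight-based signal involved follows; the function names and thresholds are invented for illustration only:

```python
def daily_net_gain(weights):
    """Net hive-weight change over one day from a sequence of readings (kg).
    Sustained positive daily gain (nectar inflow) is the kind of signal a
    flowering-stage classifier could build on."""
    return weights[-1] - weights[0]

def label_day(gain, t_on=0.5, t_off=-0.1):
    """Toy rule: strong gain suggests active flowering. The thresholds here
    are illustrative, not values from the paper."""
    if gain > t_on:
        return "flowering"
    return "non-flowering" if gain < t_off else "uncertain"
```

A real system would of course feed many such daily profiles into the machine learning algorithms the paper compares, rather than a single threshold.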