Title: A novel framework for dynamic and quantitative mapping of damage severity due to compound drought–heatwave impacts on tea plantations, integrating Sentinel-2 and UAV images
Authors: Ran Huang, Yuanjun Xiao, Shengcheng Li, Jianing Li, Wei Weng, Qi Shao, Jingcheng Zhang, Yao Zhang, Lingbo Yang, Chao Huang, Weiwei Sun, Weiwei Liu, Hongwei Jin, Jingfeng Huang
DOI: 10.1016/j.compag.2024.109688
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109688 (published 2024-11-28)
Abstract: In 2022, China experienced a historically rare compound drought–heatwave (CDH) event, which had more severe impacts on vegetation than individual extreme events. However, quantitatively mapping the damage severity of CDH on tea trees using satellite data remains a significant challenge. Here we propose a novel framework for dynamic and quantitative mapping of tea-tree damage severity caused by the 2022 CDH using Sentinel-2 and Unmanned Aerial Vehicle (UAV) data. Extreme Gradient Boosting (XGBoost) was selected as the optimal machine learning algorithm for extracting tea plantations from Sentinel-2 data, outperforming Random Forest (RF), logistic regression (LR), and Naive Bayes; the User's Accuracy and Producer's Accuracy of the extracted tea plantations were 92.20% and 93.51%, respectively. UAV images with 2.5 cm spatial resolution were used to detect tea trees damaged by the 2022 CDH. A new index, the CDH damage severity index (CDH_DSI), was proposed to quantitatively evaluate the damage severity of CDH on tea trees at the pixel level, with a spatial resolution of 10 m × 10 m. Based on the tea-plantation and damaged-tree detection results, the UAV-derived CDH_DSI was calculated and used as ground truth. XGBoost was then selected as the optimal CDH_DSI prediction model over RF and LR, with Sentinel-2-derived vegetation indices and spectral reflectance as predictors; the coefficient of determination was 0.81 and the root mean squared error was 7.61%. Finally, dynamic and quantitative CDH_DSI maps were generated with the optimal prediction model. The results show that 50% of tea plantations in Wuyi were damaged by the prolonged 2022 CDH event. This damage can be attributed to precipitation deficits and heatwaves. Given that more severe CDH events are projected for the future, quantifying their impacts can provide decision-making support for disaster mitigation and prevention.
Title: Unmanned aerial system and machine learning driven Digital-Twin framework for in-season cotton growth forecasting
Authors: Pankaj Pal, Juan Landivar-Bowles, Jose Landivar-Scott, Nick Duffield, Kevin Nowka, Jinha Jung, Anjin Chang, Kiju Lee, Lei Zhao, Mahendra Bhandari
DOI: 10.1016/j.compag.2024.109589
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109589 (published 2024-11-28)
Abstract: In the past decade, Unmanned Aerial Systems (UAS) have made a significant impact on various sectors, including precision agriculture, by enabling remote monitoring of crop growth and development. Monitoring and managing crops effectively throughout the growing season is crucial for optimizing yield. The integration of UAS-monitored data and machine learning has greatly advanced crop production management, improving key areas such as irrigation scheduling, crop termination analysis, and yield prediction. This study presents the development of a Digital Twin (DT) for cotton crops using UAS-captured RGB data. The primary objective of the DT is to forecast cotton crop features during the growing season, including Canopy Cover (CC), Canopy Height (CH), Canopy Volume (CV), and Excess Greenness (EXG). The predictive-analytics component of the DT employs machine learning regression to extract crop-feature growth patterns from UAS data collected from 2020 to 2023. During the current season, real-time UAS data and historical growth patterns are combined to generate forecasts using a novel hybrid model-generation strategy. Comparison of the DT-based forecasts with actual data demonstrated low RMSE for CC, CH, CV, and EXG. The proposed DT framework, which accurately forecasts cotton crop features up to 30 days into the future starting 80 days after sowing, was found to outperform existing forecasting methods; the RRMSE for CC, CH, CV, and EXG was 9, 13, 14, and 18 percent, respectively. Furthermore, the potential applications of the forecasted data in biomass estimation and yield prediction are highlighted, emphasizing their significance in optimizing agricultural practices.
Title: Enhancing pollinator conservation: Monitoring of bees through object recognition
Authors: Ajay John Alex, Chloe M. Barnes, Pedro Machado, Isibor Ihianle, Gábor Markó, Martin Bencsik, Jordan J. Bird
DOI: 10.1016/j.compag.2024.109665
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109665 (published 2024-11-28)
Abstract: In an era of rapid climate change and its adverse effects on food production, technological intervention to monitor pollinator conservation is of paramount importance for environmental monitoring and global food security. The survival of the human species depends on the conservation of pollinators. This article explores the use of computer vision and object recognition to autonomously track and report bee behaviour from images. A novel dataset of 9664 images containing bees was extracted from video streams and annotated with bounding boxes. With training, validation, and testing sets of 6722, 1915, and 997 images, respectively, the results of fine-tuning COCO-pretrained YOLO models show that YOLOv5m is the most effective approach in terms of recognition accuracy. However, YOLOv5s proved most suitable for real-time bee detection, with an average processing and inference time of 5.1 ms per video frame at the cost of slightly lower accuracy. The trained model is packaged within an explainable AI interface that converts detection events into timestamped reports and charts, with the aim of facilitating use by non-technical users such as expert stakeholders from the apiculture industry, towards informing responsible consumption and production.
Title: High-throughput 3D shape completion of potato tubers on a harvester
Authors: Pieter M. Blok, Federico Magistri, Cyrill Stachniss, Haozhou Wang, James Burridge, Wei Guo
DOI: 10.1016/j.compag.2024.109673
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109673 (published 2024-11-28)
Abstract: Potato yield is an important metric for farmers to further optimize their cultivation practices. Potato yield can be estimated on a harvester using an RGB-D camera that can estimate the three-dimensional (3D) volume of individual potato tubers. A challenge, however, is that the 3D shape derived from RGB-D images is only partially complete, underestimating the actual volume. To address this issue, we developed a 3D shape completion network, called CoRe++, which can complete the 3D shape from RGB-D images. CoRe++ is a deep learning network that consists of a convolutional encoder and a decoder. The encoder compresses RGB-D images into latent vectors that are used by the decoder to complete the 3D shape using the deep signed distance field network (DeepSDF). To evaluate our CoRe++ network, we collected partial and complete 3D point clouds of 339 potato tubers on an operational harvester in Japan. On the 1425 RGB-D images in the test set (representing 51 unique potato tubers), our network achieved a completion accuracy of 2.8 mm on average. For volumetric estimation, the root mean squared error (RMSE) was 22.6 ml, which was better than the RMSE of the linear regression (31.1 ml) and the base model (36.9 ml). We found that the RMSE can be further reduced to 18.2 ml when performing the 3D shape completion in the center of the RGB-D image. With an average 3D shape completion time of 10 ms per tuber, we conclude that CoRe++ is both fast and accurate enough to be implemented on an operational harvester for high-throughput potato yield estimation. CoRe++'s high-throughput and accurate processing allows it to be applied to other tuber, fruit and vegetable crops, thereby enabling versatile, accurate and real-time yield monitoring in precision agriculture. Our code, network weights and dataset are publicly available at https://github.com/UTokyo-FieldPhenomics-Lab/corepp.git.
Title: Chicken body temperature monitoring method in complex environment based on multi-source image fusion and deep learning
Authors: Pei Wang, Pengxin Wu, Chao Wang, Xiaofeng Huang, Lihong Wang, Chengsong Li, Qi Niu, Hui Li
DOI: 10.1016/j.compag.2024.109689
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109689 (published 2024-11-28)
Abstract: Severe diseases in chickens present substantial risks to the poultry husbandry industry. Notably, alterations in body temperature serve as critical clinical indicators of these diseases. Consequently, timely and accurate monitoring of body temperature is essential for the early detection of severe health issues in chickens. This study presents a novel method for simultaneous body temperature detection of multiple chickens in caged poultry environments. A dataset of 2896 chicken head images was developed. The YOLOv8n-mvc model was created to accurately detect chicken head positions and to extract temperature and distance information through the fusion of RGB, thermal infrared, and depth images. The chicken head temperature was calibrated using the distance information. The YOLOv8n-mvc model established in this study achieved a precision of 91.6%, recall of 92.5%, F1 score of 92.0%, and mAP@0.5 of 96.0%. The model was successfully deployed on an edge computing device for validation tests, demonstrating its feasibility for chicken body temperature detection. This study provides a reference for developing a chicken health monitoring system based on body temperature.
Title: Location of safflower filaments picking points in complex environment based on improved Yolov5 algorithm
Authors: Xiaorong Wang, Jianping Zhou, Yan Xu, Chao Cui, Zihe Liu, Jinrong Chen
DOI: 10.1016/j.compag.2024.109463
Journal: Computers and Electronics in Agriculture, Volume 227, Article 109463 (published 2024-11-27)
Abstract: Mechanized safflower harvesting is prone to inaccurate recognition and positioning of safflower filaments, influenced by complex environmental factors such as occlusion and lighting, as well as challenges related to small targets and small samples. To solve this problem, we improved the Yolov5 algorithm and developed a two-stage recognition and positioning approach named Yolov5-ABBM. A safflower dataset was established to classify safflower filaments by maturity level. The Swin Transformer attention mechanism was incorporated to improve the feature-extraction capability of the model, particularly for small samples and small targets. A geometric operation algorithm based on Bbox and Mask (ABBM) was developed to enhance positioning speed and minimize missed recognition when locating safflower-filament picking points. Experimental results show that the improved model achieved a recognition precision improvement of 5.8% and 7.9% based on Bbox and Mask, respectively, and exhibited a significant enhancement of 15.3% and 19.4% for small samples. The positioning precision reached 98.19%, with an average positioning running time of 0.018 s per frame. The improved model demonstrated superior accuracy and positioning speed compared with other algorithm models. The results show that the improved model can accurately identify and locate safflower-filament picking points, particularly for small samples, thereby offering technical support for efficient mechanized safflower harvesting.
{"title":"Transformer-Based hyperspectral image analysis for phenotyping drought tolerance in blueberries","authors":"Md. Hasibur Rahman , Savannah Busby , Sushan Ru , Sajid Hanif , Alvaro Sanz-Saez , Jingyi Zheng , Tanzeel U. Rehman","doi":"10.1016/j.compag.2024.109684","DOIUrl":"10.1016/j.compag.2024.109684","url":null,"abstract":"<div><div>Drought-induced stress significantly impacted blueberry production due to the plants’ inefficient water regulation mechanisms to maintain yield and fruit quality under drought stress. Traditional methods of manual phenotyping for drought stress are not only time-consuming but also labor-intensive. To address the need for accurate and large-scale assessment of drought tolerance, we developed a high-throughput phenotyping (HTP) system to capture hyperspectral images of blueberry plants under drought conditions. A novel transformer-based model, LWC-former was introduced to predict leaf water content (LWC) utilizing spectral reflectance from hyperspectral images obtained from the developed HTP system. The LWC-former transformed the spectral reflectance into patch representations and embedded these patches into a lower dimensional to address multicollinearity issues. These patches were then passed to the transformer encoder to learn distributed features, followed by a regression head to predict LWC. To train the model, spectral reflectance data were extracted from hyperspectral images and pre-processed using log(1/R), mean scatter correction (MSC), and mean centering (MC). The results showed that our model achieved a coefficient of determination (R<sup>2</sup>) of 0.81 on the test dataset. The performance of the proposed model was also compared with TabTransformer, DeepRWC, multilayer perceptron (MLP), partial least squares regression (PLSR), support vector regression (SVR), and random forest (RF), achieving R<sup>2</sup> values of 0.65, 0.73, 0.71, 0.47, and 0.58, respectively. The results demonstrated that LWC-former outperformed other deep learning and statistical-based models. The high-throughput phenotyping system effectively facilitated large-scale data collection, while the LWC-former model addressed multicollinearity issues, significantly improving the prediction of LWC. These results demonstrate the potential of our approach for large-scale drought tolerance assessment in blueberries.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"228 ","pages":"Article 109684"},"PeriodicalIF":7.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: An advanced high resolution land use/land cover dataset for Iran (ILULC-2022) by focusing on agricultural areas based on remote sensing data
Authors: Neamat Karimi, Sara Sheshangosht, Maryam Rashtbari, Omid Torabi, Amirhossein Sarbazvatan, Masoumeh Lari, Hossein Aminzadeh, Sina Abolhoseini, Mortaza Eftekhari
DOI: 10.1016/j.compag.2024.109677
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109677 (published 2024-11-27)
Abstract: This study presents the first high-resolution land use/land cover dataset for Iran in 2022 (ILULC-2022), with a particular emphasis on agricultural areas. The research employed a two-level Decision Tree Object-Oriented Image Analysis (OBIA-DT) model, combining segmentation of the study area based on Google Earth images with classification using multi-temporal information derived from Sentinel-2 satellite imagery. After segmentation of the fine-resolution images, the first level of the OBIA-DT model was established from the collected field datasets (about 52,000 field records) to build a light LULC map that broadly identified agricultural land without differentiating between irrigated and non-irrigated cultivation. The second level used multi-temporal indices derived from Sentinel-2 imagery and supplementary data layers to produce a complete LULC map in which cropland was further distinguished into irrigated and rainfed lands, with four distinctive sub-classifications for irrigated lands. With this approach, the LULC of all basins of Iran was classified into sixteen distinct classes, with agricultural lands divided into two rainfed cropland classes (rainfed farming and agroforestry) and five irrigated classes (orchards, fall crops, spring crops, multiple crops, and fallow crops). According to the collected field data, the overall accuracy of the ILULC-2022 maps ranged from 85% to 97% across basins with climates ranging from cold and temperate to hot and dry. The results reveal that the major irrigated crop classes had user's and producer's accuracies ranging from 91% to 96%. Based on the findings of this study, the total agricultural area of Iran encompasses 20.9 ± 2.1 million ha, constituting approximately 13% of the country's total land area. Within this agricultural expanse, irrigated lands (comprising irrigated croplands and orchards) and rainfed agricultural lands cover 10.2 ± 1.08 and 10.7 ± 1.02 million ha, respectively, with most agricultural areas located in basins with moderate to humid climates. The ILULC-2022 dataset serves as a benchmark for future LULC change detection and is a valuable reference for efforts aimed at achieving sustainable development goals in Iran.
Title: Modelling methane production of dairy cows: A hierarchical Bayesian stochastic approach
Authors: Cécile M. Levrault, Nico W.M. Ogink, Jan Dijkstra, Peter W.G. Groot Koerkamp, Kelly Nichols, Fred A. van Eeuwijk, Carel F.W. Peeters
DOI: 10.1016/j.compag.2024.109683
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109683 (published 2024-11-26)
Abstract: Monitoring methane production from individual cows is required for evaluating the success of greenhouse gas reduction strategies. However, converting non-continuous measurements of methane production into daily methane production rates (MPR) remains challenging due to the general non-linearity of the methane production curve. In this paper, we propose a Bayesian hierarchical stochastic kinetic equation approach to address this challenge, enabling the sharing of information across cows for improved modelling. We fit a non-linear curve on climate respiration chamber (CRC) data of 28 dairy cows before computing an area under the curve, thereby providing an estimate of MPR for individual cows, yielding a monitored and predicted population mean of 416.7 ± 36.2 g/d and 407.2 ± 35.0 g/d, respectively. The shape parameters of this model were pooled across cows (population level), while the scale parameter varied between individuals. This allowed for the characterization of variation in MPR within and between cows. Model fit was thoroughly investigated through posterior predictive checking, which showed that the model could reproduce the CRC data accurately. Comparison with a fully pooled model (all parameters constant across cows) was evaluated through cross-validation, where the Hierarchical Methane Rate (HMR) model performed better (difference in expected log predictive density of 1653). Concordance between the values observed in the CRC and those predicted by HMR was assessed with R² (0.995), root mean square error (10.0 g/d), and Lin's concordance correlation coefficient (0.961). Overall, the predictions made by the HMR model appeared to reflect individual MPR levels and variation between cows as well as the standard analytical approach taken by scientists with CRC data.
Title: Development of individual models for predicting cow milk production for real-time monitoring
Authors: Jae-Woo Song, Mingyung Lee, Hyunjin Cho, Dae-Hyun Lee, Seongwon Seo, Wang-Hee Lee
DOI: 10.1016/j.compag.2024.109698
Journal: Computers and Electronics in Agriculture, Volume 228, Article 109698 (published 2024-11-26)
Abstract: Daily milk yield serves as a physiological indicator in dairy cows and is a primary target for prediction and real-time monitoring in smart livestock farming. This study aimed to develop an individual model for predicting daily milk yield and to apply it, through a real-time monitoring algorithm, to monitoring the health status of dairy cows. A total of 580 datasets were used for model development after data preprocessing and screening; the model was developed by modifying existing models based on nonlinear regression analysis. The developed model was then applied to short-term real-time monitoring of abnormal daily milk yields. The optimal model was able to predict daily milk yield with an R² of 0.875 and a root mean squared error of 2.192. Real-time monitoring was designed to detect abnormal daily milk yields by jointly considering a 90% confidence interval and the difference between predicted values and expected trends. This study is the first to design a monitoring algorithm for the daily milk yield of dairy cows based on an individual model capable of predicting daily milk yield. Such a platform is expected to be necessary for highly efficient smart livestock farming, enabling high productivity with minimal inputs.