Computers and Electronics in Agriculture: Latest Articles

DenseDFFNet: Dense connected dual-stream feature fusion network for calf manure segmentation and diarrhea recognition
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-25 DOI: 10.1016/j.compag.2025.110328
Liuru Pu , Yongjie Zhao , Haoyu Kang , Xiangfeng Kong , Xiaopeng Du , Huaibo Song
Abstract: Neonatal calf diarrhea is a globally prevalent disease, accounting for 57% of pre-weaning calf mortality. Early detection and intervention are critical for reducing morbidity and mortality while improving breeding efficiency. In intensive farming environments it is difficult for staff to identify diarrhea symptoms in calves in a timely and effective manner, and automated recognition methods for calf diarrhea remain underdeveloped. To address this issue, a non-contact calf diarrhea recognition method based on DenseDFFNet was developed. Employing the multi-modal segmentation model Grounded-Segment-Anything (G-SAM) for manure segmentation significantly reduced the difficulty of data annotation and achieved a fecal segmentation accuracy of 96.45% in complex backgrounds. After segmentation, to mitigate abrupt pixel-value transitions at image boundaries, a Parallel Convolutional Squeeze-and-Excitation (ParallelConvSE) module was designed, integrating local and global features through parallel standard convolution and Squeeze-and-Excitation (SE) attention and enhancing the overall performance and generalization capability of the model. For diarrhea classification, the DenseDFFNet module was introduced, achieving a test accuracy of 95.87% on fecal classification. When validated on video data, recognition accuracies for diarrhea and normal states reached 93.92% and 91.21%, respectively. Additionally, a self-propelled data collection system was developed to enable efficient diarrhea recognition in complex commercial farming scenarios, offering a novel solution for calf health monitoring and early diagnosis. With its non-contact, efficient, and objective characteristics, the proposed method significantly reduces labor intensity and provides a robust technical solution for calf diarrhea recognition. (Volume 234, Article 110328)
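The ParallelConvSE module pairs a standard convolution branch with Squeeze-and-Excitation (SE) attention. As a rough illustration of the SE half of that idea only, here is a minimal NumPy sketch of channel reweighting; the shapes, weights, and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def se_attention(feature_map, w1, w2):
    """SE-style channel reweighting. feature_map: (C, H, W);
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights."""
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale: reweight each channel by its gate
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))   # (channels, height, width)
w1 = rng.normal(size=(2, 8))     # reduction 8 -> 2
w2 = rng.normal(size=(8, 2))     # expansion 2 -> 8
y = se_attention(x, w1, w2)
```

Because every gate lies in (0, 1), the output keeps the input's shape while attenuating less informative channels.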
Citations: 0
Star-YOLO: A lightweight and efficient model for weed detection in cotton fields using advanced YOLOv8 improvements
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-25 DOI: 10.1016/j.compag.2025.110306
Zheng Lu , Zhu Chengao , Liu Lu , Yang Yan , Wang Jun , Xia Wei , Xu Ke , Tie Jun
Abstract: Effective weed management in cotton fields is crucial for preventing crop loss and maintaining agricultural productivity. However, the complexity and high computational demands of deep-learning models pose challenges for deployment on resource-constrained devices. This study therefore proposes a lightweight deep-learning model based on an improved YOLOv8 architecture. First, the backbone and C2f modules are restructured using Star Blocks, along with a purpose-designed lightweight shared convolutional separable BN detection head, effectively reducing the model's parameters and computational overhead. To better capture global weed information, the LSK attention mechanism expands the receptive field, enhancing detection performance. Additionally, a dynamic upsampling technique, DySample, replaces conventional upsampling operators, further improving detection speed. Compared with YOLOv8, the proposed model reduces parameters, computation, and model size by 50.0%, 39.0%, and 47.0%, respectively, while achieving mAP@50 and mAP@50-95 scores of 98.0% and 95.4%. The model also strikes the best balance among accuracy, lightweight design, and detection speed compared with mainstream lightweight backbone networks and architectures, demonstrating superior performance on the public datasets CottonWeedDet12 and CottonWeedDet3. With TensorRT integration, detection speed increases ninefold, a significant step toward an efficient weed-detection system for real-world agricultural applications. Integrated into automated weeding equipment, the model enables fully automated weed detection and weeding operations, enhancing the efficiency and precision of agricultural tasks. (Volume 235, Article 110306)
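The reported mAP@50 counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that underlying IoU computation, with illustrative coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted weed box against a ground-truth box (made-up numbers)
score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175 ≈ 0.143
```

At IoU ≈ 0.143 this prediction would be a miss at the 0.5 threshold; mAP@50-95 repeats the evaluation at thresholds from 0.5 to 0.95 and averages.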
Citations: 0
Biomass phenotyping of oilseed rape through UAV multi-view oblique imaging with 3DGS and SAM model
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-25 DOI: 10.1016/j.compag.2025.110320
Yutao Shen , Hongyu Zhou , Xin Yang , Xuqi Lu , Ziyue Guo , Lixi Jiang , Yong He , Haiyan Cen
Abstract: Biomass estimation of oilseed rape is crucial for optimizing crop productivity and breeding strategies. While UAV-based imaging has advanced high-throughput phenotyping, current methods often rely on orthophoto images, which struggle with overlapping leaves and incomplete structural information in complex field environments. This study integrates 3D Gaussian Splatting (3DGS) with the Segment Anything Model (SAM) for precise 3D reconstruction and biomass estimation of oilseed rape. UAV multi-view oblique images from 36 angles were used for 3D reconstruction, with the SAM module enhancing point-cloud segmentation. The segmented point clouds were converted into point-cloud volumes, which were fitted to ground-measured biomass using linear regression. The results showed that 3DGS (7 k and 30 k iterations) provided high accuracy, with peak signal-to-noise ratios (PSNR) of 27.43 and 29.53 and training times of 7 and 49 min, respectively, exceeding structure from motion (SfM) and mipmap Neural Radiance Fields (Mip-NeRF) in efficiency. The SAM module achieved high segmentation accuracy, with a mean intersection over union (mIoU) of 0.961 and an F1-score of 0.980. A comparison of biomass extraction models found the point-cloud volume model to be the most accurate, with a coefficient of determination (R²) of 0.976, root mean square error (RMSE) of 2.92 g/plant, and mean absolute percentage error (MAPE) of 6.81%, outperforming both the plot crop volume and individual crop volume models. This study highlights the potential of combining 3DGS with multi-view UAV imaging for improved biomass phenotyping. (Volume 235, Article 110320)
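The volume-to-biomass fit above is reported through R², RMSE, and MAPE. A small NumPy sketch of fitting a line and computing those three scores, using synthetic numbers rather than the paper's data:

```python
import numpy as np

def fit_and_score(volume, biomass):
    """Least-squares line biomass ≈ a*volume + b, scored by R², RMSE, MAPE."""
    a, b = np.polyfit(volume, biomass, 1)
    pred = a * volume + b
    resid = biomass - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((biomass - biomass.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    mape = 100.0 * np.mean(np.abs(resid / biomass))
    return r2, rmse, mape

# Synthetic plants: point-cloud volume vs. biomass in g/plant (illustrative)
vol = np.array([100.0, 150.0, 200.0, 260.0, 310.0, 400.0])
mass = np.array([21.0, 30.0, 41.0, 50.0, 63.0, 79.0])
r2, rmse, mape = fit_and_score(vol, mass)
```

The same three scores appear throughout this issue's biomass papers, so the sketch doubles as a reading aid for those entries.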
Citations: 0
Aggressive behavior recognition and welfare monitoring in yellow-feathered broilers using FCTR and wearable identity tags
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-25 DOI: 10.1016/j.compag.2025.110284
Hongcheng Xue , Jie Ma , Yakun Yang , Hao Qu , Longhe Wang , Lin Li
Abstract: Aggressive behavior and individual identification in chickens have long attracted attention in animal-welfare farming and scientific genetic breeding. Existing methods rely predominantly on manual observation, which is limited by subjectivity and slow response times, and matching chicken identity with behavior requires considerable human and material resources. To address these challenges, we propose FCTR, a Fast Chicken aggressive behavior recognition model based on a TRansformer, and introduce a wearable identity tag for chickens. FCTR demonstrates robust recognition performance on the free-range Yellow-Feathered Broiler dataset and establishes an identity-matching verification method, refining behavioral quantification at the individual level for precision farming. To evaluate this approach, the ChickenFight-2024 dataset was collected and constructed. Multiple experiments confirm that the method can effectively identify both chicken identities and aggressive behaviors from video surveillance images. The proposed model achieved mAP values of 89.81%, 85.76%, 90.14%, 93.19%, and 87.27% for fight, tread, peck, eat, and drink behaviors, respectively, with an mAP of 77.39% for identity information. The identity-matching verification method achieved a 94.88% matching rate, highlighting the model's significant potential in commercial farming scenarios and offering new insights and solutions for efficient genetic breeding. (Volume 235, Article 110284)
Citations: 0
A point-supervised algorithm with multiscale semantic enhancement for counting multiple crop plants from aerial imagery
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110289
Huibin Li , Huaiyang Liu , Wenbo Wang , Haozhou Wang , Qiangyi Yu , Jianping Qian , Wenbin Wu , Yun Shi , Changxing Geng
Abstract: Counting crop plants is important for agricultural activities such as crop breeding and yield prediction. Numerous studies have developed methods for counting individual crop plants or plants with similar morphological characteristics, but these methods often suffer low accuracy and poor generalization when counting multiple crop species with significant scale variations in complex backgrounds. We therefore propose MCPCNet, a point-supervised algorithm that enhances multiscale semantics for counting multiple crop plants from aerial imagery, and construct the first dataset for multicategory crop plant counting (MCPC-Dataset). We developed a concurrent spatial group enhancement module and a residual dynamic dilated convolution module, and introduced the contextual transformer module with a self-attention mechanism; these modules respectively reduce the impact of background, adapt to scale variations across crops, and enhance the robustness of the algorithm. Experimental results on the MCPC-Dataset show that MCPCNet achieves excellent performance, with a mean absolute error (MAE) of 2.577, a mean square error (MSE) of 14.289, and a coefficient of determination (R²) of 0.991, giving it a clear advantage over state-of-the-art (SOTA) point-supervised counting algorithms. In conclusion, MCPCNet provides a robust solution for high-precision counting of multiple crop plants and a vital reference for future research. (Volume 234, Article 110289)
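The residual dynamic dilated convolution module relies on dilation to widen the receptive field across plant scales. A 1-D, fixed-weight NumPy sketch of plain dilated convolution (not the authors' module) shows the mechanism:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D valid convolution with `dilation - 1` skipped samples between
    kernel taps, widening the receptive field without extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field in samples
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : dilation]  # every dilation-th sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(10, dtype=float)
# dilation=2: each output mixes samples 2 positions apart (span of 5)
y = dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)
```

With dilation 2 a 3-tap kernel covers 5 samples instead of 3; stacking increasing dilations is the usual way such modules match large and small plants alike.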
Citations: 0
Advancing biomass estimation in hydroponic lettuce using RGB-depth imaging and morphometric descriptors with machine learning
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110299
Jonathan S. Cardenas-Gallegos , Lorena Nunes Lacerda , Paul M. Severns , Alicia Peduzzi , Pavel Klimeš , Rhuanito Soranz Ferrarezi
Abstract: By capturing the intricate structural and spectral variations of the plant canopy, dynamic parameters such as biomass can be modeled and predicted with greater precision. This approach preserves the plants for continuous monitoring and provides a scalable, efficient alternative to traditional destructive techniques. The objective of this study was to examine the potential of image-derived color and geometric plant features for accurately predicting three biomass accumulation parameters (leaf fresh weight, leaf dry weight, and leaf area) for single-plant monitoring. Top-view images of hydroponic 'Chicarita' romaine lettuce (Lactuca sativa) captured with a color-and-depth sensor served as input to a multi-plant image-processing workflow that extracted plant height, canopy morphometrics, and color traits at the individual-plant level. Two destructive harvest rounds across two crop cycles provided the observed values for each biomass response. The image-derived traits were used as predictors for a simple linear regression baseline and for two supervised machine-learning models (random forest and least absolute shrinkage and selection operator, or LASSO, regression). Using the extracted depth information, per-plant vertical height was estimated with a mean absolute error of 1.51 cm. Random forest regression yielded the most accurate predictions in the first harvest round for all three biomass parameters, with R² values of 0.74, 0.80, and 0.67 and mean absolute percentage errors (MAPE) of 11.77%, 10.16%, and 12.50%. LASSO regression outperformed the other models in the second harvest round, with R² values of 0.72, 0.65, and 0.79 and MAPE of 7.79%, 7.76%, and 7.06% for leaf fresh weight, leaf dry weight, and leaf area, respectively. These results suggest that a selection of canopy descriptors may improve non-destructive biomass estimation along a lettuce crop cycle, enabling remote monitoring and real-time harvest projections. (Volume 234, Article 110299)
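Per-plant height from a top-view depth sensor reduces to subtracting the closest (top-of-canopy) depth from the camera-to-bench distance. A hedged sketch with made-up numbers; the paper's actual workflow and calibration are its own:

```python
import numpy as np

def plant_height_from_depth(depth_map, plant_mask, bench_depth):
    """Height of one plant from a downward-looking depth image: the
    camera-to-bench distance minus the nearest depth inside the mask."""
    canopy_depth = depth_map[plant_mask].min()
    return bench_depth - canopy_depth

# Illustrative 4x4 depth map in cm (camera looks straight down)
depth = np.array([[80.0, 80.0, 80.0, 80.0],
                  [80.0, 66.0, 64.0, 80.0],
                  [80.0, 65.0, 67.0, 80.0],
                  [80.0, 80.0, 80.0, 80.0]])
mask = depth < 75.0               # plant pixels sit closer than the bench
h = plant_height_from_depth(depth, mask, bench_depth=80.0)
```

Here the canopy's nearest point is at 64 cm against an 80 cm bench, giving a 16 cm plant; the 1.51 cm MAE above is for the paper's full pipeline, not this toy example.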
Citations: 0
Cotton3DGaussians: Multiview 3D Gaussian Splatting for boll mapping and plant architecture analysis
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110293
Lizhi Jiang , Jin Sun , Peng W. Chee , Changying Li , Longsheng Fu
Abstract: Cotton is an economically important crop cultivated worldwide for textile production, and breeding programs focus on selecting genotypes with favorable traits for high yields. This study introduced 3D Gaussian Splatting (3DGS) to reconstruct high-fidelity three-dimensional (3D) models and developed a segmentation workflow, Cotton3DGaussians, to analyze cotton bolls and extract architectural traits from single plants. Plants were scanned 360° with a smartphone; photogrammetry estimated camera parameters and reconstructed a sparse point cloud, which was then optimized into a 3DGS model. In Cotton3DGaussians, 2D boll masks segmented from four views were mapped to 3D space, and redundant bolls were removed through cross-view clustering. YOLOv11x and a foundation model, the Segment Anything Model (SAM), were compared for producing the 2D masks, with YOLOv11x achieving an F1-score 5.9% higher than SAM. Phenotypic traits such as boll number, boll volume, plant height, and canopy size were estimated. The 3DGS model exhibited superior rendering quality, achieving a peak signal-to-noise ratio (PSNR) 6.91 higher than NeRF. Cotton3DGaussians effectively segmented 3D bolls from multiple views, with mean absolute percentage errors (MAPE) of 9.23% for boll number, 3.66% for canopy size, 2.38% for plant height, and 8.17% for boll volume against LiDAR ground truth. Regression between convex boll volume and boll weight showed a 19.3% per-plant weight error. This study demonstrates the potential of 3DGS for low-cost, high-fidelity 3D modeling, enabling high-resolution phenotyping and advancing cotton breeding programs; the methodology can also be applied to other crops for improved 3D trait measurement and enhanced productivity. (Volume 234, Article 110293)
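PSNR, used here to compare 3DGS and NeRF rendering quality, is a simple function of the mean squared error between a rendered view and a reference image. A NumPy sketch on synthetic images:

```python
import numpy as np

def psnr(rendered, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB between a rendered view and a
    reference image; higher means a more faithful reconstruction."""
    mse = np.mean((rendered - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 3))     # stand-in for a held-out photo
# A lightly corrupted "rendering" of the same view
noisy = np.clip(ref + rng.normal(scale=0.01, size=ref.shape), 0.0, 1.0)
value = psnr(noisy, ref)
```

With noise of standard deviation 0.01 on a unit-range image, PSNR lands around 40 dB; the paper's 6.91-point gap between 3DGS and NeRF is on this same logarithmic scale.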
Citations: 0
Assessing olive tree (Olea europaea L.) responses to water shortage through radio frequency sensors
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110303
Valeria Lazzoni , Claudia Cocozza , Danilo Brizi , Marco Moriondo , Cristiana Giordano , Giovanni Argenti , Angelica Masi , Nicolina Staglianò , Marco Bindi , Alberto Maltoni , Monica Anichini , Camilla Dibari , Agostino Monorchio , Riccardo Rossi
Abstract: This study presents advanced radio-frequency (RF) sensors for non-invasive, plant-structure-specific water-stress monitoring in olive trees (Olea europaea L.), focusing on the cultivars Frantoio and Leccino, which are known for their differing water-use strategies. The sensing system comprises circular and double-layer rectangular spiral RF sensors, optimized to maximize the quality factor (Q-factor) for enhanced sensitivity. The double-layer design, with one "left-handed" and one "right-handed" layer, increases the magnetic field and detection reliability, especially on small branches where signal stability is challenging. Over an 88-day experimental period, olive trees were subjected to full irrigation (FI) and deficit irrigation (DI) treatments, with RF sensors placed on trunks and branches to capture structure-specific stress responses and measurements recorded weekly. In the Frantoio cultivar, resonance-frequency shifts were pronounced under DI, especially in the trunk and large branches, where notable physiological changes were observed. Correlations between resonance-frequency data and morpho-physiological indicators such as trunk diameter increment (SDI) and fresh water content (FWC) validated the sensor's sensitivity to dielectric-property variations caused by water stress. Anatomical analyses further revealed tissue adaptations in Frantoio under DI, including increased bark and cortex thickness and intensified sclerenchyma fibre formation, indicative of structural changes supporting water transport. In contrast, Leccino showed minimal frequency variations and no significant anatomical alterations, reflecting its conservative water-use strategy and limited stress sensitivity. This research confirms RF sensors' potential as precise tools for early water-stress detection in olive trees, emphasizing sensor placement on main plant structures and sensitivity optimization. These findings support RF sensing systems in precision agriculture for sustainable irrigation management, especially in water-limited environments. (Volume 234, Article 110303)
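The resonance-frequency shifts reported for stressed tissue follow from basic resonant-circuit relations: tissue water raises the effective capacitance seen by the spiral, lowering f0 = 1/(2π√(LC)), while the Q-factor governs sensitivity. A sketch with illustrative component values, not the sensors' actual parameters:

```python
import math

def resonance_frequency(L, C):
    """Resonance of an LC sensor: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def q_factor_series(R, L, C):
    """Quality factor of a series RLC loop: Q = sqrt(L/C) / R."""
    return math.sqrt(L / C) / R

# Illustrative values: 1 uH coil, 100 pF effective capacitance, 2 ohm loss
L, C, R = 1e-6, 100e-12, 2.0
f0 = resonance_frequency(L, C)            # about 15.9 MHz
q = q_factor_series(R, L, C)              # Q = 50 for these values
# Higher tissue water -> higher permittivity -> larger effective C -> lower f0
f0_wet = resonance_frequency(L, 1.3 * C)
```

A sharper resonance (higher Q) makes the downward frequency shift under rising water content easier to resolve, which is why the paper optimizes the Q-factor.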
Citations: 0
Detection of estrous ewes’ tail-wagging behavior in group-housed environments using Temporal-Boost 3D convolution
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110283
Jinru Shi , Xinwen Chen , Yanli Zhang , Ping Gong , Yingjun Xiong , Mingxia Shen , Tomas Norton , Xingjian Gu , Mingzhou Lu
Abstract: In the early estrous stage, ewes exhibit characteristic behaviors, including frequent movement and repetitive tail-wagging. Accurately identifying tail-wagging is essential for determining whether ewes are in estrus, which is crucial for optimizing breeding timing and enhancing productivity in sheep farming. However, detecting tail-wagging in group-housed environments remains challenging, because the animals' movements make it difficult to analyze each ewe's body parts individually. This study proposes a method for detecting the tail-wagging behavior of estrous ewes in group-housed environments, consisting of three main modules: sheep skeleton keypoint detection, tail-region localization, and tail-wagging detection with a Temporal-Boost 3D convolutional network. First, YOLOv8-pose obtains the ewes' tail skeleton keypoints. Second, tolerance-expansion techniques determine the tail locations of all ewes. Finally, the Temporal-Boost 3D convolutional network extracts features from both RGB and optical-flow sequences, and dynamic weighted fusion of the two streams' softmax outputs produces the final classification. To evaluate practical effectiveness, a video containing 39 actual tail-wagging segments was selected; the proposed method detected 40 continuous tail-wagging segments, capturing all actual segments and achieving an accuracy of 97.5%. These results indicate that the method can effectively detect tail-wagging in group-housed ewes, meeting the intelligent-detection needs of sheep farms. (Volume 234, Article 110283)
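The weighted-fusion step combines the softmax outputs of the RGB and optical-flow streams. A two-class NumPy sketch with illustrative logits and a fixed weight (the paper's weighting is dynamic, which this does not reproduce):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def weighted_fusion(rgb_logits, flow_logits, w_rgb):
    """Fuse two-stream softmax outputs; the complementary weight
    (1 - w_rgb) goes to the optical-flow stream."""
    p = w_rgb * softmax(rgb_logits) + (1.0 - w_rgb) * softmax(flow_logits)
    return int(np.argmax(p)), p

# Two classes: 0 = no wagging, 1 = tail-wagging (illustrative logits)
rgb = np.array([0.2, 1.1])     # appearance stream leans toward wagging
flow = np.array([-0.5, 2.0])   # motion stream is more confident
label, probs = weighted_fusion(rgb, flow, w_rgb=0.4)
```

Because both inputs are valid probability distributions, the fused vector still sums to one, so the argmax can be read directly as the class decision.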
Citations: 0
Waveform analysis for short time domain reflectometry (TDR) probes to obtain calibrated moisture measurements from partial vertical sensor insertions
IF 7.7 · Q1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture Pub Date : 2025-03-24 DOI: 10.1016/j.compag.2025.110233
Hemanth Narayan Dakshinamurthy , Scott B. Jones , Robert C. Schwartz , Sierra N. Young
Abstract: Time-domain reflectometry (TDR) probes are extensively used for measuring soil moisture in agricultural and environmental water management. Short (< 15 cm) commercial TDR probes provide accurate soil-moisture measurements when installed correctly in the soil. Ground and aerial robots have recently been designed to measure soil moisture autonomously, but ensuring proper sensor insertion is challenging: incomplete insertions can leave air gaps and cause underestimation of soil moisture, owing to the difference in dielectric permittivity between air (ε = 1) and water (ε = 80). A new TDR waveform calibration methodology was developed by modifying the waveform-analysis approach of Schwartz et al. (2014), with the objective of calculating the apparent permittivity of the soil given the probe length exposed to air during automated insertions. The method was validated using different liquid media and soil at moisture conditions ranging from air-dry to full saturation, yielding permittivity calculations within the manufacturer-specified accuracy range. Applied to field data from an aerial-vehicle payload designed to measure soil moisture, the model reduced the percent error in 8 of 11 incomplete robotic sensor insertions and validated the drone payload's values in three complete insertions. The study demonstrated that the sensor must be inserted at least 50% into the medium for reliable waveform analysis. This research enhances the reliability of automated soil-moisture measurements using robotic technologies. (Volume 235, Article 110233)
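TDR converts the electromagnetic wave's round-trip travel time along the probe into an apparent permittivity via Ka = (c·t / 2L)²; the paper's contribution is correcting this relation for the air-exposed portion of a partially inserted probe, which the sketch below does not attempt. An illustrative check that the basic relation recovers the permittivity of water:

```python
def apparent_permittivity(travel_time_s, probe_length_m):
    """Ka = (c * t / (2 * L))^2: the squared ratio of the round-trip
    distance light would cover in vacuum to the probe's round trip."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return (c * travel_time_s / (2.0 * probe_length_m)) ** 2

# A 10 cm probe fully in water (eps ~ 80): the wave travels sqrt(80)
# times slower than in air, so the round trip takes sqrt(80) times longer.
L = 0.10
t_air = 2.0 * L / 299_792_458.0
t_water = t_air * 80 ** 0.5
ka = apparent_permittivity(t_water, L)
```

Air gaps shorten the effective travel time, dragging Ka toward 1 and under-reading moisture, which is exactly the bias the paper's partial-insertion correction targets.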
Citations: 0