Computers and Electronics in Agriculture: Latest Articles

3D plant phenotyping from a single image: learning fine-scale organ morphology with monocular depth estimation
IF 8.9, Q1 in Agricultural and Forestry Sciences
Computers and Electronics in Agriculture Pub Date : 2025-08-25 DOI: 10.1016/j.compag.2025.110925
Yue Zhuo , Fengqi You
Three-dimensional (3D) reconstruction is transforming plant science by enabling accurate phenotypic analysis and detailed morphological understanding. However, the scalability and accessibility of existing 3D reconstruction methods are limited by expensive imaging systems and constrained environments. Here we present PlantMDE, the first generalizable monocular depth estimation (MDE) model tailored for plant phenotyping. PlantMDE reconstructs 3D plant structures using only a single RGB image, eliminating the need for multi-view inputs and enabling cost-effective, scalable, and non-invasive phenotypic analysis of small-scale plants. To address a key limitation of general-purpose MDE models in capturing detailed object geometry, PlantMDE incorporates a novel organ-wise metric to explicitly estimate the 3D morphology of individual plant organs. PlantMDE is trained and evaluated on PlantDepth, a new large-scale plant RGB-D dataset comprising data from eight sources across various plant species and growing conditions. Across multiple evaluation datasets, PlantMDE significantly outperforms state-of-the-art MDE models, improving similarity to the ground truth over Depth Anything and Marigold under both zero-shot and fine-tuning settings. Beyond reconstruction, depth features extracted by PlantMDE substantially enhance downstream phenotyping tasks, reducing error by 10.2 %–44.8 % in image-based trait estimation, including plant height, biomass, leaf area, and stress level. These results establish PlantMDE as a generalizable, scalable solution for high-throughput plant phenotyping, with broad implications for precision plant research and agricultural monitoring.
(Vol. 239, Article 110925)
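Under the hood, turning a predicted metric depth map into a 3D point cloud is standard pinhole back-projection; a minimal numpy sketch (the camera intrinsics fx, fy, cx, cy are placeholders, not values from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) into an (H*W, 3) point cloud
    via the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map, principal point at the origin pixel, unit focal length.
pts = depth_to_point_cloud(np.array([[1.0, 1.0], [2.0, 2.0]]),
                           fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Organ-level traits (leaf area, height) can then be measured on subsets of `pts` selected by an organ segmentation mask.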
Citations: 0
A multimodal detection method for caged diseased hens integrating behavioral and thermal features via instance segmentation
Computers and Electronics in Agriculture Pub Date : 2025-08-25 DOI: 10.1016/j.compag.2025.110926
Rongqian Sun , Qiaohua Wang , Chengyu Yu , Zheng Yang , Jiaquan Wu , Wei Fan
The health status of laying hens has an important impact on the productivity and economic efficiency of the farming industry. In high-density caged production, diseased hens are often difficult to detect in time due to mutual occlusion and the labor intensity of manual inspection. To address this issue, this study proposes a multimodal detection method for identifying diseased laying hens in caged systems by integrating behavioral patterns and infrared thermal features. First, the behavioral postures and surface temperature data of healthy and diseased hens were analyzed to provide a basis for the indicators used in the diseased-hen recognition algorithm. Second, an improved FCA-YOLO model is proposed by introducing Gaussian smoothing convolution, an FCA attention mechanism, and feature fusion optimization, along with a streamlined detection head. The model achieves an average precision of 99.3 % and 96.1 % for the head and legs of laying hens in thermal infrared images. Compared to the original model, overall precision increased by 2.3 % and average precision for leg segmentation improved by 1.3 %, while the number of parameters was reduced by 58.1 % with head segmentation accuracy maintained. Finally, based on head behavioral change features extracted from consecutive multi-frame images, and incorporating the relative temperature differences between the head and legs of individual hens, a multimodal fusion algorithm for diseased-hen identification was proposed. On the test set, the detection accuracies for the healthy and diseased groups were 92.7 % and 88.7 %, respectively. These findings demonstrate the effectiveness of the proposed approach as a reliable tool for detecting diseased laying hens, facilitating timely intervention and enhancing health management in intensive poultry farming.
(Vol. 239, Article 110926)
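The abstract does not spell out the fusion algorithm itself; purely as an illustration, a rule-level fusion of the two modalities could combine a thermal cue with a behavioral cue (the function name and both thresholds are hypothetical, not from the paper):

```python
def flag_diseased(head_leg_temp_diff, activity_score,
                  temp_thresh=2.0, activity_thresh=0.3):
    """Illustrative rule-level multimodal fusion (hypothetical thresholds):
    flag a hen when its head-leg temperature difference is elevated AND its
    behavioral activity across consecutive frames is low."""
    return head_leg_temp_diff > temp_thresh and activity_score < activity_thresh

# Three hypothetical hens: (temperature difference in deg C, activity in [0, 1]).
flags = [flag_diseased(d, a) for d, a in [(2.5, 0.1), (0.4, 0.8), (3.0, 0.5)]]
```

Real systems would learn such decision boundaries from labeled data rather than hand-set them.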
Citations: 0
PSNet: A deep learning framework following hierarchical yield level concept for crop yield estimation
Computers and Electronics in Agriculture Pub Date : 2025-08-24 DOI: 10.1016/j.compag.2025.110917
Renhai Zhong , Xingguo Xiong , Qiyu Tian , Jingfeng Huang , Linchao Zhu , Yi Yang , Tao Lin
Accurate crop yield estimation is critical for global food security. Data-driven machine learning approaches have shown great potential for agricultural system monitoring but are limited by out-of-sample prediction failures and low interpretability. How to embed knowledge into deep learning models to address these challenges remains an open question. In this study, we developed a deep learning model named PSNet, following the concept of hierarchical yield levels, to estimate county-level crop yield. PSNet mainly consists of PotentialNet, StressNet, and cross-attention to capture the interactions among crop, environment, and technological trend. PotentialNet captures the spatiotemporal pattern of crop yield potential based on environmental conditions and local technological trends, while StressNet captures the negative impact of climate stresses, which causes the gap between potential and actual yields. We applied the model to county-level rice yield in the middle and lower reaches of the Yangtze River (MLRYR) of China from 2001 to 2015 and corn yield in the US Corn Belt from 2006 to 2020. Results showed that PSNet achieved better yield estimation accuracies for irrigated rice and rainfed corn than baselines under both normal and stressful climate conditions. Ablation results indicated that PotentialNet contributed to yield estimation under normal climate conditions, while StressNet was better at capturing yield losses under climate stress. This study provides a promising approach for assessing the impacts of climate change on food security.
(Vol. 239, Article 110917)
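The hierarchical yield-level concept can be stated in one line: actual yield is the environment- and technology-driven potential minus a climate-stress-induced gap. A conceptual sketch (not the PSNet network itself; the numbers are synthetic):

```python
import numpy as np

def hierarchical_yield(potential, stress_loss):
    """Actual yield = yield potential - stress-induced yield gap,
    clamped at zero. A conceptual sketch of the decomposition PSNet learns."""
    return np.maximum(np.asarray(potential) - np.asarray(stress_loss), 0.0)

# Three hypothetical counties: potential (t/ha) and estimated stress losses.
y = hierarchical_yield([9.0, 8.5, 7.0], [1.2, 0.3, 8.0])
```

In PSNet the two terms are produced by learned sub-networks (PotentialNet and StressNet) rather than given directly.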
Citations: 0
Abnormal turkey behaviour detection using self-supervised video anomaly detection and multiple object tracking
Computers and Electronics in Agriculture Pub Date : 2025-08-23 DOI: 10.1016/j.compag.2025.110856
Yubo Zhang , Liying Zheng , Piter Bijma , Zhuoshi Wang , Peter H.N. de With , Patrick P.J.H. Langenhuizen
Recently, researchers have shown increased interest in automated visual monitoring of on-farm animal behaviour because of its improved objectivity and efficiency compared to human observation. However, annotation time is a major challenge with video camera data. To reduce annotation time and automatically detect abnormal behaviours, we develop and train a self-supervised video anomaly detection model based on optical-flow reconstruction and frame prediction to select frames with abnormal behaviour, and identify turkeys by multi-object tracking (MOT). The proposed algorithm first detects turkeys using a you-only-look-once-X detection model and extracts the optical flow in each frame with FlowNet. To track and identify each individual turkey, we use an MOT model based on the ByteTrack algorithm, extended with a third association step based on the turkey head area. Afterwards, the self-supervised anomaly detection model HF²-VAD is employed to detect instances of abnormal behaviour in turkeys. The abnormal behaviour detection model is tested on 7 videos, achieving an area under the curve of 92.1%. Additionally, the MOT model is tested on four 2.5-minute videos and one 5-minute video, obtaining on average 87.7% MOTA, 80.4% MOTP, 90.8% IDF1 and 72.0% HOTA for the 2.5-minute videos, and 85.4% MOTA, 82.6% MOTP, 89.1% IDF1 and 72.5% HOTA for the 5-minute video. The results show that the proposed model can successfully detect abnormal behaviour and identify turkeys in new, unseen data.
(Vol. 239, Article 110856)
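Frame-prediction anomaly detectors such as HF²-VAD typically score anomalies from prediction error, commonly via PSNR normalised per clip; a minimal sketch of that scoring step (not the paper's exact implementation):

```python
import numpy as np

def psnr(pred, actual, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and an observed frame."""
    mse = np.mean((pred - actual) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def anomaly_scores(psnrs):
    """Min-max normalise PSNR over a clip; poorly predicted frames
    (low PSNR) receive anomaly scores near 1."""
    p = np.asarray(psnrs, dtype=float)
    return 1.0 - (p - p.min()) / (p.max() - p.min())

# PSNR of one toy frame pair, then scores for a clip of four frames.
p0 = psnr(np.full((2, 2), 0.4), np.full((2, 2), 0.5))
scores = anomaly_scores([35.0, 34.0, 20.0, 33.0])
```

Frames whose score exceeds a threshold would then be handed to the MOT stage to attribute the behaviour to individual birds.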
Citations: 0
Improved estimation of irrigated field soil water (SWC) and salt content (SSC) from Sentinel-2 imagery by combining multi-dimensional spectra decomposition with ensemble learning
Computers and Electronics in Agriculture Pub Date : 2025-08-23 DOI: 10.1016/j.compag.2025.110910
Ruiqi Du , Xianghui Lu , Yue Zhang , Xiaoying Feng , Youzhen Xiang , Fucang Zhang
Multispectral satellite imagery is an indispensable tool for understanding irrigated soil water content (SWC) and salt content (SSC) processes in agricultural areas. However, the water-salt interaction effect on crop growth obscures the original mapping between spectra and water-salt status, introducing uncertainty into diagnostic results. To address this issue, a hybrid model combining interaction-effect decomposition and ensemble learning was proposed for dynamic assessment of irrigated-field SWC and SSC. First, the water-salt interaction effect on spectral indices was quantified by nonlinear regression equations. Then, SWC and SSC were derived from the linear decomposition of two- and three-dimensional spectral index combinations. Finally, the derived results from the chosen combinations were used to predict SWC and SSC dynamics using ensemble learning. The results show that: (1) the water-salt interaction effect on spectral indices was significant (p < 0.05); (2) after linear decomposition, two- and three-dimensional spectral index combinations showed a close relationship with SWC and SSC (SWC: R² = 0.28–0.62; SSC: R² = 0.31–0.61), with ENDVI-GRVI-CIG and SI5-SI10-NDVIgb the optimal combinations for SWC and SSC, respectively; (3) compared to decomposition results from a single multi-dimensional spectral index combination, stacking ensemble learning enabled reliable SWC and SSC estimation (SWC: R² = 0.76, RMSE = 1.08 %, MAE = 8 %; SSC: R² = 0.71, RMSE = 0.07 %, MAE = 14 %). In conclusion, this study demonstrates the potential of the proposed method for SWC and SSC estimation in saline-affected irrigation areas, providing new insight for precision agriculture management.
(Vol. 239, Article 110910)
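The final stacking step can be sketched with a plain linear meta-learner fitted on base-model predictions (a toy illustration with synthetic data and hypothetical base models, not the authors' configuration):

```python
import numpy as np

def fit_stacking_meta(base_preds, y):
    """Fit a linear meta-learner (with intercept) on base-model predictions,
    the combining step of a stacking ensemble. In practice the base
    predictions should be out-of-fold to avoid leakage."""
    X = np.column_stack([np.ones(len(y)), base_preds])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_stacking(w, base_preds):
    X = np.column_stack([np.ones(base_preds.shape[0]), base_preds])
    return X @ w

# Synthetic soil-water-content targets (%) and two biased base predictors.
y = np.array([10.0, 12.0, 14.0, 16.0])
base = np.column_stack([y + 1.0, y - 1.0])  # one biased high, one low
w = fit_stacking_meta(base, y)
yhat = predict_stacking(w, base)
```

With complementary biases the meta-learner can cancel the errors of the individual predictors, which is the appeal of stacking over any single decomposition result.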
Citations: 0
Round-the-clock accurate sheep face recognition via frequency enhancement and cross-modal embedding generation
Computers and Electronics in Agriculture Pub Date : 2025-08-23 DOI: 10.1016/j.compag.2025.110918
Xingshi Xu , Huaibo Song , Haowen Pan , Diyi Chen , Shuming Yang
Accurate identity recognition of sheep is essential for precision livestock farming. As an efficient, contact-less identification approach, sheep face recognition aligns with the needs of modern farming and has garnered significant attention. However, existing studies have paid limited attention to accurate recognition under low natural light and to efficient extraction of identity-related information from wool texture. In this study, a novel sheep face recognition model called SheepNet is proposed. SheepNet recognizes identities through NIR-RGB cross-modal matching, where the NIR query images are available round-the-clock and the identity-labeled gallery images are in RGB. First, the model employs a Shallow Dual-Stream (SDS) architecture to process cross-modal inputs, extracting modality-specific features with separate weights in shallow layers and modality-shared features with shared weights in deep layers to mitigate modality discrepancies. Second, a Frequency Feature Decoupling and Enhancement (FFDE) module explicitly modulates features by leveraging high-frequency information, such as wool texture, and low-frequency information, such as patterns and structures, thereby enhancing identity discrimination. Finally, an Embedded Diversity Generation (EDG) module optimizes the feature embedding space by generating diverse feature representations, improving cross-modal retrieval. Experimental results demonstrate the effectiveness of the proposed method: under a closed-set setting with 80 sheep, it achieves CMC-1 and mAP scores of 97.22 % and 88.55 %, respectively; in an open-set setting with 40 sheep, CMC-1 and mAP reach 97.12 % and 83.83 %. This approach is expected to further advance the development of precision livestock farming.
(Vol. 239, Article 110918)
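CMC-1 (rank-1 accuracy), the retrieval metric reported above, is computed directly from a query-gallery distance matrix; a small numpy sketch with toy 2-D embeddings (the embeddings are illustrative, not SheepNet outputs):

```python
import numpy as np

def cmc_rank1(query_emb, gallery_emb, query_ids, gallery_ids):
    """Rank-1 accuracy (CMC-1): fraction of queries whose nearest gallery
    embedding (Euclidean distance) carries the same identity label."""
    d = np.linalg.norm(query_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)
    nearest = np.argmin(d, axis=1)          # closest gallery entry per query
    return np.mean(gallery_ids[nearest] == query_ids)

# Two gallery identities; four queries, one of which lands near the wrong one.
gallery = np.array([[0.0, 0.0], [10.0, 10.0]])
gallery_ids = np.array([0, 1])
queries = np.array([[0.5, 0.0], [9.0, 10.0], [6.0, 6.0], [10.0, 9.5]])
query_ids = np.array([0, 1, 0, 1])
acc = cmc_rank1(queries, gallery, query_ids, gallery_ids)
```

In the cross-modal setting the queries would be NIR embeddings and the gallery RGB embeddings from the same network.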
Citations: 0
An advanced approach for multiscale aboveground biomass estimation by integrating UAV, backpack LiDAR and high-resolution imageries: A case study in Liriodendron sino-americanum mixed forests
Computers and Electronics in Agriculture Pub Date : 2025-08-23 DOI: 10.1016/j.compag.2025.110898
Yonglei Shi , Xin Shen , Miao Hu , Aihong Yang , Kai Zhou , Faxin Yu , Yang Tao , Lin Cao
Precise measurement of individual-tree aboveground biomass (AGB) is essential for effective plantation management and tree cultivar development. However, the complex structure and species mixture of Liriodendron sino-americanum forests pose challenges for accurate AGB estimation. This study developed an advanced approach for extracting individual Liriodendron sino-americanum trees from mixed forests using LiDAR-derived Canopy Height Model (CHM) metrics and high-resolution imagery. UAV and backpack LiDAR data were registered using seed-point neighborhood features and Euclidean distance, and the registered seed points of the backpack LiDAR were used for individual-tree segmentation in the UAV LiDAR. AGB was then calculated at multiple scales (tree and plot level) by synergistically combining UAV-based metrics, backpack-LiDAR-derived diameter at breast height (DBH, extracted with a novel algorithm integrating continuum elements, normal-vector angles, KD-tree smoothing, and least-squares circle fitting) and the fitted models. Results showed that: 1) UAV and backpack LiDAR data matching based on seed points was effective, with an RMSE of 9.72 cm, and improved UAV LiDAR segmentation accuracy from 0.76 to 0.87; 2) the novel DBH extraction algorithm achieved an R² of 0.89 and an RMSE of 1.78 cm; 3) the UAV LiDAR canopy volume metric accurately estimated AGB for individual Liriodendron sino-americanum trees, with an R² of 0.91 and an RMSE of 8.38 kg. Additionally, the LAI_pc (from LiDAR) metric was more suitable for predicting AGB in pure forests, while the D6 (canopy return density) metric provided superior estimates at intermediate mixture proportions. This integrated method demonstrates high potential for precise, scalable AGB estimation in structurally complex, mixed-species forests, contributing to enhanced forest inventory, carbon accounting, and ecological research.
(Vol. 239, Article 110898)
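The least-squares circle fitting named in the DBH pipeline can be illustrated with the classic algebraic (Kasa) formulation, fitting a circle to a horizontal slice of stem points; this is a sketch of the general technique, not the authors' full algorithm:

```python
import numpy as np

def fit_circle_lsq(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    2*a*x + 2*b*y + c = x^2 + y^2 for centre (a, b), radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# Noise-free slice points on a stem of radius 0.15 m (DBH = 0.30 m) at (1, 2).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
centre, r = fit_circle_lsq(1.0 + 0.15 * np.cos(t), 2.0 + 0.15 * np.sin(t))
dbh = 2.0 * r
```

On real point clouds the slice is first cleaned (e.g. by the normal-vector and smoothing steps the paper mentions) so outliers do not bias the algebraic fit.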
Citations: 0
Non-invasive diagnosis of nutrient deficiencies in winter wheat and winter rye using UAV-based RGB images
Computers and Electronics in Agriculture Pub Date : 2025-08-22 DOI: 10.1016/j.compag.2025.110865
Jinhui Yi , Gina Lopez , Sofia Hadir , Jan Weyler , Lasse Klingbeil , Marion Deichmann , Juergen Gall , Sabine J. Seidel
Better matching of the timing and amount of fertilizer inputs to plant requirements would improve nutrient use efficiency and crop yields and could reduce negative environmental impacts. Deep learning can be a powerful digital tool for on-site, real-time, non-invasive diagnosis of crop nutrient deficiencies. A drone-based RGB image dataset was generated, together with ground-truth data, in winter wheat (2020) and winter rye (2021) during tillering and booting in the long-term fertilizer experiment (LTFE) Dikopshof, in which the crops have been fertilized with the same amounts for decades. The selected treatments included full fertilization including manure (NPKCa+m+s), mineral fertilization (NPKCa), mineral fertilization without nitrogen (N) (_PKCa), without phosphorus (P) (N_KCa), without potassium (K) (NP_Ca), without liming (Ca) (NPK_), and an unfertilized treatment. The dataset of more than 3600 UAV-based RGB images was used to train and evaluate a total of eight CNN-based and transformer-based baseline models within each crop-year and across the two crop-year combinations, aiming to detect the specific fertilizer treatments, including the specific nutrient deficiencies. Field observations showed a strong biomass decline under N omission and no fertilization, with weaker effects under P, K, and lime omission. The mean detection accuracy within one year was 75% (winter wheat) and 81% (winter rye) across models and treatments. For winter wheat, detection accuracy was highest for the NPKCa+m+s (100%), unfertilized (96%), and _PKCa (92%) treatments, and lowest (about 50%) for the N_KCa and NPKCa treatments; results were similar for winter rye. In the cross-year, cross-species transfer (training on winter wheat, application on winter rye, and vice versa), the mean accuracy was about 18%. The results highlight the potential of deep learning as a digital tool for decision-making in smart farming, but also the difficulty of transferring models across years and crops.
(Vol. 239, Article 110865)
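Per-treatment accuracies like those above come straight from the diagonal of a confusion matrix over treatment classes; a minimal sketch (the 3-class matrix below is illustrative, not the paper's data):

```python
import numpy as np

def per_class_accuracy(conf):
    """Per-class (per-treatment) accuracy from a confusion matrix whose rows
    are true treatments and whose columns are predicted treatments."""
    conf = np.asarray(conf, dtype=float)
    return np.diag(conf) / conf.sum(axis=1)

# Toy matrix for three treatments, 20 test images each.
acc = per_class_accuracy([[18, 1, 1],
                          [2, 16, 2],
                          [0, 0, 20]])
```

Inspecting which off-diagonal cells absorb the errors shows which treatments are visually confusable, as with N_KCa versus NPKCa here.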
Citations: 0
ICFMNet: an automated segmentation and 3D phenotypic analysis pipeline for plant, spike, and flag leaf type of wheat
Computers and Electronics in Agriculture Pub Date : 2025-08-22 DOI: 10.1016/j.compag.2025.110893
Pengliang Xiao , Sheng Wu , Shiqing Gao , Weiliang Wen , Chuanyu Wang , Xianju Lu , Xiaofen Ge , Wenrui Li , Linsheng Huang , Dong Liang , Xinyu Guo
Three-dimensional high-throughput plant phenotyping offers an opportunity to acquire plant organ traits simultaneously at the scale plant breeders require. Wheat, a multi-tiller crop with narrow leaves and diverse spikes, poses challenges for organ segmentation and measurement due to occlusion and adhesion. Building on previous research, this paper establishes a phenotyping pipeline and develops an automated 3D phenotypic analysis system for individual wheat plants at different growth stages. The system enables automated, precise three-dimensional phenotypic acquisition and analysis of wheat plant architecture, spike morphology, and flag leaf traits. To address the significant structural differences among wheat spikes, leaves, and stems, as well as their compact spatial distribution, we propose a deep-learning point cloud segmentation model called ICFMNet. ICFMNet relies on an instance center feature matching module, which extracts features from each instance's central region and matches them with global point-wise features by computing feature similarity, enabling precise instance mask generation independent of the spatial structure of the point cloud. For phenotype analysis, we introduce a contour-based method to accurately extract the barren segment from 3D wheat spikes, and analyze a total of 19 phenotypes, including flag leaf and whole-plant phenotypes. In organ point cloud segmentation tests for wheat spikes, stems, and leaves, semantic segmentation achieves mPrec, mRec, and mIoU values of 95.9 %, 96.0 %, and 92.3 %, respectively, and instance segmentation attains mAP and mAR scores of 81.7 % and 83.0 %. Compared with five other segmentation networks, ICFMNet demonstrates superior performance. To assess barren segment localization accuracy, additional evaluations using interval overlap and interval error achieve 92.33 % and 0.1123 cm, respectively. Experimental results indicate that the method excels in accuracy, efficiency, and robustness, providing a reliable platform for precise identification and breeding research of wheat plant types. The source code and trained models are available at https://github.com/xiao-pl/ICFMNet.
(Vol. 239, Article 110893)
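The interval-overlap metric used for barren-segment localization is the 1-D analogue of IoU between a predicted and an annotated interval along the spike axis; a minimal sketch (endpoint values are illustrative):

```python
def interval_iou(pred, truth):
    """Intersection-over-union of two 1-D intervals (lo, hi), e.g. a
    predicted vs. annotated barren segment along the spike axis in cm."""
    lo = max(pred[0], truth[0])
    hi = min(pred[1], truth[1])
    inter = max(0.0, hi - lo)
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
    return inter / union if union > 0 else 0.0

# Predicted segment 1-3 cm vs. ground truth 2-4 cm: intersection 1, union 3.
iou = interval_iou((1.0, 3.0), (2.0, 4.0))
```

The companion interval-error metric would instead report the absolute endpoint (or length) difference in cm.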
Citations: 0
Cattle-ES3D: A spatiotemporal feature fusion method for detecting tachypnea and salivation behaviors in beef cattle
Computers and Electronics in Agriculture Pub Date : 2025-08-22 DOI: 10.1016/j.compag.2025.110907
Fuyang Tian , Liyin Zhang , Ji Zhang , Shuaiyang Zhang , Shakeel Ahmed Soomro , Benhai Xiong , Weizheng Shen , Zhanhua Song , Yinfa Yan , Zhenwei Yu
Accurate and efficient detection of tachypnea and salivation behavior plays a key role in improving the health management of beef cattle. To address the low detection accuracy and high computational redundancy of existing algorithms in complex breeding environments, the Cattle-ES3D algorithm is proposed for detecting tachypnea and salivation behaviors in beef cattle. First, a hybrid architecture integrates the Embedded Spatial Pyramid Network (ESP-Net) for multi-scale extraction with the SlowFast dual-pathway network to extract spatiotemporal features. Second, the Adaptive Spatiotemporal Feature Fusion Synchronization Module (AST-Sync) is designed to fuse spatiotemporal features adaptively. Finally, a lightweight dynamic detection branch achieves classification-regression feature spatial alignment and temporal association constraints through a multi-dimensional parameter optimization mechanism driven by spatiotemporal dynamic label assignment. Experiments showed that exploiting both spatial and temporal features enhances detection accuracy of tachypnea and salivation behaviors at reduced computational cost: Cattle-ES3D achieved a mean Average Precision (mAP) of 93.4 %, 39.6 GFLOPs, and 33.2 FPS. Compared to C3D, I3D, P3D, and R(2+1)D, Cattle-ES3D improved mAP by 8.2 %, 6.8 %, 3.3 %, and 18.4 %, respectively, while reducing GFLOPs by 0.7, 11.8, 7.0, and 8.2. These results demonstrate that the proposed model provides a robust, high-performance solution for intelligent livestock farming.
(Vol. 239, Article 110907)
Citations: 0