{"title":"Design and implementation of a seed potato cutting robot using deep learning and delta robotic system with accuracy and speed for automated processing of agricultural products","authors":"Jie Huang , Fangxuan Yi , Yingjun Cui , Xiangyou Wang , Chengqian Jin , Fernando Auat Cheein","doi":"10.1016/j.compag.2025.110716","DOIUrl":"10.1016/j.compag.2025.110716","url":null,"abstract":"<div><div>Potatoes, along with rice and soy, are among the most widely consumed staple crops worldwide. Seed potatoes are traditionally cut manually, which limits the consistency and efficiency of the process given ever-increasing demand. To address this problem, we developed and evaluated an automated potato cutting robot system. The system employs a Potato Orientation Detection You Only Look Once (POD-YOLO) deep learning model to identify the pose, boundaries, and key eye locations of seed potatoes. Intelligent cutting path planning is achieved through a strategy that combines clustering analysis with objective function optimization, and cutting is performed by a Delta parallel robot. Precise visual guidance is enabled through camera-robot calibration based on a homography matrix. Performance evaluation reveals that static visual guidance positioning errors are mostly within ±0.5 mm. The selected cutting strategy demonstrates strong performance in terms of cutting uniformity and coverage rate. A maximum cutting success rate of 85 % is achieved for round potatoes, and the system’s average cycle time is approximately 2.14 s, resulting in a throughput of about 418.8 kg/h, roughly three times that of skilled manual labor. While the results validate the technical feasibility of the system, several challenges remain, including incomplete visual data due to a single viewpoint, dynamic positioning errors from the conveyor, and limitations of using a single cutting tool.
This research presents a comprehensive solution and empirical evidence, highlighting directions for optimization including multi-sensor fusion, dynamic error compensation, and advanced cutting mechanisms. The source code is available at: <span><span>https://github.com/Jie-Huangi/seed-potato-cutting-robot</span></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110716"},"PeriodicalIF":7.7,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144513963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
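The potato-cutting record above attributes its visual guidance to camera-robot calibration based on a homography matrix. A minimal sketch of how such a planar homography can be estimated and applied, assuming at least four pixel-to-robot point correspondences on the conveyor plane (the function names and this DLT formulation are illustrative, not taken from the paper's released code):

```python
import numpy as np

def fit_homography(pix, world):
    """Direct Linear Transform: find H (3x3) so that [X, Y, 1] ~ H @ [x, y, 1]."""
    A = []
    for (x, y), (X, Y) in zip(pix, world):
        A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = flattened homography
    return H / H[2, 2]              # fix the scale ambiguity

def pixel_to_robot(H, x, y):
    """Map a pixel (x, y) to robot-plane coordinates via the homography."""
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]             # de-homogenize
```

With noisy correspondences, more than four pairs are supplied and the SVD yields a least-squares solution; in practice coordinates would also be normalized first for numerical conditioning.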
{"title":"Grapevine winter pruning: Merging 2D segmentation and 3D point clouds for pruning point generation","authors":"Miguel Fernandes , Juan D. Gamba , Francesco Pelusi , Angelo Bratta , Darwin Caldwell , Stefano Poni , Matteo Gatti , Claudio Semini","doi":"10.1016/j.compag.2025.110589","DOIUrl":"10.1016/j.compag.2025.110589","url":null,"abstract":"<div><div>Grapevine winter pruning is a labor-intensive and repetitive process that significantly influences grape yield, quality at harvest, and the resulting wine. Due to its complexity and repetitive nature, the task demands skilled labor that needs to be trained, as in many other agricultural sectors. This paper presents an approach that uses a robotic system to perform autonomous grapevine winter pruning with a vision system and artificial intelligence. In our previous work, we presented a 2D neural network that segmented images of grapevines into 5 different classes of plant organs during their dormant season. In this paper, we expand into the third dimension, introducing point clouds into our algorithm. The 3D approach creates instance-segmented point clouds using depth images and segmentation masks obtained with our 2D neural network. After the 3D reconstruction, the system extracts thickness measurements and uses agronomic knowledge to place pruning points for balanced pruning. The study not only delineates the integration of 2D and 3D methods but also scrutinizes their efficacy in pruning point identification. The real-world performance of the created system was evaluated and statistically analyzed on data collected during field trials in the winter pruning season 2022/2023, where the system was used in a potted vineyard to prune a set of test vines, achieving a positive success rate of 54.2%.
Moreover, as one of the main contributions, the paper underscores a unique facet of adaptability, presenting a customizable framework that empowers end-users to fine-tune parameters according to the expected balanced pruning. This adaptability extends to variables such as the number of nodes to retain on pruned spurs and the preferred cane thickness, encapsulating the versatility of the 3D approach.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110589"},"PeriodicalIF":7.7,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144518864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
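The pruning-point logic the record above describes (retain a chosen number of nodes on canes that are thick enough) can be illustrated with a toy rule, assuming per-cane node positions and diameters have already been extracted from the 3D reconstruction; the data layout, threshold values, and function name here are hypothetical, not the paper's:

```python
def plan_pruning_points(canes, nodes_to_keep=2, min_diameter_mm=6.0, margin_cm=1.5):
    """For each sufficiently thick cane, place the cut just above the Nth node.

    canes: list of dicts {"diameter_mm": float, "node_positions_cm": list}
    Returns a list of (cane_index, cut_position_cm); canes below the
    thickness threshold (or with too few nodes) are removed at the base.
    """
    cuts = []
    for i, cane in enumerate(canes):
        nodes = sorted(cane["node_positions_cm"])
        if cane["diameter_mm"] < min_diameter_mm or len(nodes) < nodes_to_keep:
            cuts.append((i, 0.0))   # remove the whole cane at its base
        else:
            cuts.append((i, nodes[nodes_to_keep - 1] + margin_cm))
    return cuts
```

The two tunable parameters mirror the adaptability the abstract highlights: the number of nodes to retain on pruned spurs and the preferred cane thickness.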
{"title":"Crop sample prediction and early mapping based on historical data: Exploration of an explainable FKAN framework","authors":"Feifei Cheng , Bingwen Qiu , Peng Yang , Wenbin Wu , Qiangyi Yu , Jianping Qian , Bingfang Wu , Jin Chen , Xuehong Chen , Francesco N. Tubiello , Piotr Tryjanowski , Viktoria Takacs , Yuanlin Duan , Lihui Lin , Laigang Wang , Jianyang Zhang , Zhanjie Dong","doi":"10.1016/j.compag.2025.110689","DOIUrl":"10.1016/j.compag.2025.110689","url":null,"abstract":"<div><div>Accurate and timely crop mapping is essential for food security assessment, and high-quality feature factors are the core foundation for accurate mapping. Although deep learning crop classification algorithms have achieved some success, the models themselves struggle to explain the specific contribution and impact of different features on the results. In this study, a self-adaptive Feature-attention Kolmogorov-Arnold Network (FKAN) is proposed for interpretable and scalable crop mapping. The model integrates the adaptive weighted feature attention module (AWFA) and the interpretable KAN network, which can visualize the complex associations between features and target crops and automatically capture and filter effective key spatiotemporal features, thus enhancing the interpretability of the model. Experimental results demonstrate that integrating optical, radar, and terrain features yields superior performance in both sample prediction and crop mapping, surpassing existing methods. The proposed FKAN achieves an overall accuracy and F1 score exceeding 0.90. Optical and radar features contribute the most significantly to classification accuracy, while terrain data provides complementary enhancement. By aligning with key crop phenology and leveraging the Google Earth Engine (GEE), FKAN establishes the first operational platform for global winter wheat identification, enabling accurate and scalable crop mapping.
The migrated model achieves over 85% accuracy across different regions and years, demonstrating strong robustness and generalization capability. The study identifies optimal phenological periods and feature indices for different crops, providing scientific guidance for future mapping efforts. The FKAN model demonstrated robustness, scalability, and interpretability, and was able to automatically extract high-confidence pixels and generate crop planting probabilities, providing an efficient and scalable solution for large-scale crop monitoring. This study generated the first global winter wheat map, the GlobalWinterWheat10m dataset, with the FKAN algorithm. The code and demo are accessible at <span><span>https://github.com/FZUcheng123/FKAN</span></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110689"},"PeriodicalIF":7.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144501778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
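The FKAN record above describes an adaptive weighted feature attention module (AWFA) that scores input features and reweights them before classification. As a drastically simplified, hypothetical stand-in (a single scoring vector and a softmax; the real AWFA is a trained sub-network inside FKAN), the reweighting step looks like this:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention_weighted_features(x, w_score):
    """x: (n_samples, n_features); w_score: (n_features,) scoring vector.

    Each feature gets a per-sample attention weight (weights sum to 1 across
    features); features are rescaled by n_features so magnitudes stay comparable.
    Returns (reweighted features, attention weights).
    """
    scores = x * w_score               # per-sample, per-feature score
    weights = softmax(scores)          # normalize across the feature axis
    return x * weights * x.shape[1], weights
```

The attention weights themselves are what make such a module inspectable: they expose which input features (optical, radar, terrain bands) the model leans on.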
{"title":"Dual cross-modality fusion boosts the RGBD-based lettuce fresh weight estimation","authors":"Juncheng Ma, JingJing Chen, Dan Xu, Weizhong Jiang, Chaoyuan Wang","doi":"10.1016/j.compag.2025.110721","DOIUrl":"10.1016/j.compag.2025.110721","url":null,"abstract":"<div><div>In recent studies on estimating the lettuce fresh weight (FW), the depth image has been widely used to compensate for the RGB image. However, the contribution of the depth image varies across lettuce growth stages and cultivars, and the widely used indiscriminate stacking fusion cannot fully exploit the potential information in depth images. In this study, an estimation model (LFWNet) for lettuce FW was proposed based on convolutional neural networks (CNNs) and the dual cross-modality fusion (DCMF) of RGB and depth images. The proposed DCMF could effectively capture the cross-modality spatial and channel-wise information and adaptively assign weights to each modality according to the application. To demonstrate the effectiveness of the LFWNet, an ablation study was conducted, and the adaptability across lettuce cultivars and growth stages was evaluated. The results showed that the LFWNet was the best-performing model in the ablation study and demonstrated good adaptability across lettuce cultivars and growth stages. In conjunction with the DCMF, the depth image was still essential to RGBD-based lettuce FW estimation. In addition to the plant vertical information, plant shape information was another way for the depth image to compensate for the RGB image. The depth image contributed more to the early-stage lettuce plants than to the late-stage lettuce plants, and poor image quality caused the model to deteriorate rapidly.
This study indicates that the LFWNet is a powerful tool for lettuce FW estimation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110721"},"PeriodicalIF":7.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
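The DCMF described in the lettuce record adaptively weights the RGB and depth modalities instead of blindly stacking them. A deliberately minimal gate sketch, with one global weight per modality (the actual DCMF is spatial and channel-wise; shapes and parameter names here are assumptions of mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_rgbd(feat_rgb, feat_depth, w, b):
    """feat_*: (C, H, W) feature maps; w: (2, 2C) and b: (2,) gate parameters.

    Global-average-pools each modality, predicts one gate per modality from
    the pooled descriptor, and returns the gated sum of the two feature maps.
    """
    pooled = np.concatenate([feat_rgb.mean(axis=(1, 2)),
                             feat_depth.mean(axis=(1, 2))])   # (2C,)
    g = sigmoid(w @ pooled + b)                               # (2,) gates
    return g[0] * feat_rgb + g[1] * feat_depth
```

The gate lets the network learn, per input, how much to trust depth, which matches the paper's finding that depth matters more for early-stage plants.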
{"title":"Estimation of combine harvester throughput using multisensor data fusion","authors":"Faming Wang , Xindong Ni , Qi Zhang , Shujin Guo , Jie Zhou , Du Chen","doi":"10.1016/j.compag.2025.110713","DOIUrl":"10.1016/j.compag.2025.110713","url":null,"abstract":"<div><div>Throughput is a key indicator of a combine harvester’s operating performance and efficiency. Because throughput estimation models often struggle to achieve high accuracy due to the imperfect architecture of throughput monitoring systems and insufficient monitoring of operational parameters, a multi-sensor data fusion-based throughput estimation method is proposed. Firstly, a multi-sensor data monitoring and acquisition system for the combine harvester was developed to enable online monitoring and the acquisition of multi-sensor parameters from the feeding, threshing, travel, and engine units. Secondly, a multi-sensor fusion estimation model based on PCA-WOA-SVR was introduced. Principal component analysis (PCA) first removes redundant and weakly correlated features to reduce dimensionality, then Support Vector Regression (SVR) estimates throughput from the reduced inputs, and the Whale Optimization Algorithm (WOA) optimizes the SVR hyperparameters for optimal estimation performance. Finally, field tests were conducted, and the results showed that the system demonstrated high robustness under varying operating conditions. The MAE of PCA-WOA-SVR on the test set was 0.258 kg/s. The R<sup>2</sup>, MSE, RMSE and MAPE were 0.985, 0.099, 0.315, and 5.3 % respectively, showing high estimation accuracy and strong generalization ability. The ablation study results show that the MAE of PCA-WOA-SVR is reduced by 0.367 kg/s, R<sup>2</sup> is increased by 6.7 %, and MSE, RMSE and MAPE are reduced by 0.434, 0.415 and 7.4 %, respectively, compared to using SVR alone, demonstrating that WOA and PCA effectively enhance the estimation performance of the SVR model.
The estimation results of different unit combination inputs show that as the number of input units increases, the model estimation effect gradually improves, among which the engine unit contributes the most. The MAE of field online monitoring is 0.29 kg/s, the continuous fluctuation range of the online monitoring data is within [−0.02, 0.015], and the single group monitoring time is 24.31 ms, which meets the requirements of online monitoring accuracy, stability and real-time performance. In summary, the throughput estimation method proposed in this study has good robustness, estimation accuracy and generalization ability, providing important technical support for the online monitoring and feedback control of the throughput for combine harvesters.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110713"},"PeriodicalIF":7.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144501777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
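The PCA-WOA-SVR pipeline in the harvester record (reduce dimensionality, fit a regressor, tune its hyperparameters with a Whale Optimization loop) can be sketched end to end. To keep the sketch dependency-free, ridge regression stands in for SVR and a toy one-dimensional WOA tunes its regularization strength; this illustrates the pipeline's structure only, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce(X, k):
    """Center X and project onto its top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def ridge_fit(X, y, alpha):
    """Closed-form ridge weights (stand-in for SVR)."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def mae(X, y, w):
    return np.abs(X @ w - y).mean()

def woa_tune_alpha(X, y, n_whales=8, iters=20, lo=1e-4, hi=10.0):
    """Toy Whale Optimization over log10(alpha): whales shrink toward the best."""
    pos = rng.uniform(np.log10(lo), np.log10(hi), n_whales)
    def fit(p):
        return mae(X, y, ridge_fit(X, y, 10.0 ** p))
    best = min(pos, key=fit)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                 # encircling coefficient decays
        for i in range(n_whales):
            A = a * (2 * rng.random() - 1)
            if abs(A) < 1:                        # exploit: move toward the best whale
                pos[i] = best - A * abs(2 * rng.random() * best - pos[i])
            else:                                 # explore: move toward a random whale
                r = pos[rng.integers(n_whales)]
                pos[i] = r - A * abs(2 * rng.random() * r - pos[i])
        best = min(np.append(pos, best), key=fit)
    return 10.0 ** best
```

Swapping SVR (e.g. scikit-learn's `sklearn.svm.SVR`) back in only changes `ridge_fit`; the PCA and WOA stages are unchanged.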
{"title":"Transfer large models to crop pest recognition—A cross-modal unified framework for parameters efficient fine-tuning","authors":"Jianping Liu , Jialu Xing , Guomin Zhou , Jian Wang , Lulu Sun , Xi Chen","doi":"10.1016/j.compag.2025.110661","DOIUrl":"10.1016/j.compag.2025.110661","url":null,"abstract":"<div><div>Crop pest recognition is an important direction in agricultural research, of great significance for improving crop yield and scientifically classifying pests for precision agriculture. Traditional deep learning pest recognition usually trains proprietary models on single categories and scenes as well as unimodal information, achieving excellent performance. However, this scheme has a weak foundation of general knowledge and insufficient transferability, and unimodal information has limited effect on recognizing pest backgrounds and different life stages. In recent years, transferring the general knowledge of large pre-trained models (LPTM) to specific domains through full fine-tuning has become an effective solution. However, full fine-tuning requires massive data and computational resources to effectively adapt all parameters. Therefore, this paper proposes a cross-modal parameter-efficient fine-tuning (PEFT) unified framework for crop pest recognition with the multimodal large model CLIP as the pre-training model. The proposed method employs CLIP as the encoder for both image and text modalities, introducing the Dual-<span><math><msup><mrow><mrow><mo>(</mo><mtext>PAL</mtext><mo>)</mo></mrow></mrow><mrow><mtext>G</mtext></mrow></msup></math></span> model. Firstly, learnable Prompt sequences are embedded in the input or hidden layers of the encoder. Secondly, multimodal LoRA modules are inserted in parallel into the dimension expansion layer of the fully connected layer. Then, a Gate unit integrates the three PEFT methods (Prompt, Adapter, and LoRA) to enhance learning ability.
We designed the GSC-Adapter and the parameter-efficient Light-GCS-Adapter for cross-modal semantic information fusion. To verify the effectiveness of the method, we conducted extensive experiments on public datasets for crop pest recognition. Firstly, on the public dataset IP102 (for fine-grained recognition), we surpassed ViT and Swin Transformer using only 66% of the sample size. On the wolfberry pest dataset WPIT9K, using only about 15% of the sample size, the method surpasses the previous state-of-the-art model ITF-WPI, achieving 98% accuracy. It also shows excellent performance on eight general tasks. This study provides a new technical solution for the field of agricultural pest recognition. This solution can efficiently transfer the general knowledge of multimodal LPTM to the specific pest recognition field under few-sample conditions, with only a minimal number of parameters introduced. At the same time, this method has universality in cross-modal recognition tasks. <em>The code for this study will be posted on GitHub (</em><span><span><em>https://github.com/VcRenOne/Dual--PAL-G</em></span></span><em>)</em></div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110661"},"PeriodicalIF":7.7,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
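The PEFT framework above inserts multimodal LoRA in parallel with the fully connected expansion layer. LoRA's generic low-rank update (the standard formulation, not the paper's multimodal variant) is simple to state: a frozen weight W is augmented by a trainable rank-r product, so only the small factors are updated during fine-tuning:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """y = x @ (W + scale * A @ B).

    W (d_in, d_out) stays frozen; A (d_in, r) and B (r, d_out) are the
    trainable low-rank factors, scaled by alpha / r as in standard LoRA.
    """
    scale = alpha / A.shape[1]
    return x @ W + scale * (x @ A) @ B
```

With B initialized to zero (the usual LoRA initialization), training starts exactly at the pre-trained model's behaviour, which is why the adapter can be bolted on without a warm-up.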
{"title":"Evaluation of spraying characteristics of a new multiple centrifugal nozzle applied to UAV","authors":"Shaoqing Xu , Ye Jin , Yuan Zhong , Luna Luo , Jianli Song","doi":"10.1016/j.compag.2025.110712","DOIUrl":"10.1016/j.compag.2025.110712","url":null,"abstract":"<div><div>Plant protection unmanned aerial vehicles (UAVs) have been widely used in fruit tree plant protection in recent years, especially in hilly and mountainous application scenarios. The ability of UAVs to meet the requirements of citrus red spider control is a current concern for UAV manufacturers, agricultural service organizations, and citrus growers. In this study, a UAV with a multiple centrifugal nozzle was tested. The nozzle has two atomizers, the inner atomizer (P) and the outer atomizer (C), which can achieve multiple droplet atomization. First, the droplet fragmentation characteristics and the droplet size of the nozzle were measured. A high-speed camera was used to study droplet fragmentation characteristics. The atomization process was divided into three stages: the first atomization triggered by the rotation of P, the second atomization caused by the impact of C, and the collisional agglomeration of small droplets around the nozzle. The droplet size test showed that droplet size is inversely proportional to the rotational speed of P. The volume surface mean diameter (VMD) could reach a minimum of about 40 µm by adjusting the rotational speeds of atomizers P and C. In addition, a UAV (EA-30XP) equipped with this nozzle was used for field evaluations. The deposition under different atomizer rotational speed combinations was obtained. The results showed that a P atomizer rotational speed of 4600 rpm and a C rotational speed of 18000 rpm gave the highest deposition efficiency.
Coverage on the adaxial surface of the leaf with this combination was 3.6 %–9.3 %, with 102.7–184.3 droplets per square centimeter; coverage on the abaxial surface was 1.9 %–3.3 %, with 51.5–84.7 droplets per square centimeter. In addition, the advantages of the multiple atomizing centrifugal nozzle in terms of deposition efficiency were also shown in a comparison with a single atomizing centrifugal nozzle. The coverage and droplet density on the abaxial surface with the single atomizing centrifugal nozzle were significantly lower than those of the above rotational speed combinations.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110712"},"PeriodicalIF":7.7,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In-situ analysis of nitrogen stress in field-grown wheat: Raman spectroscopy as a non-destructive and rapid method","authors":"Zhen Gao , Daming Dong , Guiyan Yang , Xuelin Wen , Juekun Bai , Fengjing Cao , Chunjiang Zhao , Xiande Zhao","doi":"10.1016/j.compag.2025.110700","DOIUrl":"10.1016/j.compag.2025.110700","url":null,"abstract":"<div><div>Nitrogen, as a vital element for plant growth and development, significantly influences crop yields. Nitrogen deficiency severely impairs crop growth, while excess nitrogen harms the environment. To address this, there is an urgent need for rapid and on-site methods to assess the physiological status of crops under nitrogen stress. In this study, we utilized Raman spectroscopy, a non-destructive and rapid analytical technique, to evaluate the physiological status of wheat plants subjected to various nitrogen treatments. These treatments included optimal, low, excessive and zero nitrogen application. By leveraging Raman spectroscopy’s ability to identify characteristic peaks of metabolites in plant leaves and quantify them based on peak intensity, we analyzed the levels of carotenoids, chlorophylls, cellulose, lignin, and aliphatic components. Our results revealed significant differences in metabolite peak intensity under different nitrogen treatments. Optimal nitrogen application promoted the accumulation of metabolites, while nitrogen deficiency led to a marked decrease in photosynthetic pigments and structural components. Excessive nitrogen caused a reduction in lignin and cellulose. To diagnose nitrogen stress, we developed classification models that accurately distinguished between healthy and nitrogen-stressed plants, achieving a training set accuracy of 99 %, a 5-fold cross-validation accuracy of 92 %, and a prediction set accuracy of 93 %. Furthermore, we differentiated wheat plants with varying degrees of nitrogen deficiency, achieving a maximum accuracy of 78 %. 
When considering both nitrogen deficiency and excess, the maximum accuracy reached 58 %. This study provides a fast, accurate, and non-destructive analytical method for analyzing and diagnosing nitrogen stress in field wheat based on Raman spectroscopy. Future research aims to extend this approach to the diagnosis of nitrogen stress in other crops and to explore its applications in nitrogen fertilization management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110700"},"PeriodicalIF":7.7,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
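The Raman record above quantifies leaf metabolites from the intensities of their characteristic peaks. A toy sketch of baseline-corrected peak-height extraction (the band position used in the example, near the carotenoid band around 1520 cm⁻¹, and the window width are illustrative choices, not the authors' processing pipeline):

```python
import numpy as np

def peak_intensity(wavenumbers, spectrum, center, half_width=10.0):
    """Baseline-corrected height of the peak nearest `center` (cm^-1).

    Takes the maximum inside the window after subtracting a straight
    baseline drawn between the window's two endpoints.
    """
    m = (wavenumbers >= center - half_width) & (wavenumbers <= center + half_width)
    w, s = wavenumbers[m], spectrum[m]
    baseline = np.interp(w, [w[0], w[-1]], [s[0], s[-1]])
    return float((s - baseline).max())
```

Real Raman pre-processing would add cosmic-ray removal and a polynomial or asymmetric-least-squares baseline, but the peak-height idea is the same.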
{"title":"Development of goat behaviour prediction with accelerometer data: a machine learning and pre-processing approach","authors":"Daniel Alexander Méndez , Blanca Fajardo , Sergi Sanjuan , Jose Manuel Calabuig , Roger Arnau , Arantxa Villagrá , Salvador Calvet-Sanz , Fernando Estelles","doi":"10.1016/j.compag.2025.110701","DOIUrl":"10.1016/j.compag.2025.110701","url":null,"abstract":"<div><div>The increasing use of accelerometer data for monitoring livestock behaviour in Precision Livestock Farming (PLF) has prompted interest in optimizing machine learning models for real-time applications. This study evaluates the effects of pre-processing factors on predicting goat behaviours using accelerometer data collected in an intensive production environment. A triaxial accelerometer placed on goats’ necks recorded movement data, which was synchronized with video-based ethograms for behavioural annotation. Multiple pre-processing techniques, including filtering, windowing, overlap, and sampling frequency, together with several feature extraction parameters, were assessed to identify optimal combinations for behaviour classification. Various machine learning algorithms, including classification trees, logistic regression, and multilayer perceptron (MLP) models, were applied to predict <em>eating</em>, <em>walking</em>, and <em>inactive</em> behaviours. Results indicate that some of the applied pre-processing methods can inflate evaluation metrics, underscoring the importance of how training and test sets are selected. Tree-based classifiers and MLPs demonstrate robust performance, achieving average accuracies above 0.9. Battery tests demonstrate that the MLP extends the battery life of the accelerometer device by ∼25 %.
These findings highlight the potential of machine learning models in real-time behavioural monitoring to enhance goat management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110701"},"PeriodicalIF":7.7,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
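The pre-processing factors the goat-behaviour study varies (windowing, overlap, sampling frequency, feature extraction) come together in a single sliding-window feature step. A sketch with illustrative defaults (the window length, overlap, and chosen statistics are mine, not the study's settings):

```python
import numpy as np

def window_features(acc, fs=25, win_s=2.0, overlap=0.5):
    """acc: (n, 3) triaxial accelerometer signal sampled at fs Hz.

    Slides a win_s-second window with the given fractional overlap and
    returns per-window features: mean and std per axis plus the mean
    magnitude -> array of shape (n_windows, 7).
    """
    size = int(fs * win_s)
    step = int(size * (1 - overlap))
    rows = []
    for start in range(0, len(acc) - size + 1, step):
        w = acc[start:start + size]
        mag = np.linalg.norm(w, axis=1)          # per-sample magnitude
        rows.append(np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean()]]))
    return np.asarray(rows)
```

The study's caution about inflated metrics applies exactly here: overlapping windows from the same bout must not be split across training and test sets.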
{"title":"GrapeCPNet: A self-supervised point cloud completion network for 3D phenotyping of grape bunches","authors":"Wenli Zhang , Chao Zheng , Chenhuizi Wang , Pieter M. Blok , Haozhou Wang , Wei Guo","doi":"10.1016/j.compag.2025.110595","DOIUrl":"10.1016/j.compag.2025.110595","url":null,"abstract":"<div><div>The measurement of phenotypic parameters of fresh grapes, especially at the individual berry level, is critical for yield estimation and quality control. Currently, these measurements are done by humans, making the process costly, labor-intensive, and often inaccurate. Advances in 3D reconstruction and point cloud analysis allow extraction of detailed traits for grapes, yet current methods struggle with incomplete point clouds due to occlusion. This study presents a novel deep-learning-based phenotyping pipeline designed specifically for 3D point cloud data. First, individual berries are segmented from the grape bunch using the SoftGroup deep learning network. Next, a self-supervised point cloud completion network, termed GrapeCPNet, addresses occlusions by completing missing areas. Finally, morphological analyses are applied to extract berry radii and volumes. Validation on a dataset of four fresh grape varieties yielded <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> values of 85.5% for berry radius and 96.9% for berry volume, respectively.
These results demonstrate the potential of the proposed method for rapid and practical extraction of 3D phenotypic traits in grape cultivation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110595"},"PeriodicalIF":7.7,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144480877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
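The final step of the GrapeCPNet pipeline extracts berry radius and volume from the (completed) per-berry point cloud. A minimal geometric stand-in, assuming roughly spherical berries (this least-squares sphere fit and the helper names are mine, not the paper's morphological analysis):

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere through pts (n, 3); returns (center, radius).

    Uses the linearization |p|^2 = 2 c.p + (r^2 - |c|^2), so the fit is a
    single linear solve even when only part of the berry surface is visible.
    """
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    return center, np.sqrt(d + center @ center)

def berry_volume(radius):
    return 4.0 / 3.0 * np.pi * radius ** 3
```

Because the fit is linear, it works on a partial cap of points, which is exactly the occluded-berry case the completion network is there to mitigate.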