{"title":"Robust keypoint-based method for peduncle pose estimation in unstructured environments","authors":"Guozhao Shi , Fugui Zhang , Xuemei Wu","doi":"10.1016/j.compag.2025.110380","DOIUrl":null,"url":null,"abstract":"<div><div>Visual detection for automated fruit harvesting in unstructured environments constitutes a critical technical challenge, especially for fruit peduncles, which exhibit greater sensitivity to environmental factors than the fruits themselves. To address this challenge, this paper proposes a top-down keypoint detection method for pepper peduncles in unstructured environments. The proposed method enables accurate estimation of peduncle poses. The first step of the research involves validating different object detection models and employing the best-performing ones to identify the bounding boxes of pepper peduncles. Subsequently, a new keypoint detection model based on the Lite Vision Transformer is proposed, leveraging the Transformer’s capacity to capture long-range spatial and semantic dependencies. Experimental results on the pepper dataset collected in unstructured environments demonstrate that the proposed model achieves an AP<sup>50</sup> of 94.6 %. This performance surpasses multiple state-of-the-art keypoint detection methods while maintaining lightweight parameters and low computational complexity. Moreover, a series of tests reveals that the proposed method outperforms other algorithms in complex environments, especially in occlusion scenarios. Finally, a comprehensive evaluation of the top-down approach is conducted, examining the influence of object detection and keypoint detection models on overall performance. The proposed keypoint detection model achieves the highest performance, with a detection speed of 9.38 FPS when using YOLOv8s as the object detection model, and an AP<sup>50</sup> of 83.6 % when using YOLOv8l. Experiments conducted in real unstructured environments demonstrated the robustness of the proposed method, effectively detecting the posture of dense and occluded chili pepper peduncles. This research can be extended to the detection of fruit peduncles in other crops, providing a foundation for pose estimation of fruit peduncles in complex environments.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110380"},"PeriodicalIF":7.7000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Electronics in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168169925004867","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Visual detection for automated fruit harvesting in unstructured environments constitutes a critical technical challenge, especially for fruit peduncles, which exhibit greater sensitivity to environmental factors than the fruits themselves. To address this challenge, this paper proposes a top-down keypoint detection method for pepper peduncles in unstructured environments that enables accurate estimation of peduncle poses. The first step of the research validates different object detection models and employs the best-performing ones to identify the bounding boxes of pepper peduncles. Subsequently, a new keypoint detection model based on the Lite Vision Transformer is proposed, leveraging the Transformer’s capacity to capture long-range spatial and semantic dependencies. Experimental results on a pepper dataset collected in unstructured environments demonstrate that the proposed model achieves an AP50 of 94.6%, surpassing multiple state-of-the-art keypoint detection methods while maintaining a small parameter count and low computational complexity. Moreover, a series of tests reveals that the proposed method outperforms other algorithms in complex environments, especially in occlusion scenarios. Finally, a comprehensive evaluation of the top-down approach is conducted, examining the influence of the object detection and keypoint detection models on overall performance. The proposed keypoint detection model achieves the highest performance, with a detection speed of 9.38 FPS when using YOLOv8s as the object detection model and an AP50 of 83.6% when using YOLOv8l. Experiments conducted in real unstructured environments demonstrated the robustness of the proposed method, which effectively detects the pose of dense and occluded chili pepper peduncles. This research can be extended to the detection of fruit peduncles in other crops, providing a foundation for pose estimation of fruit peduncles in complex environments.
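The paper's trained detector and keypoint models are not part of this record, but the geometric step that makes the pipeline "top-down" can be sketched: keypoints are predicted inside the detector's bounding-box crop, mapped back to full-image coordinates, and the peduncle pose is derived from them. The function names, the two-keypoint (top/bottom of peduncle) convention, and the vertical-axis angle reference below are illustrative assumptions, not the paper's implementation.

```python
import math

def crop_to_image_coords(keypoints, bbox):
    """Map keypoints predicted in a bounding-box crop back to image coordinates.

    keypoints: list of (x, y) in crop-local pixels; bbox: (x0, y0, x1, y1)
    from a detector such as YOLOv8 (hypothetical output here).
    """
    x0, y0, _, _ = bbox
    return [(x0 + kx, y0 + ky) for kx, ky in keypoints]

def peduncle_pose_angle(p_top, p_bottom):
    """Orientation of the top->bottom peduncle axis, in degrees.

    0 means the peduncle points straight up in the image (y grows downward);
    positive angles lean right. atan2 keeps the result quadrant-correct.
    """
    dx = p_top[0] - p_bottom[0]
    dy = p_top[1] - p_bottom[1]
    return math.degrees(math.atan2(dx, -dy))

# Hypothetical detector bbox and crop-local keypoints for one peduncle.
bbox = (100.0, 50.0, 140.0, 120.0)
kps_crop = [(20.0, 5.0), (20.0, 60.0)]  # (top, bottom) in the crop
kps_img = crop_to_image_coords(kps_crop, bbox)
angle = peduncle_pose_angle(kps_img[0], kps_img[1])
print(kps_img, round(angle, 1))  # → [(120.0, 55.0), (120.0, 110.0)] 0.0
```

Running the keypoint model only on detector crops is what lets a lightweight head reach high AP50 at modest cost: each crop is a small, roughly canonical view of one peduncle, so the model never has to localize keypoints across the whole cluttered scene.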
Journal introduction:
Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics like agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.