Research on efficient and fast extraction of vineyard navigation path based on key point detection

Quan Chen, Jiqing Chen, Zhiwu Jiang, Lixiang Huang, Jingyao Gai, Peilin Li, Mingchang Zhang

Engineering Applications of Artificial Intelligence, Volume 159, Article 111549, published 2025-07-02. DOI: 10.1016/j.engappai.2025.111549
Abstract
This study proposes an improved model based on key point detection, the YOLOv8-KN (You Only Look Once version 8-Keypoint Detection Navigation) model, for autonomous navigation path extraction by agricultural robots in vineyards. The model comprehensively optimizes the original network structure by introducing the FasterNet Block, Efficient Multi-Scale Attention (EMA), the Universal Inverted Bottleneck (UIB) module, and the DySample dynamic upsampler. The improved network accurately locates the key points of grapevine rhizomes directly from the image and uses the least squares method to fit straight lines to the key points on the left and right sides, thereby generating a high-precision navigation path. In the experiments, the model achieved an average precision of 87.1% and a key point detection precision of 91.2% on the grapevine rhizome detection task. At the same time, the model parameters are reduced by 25.8% compared with the original structure, and the computational cost is kept within 6.7 giga floating-point operations (GFLOPs). For navigation path extraction, the evaluation results show that the average yaw angle of the method is only 0.75°, the maximum yaw angle is 1.47°, the average pixel offset error is 7.94 pixels, the maximum offset error is 14.04 pixels, and the average path fitting time is only 1.66 milliseconds (ms). The experimental results verify the efficiency and precision of the proposed model in vineyards and provide a lightweight, high-performance solution for the autonomous navigation of agricultural robots in unstructured environments such as vineyards.
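To make the path-generation step concrete, the sketch below shows one plausible way to turn detected left- and right-row rhizome keypoints into a navigation line using NumPy's least-squares polynomial fit, and to derive a yaw angle and lateral pixel offset from that line. The function names, coordinate conventions, and the toy keypoint values are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit x = a*y + b through keypoints (x, y) in pixel coordinates.

    Fitting x as a function of y is convenient for near-vertical crop rows,
    where a y(x) fit would be ill-conditioned.
    """
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)  # coefficients of x = a*y + b
    return a, b

def navigation_path(left_pts, right_pts, img_height):
    """Fit the left and right rhizome rows, return the centerline endpoints
    (at the top and bottom of the image) and its yaw angle in degrees
    relative to the vertical image axis."""
    a_l, b_l = fit_line(left_pts)
    a_r, b_r = fit_line(right_pts)

    # Centerline: average of the two fitted row lines over the image height.
    a_c, b_c = (a_l + a_r) / 2.0, (b_l + b_r) / 2.0
    y_top, y_bot = 0.0, float(img_height - 1)
    p_top = (a_c * y_top + b_c, y_top)
    p_bot = (a_c * y_bot + b_c, y_bot)

    # Yaw angle: deviation of the centerline from the vertical (forward) axis.
    yaw_deg = np.degrees(np.arctan(a_c))
    return p_top, p_bot, yaw_deg

# Toy example with hypothetical keypoint coordinates (x, y) in pixels.
left = [(120, 80), (135, 240), (150, 400), (168, 560)]
right = [(520, 80), (505, 240), (488, 400), (470, 560)]
top, bottom, yaw = navigation_path(left, right, img_height=640)
offset_px = abs(bottom[0] - 640 / 2)  # lateral offset from image centre at the bottom row
print(f"yaw = {yaw:.2f} deg, offset = {offset_px:.1f} px")
```

Because only two 1-D least-squares fits and a handful of arithmetic operations are involved, this style of path fitting runs in well under a millisecond on commodity hardware, which is consistent with the sub-2 ms fitting times reported in the abstract.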
Journal Introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.