Zhiheng Lu, Kai Huang, Yu Li, Shusheng Yu, Hao Lai, Tianyun Dong, Wang Yang, Guanghui Wang
Title: Research and experimentation on an automatic teat cup attachment system for dairy cows based on visual perception
DOI: 10.1016/j.biosystemseng.2025.104142
Journal: Biosystems Engineering, Volume 254, Article 104142 (JCR Q1, Agricultural Engineering; IF 4.4)
Publication date: 2025-04-11 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1537511025000789
Citations: 0
Abstract
The milking process is the most labour-intensive activity in dairy farming, and teat cup attachment is the primary challenge in realizing automatic milking. Existing AMS (automatic milking systems) primarily use laser/vision technology in their sensing systems, leading to a slow teat-sensing process and difficulty attaching teat cups to angled teats. Moreover, current depth-sensing technologies fail to achieve real-time, high-accuracy 6D pose estimation (3D position and orientation) for deformable biological targets such as cow teats. To address these issues, we first expand the cow udder image dataset and propose an arbitrary-orientation teat detection and target endpoint positioning method based on YOLOv8-obb. The AP (average precision) of the method is 97.82%, the average orientation error is 1.83°, the average positioning error of the target endpoint is 2.04 pixels, and the detection time is 11 ms per image. Secondly, based on the morphological characteristics of the teats, and combining the teats' rotated bounding boxes with binocular vision, we develop an efficient method for calculating the spatial positions and orientations of the teats' target endpoints, with an average time consumption of 24.62 ms. Finally, a lightweight automatic teat cup attachment system is built to perform experiments. Across the 25 groups of experiments, the average spatial positioning errors along the X, Y, and Z axes are 2.54 mm, 2.08 mm, and 3.90 mm, respectively, while the average spatial orientation errors are 1.16°, 1.28°, and 1.73°. The average time consumption for the entire process is 116.71 ms. The results demonstrate the feasibility and accuracy of the proposed methods.
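The abstract's binocular positioning step recovers a teat endpoint's 3D position from its matched pixel coordinates in the two camera images. The sketch below is not the authors' implementation; it illustrates the textbook triangulation formula for a rectified stereo pair, with the focal length, baseline, and principal point as assumed calibration parameters.

```python
# Generic rectified-stereo triangulation sketch (not the paper's code).
# Assumes both cameras share focal length f (pixels), are separated by
# baseline B (mm), and the images are rectified so a matched point has
# the same row in both views.

def triangulate_endpoint(xl, yl, xr, f, B, cx, cy):
    """Recover the 3D position (mm, left-camera frame) of a matched point.

    xl, yl -- pixel coordinates of the point in the left image
    xr     -- x pixel coordinate of the same point in the right image
    f      -- focal length in pixels (assumed from calibration)
    B      -- stereo baseline in mm (assumed from calibration)
    cx, cy -- principal point of the left camera (assumed from calibration)
    """
    disparity = xl - xr  # positive for points in front of the rig
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = f * B / disparity      # depth along the optical axis
    X = (xl - cx) * Z / f      # back-project the left-image pixel
    Y = (yl - cy) * Z / f
    return X, Y, Z

# Example: a 50-pixel disparity with f = 1000 px and B = 60 mm
# places the endpoint 1200 mm from the camera.
print(triangulate_endpoint(700, 500, 650, 1000.0, 60.0, 640, 480))
```

An orientation estimate follows the same pattern: triangulating two points along a teat's rotated bounding box axis and taking the direction of the resulting 3D segment, though the paper's exact procedure is not specified in the abstract.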
Journal introduction:
Biosystems Engineering publishes research in engineering and the physical sciences that represent advances in understanding or modelling of the performance of biological systems for sustainable developments in land use and the environment, agriculture and amenity, bioproduction processes and the food chain. The subject matter of the journal reflects the wide range and interdisciplinary nature of research in engineering for biological systems.