Cattle verification with YOLO and cross-attention encoder-based pairwise triplet loss

Authors: Niraj Kumar, Anshul Sharma, Abhinav Kumar, Rishav Singh, Sanjay Kumar Singh
Journal: Computers and Electronics in Agriculture, Volume 234, Article 110223 (Q1, Agriculture, Multidisciplinary; Impact Factor 7.7)
DOI: 10.1016/j.compag.2025.110223
Publication date: 2025-03-14
URL: https://www.sciencedirect.com/science/article/pii/S0168169925003291
Citation count: 0
Abstract
Cattle identification through biometrics has increasingly relied on non-invasive methods, in which facial and muzzle images are commonly used. While facial images alone offer valuable features, they can be sensitive to variations in lighting, pose, and occlusion, leading to inconsistent results. Similarly, muzzle images, though highly distinctive, may not provide sufficient discriminatory power for accurate identification in all cases. Combining facial and muzzle features offers a more comprehensive solution, overcoming the limitations of using either feature independently. Recent advancements in deep neural networks have significantly influenced this field, but achieving an optimal balance between model accuracy and computational efficiency remains a challenge. In this paper, we present a novel approach that combines a Cross-Attention Encoder with a Pairwise Triplet Loss function for cattle verification. This method processes face and muzzle images in parallel, enhancing the integration of features from both inputs. The cross-attention mechanism enables the model to focus on the most relevant regions, improving feature alignment and discrimination. With a compact architecture of only 0.6 million parameters, the encoder effectively captures essential features from both the face and muzzle, ensuring precise cattle verification without excessive computational demands. Our approach achieved a testing accuracy of 93.67%, with an average inference time of 35 ms per sample, demonstrating the model's efficiency. These findings highlight the strength of our attention-based method in delivering high accuracy and computational performance for cattle verification tasks.
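The two core ingredients the abstract describes — cross-attention between the face and muzzle feature streams, and a triplet-style metric loss on the fused embeddings — can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' implementation: the token shapes, scaled dot-product attention form, and margin value are assumptions, and a standard triplet margin loss stands in for the paper's Pairwise Triplet Loss.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(face_tokens, muzzle_tokens):
    """Face tokens attend to muzzle tokens: queries come from one
    stream, keys/values from the other, so each face region is
    re-expressed as a mixture of the most relevant muzzle regions.
    Shapes are hypothetical: (n_face, d) and (n_muzzle, d)."""
    d = face_tokens.shape[-1]
    scores = face_tokens @ muzzle_tokens.T / np.sqrt(d)   # (n_face, n_muzzle)
    return softmax(scores, axis=-1) @ muzzle_tokens       # (n_face, d)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors: pull the
    anchor toward a same-animal embedding and push it away from a
    different-animal embedding by at least `margin`."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

In a full model, the attended face tokens (and, symmetrically, muzzle tokens attending to face tokens) would be pooled into one embedding per animal, and the triplet loss would be applied to anchor/positive/negative embeddings during training.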
Journal overview:
Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics such as agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.