A top-down deep neural network for multi-dairy cows pose estimation and lameness detection

Impact Factor 8.9 · CAS Tier 1 (Agricultural Sciences) · JCR Q1, AGRICULTURE, MULTIDISCIPLINARY
Saisai Wu, Shuqing Han, Xiaoxiang Mo, Yingying Wei, Yuanyuan Qin, He Chen, Jianzhai Wu, Zhikang Zeng
{"title":"A top-down deep neural network for multi-dairy cows pose estimation and lameness detection","authors":"Saisai Wu ,&nbsp;Shuqing Han ,&nbsp;Xiaoxiang Mo ,&nbsp;Yingying Wei ,&nbsp;Yuanyuan Qin ,&nbsp;He Chen ,&nbsp;Jianzhai Wu ,&nbsp;Zhikang Zeng","doi":"10.1016/j.compag.2025.110911","DOIUrl":null,"url":null,"abstract":"<div><div>Cow pose estimation and real-time health monitoring are important for refined herd management, improved animal welfare, and reduced passive culling rates. However, existing multi-object pose estimation methods often struggle to adapt to multi-scale objects in complex environments and typically exhibit low accuracy in detecting occluded keypoints. To address these challenges, this study proposes a top-down deep neural network for multi-dairy cows pose estimation and lameness detection, which integrates lightweight object detection, multi-scale feature fusion, and comprehensive motion feature analysis to improve the robustness under complex farm conditions. First, the real-time object detector YOLOv8n is improved by introducing the Partial Convolution (PConv) and Slim-neck modules, which improve both the efficiency and accuracy of object bounding box predictions, providing a solid foundation for the subsequent pose estimation. Second, a Path Aggregation Feature Pyramid Network (PAFPN)-based multi-scale feature fusion module is introduced as the neck network within the Real-time Multi-person Pose Estimation (RTMPose). This is further supported by a transfer learning strategy to improve keypoint localization, particularly under-occlusion and scale variation conditions. The experimental results show that the improved model achieves a mean average precision (<em>mAP)</em> of 95.8 %, significantly outperforming the baseline model and other existing algorithms. Seven motion features, including gait symmetry, head swing amplitude, and back curvature, were extracted in real time through pose tracking and motion trajectory analysis. These features were normalized and input into a Random Forest classifier for lameness detection. The model was evaluated on a dataset of 418 dairy cows and achieved average accuracy, sensitivity, and specificity values of 93.8 %, 94.4 %, and 97.5 %, respectively. These results demonstrate that combining multiple motion features provides a more accurate assessment of lameness.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110911"},"PeriodicalIF":8.9000,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Electronics in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168169925010178","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Cow pose estimation and real-time health monitoring are important for refined herd management, improved animal welfare, and reduced passive culling rates. However, existing multi-object pose estimation methods often struggle to adapt to multi-scale objects in complex environments and typically exhibit low accuracy in detecting occluded keypoints. To address these challenges, this study proposes a top-down deep neural network for multi-dairy-cow pose estimation and lameness detection, which integrates lightweight object detection, multi-scale feature fusion, and comprehensive motion feature analysis to improve robustness under complex farm conditions. First, the real-time object detector YOLOv8n is improved by introducing the Partial Convolution (PConv) and Slim-neck modules, which improve both the efficiency and accuracy of object bounding-box predictions and provide a solid foundation for the subsequent pose estimation. Second, a Path Aggregation Feature Pyramid Network (PAFPN)-based multi-scale feature fusion module is introduced as the neck network within Real-time Multi-person Pose Estimation (RTMPose). This is further supported by a transfer learning strategy to improve keypoint localization, particularly under occlusion and scale variation. The experimental results show that the improved model achieves a mean average precision (mAP) of 95.8 %, significantly outperforming the baseline model and other existing algorithms. Seven motion features, including gait symmetry, head swing amplitude, and back curvature, were extracted in real time through pose tracking and motion trajectory analysis. These features were normalized and input into a Random Forest classifier for lameness detection. The model was evaluated on a dataset of 418 dairy cows and achieved average accuracy, sensitivity, and specificity values of 93.8 %, 94.4 %, and 97.5 %, respectively. These results demonstrate that combining multiple motion features provides a more accurate assessment of lameness.
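To make the final classification stage concrete, the sketch below shows how seven normalized per-cow motion features could be fed to a Random Forest for lameness detection, as described in the abstract. It is a minimal sketch under stated assumptions: the feature names beyond the three the abstract mentions (gait symmetry, head swing amplitude, back curvature) are hypothetical placeholders, the min-max normalization is assumed, and the scikit-learn hyperparameters and the train/test split are illustrative, not taken from the paper.

```python
# Minimal sketch of the lameness-classification stage described in the abstract:
# seven motion features per cow are normalized and fed to a Random Forest.
# Placeholder feature names, MinMax normalization, and all hyperparameters are
# assumptions for illustration, not the paper's reported implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

FEATURES = [
    "gait_symmetry",         # named in the abstract
    "head_swing_amplitude",  # named in the abstract
    "back_curvature",        # named in the abstract
    "feature_4", "feature_5", "feature_6", "feature_7",  # hypothetical placeholders
]

def train_lameness_classifier(X: np.ndarray, y: np.ndarray):
    """X: (n_cows, 7) motion features; y: 0 = sound, 1 = lame."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = make_pipeline(
        MinMaxScaler(),  # "normalized" features (normalization method assumed)
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    clf.fit(X_train, y_train)

    # Sensitivity = recall on the lame class; specificity = recall on the sound class.
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return clf, accuracy, sensitivity, specificity

if __name__ == "__main__":
    # Synthetic stand-in data; the paper evaluated features from 418 cows.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(418, len(FEATURES)))
    y = rng.integers(0, 2, size=418)
    _, acc, sens, spec = train_lameness_classifier(X, y)
    print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

In practice the feature matrix would come from the upstream pipeline (detection, keypoint tracking, and trajectory analysis), with one row of seven features per cow per observation window.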
Source journal
Computers and Electronics in Agriculture (category: Computer Science, Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Articles per year: 800
Review time: 62 days
Journal description: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics like agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.