Multi-camera fusion and bird-eye view location mapping for deep learning-based cattle behavior monitoring
Muhammad Fahad Nasir, Alvaro Fuentes, Shujie Han, Jiaqi Liu, Yongchae Jeong, Sook Yoon, Dong Sun Park
{"title":"基于深度学习的牛行为监测的多摄像机融合和鸟瞰图定位","authors":"Muhammad Fahad Nasir , Alvaro Fuentes , Shujie Han , Jiaqi Liu , Yongchae Jeong , Sook Yoon , Dong Sun Park","doi":"10.1016/j.aiia.2025.06.001","DOIUrl":null,"url":null,"abstract":"<div><div>Cattle behavioral monitoring is an integral component of the modern infrastructure of the livestock industry. Ensuring cattle well-being requires precise observation, typically using wearable devices or surveillance cameras. Integrating deep learning into these systems enhances the monitoring of cattle behavior. However, challenges remain, such as occlusions, pose variations, and limited camera viewpoints, which hinder accurate detection and location mapping of individual cattle. To address these challenges, this paper proposes a multi-viewpoint surveillance system for indoor cattle barns, using footage from four cameras and deep learning-based models including action detection and pose estimation for behavior monitoring. The system accurately detects hierarchical behaviors across camera viewpoints. These results are fed into a Bird's Eye View (BEV) algorithm, producing precise cattle position maps in the barn. Despite complexities like overlapping and non-overlapping camera regions, our system, implemented on a real farm, ensures accurate cattle detection and BEV-based projections in real-time. Detailed experiments validate the system's efficiency, offering an end-to-end methodology for accurate behavior detection and location mapping of individual cattle using multi-camera data.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 724-743"},"PeriodicalIF":8.2000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-camera fusion and bird-eye view location mapping for deep learning-based cattle behavior monitoring\",\"authors\":\"Muhammad Fahad Nasir , Alvaro Fuentes , Shujie Han , Jiaqi Liu , Yongchae Jeong , Sook Yoon , Dong Sun Park\",\"doi\":\"10.1016/j.aiia.2025.06.001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Cattle behavioral monitoring is an integral component of the modern infrastructure of the livestock industry. Ensuring cattle well-being requires precise observation, typically using wearable devices or surveillance cameras. Integrating deep learning into these systems enhances the monitoring of cattle behavior. However, challenges remain, such as occlusions, pose variations, and limited camera viewpoints, which hinder accurate detection and location mapping of individual cattle. To address these challenges, this paper proposes a multi-viewpoint surveillance system for indoor cattle barns, using footage from four cameras and deep learning-based models including action detection and pose estimation for behavior monitoring. The system accurately detects hierarchical behaviors across camera viewpoints. These results are fed into a Bird's Eye View (BEV) algorithm, producing precise cattle position maps in the barn. Despite complexities like overlapping and non-overlapping camera regions, our system, implemented on a real farm, ensures accurate cattle detection and BEV-based projections in real-time. 
Detailed experiments validate the system's efficiency, offering an end-to-end methodology for accurate behavior detection and location mapping of individual cattle using multi-camera data.</div></div>\",\"PeriodicalId\":52814,\"journal\":{\"name\":\"Artificial Intelligence in Agriculture\",\"volume\":\"15 4\",\"pages\":\"Pages 724-743\"},\"PeriodicalIF\":8.2000,\"publicationDate\":\"2025-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence in Agriculture\",\"FirstCategoryId\":\"1087\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2589721725000649\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURE, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Agriculture","FirstCategoryId":"1087","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589721725000649","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Cattle behavioral monitoring is an integral component of modern livestock-industry infrastructure. Ensuring cattle well-being requires precise observation, typically using wearable devices or surveillance cameras. Integrating deep learning into these systems enhances the monitoring of cattle behavior. However, challenges remain, such as occlusions, pose variations, and limited camera viewpoints, which hinder accurate detection and location mapping of individual cattle. To address these challenges, this paper proposes a multi-viewpoint surveillance system for indoor cattle barns, using footage from four cameras and deep learning-based models, including action detection and pose estimation, for behavior monitoring. The system accurately detects hierarchical behaviors across camera viewpoints. These results are fed into a Bird's Eye View (BEV) algorithm, producing precise maps of cattle positions within the barn. Despite complexities such as overlapping and non-overlapping camera regions, our system, implemented on a real farm, ensures accurate cattle detection and BEV-based projections in real time. Detailed experiments validate the system's efficiency, offering an end-to-end methodology for accurate behavior detection and location mapping of individual cattle using multi-camera data.
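
The abstract describes a pipeline in which per-camera detections are projected onto a common barn-floor plane to build the BEV position map. The paper's implementation is not reproduced here; the following is only a minimal illustrative sketch, assuming one planar camera-to-floor homography per camera and placeholder detections. The function names, threshold, calibration matrices, and sample values are hypothetical and are not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): project per-camera
# detections onto a shared barn-floor plane via planar homographies, then
# merge nearby BEV points that likely come from the same animal seen in
# overlapping camera regions. Homographies and detections are placeholders.
import numpy as np

def to_bev(point_xy, H):
    """Map an image point (e.g., bottom-center of a cattle bounding box)
    to floor-plane coordinates with a 3x3 camera-to-floor homography."""
    x, y = point_xy
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

def fuse_positions(positions, radius=0.5):
    """Greedily merge BEV points closer than `radius` (floor-plane units,
    e.g., meters after calibration), assumed to be the same animal."""
    fused = []
    for p in positions:
        for i, q in enumerate(fused):
            if np.linalg.norm(p - q) < radius:
                fused[i] = (p + q) / 2.0  # average the overlapping estimates
                break
        else:
            fused.append(p)
    return fused

# Hypothetical calibration: one homography per camera (identity as stand-in).
homographies = {"cam1": np.eye(3), "cam2": np.eye(3)}
# Hypothetical detections: camera id -> bbox bottom-center pixels.
detections = {"cam1": [(640, 700)], "cam2": [(320, 450)]}

bev_points = [to_bev(pt, homographies[cam])
              for cam, pts in detections.items() for pt in pts]
print(fuse_positions(np.array(bev_points)))
```

In an actual deployment, the homographies would come from calibrating each camera against known floor landmarks, and the simple distance-based merging above would be replaced by the paper's multi-camera association across overlapping and non-overlapping regions.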