Research on Calf Behavior Recognition Based on Improved Lightweight YOLOv8 in Farming Scenarios.

IF 2.7 · CAS Tier 2 (Agricultural Sciences) · Q1 AGRICULTURE, DAIRY & ANIMAL SCIENCE
Animals Pub Date : 2025-03-20 DOI:10.3390/ani15060898
Ze Yuan, Shuai Wang, Chunguang Wang, Zheying Zong, Chunhui Zhang, Lide Su, Zeyu Ban
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11939661/pdf/
Citations: 0

Abstract

To achieve accurate and efficient recognition of calf behavior in complex scenes involving overlapping animals, occlusion, and varying illumination, this study improves the YOLOv8 model for calf behavior recognition. A dataset of 2918 images of daily calf behaviors, extracted from video frames, serves as the test benchmark. A P2 small-target detection layer is introduced to add a higher-resolution detection head, which significantly improves recognition accuracy, while the LAMP pruning method reduces the model's computational complexity and storage requirements. The improved model is compared with the advanced SSD, YOLOv5n, YOLOv8n, YOLOv8-C2f-Faster-EMA, YOLOv11n, YOLOv12n, and YOLOv8-P2 models. With the P2 small-target detection layer and LAMP pruning, the model has 0.949 M parameters, 4.0 G floating-point operations (FLOPs), a size of 2.3 MB, and a mean average precision (mAP) of 90.9%; the improvement in each metric reduces the model size while raising network accuracy. Detection results in complex environments with different lighting and occlusion levels show an mAP of 85.1% in daytime (exposure) and 84.8% in nighttime environments, and an average mAP of 87.3% across three occlusion levels (light, medium, and heavy), demonstrating a lightweight, high-precision, real-time, and robust model. These results provide a reference for all-day, real-time monitoring of calf behavior in complex environments.
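The gain from a P2 layer can be made concrete with a quick back-of-the-envelope calculation. YOLOv8's standard heads predict on stride-8/16/32 feature maps; a P2 head adds a stride-4 map, sharply increasing the number of fine-grained prediction cells available for small targets. A minimal illustrative sketch (the function and the 640-pixel input size are our assumptions, not details from the paper):

```python
def head_grid_cells(img_size: int, strides: list[int]) -> dict[int, int]:
    """Prediction cells contributed by each detection head.

    Each head predicts on a feature map of (img_size / stride)^2 cells,
    so smaller strides see the image at finer granularity -- which is
    why a stride-4 P2 head helps with small or distant targets.
    """
    return {s: (img_size // s) ** 2 for s in strides}

baseline = head_grid_cells(640, [8, 16, 32])    # standard YOLOv8 heads
with_p2 = head_grid_cells(640, [4, 8, 16, 32])  # plus a P2 small-target head

print(baseline)          # {8: 6400, 16: 1600, 32: 400}
print(with_p2[4])        # 25600 extra fine-grained cells from the P2 head
```

The stride-4 head alone contributes roughly three times as many cells as the three standard heads combined, which is also why the paper pairs it with pruning to keep the model lightweight.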
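The LAMP (layer-adaptive magnitude-based pruning) strategy mentioned above scores each weight by its squared magnitude relative to the surviving squared weights in its own layer, then removes the globally lowest-scoring weights. The sketch below shows only that scoring rule in pure Python; the layer names and flat-list weight layout are hypothetical stand-ins, and a real implementation would operate on the model's convolution tensors:

```python
def lamp_scores(weights):
    """LAMP score for each weight in one layer.

    score(w) = w^2 / (sum of w'^2 over weights in the layer with
    w'^2 >= w^2), so the largest weight in every layer scores 1.0
    and is pruned last within its layer.
    """
    order = sorted(range(len(weights)), key=lambda i: weights[i] ** 2, reverse=True)
    scores, running = [0.0] * len(weights), 0.0
    for i in order:
        running += weights[i] ** 2
        scores[i] = weights[i] ** 2 / running
    return scores

def lamp_prune(layers, sparsity):
    """Zero out the globally lowest-scoring fraction `sparsity` of weights.

    `layers` maps a layer name to a flat list of weights.
    """
    ranked = sorted(
        (score, name, i)
        for name, w in layers.items()
        for i, score in enumerate(lamp_scores(w))
    )
    pruned = {name: list(w) for name, w in layers.items()}
    n_total = sum(len(w) for w in layers.values())
    for _, name, i in ranked[: int(sparsity * n_total)]:
        pruned[name][i] = 0.0
    return pruned

# Prune 40% of 5 weights: the two lowest LAMP scores go first.
print(lamp_prune({"conv1": [3.0, 1.0, 0.1], "conv2": [2.0, 0.2]}, 0.4))
# {'conv1': [3.0, 1.0, 0.0], 'conv2': [2.0, 0.0]}
```

Because scores are normalized per layer before the global ranking, LAMP avoids stripping entire thin layers the way plain global magnitude pruning can, which fits the paper's goal of shrinking the P2-augmented model without collapsing accuracy.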

Source Journal
Animals
Agricultural and Biological Sciences: Animal Science and Zoology
CiteScore: 4.90
Self-citation rate: 16.70%
Articles published: 3015
Average review time: 20.52 days
Journal introduction: Animals (ISSN 2076-2615) is an international and interdisciplinary scholarly open access journal. It publishes original research articles, reviews, communications, and short notes that are relevant to any field of study that involves animals, including zoology, ethnozoology, animal science, animal ethics and animal welfare. However, preference will be given to those articles that provide an understanding of animals within a larger context (i.e., the animals' interactions with the outside world, including humans). There is no restriction on the length of the papers. Our aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible. Full experimental details and/or method of study must be provided for research articles. Articles submitted that involve subjecting animals to unnecessary pain or suffering will not be accepted, and all articles must be submitted with the necessary ethical approval (please refer to the Ethical Guidelines for more information).