A YOLO-based Separation of Touching-Pigs for Smart Pig Farm Applications

Jihyun Seo, J. Sa, Younchang Choi, Yongwha Chung, Daihee Park, Hakjae Kim
DOI: 10.23919/ICACT.2019.8701968
Published in: 2019 21st International Conference on Advanced Communication Technology (ICACT), February 2019
Citations: 13

Abstract

For specific livestock such as pigs in a pigsty, many surveillance applications have been reported that monitor their health for efficient livestock management. For pig surveillance applications, separating touching-pigs in real time is an important issue on the way to the final goal of 24-hour tracking of individual pigs. Although convolutional neural network (CNN)-based instance segmentation techniques can be applied to this separation problem, their combined accuracy-time performance may not meet the requirements. In this study, we improve the combined accuracy-time performance by pairing the fastest CNN-based object detection technique (You Only Look Once, YOLO) with image processing techniques. We first apply image processing techniques to detect touching-pigs using both the infrared and depth information acquired from an Intel RealSense camera, and then apply YOLO to separate them. In particular, when using YOLO as an object detector, we treat the target object as the boundary region between the touching-pigs, rather than as the individual pigs themselves. Finally, we apply image processing techniques to determine the final boundary line from the YOLO output. Our experimental results show that this method separates touching-pigs effectively in terms of the combined accuracy-time performance, compared to a recently reported CNN-based instance segmentation technique.
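The key idea in the abstract is that YOLO does not detect each pig; it detects the boundary region between two touching pigs, and image processing then derives the final boundary line inside that region. A minimal sketch of that last step is shown below, under stated assumptions: the foreground blob has already been segmented from the depth image, `box` stands in for a hypothetical YOLO detection of the boundary region, and the boundary line is approximated by the midline of the box along its major axis (the paper refines this further with image processing).

```python
import numpy as np

def split_touching_pigs(mask, box):
    """Split a touching-pigs blob along a YOLO-detected boundary region.

    mask : 2-D bool array, True where the merged two-pig blob is
           (assumed already extracted from the depth image).
    box  : (x1, y1, x2, y2) bounding box that a YOLO detector placed
           around the boundary region between the two pigs
           (hypothetical output format for this sketch).

    Returns two boolean masks, one per pig. The boundary line is taken
    as the midline of the box along its major axis -- a simplifying
    assumption in place of the paper's image-processing refinement.
    """
    x1, y1, x2, y2 = box
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]  # per-pixel row/column coordinates
    if (x2 - x1) >= (y2 - y1):
        # Box is wider than tall: boundary runs horizontally,
        # so split the blob above/below the box's vertical midpoint.
        first = mask & (ys < (y1 + y2) / 2)
    else:
        # Box is taller than wide: boundary runs vertically,
        # so split the blob left/right of the box's horizontal midpoint.
        first = mask & (xs < (x1 + x2) / 2)
    second = mask & ~first
    return first, second

# Usage on a synthetic blob: two 8x6 "pigs" touching at column 10,
# with a (hypothetical) YOLO box around the vertical contact region.
blob = np.zeros((10, 20), dtype=bool)
blob[2:8, 2:18] = True
pig_a, pig_b = split_touching_pigs(blob, (8, 2, 12, 8))
```

The split here assigns each foreground pixel to one side of the box midline, so the two returned masks partition the original blob exactly; in the paper, the actual boundary line is recovered from the pixels inside the YOLO box rather than assumed straight.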