Jihyun Seo, J. Sa, Younchang Choi, Yongwha Chung, Daihee Park, Hakjae Kim
{"title":"基于yolo的智能养猪场触摸猪分离技术","authors":"Jihyun Seo, J. Sa, Younchang Choi, Yongwha Chung, Daihee Park, Hakjae Kim","doi":"10.23919/ICACT.2019.8701968","DOIUrl":null,"url":null,"abstract":"For specific livestock such as pigs in a pigsty, many surveillance applications have been reported to consider their health for efficient livestock management. For pig surveillance applications, separating touching-pigs in real-time is an important issue for a final goal of 24-hour tracking of individual pigs. Although convolutional neural network (CNN)-based instance segmentation techniques can be applied to this separation problem, their collective performance of accuracy-time may not satisfy the required performance. In this study, we improve the collective performance of accuracy-time by combining the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) with image processing techniques. We first apply image processing techniques to detect touching-pigs by using both infrared and depth information acquired from an Intel RealSense camera, then apply YOLO to separate the touching-pigs. Especially, in using YOLO as an object detector, we consider the target object as the boundary region of the touching-pigs, rather than the individual pigs of the touching-pigs. Finally, we apply image processing techniques to determine the final boundary line from the YOLO output. 
Our experimental results show that this method is effective to separate touching-pigs in terms of the collective performance of accuracy-time, compared to the recently reported CNN-based instance segmentation technique.","PeriodicalId":226261,"journal":{"name":"2019 21st International Conference on Advanced Communication Technology (ICACT)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"A YOLO-based Separation of Touching-Pigs for Smart Pig Farm Applications\",\"authors\":\"Jihyun Seo, J. Sa, Younchang Choi, Yongwha Chung, Daihee Park, Hakjae Kim\",\"doi\":\"10.23919/ICACT.2019.8701968\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For specific livestock such as pigs in a pigsty, many surveillance applications have been reported to consider their health for efficient livestock management. For pig surveillance applications, separating touching-pigs in real-time is an important issue for a final goal of 24-hour tracking of individual pigs. Although convolutional neural network (CNN)-based instance segmentation techniques can be applied to this separation problem, their collective performance of accuracy-time may not satisfy the required performance. In this study, we improve the collective performance of accuracy-time by combining the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) with image processing techniques. We first apply image processing techniques to detect touching-pigs by using both infrared and depth information acquired from an Intel RealSense camera, then apply YOLO to separate the touching-pigs. Especially, in using YOLO as an object detector, we consider the target object as the boundary region of the touching-pigs, rather than the individual pigs of the touching-pigs. 
Finally, we apply image processing techniques to determine the final boundary line from the YOLO output. Our experimental results show that this method is effective to separate touching-pigs in terms of the collective performance of accuracy-time, compared to the recently reported CNN-based instance segmentation technique.\",\"PeriodicalId\":226261,\"journal\":{\"name\":\"2019 21st International Conference on Advanced Communication Technology (ICACT)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 21st International Conference on Advanced Communication Technology (ICACT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ICACT.2019.8701968\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 21st International Conference on Advanced Communication Technology (ICACT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICACT.2019.8701968","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
A YOLO-based Separation of Touching-Pigs for Smart Pig Farm Applications
For specific livestock such as pigs in a pigsty, many surveillance applications have been reported that consider animal health for efficient livestock management. For pig surveillance, separating touching-pigs in real time is an important step toward the final goal of 24-hour tracking of individual pigs. Although convolutional neural network (CNN)-based instance segmentation techniques can be applied to this separation problem, their combined accuracy-time performance may not meet the requirements. In this study, we improve the combined accuracy-time performance by coupling the fastest CNN-based object detection technique (You Only Look Once, YOLO) with image processing techniques. We first apply image processing to detect touching-pigs using both infrared and depth information acquired from an Intel RealSense camera, and then apply YOLO to separate them. In particular, when using YOLO as an object detector, we treat the target object as the boundary region between the touching-pigs rather than the individual pigs themselves. Finally, we apply image processing to determine the final boundary line from the YOLO output. Our experimental results show that this method separates touching-pigs effectively in terms of combined accuracy-time performance, compared to a recently reported CNN-based instance segmentation technique.
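The pipeline the abstract describes (detect a touching-pigs blob, have YOLO localize the boundary region between the two animals, then derive a separation line from that region) can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the YOLO-detected boundary-region box is assumed to be already available, and the final boundary line is simplified to a vertical split at the box's horizontal center, whereas the paper derives it with further image processing on the YOLO output.

```python
def split_touching_blob(mask, boundary_box):
    """Split a binary touching-pigs mask into two pig labels.

    mask: 2-D list of 0/1 values (1 = pig pixel), e.g. from depth/infrared
          thresholding of an Intel RealSense frame.
    boundary_box: (x0, y0, x1, y1) box around the boundary region between
                  the two pigs, standing in for the YOLO detection.
    Returns a 2-D list where 0 = background, 1 = left pig, 2 = right pig.
    """
    x0, y0, x1, y1 = boundary_box
    # Simplification: split at the box's horizontal center; the paper
    # instead refines the boundary line with image processing.
    cx = (x0 + x1) // 2
    labels = [[0] * len(row) for row in mask]
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                labels[y][x] = 1 if x < cx else 2
    return labels


# Toy 4x6 mask: one horizontal blob of two touching pigs.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
labels = split_touching_blob(mask, boundary_box=(2, 0, 4, 4))
```

After the split, each label can be tracked as an individual pig in subsequent frames, which is the 24-hour tracking goal the abstract mentions.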