{"title":"用于奶牛线型性状自动评估的深度学习辅助计算机视觉系统","authors":"","doi":"10.1016/j.atech.2024.100509","DOIUrl":null,"url":null,"abstract":"<div><p>The assessment of traits is important in determining production potential, reproductive performance, and overall health of dairy cows. The assessment of these traits typically involves visual inspection and manual measurement, which can be time-consuming, subject to bias, and potentially distressing for the animals. To address these challenges, convolutional neural networks (CNNs)-aided non-invasive computer vision system was developed in the present study. This system consists of a depth camera to acquire the RGB images and depth information of cows. The DeepLabV3+ model, having the ResNet50 model as a backbone, was utilized to segment the body parts of cows from RGB images. Image processing-based algorithms were developed to extract key pixel locations for each trait from these segmented images. The system estimated trait dimensions utilizing 3D data of respective key points. The mean-IoU (intersection-over-union) values for the developed segmentation models were 93.46%, 91.25%, and 99.27% for side-view, back-view traits, and stature, respectively. Additionally, the vision system was able to estimate the trait dimensions with mean absolute percentage error (MAPE) below 6.0%. For a few traits, MAPE, however, exceeded 10.0%, indicating higher error. Inaccurate segmentation, imprecise key point extraction, visual overlaps of specific body parts, and variations in cow postures contribute to such errors. The developed system attained a Ratio of Performance to Deviation (RPD) above 1.2 for all traits, indicating its ability to estimate the dimensions of traits efficaciously. Thus, the present study demonstrated the potential of a CNN-based computer vision-based system for automating the trait measurement process in cows.</p></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":null,"pages":null},"PeriodicalIF":6.3000,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S277237552400114X/pdfft?md5=2021b702e771d6837d175d73a83d4cc5&pid=1-s2.0-S277237552400114X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Deep learning aided computer vision system for automated linear type trait evaluation in dairy cows\",\"authors\":\"\",\"doi\":\"10.1016/j.atech.2024.100509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The assessment of traits is important in determining production potential, reproductive performance, and overall health of dairy cows. The assessment of these traits typically involves visual inspection and manual measurement, which can be time-consuming, subject to bias, and potentially distressing for the animals. To address these challenges, convolutional neural networks (CNNs)-aided non-invasive computer vision system was developed in the present study. This system consists of a depth camera to acquire the RGB images and depth information of cows. The DeepLabV3+ model, having the ResNet50 model as a backbone, was utilized to segment the body parts of cows from RGB images. Image processing-based algorithms were developed to extract key pixel locations for each trait from these segmented images. The system estimated trait dimensions utilizing 3D data of respective key points. 
The mean-IoU (intersection-over-union) values for the developed segmentation models were 93.46%, 91.25%, and 99.27% for side-view, back-view traits, and stature, respectively. Additionally, the vision system was able to estimate the trait dimensions with mean absolute percentage error (MAPE) below 6.0%. For a few traits, MAPE, however, exceeded 10.0%, indicating higher error. Inaccurate segmentation, imprecise key point extraction, visual overlaps of specific body parts, and variations in cow postures contribute to such errors. The developed system attained a Ratio of Performance to Deviation (RPD) above 1.2 for all traits, indicating its ability to estimate the dimensions of traits efficaciously. Thus, the present study demonstrated the potential of a CNN-based computer vision-based system for automating the trait measurement process in cows.</p></div>\",\"PeriodicalId\":74813,\"journal\":{\"name\":\"Smart agricultural technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S277237552400114X/pdfft?md5=2021b702e771d6837d175d73a83d4cc5&pid=1-s2.0-S277237552400114X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart agricultural technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S277237552400114X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURAL ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart agricultural technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S277237552400114X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
Deep learning aided computer vision system for automated linear type trait evaluation in dairy cows
The assessment of linear type traits is important for determining the production potential, reproductive performance, and overall health of dairy cows. Assessment of these traits typically involves visual inspection and manual measurement, which can be time-consuming, subject to bias, and potentially distressing for the animals. To address these challenges, a convolutional neural network (CNN)-aided, non-invasive computer vision system was developed in the present study. The system uses a depth camera to acquire RGB images and depth information of cows. A DeepLabV3+ model with a ResNet50 backbone was used to segment the body parts of cows in the RGB images, and image-processing algorithms were developed to extract key pixel locations for each trait from the segmented images. The system then estimated trait dimensions from the 3D data at the corresponding key points. The mean intersection-over-union (mIoU) values of the developed segmentation models were 93.46%, 91.25%, and 99.27% for side-view traits, back-view traits, and stature, respectively. The vision system estimated most trait dimensions with a mean absolute percentage error (MAPE) below 6.0%; for a few traits, however, MAPE exceeded 10.0%, indicating larger errors. Inaccurate segmentation, imprecise key point extraction, visual overlap of specific body parts, and variations in cow posture contributed to these errors. The system attained a ratio of performance to deviation (RPD) above 1.2 for all traits, indicating that it can estimate trait dimensions effectively. The present study thus demonstrates the potential of a CNN-based computer vision system for automating trait measurement in dairy cows.
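The abstract gives no implementation details, but the two components it describes can be sketched briefly. First, a semantic segmentation network with a DeepLabV3 head on a ResNet50 backbone is available in torchvision; the snippet below is a minimal sketch under the assumption of a PyTorch workflow and a hypothetical number of body-part classes (the authors' actual class list, training code, and data pipeline are not stated in the abstract, and torchvision ships DeepLabV3 rather than the DeepLabV3+ decoder, so this only approximates the described model).

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical: background + 3 body-part classes; the paper's class set is not
# specified in the abstract.
NUM_CLASSES = 4

# DeepLabV3 head on a ResNet50 backbone, untrained (weights=None).
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Run a single RGB frame through the network and take the per-pixel class map.
rgb = torch.rand(1, 3, 480, 640)        # placeholder for a depth-camera RGB frame
with torch.no_grad():
    logits = model(rgb)["out"]          # shape: (1, NUM_CLASSES, 480, 640)
mask = logits.argmax(dim=1)             # segmented body parts, one label per pixel
```

Second, the reported error metrics can be computed from paired manual and vision-estimated measurements. The functions below follow the standard definitions of MAPE and of RPD (standard deviation of the reference measurements divided by the RMSE of prediction); the exact formulation used by the authors is not given in the abstract, and the example numbers are purely illustrative.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of reference values / RMSE."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return np.std(y_true, ddof=1) / rmse

# Illustrative values only: manually measured vs. vision-estimated stature (cm).
manual = [142.0, 138.5, 145.2, 140.0, 147.3]
estimated = [140.8, 139.9, 143.0, 141.5, 146.0]
print(f"MAPE = {mape(manual, estimated):.2f}%  RPD = {rpd(manual, estimated):.2f}")
```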