Automatic Key Frame Extraction From Videos For Efficient Mouse Pain Scoring
M. Kopaczka, Lisa Ernst, Jakob Heckelmann, C. Schorn, R. Tolba, D. Merhof
2018 5th International Conference on Signal Processing and Integrated Networks (SPIN), February 2018. DOI: 10.1109/SPIN.2018.8474046
Laboratory animals used for experiments need to be monitored closely for signs of pain and distress. A well-established scoring system is the mouse grimace scale (MGS), a method in which defined morphological changes of the rodent’s eyes, ears, nose, whiskers and cheeks are assessed by human experts. While proven to be highly reliable, MGS assessment is a time-consuming task requiring manual processing of videos for key frame extraction and subsequent expert grading. While several tools have been presented to support this task for white laboratory rats, no methods are available for the most widely used mouse strain (C57BL/6), which is inherently black. In our work, we present a set of methods that aid the expert in the annotation task by automatically processing a video and extracting images of single animals for further assessment. We introduce algorithms for separating an image that potentially contains multiple animals into subimages, each displaying exactly one mouse. Additionally, we show how a fully convolutional neural network and a subsequent grading function can be designed to select frames that show a profile view of the mouse and therefore allow convenient grading. We evaluate our algorithms and show that the proposed pipeline works reliably and allows fast selection of relevant frames.
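To make the two pipeline stages described in the abstract more concrete, the following is a minimal sketch, not the authors' implementation. It assumes dark (C57BL/6) animals filmed against a brighter cage background, segments them with Otsu thresholding and connected-component analysis as a stand-in for the paper's fully convolutional network, and rates each silhouette with a simple elongation heuristic as a stand-in for the paper's grading function. All function names (split_into_single_mice, profile_view_score, select_key_frames) and parameters are hypothetical.

import cv2
import numpy as np

def split_into_single_mice(frame_gray, min_area=2000):
    """Split a grayscale frame into subimages, each containing one dark blob (one mouse)."""
    # Dark animals on a bright background: invert so the mice become foreground.
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    subimages = []
    for label in range(1, num_labels):          # label 0 is the background
        x, y, w, h, area = stats[label]
        if area < min_area:                     # discard small noise blobs
            continue
        crop = frame_gray[y:y + h, x:x + w]
        mask = (labels[y:y + h, x:x + w] == label).astype(np.uint8)
        subimages.append((crop, mask))
    return subimages

def profile_view_score(mask):
    """Heuristic grading function: an elongated silhouette suggests a side (profile) view."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    return width / float(height)                # larger ratio ~ more lateral pose

def select_key_frames(video_path, top_k=10):
    """Score every frame of a video and return the indices of the best candidates."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        per_animal = [profile_view_score(m) for _, m in split_into_single_mice(gray)]
        scores.append((max(per_animal) if per_animal else 0.0, idx))
        idx += 1
    cap.release()
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]

In the actual pipeline, the thresholded silhouette would be replaced by the network's per-pixel prediction and the elongation heuristic by the learned grading function; the top-ranked frames would then be handed to the expert for MGS annotation.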