A Study on Machine Learning-Based Image Identification Towards Assistive Automation of Commentary on Animation Characters
Yutaka Yoshino, Kazuki Nakada, M. Kobayashi, H. Tatsumi
2019 International Conference on Machine Learning and Cybernetics (ICMLC), July 2019
DOI: 10.1109/ICMLC48188.2019.8949258 (https://doi.org/10.1109/ICMLC48188.2019.8949258)
Citations: 0
Abstract
This study aims to assist visually impaired people, as well as animation novices, by addressing problems that arise when viewing animation videos and images. We focus on three problems: (1) difficulty in understanding behaviors and situations, (2) difficulty in discriminating animation characters, and (3) confusion caused by animation characters that resemble one another. As a preliminary verification, we use deep neural networks to identify animation characters, training a customized convolutional neural network (CNN) from scratch on a small number of classes drawn from our original database of animation characters. The results show that some combinations of characters are difficult to discriminate under cross-validation. To resolve this problem, we performed transfer learning based on CNN variants pre-trained on the natural-image database ImageNet. We confirmed that training proceeded steadily, with a gradual learning curve, and resulted in high accuracy. These results indicate that the bottleneck features of CNN variants pre-trained on ImageNet are effective for identifying animation characters. Furthermore, we measured the inference speed of the trained CNN on a microcomputer board equipped with an Intel Movidius machine learning accelerator and confirmed that it is sufficient for real-time execution.
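The abstract does not name the framework or the specific pre-trained CNN variants used, so the following is only a minimal sketch of the bottleneck-feature transfer-learning setup it describes, assuming TensorFlow/Keras with a VGG16 backbone; the class count, layer sizes, and dataset objects are hypothetical placeholders, not the paper's actual configuration.

    # Minimal transfer-learning sketch (assumptions: Keras, VGG16 backbone,
    # hypothetical NUM_CLASSES and dataset objects).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 10  # assumed number of animation characters

    # ImageNet-pre-trained CNN used as a frozen feature extractor, so its
    # "bottleneck features" feed a small trainable classification head.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds / val_ds would be tf.data.Dataset objects of (image, label) pairs:
    # model.fit(train_ds, validation_data=val_ds, epochs=20)

Freezing the backbone and training only the head mirrors the abstract's finding that ImageNet bottleneck features transfer well to animation characters; fine-tuning the upper convolutional layers would be an optional further step.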
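Likewise, the abstract does not say which toolkit drove the Movidius accelerator; below is a sketch assuming Intel's OpenVINO runtime, whose MYRIAD device plugin targets Movidius VPUs. The converted-model file name, input shape, and timing loop are illustrative assumptions.

    # Hypothetical on-device inference and latency check via OpenVINO's
    # MYRIAD plugin; file name, input shape, and preprocessing are assumed.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("character_classifier.xml")  # assumed IR model file
    compiled = core.compile_model(model, device_name="MYRIAD")

    image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input batch
    result = compiled([image])[compiled.output(0)]
    print("predicted class:", int(np.argmax(result)))

    # Rough latency estimate, in the spirit of the abstract's speed verification.
    t0 = time.perf_counter()
    for _ in range(100):
        compiled([image])
    print("mean latency: %.1f ms" % ((time.perf_counter() - t0) / 100 * 1e3))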