{"title":"面向面部表情识别增量深度学习的跨领域知识转移","authors":"Nehemia Sugianto, D. Tjondronegoro","doi":"10.1109/RITAPP.2019.8932731","DOIUrl":null,"url":null,"abstract":"For robotics and AI applications, automatic facial expression recognition can be used to measure user’s satisfaction on products and services that are provided through the human-computer interactions. Large-scale datasets are essentially required to construct a robust deep learning model, which leads to increased training computation cost and duration. This requirement is of particular issue when the training is supposed to be performed on an ongoing basis in devices with limited computation capacity, such as humanoid robots. Knowledge transfer has become a commonly used technique to adapt existing models and speed-up training process by supporting refinements on the existing parameters and weights for the target task. However, most state-of-the-art facial expression recognition models are still based on a single stage training (train at once), which would not be enough for achieving a satisfactory performance in real world scenarios. This paper proposes a knowledge transfer method to support learning using cross-domain datasets, from generic to specific domain. The experimental results demonstrate that shorter and incremental training using smaller-gap-domain from cross-domain datasets can achieve a comparable performance to training using a single large dataset from the target domain.","PeriodicalId":234023,"journal":{"name":"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Cross-Domain Knowledge Transfer for Incremental Deep Learning in Facial Expression Recognition\",\"authors\":\"Nehemia Sugianto, D. Tjondronegoro\",\"doi\":\"10.1109/RITAPP.2019.8932731\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For robotics and AI applications, automatic facial expression recognition can be used to measure user’s satisfaction on products and services that are provided through the human-computer interactions. Large-scale datasets are essentially required to construct a robust deep learning model, which leads to increased training computation cost and duration. This requirement is of particular issue when the training is supposed to be performed on an ongoing basis in devices with limited computation capacity, such as humanoid robots. Knowledge transfer has become a commonly used technique to adapt existing models and speed-up training process by supporting refinements on the existing parameters and weights for the target task. However, most state-of-the-art facial expression recognition models are still based on a single stage training (train at once), which would not be enough for achieving a satisfactory performance in real world scenarios. This paper proposes a knowledge transfer method to support learning using cross-domain datasets, from generic to specific domain. 
The experimental results demonstrate that shorter and incremental training using smaller-gap-domain from cross-domain datasets can achieve a comparable performance to training using a single large dataset from the target domain.\",\"PeriodicalId\":234023,\"journal\":{\"name\":\"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)\",\"volume\":\"68 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RITAPP.2019.8932731\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RITAPP.2019.8932731","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cross-Domain Knowledge Transfer for Incremental Deep Learning in Facial Expression Recognition
In robotics and AI applications, automatic facial expression recognition can be used to measure users' satisfaction with products and services delivered through human-computer interaction. Constructing a robust deep learning model typically requires large-scale datasets, which increases training computation cost and duration. This requirement is particularly problematic when training must be performed on an ongoing basis on devices with limited computation capacity, such as humanoid robots. Knowledge transfer has become a commonly used technique for adapting existing models and speeding up the training process by refining existing parameters and weights for the target task. However, most state-of-the-art facial expression recognition models still rely on single-stage training (training once), which is not sufficient to achieve satisfactory performance in real-world scenarios. This paper proposes a knowledge transfer method that supports learning from cross-domain datasets, progressing from a generic domain to the specific target domain. The experimental results demonstrate that shorter, incremental training using smaller-domain-gap datasets from other domains can achieve performance comparable to training on a single large dataset from the target domain.
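The abstract gives no implementation details, but the staged, generic-to-specific transfer it describes can be illustrated with a minimal sketch. The code below is an assumption-laden example, not the authors' method: it uses a torchvision ResNet-18 pre-trained on ImageNet as the generic source model, and the dataset directories, class count, and hyperparameters are placeholders.

```python
# Minimal sketch of staged (incremental) cross-domain fine-tuning in PyTorch.
# Assumptions: torchvision >= 0.13, 7 expression classes, and ImageFolder-style
# datasets at the placeholder paths below.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_EXPRESSIONS = 7  # common basic-expression class count (assumption)

def build_model(num_classes: int) -> nn.Module:
    # Start from a backbone pre-trained on a large generic domain (ImageNet).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def fine_tune(model, loader, epochs=3, lr=1e-4, freeze_backbone=False, device="cpu"):
    # Optionally freeze the backbone so only the new classifier head is refined.
    for name, param in model.named_parameters():
        param.requires_grad = (not freeze_backbone) or name.startswith("fc")
    model.to(device).train()
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Stage 1: an intermediate facial-expression dataset with a smaller domain
    # gap to the target (placeholder path).
    generic_loader = DataLoader(
        datasets.ImageFolder("data/generic_expressions", transform=tfm),
        batch_size=32, shuffle=True,
    )
    # Stage 2: the smaller target-domain dataset, e.g. images captured in the
    # deployment setting (placeholder path).
    target_loader = DataLoader(
        datasets.ImageFolder("data/target_expressions", transform=tfm),
        batch_size=32, shuffle=True,
    )
    model = build_model(NUM_EXPRESSIONS)
    # Incremental transfer: refine on the intermediate domain first, then carry
    # those weights into a shorter fine-tuning stage on the target domain.
    model = fine_tune(model, generic_loader, epochs=3, freeze_backbone=True)
    model = fine_tune(model, target_loader, epochs=2, freeze_backbone=False)
    torch.save(model.state_dict(), "fer_incremental.pt")
```

The point mirrored from the abstract is the ordering of the stages: the model is refined on a dataset whose domain gap to the target is smaller before a short final pass on target-domain data, rather than being trained once on a single large target-domain dataset.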