Analysis for Self-taught and Transfer Learning Based Approaches for Emotion Recognition

Piyush Bhandari, Rakesh Kumar Bijarniya, Subhamoy Chatterjee, M. Kolekar
{"title":"基于自我学习和迁移学习的情绪识别方法分析","authors":"Piyush Bhandari, Rakesh Kumar Bijarniya, Subhamoy Chatterjee, M. Kolekar","doi":"10.1109/SPIN.2018.8474199","DOIUrl":null,"url":null,"abstract":"Using a deep Learning approach for any classification task demands the availability of a large labeled dataset. Such datasets are not only hard to find but also quite tedious to generate. Whereas unlabeled and un-organized sets of information is largely available in the world wide web. Especially in emotion or expression recognition, quality datasets, which are organized and freely available are very limited in number. In order to tackle such problems of small sized dataset, we analyze approaches such as self-taught and transfer learning for the expression classification, along with the extent to which the weights are transferable to the expression classification task. The base model for both types of learning is trained using the cifar10 database. We do not assume that the base data follows the same class labels or generative distribution as the test data. For testing our algorithm we use JAFFE dataset and draw inferences from the results obtained. We document that self taught learning forces the neural network to settle at a local minima rather than the global minimum. Transfer learning outperforms self taught learning and we also observe the correlation between the layers of a deep network when transferring the weights in a layer wise fashion.","PeriodicalId":184596,"journal":{"name":"2018 5th International Conference on Signal Processing and Integrated Networks (SPIN)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Analysis for Self-taught and Transfer Learning Based Approaches for Emotion Recognition\",\"authors\":\"Piyush Bhandari, Rakesh Kumar Bijarniya, Subhamoy Chatterjee, M. Kolekar\",\"doi\":\"10.1109/SPIN.2018.8474199\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Using a deep Learning approach for any classification task demands the availability of a large labeled dataset. Such datasets are not only hard to find but also quite tedious to generate. Whereas unlabeled and un-organized sets of information is largely available in the world wide web. Especially in emotion or expression recognition, quality datasets, which are organized and freely available are very limited in number. In order to tackle such problems of small sized dataset, we analyze approaches such as self-taught and transfer learning for the expression classification, along with the extent to which the weights are transferable to the expression classification task. The base model for both types of learning is trained using the cifar10 database. We do not assume that the base data follows the same class labels or generative distribution as the test data. For testing our algorithm we use JAFFE dataset and draw inferences from the results obtained. We document that self taught learning forces the neural network to settle at a local minima rather than the global minimum. 
Transfer learning outperforms self taught learning and we also observe the correlation between the layers of a deep network when transferring the weights in a layer wise fashion.\",\"PeriodicalId\":184596,\"journal\":{\"name\":\"2018 5th International Conference on Signal Processing and Integrated Networks (SPIN)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 5th International Conference on Signal Processing and Integrated Networks (SPIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPIN.2018.8474199\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 5th International Conference on Signal Processing and Integrated Networks (SPIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPIN.2018.8474199","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Using a deep learning approach for any classification task demands a large labeled dataset. Such datasets are not only hard to find but also tedious to generate, whereas unlabeled and unorganized information is abundantly available on the World Wide Web. In emotion and expression recognition especially, quality datasets that are organized and freely available are very limited in number. To tackle this small-dataset problem, we analyze self-taught and transfer learning approaches for expression classification, along with the extent to which weights are transferable to the expression classification task. The base model for both types of learning is trained on the CIFAR-10 dataset; we do not assume that the base data follows the same class labels or generative distribution as the test data. To evaluate our algorithms we use the JAFFE dataset and draw inferences from the results obtained. We document that self-taught learning forces the neural network to settle at a local minimum rather than the global minimum. Transfer learning outperforms self-taught learning, and we also observe the correlation between the layers of a deep network when transferring the weights in a layer-wise fashion.
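
The abstract does not spell out the self-taught pipeline, but in the usual formulation features are first learned from unlabeled data (here, CIFAR-10 images with labels discarded) and a classifier is then trained on those features using the small labeled set. Below is a minimal, hypothetical PyTorch sketch of that idea; the autoencoder architecture, layer sizes, and training settings are illustrative assumptions, not the paper's configuration.

```python
# A hedged sketch of self-taught learning: unsupervised feature learning on
# unlabeled images, then a small supervised head for JAFFE's 7 expressions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Convolutional autoencoder; the encoder supplies the learned features."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain_unsupervised(model, unlabeled_loader, epochs=10):
    """Self-taught stage: reconstruct unlabeled images, labels ignored."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in unlabeled_loader:   # discard labels: self-taught setting
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

# Supervised stage: a small classifier head trained on encoder features
# with the limited labeled expression data (7 basic expressions in JAFFE).
classifier_head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 7))
```

For the layer-wise transfer experiment the paper reports, a plausible reading is: train a base CNN on CIFAR-10, copy the first k blocks of weights into an identically shaped network for JAFFE's seven expression classes, and fine-tune the rest while varying k to probe how correlated the layers are. The sketch below illustrates that mechanism under assumed architectures: make_cnn and transfer_first_k_blocks are hypothetical names, and JAFFE's grayscale images are assumed replicated to three channels so the first layer's weights align with the RGB base model.

```python
# A minimal sketch (not the paper's exact architecture) of layer-wise
# weight transfer from a CIFAR-10 base model to a JAFFE expression model.
import copy
import torch.nn as nn

def make_cnn(in_channels, num_classes):
    """Shared backbone so base and target layers align one-to-one."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # block 0
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),           # block 1
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),          # block 2
        nn.Flatten(),
        nn.Linear(128 * 4 * 4, num_classes),   # assumes 32x32 inputs
    )

def transfer_first_k_blocks(base, target, k, freeze=True):
    """Copy the first k conv blocks (3 modules each: Conv2d, ReLU, MaxPool2d)
    from the base model into the target; optionally freeze them."""
    for i in range(k * 3):
        target[i].load_state_dict(copy.deepcopy(base[i].state_dict()))
        if freeze:
            for p in target[i].parameters():
                p.requires_grad = False
    return target

base_model = make_cnn(in_channels=3, num_classes=10)   # trained on CIFAR-10 (not shown)
jaffe_model = make_cnn(in_channels=3, num_classes=7)   # JAFFE: grayscale tiled to 3 channels
jaffe_model = transfer_first_k_blocks(base_model, jaffe_model, k=2)
```

Sweeping k from 0 (no transfer) to the full depth, then fine-tuning and comparing accuracy on JAFFE, is one way to reproduce the layer-correlation observation the abstract describes.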