Learning Effective Global Receptive Field for Facial Expression Recognition

Jiayi Han, Ang Li, Donghong Han, Jianfeng Feng
Published in: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG), January 5, 2023. DOI: 10.1109/FG57933.2023.10042628

Abstract

Facial expression recognition (FER) remains a challenging task despite years of effort, owing to variations in viewing angle and human pose and the occlusion of expression-relevant facial parts. In this work, we propose to learn an effective Global receptive field and Class-sensitive metrics for FER, namely GCNet, which contains a Class-sensitive metric learning module (CSMLM) and mobile dilation modules (MDMs). CSMLM fully exploits the variation among human faces to extract class-sensitive and spatially consistent features, improving the effectiveness of FER. MDM uses cascaded dilated convolution layers to achieve a global receptive field. However, directly adding a dilated convolution layer to a given sequence of convolution layers may cause the gridding problem, which leads to sparse feature maps. In this work, we derive an upper bound on the dilation rate of the additional convolution layer that avoids the gridding problem. Experiments show that the proposed approach reaches state-of-the-art (SOTA) performance on the RAF-DB, FER-Plus, and SFEW2.0 datasets.
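The gridding problem the abstract refers to can be made concrete with a small sketch. The code below is illustrative only, not from the paper: the function names `touched_offsets` and `has_gridding` are hypothetical. It computes, in 1-D, which input positions actually influence one output unit after stacking dilated 3-tap convolutions. When the dilation rates share a common factor greater than 1, holes appear in the receptive field and the resulting feature maps sample the input sparsely.

```python
def touched_offsets(dilations, kernel=3):
    """1-D offsets of input positions that reach one output unit after
    stacking dilated convolutions with the given rates (the rates are
    applied here from the output layer back toward the input)."""
    offsets = {0}
    half = kernel // 2
    for d in dilations:
        # Each layer expands every offset by its dilated kernel taps.
        offsets = {o + d * k for o in offsets for k in range(-half, half + 1)}
    return sorted(offsets)

def has_gridding(dilations, kernel=3):
    """True if the receptive field has holes (the gridding problem)."""
    offs = touched_offsets(dilations, kernel)
    return set(range(offs[0], offs[-1] + 1)) != set(offs)

print(has_gridding([1, 2, 4]))  # False: every position in the field is touched
print(has_gridding([2, 2]))     # True: only even offsets are touched
```

This sketch only visualizes the phenomenon; the paper's contribution is an analytical upper bound on the dilation rate of the appended layer that guarantees dense coverage, which this toy model does not derive.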