An Efficient Illumination-Invariant Dynamic Facial Expression Recognition Method for Driving Scenarios

IF 2.5 · CAS Region 4 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Ercheng Pei, Man Guo, Abel Díaz Berenguer, Lang He, HaiFeng Chen
DOI: 10.1049/itr2.70009
Journal: IET Intelligent Transport Systems, Vol. 19, Issue 1
Published: 2025-03-04 (Journal Article)
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/itr2.70009
Citations: 0

Abstract


An Efficient Illumination-Invariant Dynamic Facial Expression Recognition for Driving Scenarios


Facial expression recognition (FER) is significant in many application scenarios, such as driving, where lighting conditions differ greatly between day and night. Existing methods primarily focus on eliminating the negative effects of pose and identity information on FER but overlook the challenges posed by lighting variations. Therefore, this work proposes an efficient illumination-invariant dynamic FER method. To augment the robustness of FER methods to illumination variance, contrast normalisation is introduced to form a low-level illumination-invariant expression feature learning module. In addition, to extract dynamic and salient expression features, a two-stage temporal attention mechanism is introduced to form a high-level dynamic salient expression feature learning module that deciphers dynamic facial expression patterns. Furthermore, additive angular margin loss is incorporated into the training of the proposed model to increase the distances between samples of different categories while reducing the distances between samples of the same category. We conducted comprehensive experiments using the Oulu-CASIA and DFEW datasets. On the Oulu-CASIA VIS and NIR subsets under normal illumination, the proposed method achieved accuracies of 92.08% and 91.46%, which are 1.01 and 7.06 percentage points higher than the SOTA results of the DCBLSTM and CELDL methods, respectively. On the Oulu-CASIA NIR subset under dark illumination, the proposed method achieved an accuracy of 91.25%, which is 4.54 percentage points higher than the SOTA result of the CDLLNet method. On the DFEW dataset, the proposed method achieved a UAR of 60.67% and a WAR of 71.48% with 12M parameters, approaching the SOTA result of the VideoMAE model with 86M parameters. The outcomes of our experiments validate the effectiveness of the proposed dynamic FER method, affirming its ability to address the challenges posed by diverse illumination conditions in driving scenarios.
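The additive angular margin loss mentioned in the abstract follows the well-known ArcFace formulation: class logits are computed as cosines between L2-normalised features and class weights, a margin m is added to the angle of each sample's true class, and the result is scaled by s before softmax cross-entropy. The abstract does not state the paper's actual s and m, so the defaults below are assumptions (common ArcFace choices), not values from the article. A minimal NumPy sketch:

```python
import numpy as np

def additive_angular_margin_loss(embeddings, weights, labels, s=30.0, m=0.50):
    """ArcFace-style additive angular margin loss.

    embeddings: (N, D) feature vectors; weights: (C, D) class centres;
    labels: (N,) integer class ids. s (scale) and m (angular margin,
    radians) are assumed defaults, not the paper's settings.
    """
    # L2-normalise features and class weights so logits become cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                               # (N, C) cosine similarities
    theta = np.arccos(np.clip(cos, -1.0, 1.0))  # angles in radians

    # add the margin m only to each sample's true-class angle,
    # which shrinks the target logit and forces tighter clusters
    theta[np.arange(len(labels)), labels] += m
    logits = s * np.cos(theta)

    # numerically stable softmax cross-entropy on the adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Because the margin only penalises the true-class logit, the loss with m > 0 is strictly harder than plain softmax cross-entropy (m = 0), which is what pushes same-class samples together and different-class samples apart during training.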

Source journal
IET Intelligent Transport Systems (Engineering & Technology — Transportation Science & Technology)
CiteScore: 6.50
Self-citation rate: 7.40%
Articles per year: 159
Review time: 3 months
Journal description: IET Intelligent Transport Systems is an interdisciplinary journal devoted to research into the practical applications of ITS and infrastructures. The scope of the journal includes the following:
- Sustainable traffic solutions
- Deployments with enabling technologies
- Pervasive monitoring
- Applications, demonstrations and evaluation
- Economic and behavioural analyses of ITS services and scenarios
- Data integration and analytics
- Information collection and processing; image processing applications in ITS
- ITS aspects of electric vehicles
- Autonomous vehicles; connected vehicle systems; in-vehicle ITS, safety and vulnerable road user aspects
- Mobility as a service systems
- Traffic management and control
- Public transport systems technologies
- Fleet and public transport logistics
- Emergency and incident management
- Demand management and electronic payment systems
- Traffic-related air pollution management
- Policy and institutional issues
- Interoperability, standards and architectures
- Funding scenarios
- Enforcement
- Human machine interaction
- Education, training and outreach

Current special issue calls for papers:
- Intelligent Transportation Systems in Smart Cities for Sustainable Environment - https://digital-library.theiet.org/files/IET_ITS_CFP_ITSSCSE.pdf
- Sustainably Intelligent Mobility (SIM) - https://digital-library.theiet.org/files/IET_ITS_CFP_SIM.pdf
- Traffic Theory and Modelling in the Era of Artificial Intelligence and Big Data (in collaboration with World Congress for Transport Research, WCTR 2019) - https://digital-library.theiet.org/files/IET_ITS_CFP_WCTR.pdf