Two-stream Attention-aware Network for Spontaneous Micro-expression Movement Spotting

Bo Sun, Siming Cao, Jun He, Lejun Yu
{"title":"自发微表情运动识别的双流注意感知网络","authors":"Bo Sun, Siming Cao, Jun He, Lejun Yu","doi":"10.1109/ICSESS47205.2019.9040685","DOIUrl":null,"url":null,"abstract":"Micro-expression is a special facial movement, which can be used as an important basis for judging people's subjective emotions. Constrained by the physiology, micro-expression can be described temporally by four phases: neutral, onset, apex, and offset. And previous studies confirmed that using the crucial temporal sequences is better than using the whole video for micro-expression recognition. Therefore, micro-expression movement spotting is considered beneficial for micro-expression recognition. While it is a challenging task due to the short duration, low intensity and usually local motion characteristics of micro-expression. Inspired by the mechanism of the ventral and dorsal visual pathways in the cerebral visual cortex, we propose an end2end two-stream attention-aware network for micro-expression movement spotting in this paper. We construct a spatial-temporal cascaded network for each stream which combines convolutional neural network and attention-aware bilateral long short-term memory recurrent neural network. And we apply the attention mechanism to two-stream feature fusion. Experiments are conducted on three available published micro-expression datasets (SMIC2, CASME, and CASME II). The experimental results show that the proposed framework outperforms state-of-the-art methods for the task of micro-expression movement spotting.","PeriodicalId":203944,"journal":{"name":"2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Two-stream Attention-aware Network for Spontaneous Micro-expression Movement Spotting\",\"authors\":\"Bo Sun, Siming Cao, Jun He, Lejun Yu\",\"doi\":\"10.1109/ICSESS47205.2019.9040685\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Micro-expression is a special facial movement, which can be used as an important basis for judging people's subjective emotions. Constrained by the physiology, micro-expression can be described temporally by four phases: neutral, onset, apex, and offset. And previous studies confirmed that using the crucial temporal sequences is better than using the whole video for micro-expression recognition. Therefore, micro-expression movement spotting is considered beneficial for micro-expression recognition. While it is a challenging task due to the short duration, low intensity and usually local motion characteristics of micro-expression. Inspired by the mechanism of the ventral and dorsal visual pathways in the cerebral visual cortex, we propose an end2end two-stream attention-aware network for micro-expression movement spotting in this paper. We construct a spatial-temporal cascaded network for each stream which combines convolutional neural network and attention-aware bilateral long short-term memory recurrent neural network. And we apply the attention mechanism to two-stream feature fusion. Experiments are conducted on three available published micro-expression datasets (SMIC2, CASME, and CASME II). 
The experimental results show that the proposed framework outperforms state-of-the-art methods for the task of micro-expression movement spotting.\",\"PeriodicalId\":203944,\"journal\":{\"name\":\"2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSESS47205.2019.9040685\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSESS47205.2019.9040685","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Micro-expression is a special kind of facial movement that can serve as an important basis for judging people's subjective emotions. Constrained by physiology, a micro-expression can be described temporally by four phases: neutral, onset, apex, and offset. Previous studies have confirmed that using these crucial temporal segments works better for micro-expression recognition than using the whole video, so micro-expression movement spotting is considered beneficial for micro-expression recognition. It remains a challenging task, however, because micro-expressions are short in duration, low in intensity, and usually limited to local motion. Inspired by the mechanism of the ventral and dorsal visual pathways in the cerebral visual cortex, we propose an end-to-end two-stream attention-aware network for micro-expression movement spotting. For each stream we construct a spatial-temporal cascaded network that combines a convolutional neural network with an attention-aware bilateral long short-term memory recurrent neural network, and we apply an attention mechanism to the two-stream feature fusion. Experiments are conducted on three publicly available micro-expression datasets (SMIC2, CASME, and CASME II). The results show that the proposed framework outperforms state-of-the-art methods on the task of micro-expression movement spotting.
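
As a rough illustration of the architecture described in the abstract, the sketch below shows what a two-stream design with a per-stream CNN + attention-aware recurrent cascade and attention-based stream fusion might look like in PyTorch. It is a minimal sketch based only on the abstract: all module names, layer sizes, the choice of RGB and optical-flow inputs for the two streams, and the reading of the "bilateral" LSTM as a bidirectional LSTM are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of a two-stream CNN + attention-aware BiLSTM spotter,
# loosely following the abstract. All layer sizes, module names, and the
# attention/fusion formulation are illustrative assumptions.
import torch
import torch.nn as nn


class SpatialTemporalStream(nn.Module):
    """One stream: a small per-frame CNN cascaded with an attention-aware BiLSTM."""

    def __init__(self, in_channels: int, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # temporal attention over frames

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t = clip.shape[:2]
        frame_feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.bilstm(frame_feats)                 # (b, t, 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1)    # (b, t, 1)
        return (weights * seq).sum(dim=1)                 # attention-pooled (b, 2*hidden)


class TwoStreamSpotter(nn.Module):
    """Fuse an appearance stream and a motion (optical-flow) stream with attention."""

    def __init__(self, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.appearance = SpatialTemporalStream(in_channels=3, hidden=hidden)
        self.motion = SpatialTemporalStream(in_channels=2, hidden=hidden)
        self.fusion_attn = nn.Linear(2 * hidden, 1)       # weight each stream
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, rgb_clip: torch.Tensor, flow_clip: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([self.appearance(rgb_clip), self.motion(flow_clip)], dim=1)
        weights = torch.softmax(self.fusion_attn(feats), dim=1)  # (b, 2, 1)
        fused = (weights * feats).sum(dim=1)
        return self.classifier(fused)  # e.g. movement vs. neutral per clip


if __name__ == "__main__":
    model = TwoStreamSpotter()
    rgb = torch.randn(2, 16, 3, 64, 64)   # 16-frame RGB clip
    flow = torch.randn(2, 16, 2, 64, 64)  # 16-frame optical-flow clip
    print(model(rgb, flow).shape)         # torch.Size([2, 2])
```

In this reading, each stream plays the role of one visual pathway (appearance vs. motion), temporal attention inside each stream highlights the frames around onset and apex, and a second attention step weights the two streams before classification; the actual paper should be consulted for the real design choices.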