D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection

IF 2.4 Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Frontiers in Computer Science | Pub Date: 2019-11-01 | Epub Date: 2019-11-29 | DOI: 10.3389/fcomp.2019.00011
Itir Onal Ertugrul, Le Yang, László A Jeni, Jeffrey F Cohn
{"title":"D-PAttNet:用于动作单元检测的动态补丁-注意力深度网络","authors":"Itir Onal Ertugrul, Le Yang, László A Jeni, Jeffrey F Cohn","doi":"10.3389/fcomp.2019.00011","DOIUrl":null,"url":null,"abstract":"<p><p>Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning the facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation; yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. And third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. D-PAttNet approach significantly improves upon existing state of the art.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":"1 ","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6953909/pdf/","citationCount":"0","resultStr":"{\"title\":\"D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection.\",\"authors\":\"Itir Onal Ertugrul, Le Yang, László A Jeni, Jeffrey F Cohn\",\"doi\":\"10.3389/fcomp.2019.00011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning the facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation; yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. And third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. 
D-PAttNet approach significantly improves upon existing state of the art.</p>\",\"PeriodicalId\":52823,\"journal\":{\"name\":\"Frontiers in Computer Science\",\"volume\":\"1 \",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6953909/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/fcomp.2019.00011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2019/11/29 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fcomp.2019.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2019/11/29 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation, yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. Third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously, as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.
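To make component (i) concrete: the abstract does not describe the registration pipeline, so the following is only a minimal sketch, assuming head rotation is removed by rigidly aligning tracked 3D facial landmarks to a canonical frontal template via orthogonal Procrustes (Kabsch) alignment. The function name, its inputs, and the scale normalization are illustrative, not the authors' method.

```python
# Minimal sketch (an assumption, not the paper's pipeline): remove 3D head
# rotation by aligning tracked landmarks to a frontal template before
# facial patches are cropped.
import numpy as np

def normalize_pose(landmarks, template):
    """landmarks, template: (n_points, 3) arrays of 3D landmark coordinates."""
    # Center both shapes at the origin.
    src = landmarks - landmarks.mean(axis=0)
    dst = template - template.mean(axis=0)
    # Optimal rotation via SVD (Kabsch / orthogonal Procrustes).
    u, _, vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    # Match overall scale so patch crops have a consistent size.
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    return scale * src @ r + template.mean(axis=0)
```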

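Components (ii) and (iii) can be sketched similarly. The PyTorch-style block below is an illustrative sketch, not the authors' released code: a shared CNN encodes cropped facial patches, a learned per-(patch, AU) attention weight gates each patch feature, and an LSTM over frames models temporal dynamics jointly with the spatial encoding. All layer sizes, the patch count, and the sigmoid attention form are assumptions.

```python
# Illustrative sketch of patch attention (ii) + temporal dynamics (iii).
import torch
import torch.nn as nn

class PatchAttentionAU(nn.Module):
    def __init__(self, num_patches=9, num_aus=12, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Shared per-patch encoder (grayscale patches assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # One learned attention weight per (patch, AU) pair, so the
        # patch-to-AU mapping is learned rather than fixed a priori.
        self.attn_logits = nn.Parameter(torch.zeros(num_patches, num_aus))
        # LSTM over frames models spatiotemporal dynamics.
        self.lstm = nn.LSTM(feat_dim * num_aus, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_aus)

    def forward(self, patches):
        # patches: (batch, time, num_patches, 1, H, W)
        b, t, p = patches.shape[:3]
        feats = self.encoder(patches.flatten(0, 2))      # (b*t*p, feat_dim)
        feats = feats.view(b, t, p, -1)                  # (b, t, p, d)
        attn = torch.sigmoid(self.attn_logits)           # (p, num_aus)
        # Attention-weighted combination of patch features per AU.
        au_feats = torch.einsum('btpd,pa->btad', feats, attn)
        out, _ = self.lstm(au_feats.flatten(2))          # (b, t, hidden)
        return self.head(out)                            # per-frame AU logits
```

With the assumed defaults, a (batch, time, 9, 1, 24, 24) patch tensor yields per-frame logits for 12 AUs, trainable with a per-AU binary cross-entropy loss.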

Source journal: Frontiers in Computer Science (COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS)
CiteScore: 4.30
Self-citation rate: 0.00%
Articles published: 152
Review time: 13 weeks