{"title":"一种用于蒙面人脸识别的双向注意模块","authors":"M. S. Shakeel","doi":"10.1109/VCIP56404.2022.10008847","DOIUrl":null,"url":null,"abstract":"Masked Face Recognition (MFR) is a recent addition to the directory of existing challenges in facial biometrics. Due to the limited exposure of facial regions due to mask-occlusion, it is essential to exploit the available non-occluded regions as much as possible for identity feature learning. Aiming to address this issue, we propose a dual-branch bidirectional attention module (BAM), which consists of a spatial attention block (SAB) and a channel attention block (CAB) in each branch. In the first stage, the SAB performs bidirectional interactions between the original feature map and its augmented version to highlight informative spatial locations for feature learning. The learned bidirectional spatial attention maps are then passed through a channel attention block (CAB) to assign high weights to only informative feature channels. Finally, the channel-wise calibrated feature responses are fused to generate a final attention-aware feature representation for MFR. Extensive experiments indicate that our proposed BAM is superior to various state-of-the-art methods in terms of recognizing mask-occluded face images under complex facial variations.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"BAM: A Bidirectional Attention Module for Masked Face Recognition\",\"authors\":\"M. S. Shakeel\",\"doi\":\"10.1109/VCIP56404.2022.10008847\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Masked Face Recognition (MFR) is a recent addition to the directory of existing challenges in facial biometrics. Due to the limited exposure of facial regions due to mask-occlusion, it is essential to exploit the available non-occluded regions as much as possible for identity feature learning. Aiming to address this issue, we propose a dual-branch bidirectional attention module (BAM), which consists of a spatial attention block (SAB) and a channel attention block (CAB) in each branch. In the first stage, the SAB performs bidirectional interactions between the original feature map and its augmented version to highlight informative spatial locations for feature learning. The learned bidirectional spatial attention maps are then passed through a channel attention block (CAB) to assign high weights to only informative feature channels. Finally, the channel-wise calibrated feature responses are fused to generate a final attention-aware feature representation for MFR. 
Extensive experiments indicate that our proposed BAM is superior to various state-of-the-art methods in terms of recognizing mask-occluded face images under complex facial variations.\",\"PeriodicalId\":269379,\"journal\":{\"name\":\"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VCIP56404.2022.10008847\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP56404.2022.10008847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
BAM: A Bidirectional Attention Module for Masked Face Recognition
Abstract: Masked Face Recognition (MFR) is a recent addition to the set of challenges in facial biometrics. Because mask occlusion leaves only a limited portion of the face exposed, it is essential to exploit the available non-occluded regions as fully as possible for identity feature learning. To address this issue, we propose a dual-branch bidirectional attention module (BAM), each branch of which consists of a spatial attention block (SAB) and a channel attention block (CAB). In the first stage, the SAB performs bidirectional interactions between the original feature map and its augmented version to highlight informative spatial locations for feature learning. The learned bidirectional spatial attention maps are then passed through the CAB, which assigns high weights only to informative feature channels. Finally, the channel-wise calibrated feature responses are fused to generate the final attention-aware feature representation for MFR. Extensive experiments indicate that the proposed BAM outperforms various state-of-the-art methods in recognizing mask-occluded face images under complex facial variations.
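The abstract does not give implementation details, so the following PyTorch sketch is only an illustration of how a dual-branch module of this kind could be wired together. The cross-attention used for the bidirectional spatial interaction, the squeeze-and-excitation-style channel attention, and the element-wise fusion are assumptions, as are all class and variable names (SpatialAttentionBlock, ChannelAttentionBlock, BidirectionalAttentionModule); none of these are taken from the authors' code.

# Minimal sketch of a dual-branch bidirectional attention module, under the
# assumptions stated above. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionBlock(nn.Module):
    """Highlights informative spatial locations by letting one feature map
    attend to another (cross-attention is an assumed realization of the
    'bidirectional interaction' described in the abstract)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x_src, x_ref):
        b, c, h, w = x_src.shape
        q = self.query(x_src).flatten(2).transpose(1, 2)   # (B, HW, C/8)
        k = self.key(x_ref).flatten(2)                      # (B, C/8, HW)
        v = self.value(x_ref).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x_src + out  # residual connection (assumed)


class ChannelAttentionBlock(nn.Module):
    """Re-weights feature channels so that informative channels receive
    higher weights (squeeze-and-excitation style, assumed)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pooling -> (B, C)
        return x * w.unsqueeze(-1).unsqueeze(-1)  # channel-wise re-weighting


class BidirectionalAttentionModule(nn.Module):
    """Two branches (original -> augmented and augmented -> original), each a
    SAB followed by a CAB; the calibrated responses are fused at the end."""

    def __init__(self, channels):
        super().__init__()
        self.sab_fwd = SpatialAttentionBlock(channels)
        self.sab_bwd = SpatialAttentionBlock(channels)
        self.cab_fwd = ChannelAttentionBlock(channels)
        self.cab_bwd = ChannelAttentionBlock(channels)

    def forward(self, feat, feat_aug):
        branch_fwd = self.cab_fwd(self.sab_fwd(feat, feat_aug))
        branch_bwd = self.cab_bwd(self.sab_bwd(feat_aug, feat))
        return branch_fwd + branch_bwd  # element-wise fusion (assumed)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 14, 14)        # backbone feature map
    feat_aug = torch.randn(2, 64, 14, 14)    # feature map of an augmented view
    bam = BidirectionalAttentionModule(64)
    print(bam(feat, feat_aug).shape)         # torch.Size([2, 64, 14, 14])

The attention-aware output would then be fed to the recognition head in place of the raw backbone features; how the augmented view is generated and how the fused representation is trained are not specified in the abstract.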