{"title":"Efficient Speaker Naming via Deep Audio-Face Fusion and End-to-End Attention Model","authors":"Xin Liu, Jiajia Geng, Haibin Ling","doi":"10.1109/ACPR.2017.13","DOIUrl":null,"url":null,"abstract":"Speaker naming has recently received wide attention in identifying the speaking character in a movie video, and it is an extremely challenging topic mainly attributed to the significant variation of facial appearance. Motivated by multimodal applications, we present an efficient speaker naming approach via deep audio-face fusion and end-to-end attention model. First, we start with LSTM-encoding of acoustic feature and VGG-encoding of face images, and then exploit an end-to-end common attention vector by convolution-softmax encoding of their locally concatenated features, whereby the face attention vector can be well discriminated. Further, we apply the low-rank bilinear model to efficiently fuse the face attention vector and acoustic feature vector, whereby the joint audio-face representation can be discriminatively obtained for speaker naming. In addition, we address another acoustic feature representation scheme by convolution-encoding, which can replace LSTM in networks to speed up the training process. The experimental results have shown that our proposed speaker naming approach yields comparative and even better results than the state-of-the-art counterparts.","PeriodicalId":426561,"journal":{"name":"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)","volume":"189 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACPR.2017.13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Speaker naming, i.e., identifying the speaking character in a movie video, has recently received wide attention, and it remains an extremely challenging task, mainly due to the significant variation of facial appearance. Motivated by multimodal applications, we present an efficient speaker naming approach via deep audio-face fusion and an end-to-end attention model. First, we start with LSTM encoding of acoustic features and VGG encoding of face images, and then derive an end-to-end common attention vector through convolution-softmax encoding of their locally concatenated features, so that a discriminative face attention vector can be obtained. Further, we apply a low-rank bilinear model to efficiently fuse the face attention vector and the acoustic feature vector, yielding a discriminative joint audio-face representation for speaker naming. In addition, we present an alternative acoustic feature representation scheme based on convolutional encoding, which can replace the LSTM in our networks to speed up training. The experimental results show that our proposed speaker naming approach yields comparable or even better results than state-of-the-art counterparts.
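To make the described pipeline concrete, below is a minimal PyTorch sketch of the attention-plus-fusion step: the acoustic vector is tiled over the VGG face feature map, a 1x1 convolution with softmax produces region weights (the "convolution-softmax encoding of locally concatenated features"), and the attended face vector is fused with the acoustic vector via a Hadamard-product low-rank bilinear model. All dimensions, the module name, and the classifier head are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class AudioFaceSpeakerNaming(nn.Module):
    """Illustrative sketch of the described audio-face pipeline.

    Assumptions (not from the paper): feature dimensions, rank of the
    bilinear model, and the final linear classifier over speaker logits.
    """
    def __init__(self, face_dim=512, audio_dim=256, rank=128, n_speakers=10):
        super().__init__()
        # 1x1 conv over locally concatenated face+audio features -> attention map
        self.att_conv = nn.Conv2d(face_dim + audio_dim, 1, kernel_size=1)
        # Low-rank factors: approximates a bilinear map x^T W y as
        # P^T (tanh(U x) * tanh(V y)); the classifier plays the role of P.
        self.U = nn.Linear(face_dim, rank, bias=False)
        self.V = nn.Linear(audio_dim, rank, bias=False)
        self.classifier = nn.Linear(rank, n_speakers)

    def forward(self, face_map, audio_vec):
        # face_map: (B, face_dim, H, W) VGG conv features of the face
        # audio_vec: (B, audio_dim) LSTM- or conv-encoded acoustic feature
        B, C, H, W = face_map.shape
        audio_tiled = audio_vec[:, :, None, None].expand(B, -1, H, W)
        concat = torch.cat([face_map, audio_tiled], dim=1)
        # Softmax over spatial locations yields the common attention vector
        att = torch.softmax(self.att_conv(concat).view(B, -1), dim=1)   # (B, H*W)
        # Attention-weighted pooling gives the face attention vector
        face_vec = (face_map.view(B, C, -1) * att.unsqueeze(1)).sum(2)  # (B, C)
        # Low-rank bilinear fusion of face and acoustic vectors
        joint = torch.tanh(self.U(face_vec)) * torch.tanh(self.V(audio_vec))
        return self.classifier(joint)  # speaker logits
```

As a quick shape check under these assumptions, `AudioFaceSpeakerNaming()(torch.randn(2, 512, 7, 7), torch.randn(2, 256))` returns a (2, 10) tensor of speaker logits; the low-rank factorization keeps the fusion at O(rank) parameters per output instead of the full bilinear O(face_dim x audio_dim).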