Improved Feed Forward Attention Mechanism in Bidirectional Recurrent Neural Networks for Robust Sequence Classification
Sai Bharath Chandra Gutha, M. Shaik, Tejas Udayakumar, Ajit Ashok Saunshikhar
2020 International Conference on Signal Processing and Communications (SPCOM), July 2020. DOI: 10.1109/SPCOM50965.2020.9179606
Abstract
Feed Forward Attention (FFA) in Recurrent Neural Networks (RNNs) is a popular attention mechanism for classifying sequential data. In Bidirectional RNNs (BiRNNs), FFA concatenates the hidden states from the forward and backward layers to compute unscaled logits and normalized attention weights at each time step, and a softmax is applied to the attention-weighted sum of the logits to obtain posterior probabilities. Such concatenation corresponds to adding the individual unnormalized attention weights and unscaled logits from the forward and backward layers. In this paper, we present a novel attention mechanism, the Improved Feed Forward Attention mechanism (IFFA), which computes the probabilities and normalized attention weights separately for the forward and backward layers without concatenating the hidden states. Weighted probabilities are then computed at each time step and averaged across time. Our experimental results show IFFA outperforming FFA on diverse classification tasks such as speech accent, emotion, and whisper classification.
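To make the distinction concrete, below is a minimal NumPy sketch of the two mechanisms as described in the abstract. The linear attention scorer, the parameter names and shapes (`W_att`, `W_cls`, etc.), and the equal-weight averaging of the forward and backward posteriors are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ffa_posteriors(h_fwd, h_bwd, W_att, W_cls):
    """FFA over a BiRNN (simplified linear scorer): concatenate the
    forward/backward hidden states, compute one set of attention weights
    over time, and apply softmax to the attention-weighted sum of the
    per-step unscaled logits.
    Shapes: h_fwd, h_bwd: (T, d); W_att: (2d,); W_cls: (2d, C)."""
    h = np.concatenate([h_fwd, h_bwd], axis=-1)             # (T, 2d) concatenated states
    alpha = softmax(h @ W_att, axis=0)                      # (T,) normalized attention over time
    logits = h @ W_cls                                      # (T, C) unscaled logits per step
    return softmax((alpha[:, None] * logits).sum(axis=0))   # (C,) posterior probabilities

def iffa_posteriors(h_fwd, h_bwd, Wf_att, Wb_att, Wf_cls, Wb_cls):
    """IFFA-style sketch: attention weights and class probabilities are
    computed separately per direction (no concatenation), the weighted
    probabilities are combined across time, and the two directions are
    averaged (the equal-weight average is an assumption).
    Shapes: h_fwd, h_bwd: (T, d); W*_att: (d,); W*_cls: (d, C)."""
    per_direction = []
    for h, W_att, W_cls in ((h_fwd, Wf_att, Wf_cls), (h_bwd, Wb_att, Wb_cls)):
        alpha = softmax(h @ W_att, axis=0)                  # (T,) per-direction attention weights
        probs = softmax(h @ W_cls, axis=-1)                 # (T, C) per-step class probabilities
        per_direction.append((alpha[:, None] * probs).sum(axis=0))  # (C,) weighted across time
    return 0.5 * (per_direction[0] + per_direction[1])      # combine forward/backward posteriors

# Toy usage with random BiRNN hidden states: T=5 steps, d=8 units, C=3 classes.
rng = np.random.default_rng(0)
T, d, C = 5, 8, 3
h_f, h_b = rng.normal(size=(T, d)), rng.normal(size=(T, d))
p = iffa_posteriors(h_f, h_b, rng.normal(size=d), rng.normal(size=d),
                    rng.normal(size=(d, C)), rng.normal(size=(d, C)))
print(p, p.sum())  # a valid posterior over C classes; sums to 1
```

Under this reading, FFA mixes the two directions before any normalization (a single softmax over scores of the concatenated states), whereas the IFFA-style variant keeps each direction's attention weights and per-step class probabilities normalized on their own and only combines the resulting probability vectors, which is the separation the abstract emphasizes.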