{"title":"An Investigation of Positional Encoding in Transformer-based End-to-end Speech Recognition","authors":"Fengpeng Yue, Tom Ko","doi":"10.1109/ISCSLP49672.2021.9362093","DOIUrl":null,"url":null,"abstract":"In the Transformer architecture, the model does not intrinsically learn the ordering information of the input frames and tokens due to its self-attention mechanism. In sequence-to-sequence learning tasks, the missing of ordering information is explicitly filled up by the use of positional representation. Currently, there are two major ways of using positional representation: the absolute way and relative way. In both ways, the positional in-formation is represented by positional vector. In this paper, we propose the use of positional matrix in the context of relative positional vector. Instead of adding the vectors to the key vectors in the self-attention layer, our method transforms the key vectors according to its position. Experiments on LibriSpeech dataset show that our approach outperforms the positional vector approach.","PeriodicalId":279828,"journal":{"name":"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCSLP49672.2021.9362093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In the Transformer architecture, the model does not intrinsically learn the ordering of the input frames and tokens, because the self-attention mechanism is order-invariant. In sequence-to-sequence learning tasks, this missing ordering information is explicitly supplied by a positional representation. Currently, there are two major ways of using positional representation: the absolute way and the relative way. In both, the positional information is represented by a positional vector. In this paper, we propose the use of a positional matrix in the context of relative positional encoding. Instead of adding positional vectors to the key vectors in the self-attention layer, our method transforms each key vector according to its position. Experiments on the LibriSpeech dataset show that our approach outperforms the positional vector approach.
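
To make the contrast between the two relative schemes concrete, the sketch below compares additive relative positional vectors with relative positional matrices inside a single self-attention head. This is a minimal NumPy illustration under assumptions, not the paper's implementation: the function names, the relative-offset table layout (indexed by j - i), and the per-offset parameterization are hypothetical and chosen only to show where the vector addition is replaced by a matrix transformation of the keys.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn_relative_vector(Q, K, rel_vec):
    """Relative positional vector: add a learned vector p_{j-i} to each key.

    Q, K: (T, d) query/key matrices for one head.
    rel_vec: (2T-1, d) table of relative positional vectors, indexed by j - i + T - 1.
    """
    T, d = Q.shape
    scores = np.empty((T, T))
    for i in range(T):
        for j in range(T):
            scores[i, j] = Q[i] @ (K[j] + rel_vec[j - i + T - 1]) / np.sqrt(d)
    return softmax(scores)

def attn_relative_matrix(Q, K, rel_mat):
    """Relative positional matrix: transform each key by a matrix M_{j-i}.

    rel_mat: (2T-1, d, d) table of relative positional matrices, indexed by j - i + T - 1.
    """
    T, d = Q.shape
    scores = np.empty((T, T))
    for i in range(T):
        for j in range(T):
            scores[i, j] = Q[i] @ (rel_mat[j - i + T - 1] @ K[j]) / np.sqrt(d)
    return softmax(scores)

# Toy usage: same queries/keys, two ways of injecting relative position.
T, d = 4, 8
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(T, d)), rng.normal(size=(T, d))
rel_vec = 0.1 * rng.normal(size=(2 * T - 1, d))
rel_mat = np.stack([np.eye(d) + 0.1 * rng.normal(size=(d, d)) for _ in range(2 * T - 1)])
print(attn_relative_vector(Q, K, rel_vec).shape)  # (4, 4) attention weights
print(attn_relative_matrix(Q, K, rel_mat).shape)  # (4, 4) attention weights
```

In the vector variant the positional term only shifts each key, while in the matrix variant the key is linearly transformed per relative offset, which is the distinction the abstract draws between the two approaches.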