End To End Model For Speaker Identification With Minimal Training Data
Sathiyakugan Balakrishnan, Kanthasamy Jathusan, Uthayasanker Thayasivam
2021 Moratuwa Engineering Research Conference (MERCon), pp. 456-461, published 2021-07-27
DOI: 10.1109/MERCon52712.2021.9525740
Citations: 0
Abstract
Deep learning has achieved broad adoption in speaker identification, outperforming GMM and i-vector approaches. Neural network models have obtained promising results when fed raw speech samples directly. SincNet, a modified Convolutional Neural Network (CNN) architecture based on parameterized sinc functions, offers a very compact way to derive a customized filter bank from short utterances. This paper proposes an attention-based Long Short-Term Memory (LSTM) architecture that encourages the discovery of more meaningful speaker-related features with minimal training data. The attention layer, built with neural networks, offers a compact and efficient representation of speaker characteristics, exploring the connection between an aspect and the content of short utterances. On the speaker identification experiments carried out, the proposed approach converges faster and performs better than SincNet.
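The two building blocks named in the abstract can be sketched in plain NumPy. This is an illustrative reading, not the authors' implementation: a SincNet-style band-pass kernel parameterized by just two learnable cutoff frequencies, and an attention pooling step that collapses frame-level LSTM outputs into a single utterance embedding. The cutoff values, kernel length, sample rate, and scoring vector `w` below are placeholder assumptions.

```python
import numpy as np

def sinc_bandpass(f1, f2, length=101, fs=16000):
    """SincNet-style band-pass FIR kernel: only the two cutoffs
    f1 < f2 (in Hz) are learnable. Built as the difference of two
    ideal low-pass sincs, Hamming-windowed to reduce ripple."""
    n = np.arange(length) - (length - 1) / 2
    lo, hi = f1 / fs, f2 / fs  # normalized cutoffs
    # np.sinc(x) = sin(pi*x)/(pi*x), so 2*fc*np.sinc(2*fc*n)
    # is an ideal low-pass with cutoff fc
    kernel = 2 * hi * np.sinc(2 * hi * n) - 2 * lo * np.sinc(2 * lo * n)
    return kernel * np.hamming(length)

def attention_pool(H, w):
    """Collapse frame-level features H of shape (T, d) -- e.g. LSTM
    outputs -- into one utterance embedding of shape (d,), using a
    learned scoring vector w of shape (d,)."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()  # softmax attention weights over the T frames
    return alpha @ H      # weighted average of frames

kernel = sinc_bandpass(300.0, 3400.0)  # e.g. telephone-band cutoffs
```

Because gradient descent updates `f1` and `f2` directly, the learned filter bank stays interpretable as a set of band-pass filters; the attention weights `alpha` likewise indicate which frames of a short utterance the model treats as most speaker-discriminative.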