Preliminary tasks of unsupervised speech recognition based on unaligned audio and text data
Zhanibek Kozhirbayev, Talgat Islamgozhayev, Zhandos Yessenbayev, A. Sharipbay
2022 International Conference on Engineering & MIS (ICEMIS), 2022-07-04
DOI: 10.1109/ICEMIS56295.2022.9914249
Abstract
We present herein our work on the preliminary tasks of unsupervised speech recognition using only unaligned audio and text datasets. The motivation for this is the general progress in generative models, together with the assumption that word frequencies and contextual relationships are close in the audio and text domains of the same language. Experiments on acoustic and text data using the variational autoencoder (VAE) architecture were conducted at the word level. We plan to extract the encoder of the acoustic VAE and the decoder of the text VAE to build a joint VAE.
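The joint-VAE idea described above can be sketched in code: train one VAE per modality, then route the latent code produced by the acoustic encoder into the text decoder. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the network sizes, feature dimensions, and the `WordVAE` class are assumptions.

```python
import torch
import torch.nn as nn

class WordVAE(nn.Module):
    """Minimal word-level VAE sketch (dimensions are illustrative assumptions)."""
    def __init__(self, input_dim, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

# One VAE per modality, sharing the same latent dimensionality.
acoustic_vae = WordVAE(input_dim=128)  # e.g., pooled acoustic features per word
text_vae = WordVAE(input_dim=300)      # e.g., word embeddings

# Joint-VAE sketch: acoustic encoder -> shared latent -> text decoder.
x_audio = torch.randn(4, 128)          # a batch of 4 word-level audio features
h = acoustic_vae.encoder(x_audio)
mu, logvar = acoustic_vae.fc_mu(h), acoustic_vae.fc_logvar(h)
z = acoustic_vae.reparameterize(mu, logvar)
text_out = text_vae.decoder(z)         # decode the latent into the text space
```

Because both VAEs share the same latent dimensionality, the acoustic encoder's output can be fed directly to the text decoder; aligning the two latent spaces (so that the same word maps to nearby codes in both modalities) is the substantive training problem.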