{"title":"Factored language modeling for Russian LVCSR","authors":"Daria Vazhenina, K. Markov","doi":"10.1109/ICAWST.2013.6765434","DOIUrl":null,"url":null,"abstract":"The Russian language is characterized by very flexible word order, which limits the ability of the standard n-grams to capture important regularities in the data. Moreover, Russian is highly inflectional language with rich morphology, which leads to high out-of-vocabulary word rates. Recently factored language model (FLM) was proposed with the aim of addressing the problems of morphologically rich languages. In this paper, we describe our implementation of the FLM for the Russian language automatic speech recognition (ASR). We investigated the effect of different factors, and propose a strategy to find the best factor set and back-off path. Evaluation experiments showed that FLM can decrease the perplexity as much as 20%. This allows to achieve 4.0% word error rate (WER) relative reduction, which further increases to 6.9% when FLM is interpolated with the conventional 3-gram LM.","PeriodicalId":68697,"journal":{"name":"炎黄地理","volume":"108 1","pages":"205-211"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"炎黄地理","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.1109/ICAWST.2013.6765434","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
The Russian language is characterized by very flexible word order, which limits the ability of standard n-grams to capture important regularities in the data. Moreover, Russian is a highly inflectional language with rich morphology, which leads to high out-of-vocabulary word rates. Recently, the factored language model (FLM) was proposed with the aim of addressing the problems of morphologically rich languages. In this paper, we describe our implementation of the FLM for Russian automatic speech recognition (ASR). We investigated the effect of different factors and propose a strategy to find the best factor set and back-off path. Evaluation experiments showed that the FLM can decrease perplexity by as much as 20%. This allows us to achieve a 4.0% relative word error rate (WER) reduction, which further increases to 6.9% when the FLM is interpolated with the conventional 3-gram LM.
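The abstract refers to two operations that are easy to miss without context: representing each word as a bundle of factors (e.g. surface form, stem, morphological tag) and linearly interpolating the FLM with a word-based 3-gram LM before measuring perplexity. The sketch below is a minimal illustration of those two ideas only, not the authors' implementation: the factor names (W/S/M), the toy sentence, and the constant probability functions are hypothetical placeholders standing in for models that would really be estimated from training data with an LM toolkit.

```python
import math

# A factored view of a short Russian sentence: surface word form (W) plus
# illustrative morphological factors (stem S and coarse morph tag M).
factored_sentence = [
    {"W": "мама", "S": "мам", "M": "NOUN-NOM"},
    {"W": "мыла", "S": "мы",  "M": "VERB-PAST"},
    {"W": "раму", "S": "рам", "M": "NOUN-ACC"},
]

def p_3gram(word, history):
    """Placeholder for a word-based 3-gram probability P(word | history)."""
    return 0.05  # constant stand-in for a real trigram estimate

def p_flm(factors, history_factors):
    """Placeholder for an FLM probability conditioned on factor histories
    (e.g. previous stems and morph tags rather than full word forms)."""
    return 0.08  # constant stand-in for a real factored estimate

def interpolated_logprob(sentence, lam=0.5):
    """Sentence log-probability under P = lam*P_flm + (1-lam)*P_3gram."""
    logp = 0.0
    for i, token in enumerate(sentence):
        history = sentence[max(0, i - 2):i]  # up to two preceding tokens
        p = lam * p_flm(token, history) + (1 - lam) * p_3gram(token["W"], history)
        logp += math.log(p)
    return logp

def perplexity(sentence, lam=0.5):
    """Perplexity = exp(-average per-word log-probability)."""
    return math.exp(-interpolated_logprob(sentence, lam) / len(sentence))

print(perplexity(factored_sentence, lam=0.5))
```

The interpolation weight (here the hypothetical `lam=0.5`) is what lets the combined model keep the 3-gram's strengths on frequent word sequences while the FLM's factor-level back-off covers inflected forms the word-based model has rarely or never seen, which is consistent with the reported gain growing from 4.0% to 6.9% relative WER reduction after interpolation.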