Factored Language Modeling for Russian LVCSR

Daria Vazhenina, K. Markov
{"title":"俄语LVCSR的因子语言建模","authors":"Daria Vazhenina, K. Markov","doi":"10.1109/ICAWST.2013.6765434","DOIUrl":null,"url":null,"abstract":"The Russian language is characterized by very flexible word order, which limits the ability of the standard n-grams to capture important regularities in the data. Moreover, Russian is highly inflectional language with rich morphology, which leads to high out-of-vocabulary word rates. Recently factored language model (FLM) was proposed with the aim of addressing the problems of morphologically rich languages. In this paper, we describe our implementation of the FLM for the Russian language automatic speech recognition (ASR). We investigated the effect of different factors, and propose a strategy to find the best factor set and back-off path. Evaluation experiments showed that FLM can decrease the perplexity as much as 20%. This allows to achieve 4.0% word error rate (WER) relative reduction, which further increases to 6.9% when FLM is interpolated with the conventional 3-gram LM.","PeriodicalId":68697,"journal":{"name":"炎黄地理","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Factored language modeling for Russian LVCSR\",\"authors\":\"Daria Vazhenina, K. Markov\",\"doi\":\"10.1109/ICAWST.2013.6765434\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Russian language is characterized by very flexible word order, which limits the ability of the standard n-grams to capture important regularities in the data. Moreover, Russian is highly inflectional language with rich morphology, which leads to high out-of-vocabulary word rates. Recently factored language model (FLM) was proposed with the aim of addressing the problems of morphologically rich languages. 
In this paper, we describe our implementation of the FLM for the Russian language automatic speech recognition (ASR). We investigated the effect of different factors, and propose a strategy to find the best factor set and back-off path. Evaluation experiments showed that FLM can decrease the perplexity as much as 20%. This allows to achieve 4.0% word error rate (WER) relative reduction, which further increases to 6.9% when FLM is interpolated with the conventional 3-gram LM.\",\"PeriodicalId\":68697,\"journal\":{\"name\":\"炎黄地理\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"炎黄地理\",\"FirstCategoryId\":\"1089\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAWST.2013.6765434\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"炎黄地理","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.1109/ICAWST.2013.6765434","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

The Russian language is characterized by very flexible word order, which limits the ability of standard n-grams to capture important regularities in the data. Moreover, Russian is a highly inflectional language with rich morphology, which leads to high out-of-vocabulary word rates. Recently, the factored language model (FLM) was proposed with the aim of addressing the problems of morphologically rich languages. In this paper, we describe our implementation of the FLM for Russian automatic speech recognition (ASR). We investigated the effect of different factors and propose a strategy for finding the best factor set and back-off path. Evaluation experiments showed that the FLM can decrease perplexity by as much as 20%. This yields a 4.0% relative reduction in word error rate (WER), which further increases to 6.9% when the FLM is interpolated with a conventional 3-gram LM.
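The core idea behind an FLM is that each word is treated as a bundle of factors (e.g., the surface form, its stem, its part-of-speech tag), and when a word history is unseen, the model backs off along a chosen path of progressively coarser factors instead of simply shortening the n-gram context. The following is a minimal illustrative sketch of that back-off mechanism, not the authors' actual implementation (FLMs are typically built with dedicated toolkits such as SRILM's factored-LM support); the toy corpus, factor inventory, and back-off path here are invented for demonstration:

```python
from collections import defaultdict

# Toy corpus: each token is a factor bundle (surface word, stem, POS tag).
corpus = [
    [("мама", "мам", "N"), ("мыла", "мы", "V"), ("раму", "рам", "N")],
    [("мама", "мам", "N"), ("видит", "вид", "V"), ("раму", "рам", "N")],
]

WORD, STEM, POS = 0, 1, 2
# Back-off path: condition on the previous word first, then its stem,
# then its POS tag, and finally fall back to the unigram distribution.
backoff_path = [WORD, STEM, POS]

# For each factor level, count (previous-factor-value -> next word).
counts = {f: defaultdict(lambda: defaultdict(int)) for f in backoff_path}
unigrams = defaultdict(int)
total = 0
for sent in corpus:
    for prev, cur in zip(sent, sent[1:]):
        for f in backoff_path:
            counts[f][prev[f]][cur[WORD]] += 1
    for tok in sent:
        unigrams[tok[WORD]] += 1
        total += 1

def prob(cur_word, prev_token):
    """Back off along the factor path: word -> stem -> POS -> unigram."""
    for f in backoff_path:
        hist = counts[f].get(prev_token[f])
        if hist and cur_word in hist:
            return hist[cur_word] / sum(hist.values())
    return unigrams[cur_word] / total  # final unigram fallback
```

For example, `prob("раму", ("пела", "пе", "V"))` uses a previous token never seen in the toy corpus, so the word and stem levels both fail and the estimate comes from the POS level (verbs are always followed by "раму" here). A real FLM additionally smooths each level and searches over factor sets and back-off orders, which is the strategy the paper investigates.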