Syllable-Level Long Short-Term Memory Recurrent Neural Network-based Language Model for Korean Voice Interface in Intelligent Personal Assistants

Donghyun Lee, Hosung Park, Minkyu Lim, Ji-Hwan Kim
DOI: 10.1109/GCCE46687.2019.9015213
Published in: 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), October 2019
Citations: 3

Abstract

This study proposes a syllable-level long short-term memory (LSTM) recurrent neural network (RNN)-based language model for a Korean voice interface in intelligent personal assistants (IPAs). Most Korean voice interfaces in IPAs use word-level $n$-gram language models. Such models suffer from two problems: 1) the syntax information available from a longer word history is limited by the choice of $n$, and 2) the out-of-vocabulary (OOV) problem can occur with a word-based vocabulary. To solve the first problem, the proposed model uses an LSTM RNN-based language model, because an LSTM RNN captures long-term dependency information. To solve the second problem, the proposed model is trained on a syllable-level text corpus. Korean words are composed of syllables, and therefore OOV words do not arise in a syllable-based lexicon. In experiments, the word-level RNN-based language model and the proposed model achieved perplexities (PPL) of 68.74 and 17.81, respectively.
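The abstract's OOV argument can be illustrated with a minimal sketch. Because every precomposed Hangul syllable is a single Unicode code point (U+AC00–U+D7A3), syllable-level tokenization of Korean reduces to character splitting, and any word built from known syllables stays in-vocabulary. The words below are hypothetical examples, not the paper's training data; the perplexity line shows only the standard PPL formula, not the paper's evaluation.

```python
import math

def syllables(word: str) -> list[str]:
    # Each precomposed Hangul syllable occupies one code point,
    # so character-level splitting is syllable-level splitting.
    return list(word)

# Hypothetical training vocabulary (not from the paper).
train_words = ["음성", "인식", "언어", "모델"]
word_vocab = set(train_words)
syll_vocab = {s for w in train_words for s in syllables(w)}

test_word = "음성인식"  # unseen compound: OOV at the word level
print(test_word in word_vocab)                             # False: word-level OOV
print(all(s in syll_vocab for s in syllables(test_word)))  # True: syllable-covered

# Perplexity as reported in the abstract: exp of the average
# negative log-likelihood over tokens (log probs are made up here).
logprobs = [-2.1, -1.7, -2.4]
ppl = math.exp(-sum(logprobs) / len(logprobs))
```

A lower PPL on the same test set indicates the model assigns higher probability to the observed text, which is how the paper compares the word-level baseline (68.74) with the syllable-level model (17.81).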