Multi-lingual unsupervised acoustic modeling using multi-task deep neural network under mismatch conditions

Yao Haitao, Xu Ji, Liu Jian
{"title":"Multi-lingual unsupervised acoustic modeling using multi-task deep neural network under mismatch conditions","authors":"Yao Haitao, Xu Ji, Liu Jian","doi":"10.1109/ICCSN.2016.7586635","DOIUrl":null,"url":null,"abstract":"This Cross-lingual knowledge sharing based acoustic modeling methods are usually used in Automatic Speech Recognition (ASR) of languages which do not have enough transcribed speech for acoustic model (AM) training. Conventional methods such as IPA based universal acoustic modeling have been proved to be effective under matched acoustic conditions, while usually poorly preformed when mismatch appears between the target language and the source languages. This paper proposes a method of multi-lingual unsupervised AM training for zero-resourced languages under mismatch conditions. The proposed method includes two main steps. In the first step, initial AM of the target low-resourced language was obtained using multi-task training method, in which original source language data and mapped source language data are jointly used. In the second step, AM of the target language is trained using automatically transcribed target language data, in the way of iteratively training new AMs and adapting the initial AMs. Experiments were conducted on a corpus with 100 hours untranscribed Japanese speech and 300 hours transcribed speech of other languages. 
The best result achieved by this paper is 51.75% character error rate (CER), which obtains 24.78% absolute reduction compared to baseline IPA system.","PeriodicalId":158877,"journal":{"name":"2016 8th IEEE International Conference on Communication Software and Networks (ICCSN)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 8th IEEE International Conference on Communication Software and Networks (ICCSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCSN.2016.7586635","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Cross-lingual knowledge-sharing acoustic modeling methods are commonly used in Automatic Speech Recognition (ASR) for languages that lack enough transcribed speech for acoustic model (AM) training. Conventional methods such as IPA-based universal acoustic modeling have proved effective under matched acoustic conditions, but usually perform poorly when a mismatch exists between the target language and the source languages. This paper proposes a method for multi-lingual unsupervised AM training for zero-resourced languages under mismatch conditions. The proposed method consists of two main steps. In the first step, an initial AM for the low-resourced target language is obtained with a multi-task training method in which original source-language data and mapped source-language data are used jointly. In the second step, the AM of the target language is trained on automatically transcribed target-language data by iteratively training new AMs and adapting the initial AMs. Experiments were conducted on a corpus of 100 hours of untranscribed Japanese speech and 300 hours of transcribed speech in other languages. The best result achieved in this paper is a character error rate (CER) of 51.75%, a 24.78% absolute reduction compared to the baseline IPA system.
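The abstract's two-step recipe can be illustrated with a minimal sketch: a multi-task network whose shared hidden layer feeds two task-specific softmax heads (one for original source-language targets, one for targets mapped toward the target language), followed by an iterative pseudo-labeling pass over untranscribed target speech. All dimensions, phone-set sizes, and the gradient-free head-update rule below are illustrative assumptions, not the paper's actual training procedure, which would use DNN backpropagation and HMM-based decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# --- Step 1: multi-task DNN, shared hidden layer + two softmax heads ---
D_IN, D_HID = 40, 64          # hypothetical 40-dim acoustic features
K_ORIG, K_MAP = 120, 50       # hypothetical phone-inventory sizes

W_shared = rng.normal(0.0, 0.1, (D_IN, D_HID))
heads = {
    "orig": rng.normal(0.0, 0.1, (D_HID, K_ORIG)),  # original source-language targets
    "map":  rng.normal(0.0, 0.1, (D_HID, K_MAP)),   # mapped targets for the target language
}

def forward(x, head):
    """Shared representation feeds the task-specific softmax head."""
    h = np.tanh(x @ W_shared)
    return softmax(h @ heads[head])

# --- Step 2: iterative unsupervised training on untranscribed target speech ---
target_frames = rng.normal(size=(32, D_IN))
for it in range(3):
    # "Decode": take the current model's most likely mapped-phone label per frame
    pseudo_labels = forward(target_frames, "map").argmax(axis=1)
    # "Retrain/adapt": placeholder update nudging each class column toward
    # the mean hidden activation of the frames assigned to it
    for k in np.unique(pseudo_labels):
        frames_k = target_frames[pseudo_labels == k]
        heads["map"][:, k] += 0.01 * np.tanh(frames_k @ W_shared).mean(axis=0)
```

In the paper's setup, each pass would produce a new AM trained on the automatic transcripts and adapted from the initial multi-task model, rather than this toy in-place update.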