Severity Based Adaptation for ASR to Aid Dysarthric Speakers

B. Al-Qatab, Mumtaz Begum Mustafa, S. Salim
{"title":"Severity Based Adaptation for ASR to Aid Dysarthric Speakers","authors":"B. Al-Qatab, Mumtaz Begum Mustafa, S. Salim","doi":"10.1109/AMS.2014.40","DOIUrl":null,"url":null,"abstract":"Automatic speech recognition (ASR) for dysarthric speakers is one of the most challenging research areas. The lack of corpus for dysarthric speakers makes it even more difficult. This paper introduces the Intra-Severity adaptation, using small amount of speech data, in which data from all participants in a given severity type will use for adaptation of that type. The adaptation is performed for two types of acoustic models, which are the Controlled Acoustic Model (CAM) developed using rich phonetic corpus, and Dysarthric Acoustic Model (DAM) that includes speech collected from dysarthric speakers suffering from variety level of severity. This paper compares two adaptation techniques for building ASR systems for dysarthric speakers, which are Maximum Likelihood Linear Regression (MLLR) and Constrained Maximum Likelihood Linear Regression (CMLLR).The result shows that the Word Recognition Accuracy (WRA) for the CAM outperformed DAM for both the Speaker Independent (SI) and Speaker Adaptation (SA). On the other hand, it was found that MLLR is outperformed the CMLLR for both Controlled Speaker Adaptation (CSA) and Dysarthric Speaker Adaptation (DSA).","PeriodicalId":198621,"journal":{"name":"2014 8th Asia Modelling Symposium","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 8th Asia Modelling Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AMS.2014.40","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Automatic speech recognition (ASR) for dysarthric speakers is one of the most challenging research areas, and the scarcity of dysarthric speech corpora makes it even more difficult. This paper introduces intra-severity adaptation, which uses a small amount of speech data: data from all participants with a given severity level are pooled and used to adapt the models for that level. Adaptation is performed on two types of acoustic models: the Controlled Acoustic Model (CAM), developed from a phonetically rich corpus, and the Dysarthric Acoustic Model (DAM), which includes speech collected from dysarthric speakers with varying levels of severity. The paper compares two adaptation techniques for building ASR systems for dysarthric speakers: Maximum Likelihood Linear Regression (MLLR) and Constrained Maximum Likelihood Linear Regression (CMLLR). The results show that the Word Recognition Accuracy (WRA) of the CAM outperformed that of the DAM in both the Speaker Independent (SI) and Speaker Adaptation (SA) configurations. On the other hand, MLLR was found to outperform CMLLR for both Controlled Speaker Adaptation (CSA) and Dysarthric Speaker Adaptation (DSA).
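As a rough illustration only (not taken from the paper), the sketch below shows the kind of transforms the two adaptation techniques estimate: MLLR applies an affine transform to the Gaussian mean vectors of the acoustic model, while CMLLR applies a single transform in feature space, which implicitly constrains means and covariances together. It also includes the standard word-accuracy formula behind WRA. All function names, variable names, and the toy data are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of MLLR-style and CMLLR-style adaptation, assuming
# a GMM-HMM acoustic model with diagonal-covariance Gaussians.
import numpy as np

def mllr_adapt_means(means, A, b):
    """Apply one MLLR transform mu' = A @ mu + b to every Gaussian mean.

    means : (n_gaussians, dim) speaker-independent mean vectors
    A, b  : (dim, dim) matrix and (dim,) bias estimated from the
            adaptation data (e.g. the pooled intra-severity speech)
    """
    return means @ A.T + b

def cmllr_transform_features(frames, A, b):
    """CMLLR (fMLLR) view: transform the feature vectors instead of the
    model, so one transform also affects the covariances implicitly."""
    return frames @ A.T + b

def word_recognition_accuracy(n_words, subs, dels, ins):
    """WRA in percent: (N - S - D - I) / N * 100."""
    return 100.0 * (n_words - subs - dels - ins) / n_words

# Toy usage with random numbers standing in for real model parameters.
rng = np.random.default_rng(0)
dim, n_gauss = 13, 4                      # e.g. 13 MFCCs, 4 Gaussians
si_means = rng.normal(size=(n_gauss, dim))
A = np.eye(dim) + 0.05 * rng.normal(size=(dim, dim))
b = 0.1 * rng.normal(size=dim)

adapted_means = mllr_adapt_means(si_means, A, b)
print(adapted_means.shape)                # (4, 13)
print(word_recognition_accuracy(100, 8, 3, 2))  # 87.0
```

In practice the transform parameters A and b are estimated per regression class by maximising the likelihood of the adaptation data, which is why only a small amount of speech per severity level is needed.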