Auditory-based robust speech recognition system for ambient assisted living in smart home

Hsien-Shun Kuo, Po-Hsun Sung, Sheng-Chieh Lee, Ta-Wen Kuan, Jhing-Fa Wang
DOI: 10.1109/ICOT.2014.6956626
Published in: 2014 International Conference on Orange Technologies, 2014-11-20
Citations: 2

Abstract

An auditory-based feature extraction algorithm is proposed for enhancing the robustness of automatic speech recognition. In the proposed approach, the speech signal is characterized using a new feature referred to as the Basilar-membrane Frequency-band Cepstral Coefficient (BFCC). In contrast to the conventional Mel-Frequency Cepstral Coefficient (MFCC) method based on a Fourier spectrogram, the proposed BFCC method uses an auditory spectrogram based on a gammachirp wavelet transform in order to more accurately mimic the auditory response of the human ear and improve the noise immunity. In addition, a Hidden Markov Model (HMM) is used for both training and testing purposes. The evaluation results obtained using the AURORA 2 noisy speech database show that compared to the MFCC method, the proposed scheme improves the speech recognition rate by 15% on average given speech samples with Signal-to-Noise Ratios (SNRs) ranging from 0 to 20 dB. Thus, the proposed method has significant potential for the development of robust speech recognition systems for ambient assisted living.
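The pipeline the abstract describes — an auditory filterbank in place of the Mel filterbank, followed by log energies and a cepstral decorrelation — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the paper's gammachirp wavelet transform is approximated here by direct convolution with a bank of gammachirp filters, the filter parameters (n=4, b=1.019) and the 24 log-spaced channels are assumptions, and frame-level processing is collapsed to whole-signal energies for brevity.

```python
import numpy as np

def erb(fc):
    # Equivalent Rectangular Bandwidth of a channel centered at fc (Hz)
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammachirp(fc, fs, dur=0.05, n=4, b=1.019, c=1.0):
    # Gammachirp impulse response:
    #   t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t + c*ln(t))
    # Setting c = 0 reduces this to a gammatone filter.
    t = np.arange(1, int(dur * fs) + 1) / fs  # start at 1/fs to avoid ln(0)
    env = t ** (n - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)
    ir = env * np.cos(2 * np.pi * fc * t + c * np.log(t))
    return ir / np.max(np.abs(ir))

def dct_ii(x, num):
    # Type-II DCT basis applied to the log energies (first `num` coefficients),
    # the same decorrelation step used in standard MFCC extraction.
    k = np.arange(num)[:, None]
    m = np.arange(len(x))[None, :]
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * len(x)))
    return basis @ x

def bfcc_like(signal, fs, centers, num_ceps=13):
    # Filter the signal through the gammachirp bank, take log channel
    # energies, then decorrelate with a DCT -> cepstral-style features.
    energies = []
    for fc in centers:
        y = np.convolve(signal, gammachirp(fc, fs), mode="same")
        energies.append(np.log(np.sum(y ** 2) + 1e-12))
    return dct_ii(np.array(energies), num_ceps)

fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)      # 1 s, 440 Hz test tone
centers = np.geomspace(100, 3500, 24)  # 24 channels, log-spaced
feats = bfcc_like(sig, fs, centers)
print(feats.shape)  # (13,)
```

In a full recognizer these per-frame feature vectors would then serve as HMM observations for training and decoding, exactly as MFCC vectors would; only the front end changes, which is what allows the abstract's like-for-like comparison on AURORA 2.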