Modulation spectrum augmentation for robust speech recognition
Bi-Cheng Yan, Shih-Hung Liu, Berlin Chen
Proceedings of the 1st International Conference on Advanced Information Science and System, November 15, 2019
DOI: 10.1145/3373477.3373695
Data augmentation is a crucial mechanism employed to increase the diversity of training data, in order to avoid overfitting and improve the robustness of statistical models across applications. In the context of automatic speech recognition (ASR), a recent trend has been to develop effective methods that augment training speech data by warping or masking utterances based on their waveforms or spectrograms. Extending this line of research, we explore novel ways to generate augmented training speech data and compare them against existing state-of-the-art approaches. The main contribution of this paper is at least two-fold. First, we propose to warp the intermediate representation of the cepstral feature vector sequence of an utterance in a holistic manner. This intermediate representation can be embodied in different modulation domains by performing a discrete Fourier transform (DFT) along either the time axis or the component axis of a cepstral feature vector sequence. Second, we develop a two-stage augmentation approach, which successively conducts perturbation in the waveform domain and warping in different modulation domains of cepstral speech feature vector sequences, to further enhance robustness. A series of experiments is carried out on the Aurora-4 database and task, in conjunction with a typical DNN-HMM based ASR system. The proposed augmentation method that conducts warping in the component-axis modulation domain of cepstral feature vector sequences yields a word error rate reduction (WERR) of 17.6% and 0.69%, respectively, for the clean- and multi-condition training settings. In addition, the proposed two-stage augmentation method achieves a WERR of up to 1.13% under the multi-condition training setup.
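The core operation described above — taking a DFT of a cepstral feature sequence along the time axis to obtain its modulation spectrum, warping that spectrum, and transforming back — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the piecewise-linear warping factor `alpha` and the interpolation scheme are hypothetical choices for demonstration, and the paper's actual warping function may differ.

```python
import numpy as np

def modulation_warp(cepstra: np.ndarray, alpha: float = 1.1) -> np.ndarray:
    """Warp the temporal modulation spectrum of a cepstral feature sequence.

    cepstra: (T, D) array -- T frames, D cepstral coefficients per frame.
    alpha:   hypothetical warping factor; values > 1 stretch the
             low-modulation-frequency region of the spectrum.
    """
    T, D = cepstra.shape

    # DFT along the time axis yields the (temporal) modulation spectrum
    # of each cepstral component. (A DFT along axis=1 would instead give
    # the component-axis modulation domain mentioned in the abstract.)
    spec = np.fft.rfft(cepstra, axis=0)            # (T//2 + 1, D), complex
    n_bins = spec.shape[0]

    # Linear warp of the modulation-frequency axis, realized by sampling
    # the spectrum at warped (fractional) bin positions with linear
    # interpolation between neighboring bins.
    src = np.minimum(np.arange(n_bins) / alpha, n_bins - 1)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n_bins - 1)
    frac = (src - lo)[:, None]
    warped = (1.0 - frac) * spec[lo] + frac * spec[hi]

    # Inverse DFT returns the warped sequence to the cepstral domain,
    # preserving the original number of frames.
    return np.fft.irfft(warped, n=T, axis=0)
```

Note that with `alpha = 1.0` the warp is the identity and the input sequence is recovered exactly, which is a convenient sanity check; the holistic nature of the operation comes from the fact that a single DFT spans the entire utterance, so the warp perturbs all frames jointly rather than frame by frame.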