Speech synthesis from surface electromyogram signal
Y. Lam, M. Mak, P. Leong
Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005
Published: 2005-12-21
DOI: 10.1109/ISSPIT.2005.1577192
Citations: 3
Abstract
This paper presents a methodology that uses surface electromyogram (SEMG) signals recorded from the cheek and chin to synthesize speech. Simultaneously recorded speech and SEMG signals are blocked into frames and transformed into features. Linear predictive coding (LPC) and short-time Fourier transform coefficients are chosen as the speech and SEMG features, respectively. A neural network is applied to convert SEMG features into speech features on a frame-by-frame basis, and the converted speech features are used to reconstruct the original speech. Feature selection, the conversion methodology, and experimental results are discussed. The results show that phoneme-based feature extraction and frame-based feature conversion can be applied to SEMG-based continuous speech synthesis.
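The pipeline the abstract describes (block both signals into frames, extract short-time Fourier coefficients from SEMG and LPC coefficients from speech, then map between them frame by frame with a neural network) can be sketched roughly as below. This is a minimal NumPy illustration, not the paper's implementation: the frame length, hop size, number of coefficients, LPC order, and the untrained one-hidden-layer network are all placeholder assumptions.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Block a 1-D signal into overlapping frames (rows)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def stft_features(frames, n_coeffs):
    """Short-time Fourier magnitude coefficients per frame (SEMG features)."""
    win = np.hanning(frames.shape[1])
    spec = np.abs(np.fft.rfft(frames * win, axis=1))
    return spec[:, :n_coeffs]

def lpc_coeffs(frame, order):
    """LPC coefficients per frame (speech features) via the
    autocorrelation method and the Levinson-Durbin recursion."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1 :][: order + 1]
    a = np.array([1.0])          # a[0] is fixed at 1
    e = r[0]                     # prediction-error energy
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:], r[i - 1 : 0 : -1])
        k = -acc / e             # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        e *= 1.0 - k * k
    return a

def mlp_convert(semg_feats, w1, b1, w2, b2):
    """One-hidden-layer network applied frame by frame to map SEMG
    features to speech (LPC) features. Weights here are untrained
    placeholders; the paper trains such a network on parallel data."""
    h = np.tanh(semg_feats @ w1 + b1)
    return h @ w2 + b2
```

For example, framing one second of an 8 kHz signal with a 256-sample window and 128-sample hop yields 61 frames, each reduced to 16 SEMG coefficients and, on the speech side, 11 LPC values (order 10 plus the leading 1); the network then maps one 16-dimensional row to one 11-dimensional row per frame.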