{"title":"Parametric models of the magnitude/phase spectrum for harmonic speech coding","authors":"D. Thomson","doi":"10.1109/ICASSP.1988.196596","DOIUrl":null,"url":null,"abstract":"A method is described for representing magnitude and phase in a sinusoidal transform coder. Instead of transmitting individual sinusoids, the entire speech spectrum is transmitted. The synthesizer estimates the frequency, amplitude, and phase of each harmonic from the spectrum. Relatively high-quality speech in the 4.8-9.6 kb/s range is obtained by modeling the magnitude/phase spectrum with a combination of pole-zero analysis, phase prediction and vector quantization. A window subtraction method ensures proper synthesis of unvoiced speech. The system is robust since it does not depend on pitch estimates or voicing decisions.<<ETX>>","PeriodicalId":448544,"journal":{"name":"ICASSP-88., International Conference on Acoustics, Speech, and Signal Processing","volume":"152 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1988-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICASSP-88., International Conference on Acoustics, Speech, and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP.1988.196596","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12
Abstract
A method is described for representing magnitude and phase in a sinusoidal transform coder. Instead of transmitting individual sinusoids, the entire speech spectrum is transmitted. The synthesizer estimates the frequency, amplitude, and phase of each harmonic from the spectrum. Relatively high-quality speech in the 4.8-9.6 kb/s range is obtained by modeling the magnitude/phase spectrum with a combination of pole-zero analysis, phase prediction and vector quantization. A window subtraction method ensures proper synthesis of unvoiced speech. The system is robust since it does not depend on pitch estimates or voicing decisions.
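To make the harmonic-synthesis idea concrete, the sketch below shows a generic way a synthesizer can read amplitude and phase for each harmonic directly from a transmitted short-time spectrum and resynthesize the frame as a sum of sinusoids. This is only an illustrative assumption-laden example, not the coder described in the paper: it omits the pole-zero spectral model, phase prediction, vector quantization, and the window-subtraction step for unvoiced frames, and the function and parameter names (`synthesize_frame`, `f0_hz`, `win_sum`) are hypothetical.

```python
import numpy as np

def synthesize_frame(spec, f0_hz, fs_hz, n_fft, win_sum, frame_len):
    """Generic harmonic resynthesis sketch (not the paper's method):
    sample a one-sided DFT `spec` near multiples of an assumed fundamental
    f0_hz and sum cosines with the sampled amplitude and phase."""
    t = np.arange(frame_len) / fs_hz          # time axis for one frame
    out = np.zeros(frame_len)
    k = 1
    while k * f0_hz < fs_hz / 2:              # harmonics below Nyquist
        b = int(round(k * f0_hz * n_fft / fs_hz))   # nearest DFT bin to harmonic k
        if b >= len(spec):
            break
        amp = 2.0 * np.abs(spec[b]) / win_sum       # approx. sinusoid amplitude from bin magnitude
        phase = np.angle(spec[b])                   # phase read from the spectrum
        out += amp * np.cos(2.0 * np.pi * k * f0_hz * t + phase)
        k += 1
    return out

# Toy usage: analyze and resynthesize one 32 ms "voiced" frame at 8 kHz.
fs = 8000
n = 256
window = np.hanning(n)
frame = np.cos(2.0 * np.pi * 200.0 * np.arange(n) / fs)   # synthetic 200 Hz frame
spec = np.fft.rfft(frame * window)                        # one-sided short-time spectrum
y = synthesize_frame(spec, f0_hz=200.0, fs_hz=fs, n_fft=n,
                     win_sum=window.sum(), frame_len=n)
```

In the actual system, no explicit pitch estimate is transmitted; the example above assumes a known f0 purely to keep the illustration short.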