{"title":"Single-channel speech separation based on robust sparse Bayesian learning","authors":"Zhe Wang, G. Bi, Xiumei Li","doi":"10.1109/ICCA.2017.8003044","DOIUrl":null,"url":null,"abstract":"This paper describes a novel algorithm to improve the performance of sparsity based single-channel speech separation(SCSS) problem based on compressed sensing which is an emerging technique for efficient data reconstruction. The conventional approach assumes the mixing conditions and source signals are stationary. For practical applications of audio source separation, however, we face the challenges of non-stationary mixing conditions due to the variation of sources or moving speakers. The proposed algorithm deals with this non-stationary situation in SCSS where the speech signals is recovered based on an auto-calibration sparse Bayesian learning algorithm. Numerical experiments including the performance comparison with other sparse representation approach are provided to show the achieved performance improvement.","PeriodicalId":379025,"journal":{"name":"2017 13th IEEE International Conference on Control & Automation (ICCA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 13th IEEE International Conference on Control & Automation (ICCA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCA.2017.8003044","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper describes a novel algorithm to improve the performance of sparsity-based single-channel speech separation (SCSS), building on compressed sensing, an emerging technique for efficient data reconstruction. The conventional approach assumes that the mixing conditions and source signals are stationary. In practical audio source separation, however, we face non-stationary mixing conditions caused by varying sources or moving speakers. The proposed algorithm handles this non-stationary situation in SCSS by recovering the speech signals with an auto-calibration sparse Bayesian learning algorithm. Numerical experiments, including a performance comparison with another sparse representation approach, are provided to demonstrate the achieved improvement.
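To make the recovery step concrete, the sketch below shows a generic EM-style sparse Bayesian learning (SBL) solver for estimating sparse coefficients of a mixture frame over a dictionary. This is only an illustrative, assumed baseline (Tipping/Wipf-Rao style SBL); it does not include the paper's auto-calibration of the non-stationary mixing, which is the proposed contribution. The dictionary names and usage at the end are hypothetical.

```python
# Minimal sparse Bayesian learning (SBL) sketch: estimate a sparse w with
# x ~= D @ w via EM updates of per-coefficient prior variances (gamma).
# Generic baseline only; the paper's auto-calibration step is not modeled here.
import numpy as np

def sbl_recover(D, x, sigma2=1e-3, n_iter=100, prune_tol=1e-6):
    """Estimate sparse coefficients w such that x ~= D @ w.

    D : (m, n) dictionary (e.g., speaker-specific bases, assumed pre-trained)
    x : (m,) observed single-channel mixture frame
    """
    m, n = D.shape
    gamma = np.ones(n)                      # per-coefficient prior variances
    for _ in range(n_iter):
        # Posterior over w given the current hyperparameters gamma
        Sigma = np.linalg.inv(D.T @ D / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ D.T @ x / sigma2
        # EM update of gamma; coefficients driven toward zero get pruned
        gamma = np.maximum(mu**2 + np.diag(Sigma), prune_tol)
        # Optional re-estimation of the noise variance
        resid = x - D @ mu
        sigma2 = (resid @ resid) / max(m - np.sum(1.0 - np.diag(Sigma) / gamma), 1.0)
        sigma2 = max(sigma2, 1e-8)
    return mu

# Hypothetical usage: concatenate per-speaker dictionaries, recover the sparse
# coefficients of the mixture, then split them to reconstruct each source.
# D = np.hstack([D_speaker1, D_speaker2]); w = sbl_recover(D, x_mix)
# s1_hat, s2_hat = D_speaker1 @ w[:k], D_speaker2 @ w[k:]
```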