Distributed microphone array processing for speech source separation with classifier fusion
M. Souden, K. Kinoshita, Marc Delcroix, T. Nakatani
2012 IEEE International Workshop on Machine Learning for Signal Processing, 12 November 2012
DOI: 10.1109/MLSP.2012.6349782
Citations: 16
Abstract
We propose a new approach for clustering and separating competing speech signals using a distributed microphone array (DMA). This approach can be viewed as an extension of expectation-maximization (EM)-based source separation to DMAs. To achieve distributed processing, we assume the conditional independence (with respect to the sources' activities) of the normalized recordings of different nodes. By doing so, only the posterior probabilities of the sources' activities need to be shared between nodes. Consequently, the EM algorithm is formulated such that, at the expectation step, posterior probabilities are estimated locally and shared between nodes. In the maximization step, every node fuses the received probabilities via either the product or the sum rule and estimates its local parameters. We show that, even if we make binary decisions (presence/absence of speech) during the EM iterations instead of transmitting continuous posterior probability values, we can achieve separation without causing significant speech distortion. Our preliminary investigations demonstrate that the proposed processing technique approaches the centralized solution and can outperform the oracle best node-wise clustering in terms of objective source separation metrics.
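To make the fusion step concrete, the following is a minimal sketch (not the authors' implementation) of the idea described above: each node computes local posterior probabilities of source activity per time-frequency bin, shares them (or hard binary decisions) with the other nodes, and fuses the received posteriors with a product or sum rule before its local maximization step. All shapes, function names, and the toy log-likelihoods are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_posteriors(log_likelihoods):
    """Normalize per-source log-likelihoods into local posterior probabilities.

    log_likelihoods: (n_sources, n_frames, n_freqs) local log-likelihoods.
    Returns posteriors of the same shape that sum to 1 over sources.
    """
    m = log_likelihoods.max(axis=0, keepdims=True)
    p = np.exp(log_likelihoods - m)
    return p / p.sum(axis=0, keepdims=True)

def fuse_posteriors(node_posteriors, rule="product", binarize=False):
    """Fuse the posteriors received from all nodes at one node.

    node_posteriors: (n_nodes, n_sources, n_frames, n_freqs)
    rule: 'product' or 'sum' combination of the per-node posteriors.
    binarize: if True, nodes transmit hard (0/1) decisions instead of
              continuous posteriors, as in the low-bandwidth variant.
    """
    p = node_posteriors
    if binarize:
        hard = (p == p.max(axis=1, keepdims=True)).astype(float)
        # Small floor so the product rule never collapses to all-zero masks.
        p = np.clip(hard, 1e-3, None)
    if rule == "product":
        fused = np.prod(p, axis=0)
    elif rule == "sum":
        fused = np.sum(p, axis=0)
    else:
        raise ValueError("rule must be 'product' or 'sum'")
    return fused / fused.sum(axis=0, keepdims=True)

# Toy example: 3 nodes, 2 competing sources, 100 frames x 64 frequency bins.
n_nodes, n_sources, n_frames, n_freqs = 3, 2, 100, 64
loglik = rng.normal(size=(n_nodes, n_sources, n_frames, n_freqs))
posteriors = np.stack([local_posteriors(ll) for ll in loglik])  # E-step, done locally
masks = fuse_posteriors(posteriors, rule="product")             # fusion before the M-step
print(masks.shape)  # (2, 100, 64): fused per-source activity masks at this node
```

In the actual algorithm, the fused posteriors would then drive each node's local parameter updates in the maximization step; the sketch only illustrates the product/sum fusion and the optional binarization of the exchanged probabilities.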