{"title":"A sequential processing model for speech separation based on auditory scene analysis","authors":"I. Nakanishi, Junichi Hanada","doi":"10.1109/ISPACS.2015.7432750","DOIUrl":null,"url":null,"abstract":"Speech separation based on auditory scene analysis (ASA) has been widely studied. We propose a sequential processing model of computational ASA (CASA), in which a mixed speech is sequentially decomposed into frequency signals using modified Discrete Fourier Transform (DFT), four features in ASA are extracted from the decomposed frequency signals, the frequency signals are regrouped by examining the extracted features, and each separated speech is obtained by recomposing the frequency signals in a group. In this paper, we attempt to separate speeches only using the harmonic structure, which is one of the features and regarded as the backbone in our sequential implementation model.","PeriodicalId":238787,"journal":{"name":"2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPACS.2015.7432750","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Speech separation based on auditory scene analysis (ASA) has been widely studied. We propose a sequential processing model of computational ASA (CASA), in which a speech mixture is sequentially decomposed into frequency signals using a modified Discrete Fourier Transform (DFT), four ASA features are extracted from the decomposed frequency signals, the frequency signals are regrouped by examining the extracted features, and each separated speech signal is obtained by recomposing the frequency signals within a group. In this paper, we attempt to separate speech signals using only the harmonic structure, which is one of these features and is regarded as the backbone of our sequential implementation model.
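To illustrate the pipeline described above, here is a minimal sketch of harmonic-structure grouping. The paper's modified DFT and exact grouping rules are not detailed in the abstract, so this sketch substitutes a standard windowed DFT (STFT), a simple harmonic-comb mask around assumed fundamental frequencies, and overlap-add resynthesis; all function names, parameters, and the tolerance value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stft(x, frame_len=1024, hop=256):
    """Decompose a signal into frequency-domain frames via a windowed DFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)          # shape: (n_frames, n_bins)

def istft(spec, frame_len=1024, hop=256):
    """Recompose a time-domain signal by overlap-adding inverse-DFT frames."""
    frames = np.fft.irfft(spec, n=frame_len, axis=1)
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    for i, frame in enumerate(frames):
        out[i * hop:i * hop + frame_len] += frame
    return out

def harmonic_mask(spec, f0, fs, frame_len=1024, n_harmonics=10, tol_hz=40.0):
    """Binary mask selecting DFT bins near integer multiples of a given F0
    (an assumed, simplified stand-in for harmonic-structure grouping)."""
    bin_freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    mask = np.zeros(spec.shape[1], dtype=bool)
    for h in range(1, n_harmonics + 1):
        mask |= np.abs(bin_freqs - h * f0) <= tol_hz
    return np.broadcast_to(mask, spec.shape)

def separate_by_harmonics(mixture, f0_list, fs=16000):
    """Group frequency components by harmonic structure and resynthesize
    one waveform per hypothesized fundamental frequency."""
    spec = stft(mixture)
    return [istft(spec * harmonic_mask(spec, f0, fs)) for f0 in f0_list]

# Example: a synthetic mixture of two harmonic tones, separated by their F0s.
if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    voice_a = sum(np.sin(2 * np.pi * 150 * h * t) / h for h in range(1, 6))
    voice_b = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 6))
    estimates = separate_by_harmonics(voice_a + voice_b, f0_list=[150.0, 220.0])
```

In this toy setup the fundamental frequencies are given; a full CASA system would estimate them from the mixture, and harmonics shared by both sources (e.g., near-coincident partials) would need more careful handling than a binary mask provides.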