Spatial-temporal activity-informed diarization and separation
Yicheng Hsu, Ssuhan Chen, Yuhsin Lai, Chingyen Wang, Mingsian R Bai
Journal of the Acoustical Society of America, 157(2), 1162-1175 (February 2025). DOI: 10.1121/10.0035830
Abstract
A robust multichannel speaker diarization and separation system is proposed by exploiting the spatiotemporal activity of the speakers. The system is realized in a hybrid architecture that combines array signal processing units with deep learning units. For speaker diarization, a spatial coherence matrix across time frames is computed from the whitened Relative Transfer Functions of the microphone array. This matrix serves as a robust feature for the subsequent machine learning stage without requiring prior knowledge of the array configuration. A computationally efficient modified End-to-End Neural Diarization system with an Encoder-Decoder-based Attractor network is constructed to estimate speaker activity from the spatial coherence matrix. For speaker separation, we propose the Global and Local Activity-driven Speaker Extraction network, which separates speaker signals via speaker-specific global and local spatial activity functions. The local spatial activity functions depend on the coherence between the whitened Relative Transfer Function of each time-frequency bin and those of the target speaker-dominant bins. The global spatial activity functions are computed from global spatial coherence functions obtained by frequency-averaging the local spatial activity functions. Experimental results demonstrate that the proposed system achieves superior speaker diarization, counting, and separation performance at low computational complexity compared with the pre-selected baselines.
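The abstract gives enough structure to sketch the feature pipeline. The NumPy sketch below is a simplified illustration, not the authors' implementation: the function names (`whitened_rtfs`, `frame_coherence_matrix`, `local_spatial_activity`, `global_spatial_activity`), the choice of microphone 0 as the RTF reference, the eigendecomposition-based noise whitening, and the unit-norm cosine coherence are all assumptions; the paper's exact whitening and coherence definitions may differ.

```python
import numpy as np

def whitened_rtfs(stft, noise_cov, eps=1e-12):
    """Whitened relative transfer function (RTF) vector for every
    time-frequency bin, normalized to unit norm (simplified sketch).

    stft:      (M, F, T) complex multichannel STFT (M mics, F bins, T frames)
    noise_cov: (F, M, M) estimated noise spatial covariance per frequency
    returns:   (F, T, M) whitened, unit-norm RTF vectors
    """
    M, F, T = stft.shape
    w = np.zeros((F, T, M), dtype=complex)
    for f in range(F):
        # Inverse square root of the noise covariance acts as the whitener.
        evals, evecs = np.linalg.eigh(noise_cov[f])
        inv_sqrt = evecs @ np.diag(np.clip(evals, eps, None) ** -0.5) @ evecs.conj().T
        for t in range(T):
            x = stft[:, f, t]
            rtf = x / (x[0] + eps)                   # RTF relative to reference mic 0
            v = inv_sqrt @ rtf                       # whitened RTF
            w[f, t] = v / (np.linalg.norm(v) + eps)  # unit norm
    return w

def frame_coherence_matrix(w):
    """T x T spatial coherence across time frames, averaged over frequency.
    Entry (t, t') is large when frames t and t' share a dominant direction."""
    F, T, _ = w.shape
    coh = np.zeros((T, T))
    for f in range(F):
        coh += np.abs(w[f] @ w[f].conj().T)          # |w_t^H w_t'| at frequency f
    return coh / F

def local_spatial_activity(w, anchor):
    """Local spatial activity per T-F bin: coherence between each bin's
    whitened RTF and a per-frequency speaker anchor (F, M) built from
    speaker-dominant bins."""
    return np.abs(np.einsum('ftm,fm->ft', w.conj(), anchor))

def global_spatial_activity(local):
    """Global activity per frame: frequency average of the local activity."""
    return local.mean(axis=0)
```

In this reading, `frame_coherence_matrix` would feed the diarization network, while `local_spatial_activity` and `global_spatial_activity` correspond to the separation-side activity functions; the speaker `anchor` is a hypothetical stand-in for whatever statistic the paper extracts from the target speaker-dominant bins.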
About the Journal
Since 1929, The Journal of the Acoustical Society of America has been the leading source of theoretical and experimental research results in the broad interdisciplinary study of sound. Subject coverage includes: linear and nonlinear acoustics; aeroacoustics, underwater sound, and acoustical oceanography; ultrasonics and quantum acoustics; architectural and structural acoustics and vibration; speech, music, and noise; psychology and physiology of hearing; engineering acoustics and transduction; and bioacoustics, including animal bioacoustics.