Spatial-temporal activity-informed diarization and separation.

Impact Factor 2.1 · JCR Q2 (Acoustics) · CAS Zone 2 (Physics and Astronomy)
Yicheng Hsu, Ssuhan Chen, Yuhsin Lai, Chingyen Wang, Mingsian R Bai
Journal of the Acoustical Society of America, vol. 157, no. 2, pp. 1162-1175, February 2025. DOI: 10.1121/10.0035830
Citations: 0

Abstract

A robust multichannel speaker diarization and separation system is proposed by exploiting the spatiotemporal activity of the speakers. The system is realized in a hybrid architecture that combines array signal processing units and deep learning units. For speaker diarization, a spatial coherence matrix across time frames is computed from the whitened Relative Transfer Functions of the microphone array. This serves as a robust feature for subsequent machine learning without requiring prior knowledge of the array configuration. A computationally efficient, modified End-to-End Neural Diarization system with an Encoder-Decoder-based Attractor network is constructed to estimate speaker activity from the spatial coherence matrix. For speaker separation, we propose the Global and Local Activity-driven Speaker Extraction network, which separates speaker signals via speaker-specific global and local spatial activity functions. The local spatial activity functions depend on the coherence between the whitened Relative Transfer Functions of each time-frequency bin and those of the target speaker-dominant bins. The global spatial activity functions are computed from the global spatial coherence functions based on frequency-averaged local spatial activity functions. Experimental results demonstrate superior speaker diarization, counting, and separation performance by the proposed system, with low computational complexity compared to the selected baselines.
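The abstract's core quantities can be illustrated numerically. The sketch below is a minimal toy construction, not the paper's implementation: it substitutes simple unit-norm normalization of the multichannel STFT vectors for true RTF whitening (which would use a noise covariance estimate), uses random data, and picks an arbitrary anchor frame as the "speaker-dominant" bins. All variable names (`A`, `C`, `local_act`, `global_act`) are hypothetical; it only shows the shape of the computation: a frame-to-frame spatial coherence matrix for diarization, and local/global spatial activity functions for extraction.

```python
import numpy as np

rng = np.random.default_rng(0)
M, F, T = 4, 129, 50  # mics, frequency bins, time frames (toy sizes)
X = rng.standard_normal((M, F, T)) + 1j * rng.standard_normal((M, F, T))

# Per-bin spatial vectors, normalized to unit length. This is a crude
# stand-in for whitened Relative Transfer Functions; the paper whitens
# with a noise covariance estimate, which is not modeled here.
A = X / np.linalg.norm(X, axis=0, keepdims=True)          # (M, F, T)

# Frame-to-frame spatial coherence matrix, averaged over frequency:
# C[t1, t2] = mean_f |a(f, t1)^H a(f, t2)| -- frames dominated by the
# same speaker (same spatial signature) yield high coherence.
coh = np.abs(np.einsum('mfs,mft->fst', A.conj(), A))      # (F, T, T)
C = coh.mean(axis=0)                                      # (T, T)

# Local spatial activity for one hypothetical target speaker, given an
# anchor spatial vector per frequency (here: frame 0, purely for demo).
anchor = A[:, :, 0]                                       # (M, F)
local_act = np.abs(np.einsum('mf,mft->ft', anchor.conj(), A))  # (F, T)

# Global spatial activity: frequency average of the local functions.
global_act = local_act.mean(axis=0)                       # (T,)
```

Because the spatial vectors are unit-norm, every coherence value lies in [0, 1] and the diagonal of `C` is exactly 1; a downstream diarization network would consume `C` as its input feature, and the activity functions would gate a speaker-extraction network.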

Source journal metrics
CiteScore: 4.60
Self-citation rate: 16.70%
Annual publication volume: 1433
Review time: 4.7 months
Journal description: Since 1929, The Journal of the Acoustical Society of America has been the leading source of theoretical and experimental research results in the broad interdisciplinary study of sound. Subject coverage includes: linear and nonlinear acoustics; aeroacoustics, underwater sound, and acoustical oceanography; ultrasonics and quantum acoustics; architectural and structural acoustics and vibration; speech, music, and noise; psychology and physiology of hearing; engineering acoustics and transduction; and bioacoustics, including animal bioacoustics.