Frame-Level Multiple Sound Sources Localization Based on Visual Understanding
Li Fang, Long Ye, Xinglong Ma, Ruiqi Wang, Wei Zhong, Qin Zhang
2021 5th Asian Conference on Artificial Intelligence Technology (ACAIT), 29 October 2021. DOI: 10.1109/acait53529.2021.9731148
Abstract
Sound source localization is an important area of audio-visual research. On a dynamic performance stage, locating multiple sounding objects in real time can give the audience an immersive experience. Because performance scenes are complex, with overlapping audio and occluded visual objects, audio-visual recognition and localization is challenging. To address this problem, we propose a novel two-stream learning framework that disentangles different classes of audio-visual representations from complex scenes, then maps each visual object to its corresponding audio region through multi-instance label learning with adaptive multi-stream fusion, and localizes the sounding instruments from coarse to fine. We obtain state-of-the-art results on a public dataset. Experimental results show that our method effectively achieves frame-level localization of multiple sound sources.
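The paper's implementation is not included on this page. As a rough illustration of the kind of two-stream, adaptively fused audio-visual model the abstract describes, the PyTorch-style sketch below pairs a toy audio encoder and visual encoder, blends them with a learned fusion weight, and produces per-class heat maps that are pooled into frame-level multi-instance scores. All module names, feature dimensions, and input shapes are assumptions made for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class TwoStreamFusionSketch(nn.Module):
    """Toy two-stream audio-visual model with an adaptive fusion gate.

    Illustrative only: the encoders, dimensions, and gating scheme are
    assumptions, not the architecture proposed in the paper.
    """
    def __init__(self, num_classes: int, dim: int = 512):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(128, dim), nn.ReLU())   # e.g. a log-mel frame vector
        self.visual_enc = nn.Conv2d(3, dim, kernel_size=7, stride=16)    # coarse spatial visual features
        self.gate = nn.Linear(2 * dim, 1)                                # adaptive fusion weight
        self.classifier = nn.Conv2d(dim, num_classes, kernel_size=1)     # per-class localization maps

    def forward(self, audio, frame):
        a = self.audio_enc(audio)                          # (B, dim) audio descriptor
        v = self.visual_enc(frame)                         # (B, dim, H, W) visual feature map
        w = torch.sigmoid(self.gate(torch.cat([a, v.mean(dim=(2, 3))], dim=1)))  # (B, 1) fusion weight
        # Adaptive fusion: blend the visual map with the broadcast audio descriptor.
        fused = w[:, :, None, None] * v + (1 - w)[:, :, None, None] * a[:, :, None, None]
        heatmaps = self.classifier(fused)                  # coarse per-class heat maps
        logits = heatmaps.mean(dim=(2, 3))                 # frame-level multi-instance class scores
        return logits, heatmaps

# Example with arbitrary shapes: 2 video frames with matching audio features.
model = TwoStreamFusionSketch(num_classes=10)
logits, maps = model(torch.randn(2, 128), torch.randn(2, 3, 224, 224))

Pooling the heat maps into frame-level class scores mirrors the multi-instance idea in the abstract: only clip-level labels supervise training, while the spatial maps give the coarse localization that a finer stage could then refine.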