Multi Event Localization by Audio-Visual Fusion with Omnidirectional Camera and Microphone Array

Wenru Zheng, Ryota Yoshihashi, Rei Kawakami, Ikuro Sato, Asako Kanezaki
{"title":"Multi Event Localization by Audio-Visual Fusion with Omnidirectional Camera and Microphone Array","authors":"Wenru Zheng, Ryota Yoshihashi, Rei Kawakami, Ikuro Sato, Asako Kanezaki","doi":"10.1109/CVPRW59228.2023.00255","DOIUrl":null,"url":null,"abstract":"Audio-visual fusion is a promising approach for identifying multiple events occurring simultaneously at different locations in the real world. Previous studies on audio-visual event localization (AVE) have been built on datasets that only have monaural or stereo channels in the audio; thus, it was hard to distinguish the direction of audio when different sounds are heard from multiple locations. In this paper, we develop a multi-event localization method using multichannel audio and omnidirectional images. To take full advantage of the spatial correlation between the features in the two modalities, our method employs early fusion that can retain audio direction and background information in images. We also created a new dataset of multi-label events containing around 660 omnidirectional videos with multichannel audio, which was used to showcase the effectiveness of the proposed method.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW59228.2023.00255","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Audio-visual fusion is a promising approach for identifying multiple events occurring simultaneously at different locations in the real world. Previous studies on audio-visual event localization (AVE) have been built on datasets whose audio has only monaural or stereo channels; thus, it was hard to distinguish the direction of a sound when different sounds were heard from multiple locations. In this paper, we develop a multi-event localization method using multichannel audio and omnidirectional images. To take full advantage of the spatial correlation between the features in the two modalities, our method employs early fusion that can retain audio direction and background information in images. We also created a new dataset of multi-label events containing around 660 omnidirectional videos with multichannel audio, which was used to showcase the effectiveness of the proposed method.
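The abstract's key idea, early fusion that preserves audio direction alongside the image, can be illustrated with a minimal sketch. This is not the paper's architecture; the shapes, the azimuth-to-column alignment, and all variable names are assumptions made purely to show channel-wise early fusion on an equirectangular (omnidirectional) frame.

```python
import numpy as np

# Hypothetical early-fusion sketch: align multichannel audio features with an
# equirectangular (omnidirectional) image, then concatenate along channels.
H, W = 64, 128          # equirectangular frame height/width (assumption)
n_mics = 4              # microphone-array channels (assumption)
n_mels = 16             # mel bins per audio channel (assumption)

rng = np.random.default_rng(0)
image = rng.random((3, H, W))            # RGB omnidirectional frame (C, H, W)
audio = rng.random((n_mics, n_mels))     # per-microphone mel-band energies

# Assume each microphone faces a fixed azimuth; write its mel features into
# the image column nearest that azimuth so direction information survives.
audio_map = np.zeros((n_mels, H, W))
for m in range(n_mics):
    col = int((m / n_mics) * W)          # azimuth -> image column (assumption)
    audio_map[:, :, col] = audio[m][:, None]

# Early fusion: channel-wise concatenation before any deep layers, so a
# downstream network sees spatially co-registered audio and visual features.
fused = np.concatenate([image, audio_map], axis=0)
print(fused.shape)  # (3 + n_mels, H, W) = (19, 64, 128)
```

In a real model the fused tensor would feed a convolutional backbone; the point of the sketch is only that direction cues (which column the audio energy lands in) and the visual background both remain present in one tensor.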