{"title":"DETECLAP: Enhancing Audio-Visual Representation Learning with Object Information","authors":"Shota Nakada, Taichi Nishimura, Hokuto Munakata, Masayoshi Kondo, Tatsuya Komatsu","doi":"arxiv-2409.11729","DOIUrl":null,"url":null,"abstract":"Current audio-visual representation learning can capture rough object\ncategories (e.g., ``animals'' and ``instruments''), but it lacks the ability to\nrecognize fine-grained details, such as specific categories like ``dogs'' and\n``flutes'' within animals and instruments. To address this issue, we introduce\nDETECLAP, a method to enhance audio-visual representation learning with object\ninformation. Our key idea is to introduce an audio-visual label prediction loss\nto the existing Contrastive Audio-Visual Masked AutoEncoder to enhance its\nobject awareness. To avoid costly manual annotations, we prepare object labels\nfrom both audio and visual inputs using state-of-the-art language-audio models\nand object detectors. We evaluate the method of audio-visual retrieval and\nclassification using the VGGSound and AudioSet20K datasets. Our method achieves\nimprovements in recall@10 of +1.5% and +1.2% for audio-to-visual and\nvisual-to-audio retrieval, respectively, and an improvement in accuracy of\n+0.6% for audio-visual classification.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11729","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Current audio-visual representation learning can capture rough object categories (e.g., "animals" and "instruments"), but it lacks the ability to recognize fine-grained details, such as specific categories like "dogs" and "flutes" within animals and instruments. To address this issue, we introduce DETECLAP, a method to enhance audio-visual representation learning with object information. Our key idea is to introduce an audio-visual label prediction loss into the existing Contrastive Audio-Visual Masked AutoEncoder (CAV-MAE) to enhance its object awareness. To avoid costly manual annotation, we prepare object labels from both audio and visual inputs using state-of-the-art language-audio models and object detectors. We evaluate the method on audio-visual retrieval and classification using the VGGSound and AudioSet20K datasets. Our method achieves improvements in recall@10 of +1.5% and +1.2% for audio-to-visual and visual-to-audio retrieval, respectively, and an improvement in accuracy of +0.6% for audio-visual classification.
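To make the key idea concrete, below is a minimal, hypothetical PyTorch sketch of how a multi-label prediction loss could be attached to pooled audio and visual embeddings from a CAV-MAE-style encoder and added to the existing training objective. The class name, per-modality linear heads, and multi-hot label format are illustrative assumptions, not the paper's actual implementation; the random labels stand in for tags that would come from a language-audio model (audio side) and an object detector (visual side).

```python
import torch
import torch.nn as nn


class LabelPredictionHead(nn.Module):
    """Hypothetical audio-visual label prediction loss (not the paper's code).

    Assumes pooled, fixed-size embeddings from each modality and multi-hot
    object labels obtained without manual annotation.
    """

    def __init__(self, embed_dim: int, num_labels: int):
        super().__init__()
        # One linear multi-label classifier per modality.
        self.audio_head = nn.Linear(embed_dim, num_labels)
        self.visual_head = nn.Linear(embed_dim, num_labels)
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, audio_emb, visual_emb, labels):
        # labels: (batch, num_labels) multi-hot object tags, e.g. produced by
        # a language-audio model and an object detector.
        loss_audio = self.bce(self.audio_head(audio_emb), labels)
        loss_visual = self.bce(self.visual_head(visual_emb), labels)
        return loss_audio + loss_visual


if __name__ == "__main__":
    # Usage sketch with stand-in tensors in place of real encoder outputs.
    batch, dim, num_labels = 4, 768, 50
    head = LabelPredictionHead(dim, num_labels)
    audio_emb = torch.randn(batch, dim)
    visual_emb = torch.randn(batch, dim)
    labels = (torch.rand(batch, num_labels) > 0.9).float()
    label_loss = head(audio_emb, visual_emb, labels)
    # The label loss would be added to the usual CAV-MAE objective, e.g.:
    # total = contrastive_loss + mae_loss + lambda_label * label_loss
    print(label_loss.item())
```

In this sketch, the loss weight `lambda_label` and the choice of a shared label space across modalities are design assumptions; the essential point is that supervising both modalities with object-level tags pushes the shared representation toward fine-grained object awareness.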