Learning Probabilistic Presence-Absence Evidence for Weakly-Supervised Audio-Visual Event Perception

IF 18.6
Junyu Gao;Mengyuan Chen;Changsheng Xu
{"title":"弱监督视听事件感知的学习概率存在-缺失证据","authors":"Junyu Gao;Mengyuan Chen;Changsheng Xu","doi":"10.1109/TPAMI.2025.3546312","DOIUrl":null,"url":null,"abstract":"With only video-level event labels, this paper targets at the task of weakly-supervised audio-visual event perception (WS-AVEP), which aims to temporally localize and categorize events that belong to each modality. Despite the recent progress, most existing approaches either ignore the unsynchronized property of audio-visual tracks or discount the complementary modality for explicit enhancement. We argue that, a modality should provide ample presence evidence for an event, while the complementary modality offers absence evidence as a reference. However, to learn reliable evidence, we face challenging uncertainties caused by weak supervision and the complicated audio-visual data itself. To this end, we propose to collect Probabilistic Presence-Absence Evidence (PPAE) in a unified framework. Specifically, by leveraging uni-modal and cross-modal representations, a probabilistic presence-absence evidence collector (PAEC) is designed. To learn the evidence in a reliable range, we propose a joint-modal mutual learning (JML) process, which calibrates the evidence of diverse audible, visible, and audi-visible events adaptively and dynamically. Extensive experiments show that our method surpasses state-of-the-arts (e.g., absolute gains of 3.1% and 4.2% in terms of event-level audio and visual metrics on the LLP dataset).","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 6","pages":"4787-4802"},"PeriodicalIF":18.6000,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Probabilistic Presence-Absence Evidence for Weakly-Supervised Audio-Visual Event Perception\",\"authors\":\"Junyu Gao;Mengyuan Chen;Changsheng Xu\",\"doi\":\"10.1109/TPAMI.2025.3546312\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With only video-level event labels, this paper targets at the task of weakly-supervised audio-visual event perception (WS-AVEP), which aims to temporally localize and categorize events that belong to each modality. Despite the recent progress, most existing approaches either ignore the unsynchronized property of audio-visual tracks or discount the complementary modality for explicit enhancement. We argue that, a modality should provide ample presence evidence for an event, while the complementary modality offers absence evidence as a reference. However, to learn reliable evidence, we face challenging uncertainties caused by weak supervision and the complicated audio-visual data itself. To this end, we propose to collect Probabilistic Presence-Absence Evidence (PPAE) in a unified framework. Specifically, by leveraging uni-modal and cross-modal representations, a probabilistic presence-absence evidence collector (PAEC) is designed. To learn the evidence in a reliable range, we propose a joint-modal mutual learning (JML) process, which calibrates the evidence of diverse audible, visible, and audi-visible events adaptively and dynamically. 
Extensive experiments show that our method surpasses state-of-the-arts (e.g., absolute gains of 3.1% and 4.2% in terms of event-level audio and visual metrics on the LLP dataset).\",\"PeriodicalId\":94034,\"journal\":{\"name\":\"IEEE transactions on pattern analysis and machine intelligence\",\"volume\":\"47 6\",\"pages\":\"4787-4802\"},\"PeriodicalIF\":18.6000,\"publicationDate\":\"2025-02-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on pattern analysis and machine intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10906447/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10906447/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

With only video-level event labels, this paper targets the task of weakly-supervised audio-visual event perception (WS-AVEP), which aims to temporally localize and categorize events that belong to each modality. Despite recent progress, most existing approaches either ignore the unsynchronized property of audio-visual tracks or discount the complementary modality for explicit enhancement. We argue that a modality should provide ample presence evidence for an event, while the complementary modality offers absence evidence as a reference. However, to learn reliable evidence, we face challenging uncertainties caused by weak supervision and the complicated audio-visual data itself. To this end, we propose to collect Probabilistic Presence-Absence Evidence (PPAE) in a unified framework. Specifically, by leveraging uni-modal and cross-modal representations, a probabilistic presence-absence evidence collector (PAEC) is designed. To learn the evidence in a reliable range, we propose a joint-modal mutual learning (JML) process, which calibrates the evidence of diverse audible, visible, and audio-visible events adaptively and dynamically. Extensive experiments show that our method surpasses state-of-the-art methods (e.g., absolute gains of 3.1% and 4.2% in event-level audio and visual metrics on the LLP dataset).
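
To make the presence-absence idea concrete, below is a minimal, illustrative Python/PyTorch sketch of how non-negative presence evidence from one modality and absence evidence from the complementary modality could be combined per event class, assuming a Beta-style evidential formulation. This is not the authors' implementation; all names (PresenceAbsenceHead, presence_probability) and the chosen feature/class dimensions are hypothetical.

# Minimal sketch (NOT the paper's actual PAEC/JML implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PresenceAbsenceHead(nn.Module):
    """Maps per-segment features to non-negative presence/absence evidence."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.presence = nn.Linear(feat_dim, num_classes)
        self.absence = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor):
        # softplus keeps the evidence non-negative, as is common in evidential deep learning
        e_pres = F.softplus(self.presence(x))
        e_abs = F.softplus(self.absence(x))
        return e_pres, e_abs

def presence_probability(e_pres_own: torch.Tensor, e_abs_other: torch.Tensor):
    """Beta-style expected presence probability with a unit prior:
    alpha = presence evidence (own modality) + 1,
    beta  = absence evidence (complementary modality) + 1."""
    alpha = e_pres_own + 1.0
    beta = e_abs_other + 1.0
    return alpha / (alpha + beta)

# Toy usage: 2 videos, 10 one-second segments, 512-d features, 25 event classes.
audio_feat = torch.randn(2, 10, 512)
visual_feat = torch.randn(2, 10, 512)
audio_head = PresenceAbsenceHead(512, 25)
visual_head = PresenceAbsenceHead(512, 25)

a_pres, a_abs = audio_head(audio_feat)
v_pres, v_abs = visual_head(visual_feat)

# Audio event probability: audio presence evidence vs. visual absence evidence.
p_audio_event = presence_probability(a_pres, v_abs)
# Visual event probability: visual presence evidence vs. audio absence evidence.
p_visual_event = presence_probability(v_pres, a_abs)
print(p_audio_event.shape)  # torch.Size([2, 10, 25])

In this toy formulation, each modality's presence evidence is tempered by the other modality's absence evidence, echoing the abstract's "presence evidence plus complementary absence evidence as a reference"; the paper's evidence calibration via joint-modal mutual learning is not reproduced here.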