Pose-Motion Video Anomaly Detection via Memory-Augmented Reconstruction and Conditional Variational Prediction

Weilin Wan, Weizhong Zhang, Cheng Jin
DOI: 10.1109/ICME55011.2023.00464
Published in: 2023 IEEE International Conference on Multimedia and Expo (ICME), July 2023

Abstract

Video anomaly detection (VAD) is a challenging computer vision problem. Because anomalous events are scarce in training data, models learned by existing methods tend to fit the ubiquitous non-causal or even spurious correlations, leading to failures at inference time. In this paper, we propose a new two-phase Pose-Motion Video Anomaly Detection (PoMo) approach that jointly exploits informative features, namely poses and optical flows, which have rich causal correlations with abnormality. PoMo effectively prevents non-causal features from leaking in, both by encoding only the essential information, i.e., the poses and optical flows, with our normalized autoencoder (phase one), and by separately modeling the knowledge learned in phase one with our causal-conditioned autoencoder (phase two). These two phases amplify the difference between normal and abnormal events, thereby reinforcing generalization ability. Extensive experimental results demonstrate the superiority of our approach over existing methods, with improvements in AUC-ROC of up to 1.5%.
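The "memory-augmented reconstruction" idea named in the title can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical toy in NumPy under the common assumption that a memory bank stores prototype features of normal pose/motion patterns, a test feature is reconstructed as an attention-weighted combination of memory slots, and the reconstruction residual serves as the anomaly score (all function names and parameters here are illustrative):

```python
import numpy as np

def memory_reconstruct(feature, memory, temperature=0.1):
    """Reconstruct `feature` as a softmax-weighted sum of memory slots."""
    # Cosine similarity between the query feature and each memory slot.
    mem_norm = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    q_norm = feature / np.linalg.norm(feature)
    sims = mem_norm @ q_norm
    # Softmax over similarities -> addressing weights over the memory.
    w = np.exp(sims / temperature)
    w /= w.sum()
    return w @ memory  # weighted combination of normal prototypes

def anomaly_score(feature, memory):
    """L2 residual between the feature and its memory reconstruction."""
    recon = memory_reconstruct(feature, memory)
    return float(np.linalg.norm(feature - recon))

# Toy example: a memory bank of two "normal" prototype features.
memory = np.array([[1.0, 0.0], [0.0, 1.0]])
normal = np.array([0.9, 0.1])      # near a prototype -> low residual
abnormal = np.array([-1.0, -1.0])  # far from all prototypes -> high residual
assert anomaly_score(normal, memory) < anomaly_score(abnormal, memory)
```

Because the memory can only express combinations of normal prototypes, features of abnormal events reconstruct poorly and yield large residuals, which is the intuition behind restricting the model to essential (pose/flow) information.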