Augmented Regularity for Efficient Video Anomaly Detection: An edge AI application

Jiafei Liang, Zhou Yue, Feng Yang, Zhiwen Fang
Published in: 2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW), July 2022.
DOI: 10.1109/ICDCSW56584.2022.00037

Abstract

Video anomaly detection, a critical edge AI application, can dramatically reduce transmission burden by transmitting only anomalous data. Traditionally, dense consecutive frames at high resolution are used as input to ensure good detection performance, but such dense, high-resolution input incurs heavy computation. To meet the demand for high performance with low computation on edge devices, we propose an efficient video anomaly detection method based on augmented regularity with mutual learning. Sparse frames, sampled from every two frames at a low resolution of 160 × 160, are used as input to reduce processing cost. In general, such low-quality inputs hamper performance. To compensate, an auxiliary network is created that uses dense inputs to mine rich patterns from successive frames and boosts the proposed network throughout the training phase via mutual learning. Additionally, we design augmented regularity to improve scene generalization when edge devices are deployed in distributed applications across diverse scenes. During the training phase, the augmented regularity, which is independent of the input video, is concatenated to the input as a hidden label message; the label message implies that the inputs are normal. In the inference phase, abnormal information is detected through the errors generated in the hidden label message. Experimental results on benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance at a super-real-time speed of 80 fps.
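The input pipeline the abstract describes (sparse temporal sampling, 160 × 160 low-resolution frames, and an input-independent "augmented regularity" channel acting as a hidden normal-label) can be sketched as below. This is a minimal illustrative sketch, not the authors' code: the function names, the nearest-neighbor downsampling, and the constant-valued label channel are all assumptions for illustration.

```python
import numpy as np

def sparse_low_res_input(video, stride=2, size=(160, 160)):
    """Take every `stride`-th frame and downsample to `size` via
    nearest-neighbor indexing, cutting computation as in the abstract.
    `video` has shape (T, H, W, C)."""
    frames = video[::stride]
    t, h, w, c = frames.shape
    ys = np.arange(size[0]) * h // size[0]
    xs = np.arange(size[1]) * w // size[1]
    return frames[:, ys][:, :, xs]

def concat_regularity(frames, label_value=1.0):
    """Concatenate an input-independent augmented-regularity channel.
    It serves as a hidden label message meaning 'this input is normal'."""
    t, h, w, c = frames.shape
    label = np.full((t, h, w, 1), label_value, dtype=frames.dtype)
    return np.concatenate([frames, label], axis=-1)

def label_error(reconstructed_label, label_value=1.0):
    """At inference, anomalies surface as errors in the reproduced hidden
    label channel; this just measures that mean squared error."""
    return float(np.mean((reconstructed_label - label_value) ** 2))

# Usage: a dense 32-frame high-resolution clip becomes a 16-frame,
# 160x160, 4-channel input (RGB + regularity channel).
video = np.random.rand(32, 240, 360, 3)
x = sparse_low_res_input(video)   # shape (16, 160, 160, 3)
x = concat_regularity(x)          # shape (16, 160, 160, 4)
```

A normal input lets the network reproduce the label channel almost exactly (near-zero `label_error`), while anomalous content disrupts the reconstruction and raises the error, which is the detection signal the abstract relies on.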