Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures

Pub Date: 2023-07-08 | DOI: 10.1142/s0219467824500529
P. Mangai, M. Geetha, G. Kumaravelan
{"title":"基于混合深度学习架构的异常事件检测的双流时空特征提取和分类模型","authors":"P. Mangai, M. Geetha, G. Kumaravelan","doi":"10.1142/s0219467824500529","DOIUrl":null,"url":null,"abstract":"Identifying events using surveillance videos is a major source that reduces crimes and illegal activities. Specifically, abnormal event detection gains more attention so that immediate responses can be provided. Video processing using conventional techniques identifies the events but fails to categorize them. Recently deep learning-based video processing applications provide excellent performances however the architecture considers either spatial or temporal features for event detection. To enhance the detection rate and classification accuracy in abnormal event detection from video keyframes, it is essential to consider both spatial and temporal features. Earlier approaches consider any one of the features from keyframes to detect the anomalies from video frames. However, the results are not accurate and prone to errors sometimes due to video environmental and other factors. Thus, two-stream hybrid deep learning architecture is presented to handle spatial and temporal features in the video anomaly detection process to attain enhanced detection performances. The proposed hybrid models extract spatial features using YOLO-V4 with VGG-16, and temporal features using optical FlowNet with VGG-16. The extracted features are fused and classified using hybrid CNN-LSTM model. Experimentation using benchmark UCF crime dataset validates the proposed model performances over existing anomaly detection methods. The proposed model attains maximum accuracy of 95.6% which indicates better performance compared to state-of-the-art techniques.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures\",\"authors\":\"P. Mangai, M. Geetha, G. Kumaravelan\",\"doi\":\"10.1142/s0219467824500529\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Identifying events using surveillance videos is a major source that reduces crimes and illegal activities. Specifically, abnormal event detection gains more attention so that immediate responses can be provided. Video processing using conventional techniques identifies the events but fails to categorize them. Recently deep learning-based video processing applications provide excellent performances however the architecture considers either spatial or temporal features for event detection. To enhance the detection rate and classification accuracy in abnormal event detection from video keyframes, it is essential to consider both spatial and temporal features. Earlier approaches consider any one of the features from keyframes to detect the anomalies from video frames. However, the results are not accurate and prone to errors sometimes due to video environmental and other factors. Thus, two-stream hybrid deep learning architecture is presented to handle spatial and temporal features in the video anomaly detection process to attain enhanced detection performances. The proposed hybrid models extract spatial features using YOLO-V4 with VGG-16, and temporal features using optical FlowNet with VGG-16. The extracted features are fused and classified using hybrid CNN-LSTM model. 
Experimentation using benchmark UCF crime dataset validates the proposed model performances over existing anomaly detection methods. The proposed model attains maximum accuracy of 95.6% which indicates better performance compared to state-of-the-art techniques.\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0,\"publicationDate\":\"2023-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/s0219467824500529\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s0219467824500529","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Identifying events in surveillance video is a major means of reducing crime and illegal activity. Abnormal event detection in particular has gained attention because it enables immediate responses. Conventional video processing techniques identify events but fail to categorize them. Recent deep learning-based video processing applications deliver excellent performance; however, their architectures consider either spatial or temporal features for event detection, not both. To enhance the detection rate and classification accuracy of abnormal event detection from video keyframes, it is essential to consider both spatial and temporal features. Earlier approaches use only one of these feature types to detect anomalies in video frames, so their results are inaccurate and prone to errors caused by the recording environment and other factors. Thus, a two-stream hybrid deep learning architecture is presented that handles both spatial and temporal features in the video anomaly detection process to attain enhanced detection performance. The proposed hybrid models extract spatial features using YOLO-V4 with VGG-16 and temporal features using optical FlowNet with VGG-16. The extracted features are fused and classified using a hybrid CNN-LSTM model. Experiments on the benchmark UCF-Crime dataset validate the proposed model against existing anomaly detection methods: it attains a maximum accuracy of 95.6%, outperforming state-of-the-art techniques.
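
The abstract outlines the pipeline but not its implementation. Below is a minimal PyTorch sketch of the two-stream fusion idea under stated assumptions: plain VGG-16 backbones from torchvision stand in for the YOLO-V4 and FlowNet front-ends (neither is available in torchvision), and the CNN-LSTM head, hidden size, and 14-class output (13 UCF-Crime anomaly categories plus normal) are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of a two-stream spatial-temporal classifier in the spirit of
# the abstract. NOT the authors' implementation: VGG-16 backbones stand in for
# the YOLO-V4 (spatial) and FlowNet (temporal) front-ends, and all layer sizes
# are assumptions chosen for illustration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class TwoStreamAnomalyClassifier(nn.Module):
    def __init__(self, num_classes: int = 14, hidden: int = 256):
        super().__init__()
        # Spatial stream: the paper feeds YOLO-V4 detections into VGG-16;
        # here a VGG-16 conv stack alone encodes each RGB keyframe (assumption).
        self.spatial = vgg16(weights=None).features
        # Temporal stream: the paper feeds FlowNet optical flow into VGG-16;
        # here a second VGG-16 encodes a 3-channel flow image (assumption).
        self.temporal = vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fuse the two 512-d per-frame descriptors, model the keyframe
        # sequence with an LSTM, and classify the clip.
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # rgb, flow: (batch, time, 3, H, W) keyframes and flow images.
        b, t = rgb.shape[:2]
        s = self.pool(self.spatial(rgb.flatten(0, 1))).flatten(1)    # (b*t, 512)
        m = self.pool(self.temporal(flow.flatten(0, 1))).flatten(1)  # (b*t, 512)
        fused = torch.cat([s, m], dim=1).view(b, t, -1)              # (b, t, 1024)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # clip logits from the last time step

# Toy usage: 2 clips of 8 keyframes at 64x64 resolution.
model = TwoStreamAnomalyClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 14])
```

The design point the sketch captures is fusion before recurrence: per-frame appearance and motion descriptors are concatenated so the LSTM sees one 1024-d vector per keyframe rather than two separate streams, matching the abstract's "fused and classified using a hybrid CNN-LSTM model".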