Deep Temporally Recursive Differencing Network for Anomaly Detection in Videos

Gargi V. Pillai; Debashis Sen

IEEE Transactions on Artificial Intelligence, vol. 6, no. 5, pp. 1414-1428. Published 2024-12-23. DOI: 10.1109/TAI.2024.3521877. Available: https://ieeexplore.ieee.org/document/10812947/
Intelligent video surveillance systems with anomaly detection capabilities are indispensable for outdoor security. Video anomaly detection (VAD) is usually performed by learning patterns that represent normal events and declaring an anomaly when an abnormal pattern is encountered. However, the features of normal patterns in a video often vary with time because real-world videos are non-stationary in nature, and handling this non-stationarity is essential during VAD. To this end, we propose an approach for anomaly detection in videos in which a novel deep temporally recursive differencing network (DDN) diminishes the adverse effects of non-stationarity on VAD. The DDN consists of multiple layers of differencing operators of optimized orders, where every two consecutive layers are separated by a suitable nonlinearity. Spatial and temporal features are extracted from nonoverlapping blocks in video frames and fed to the DDN. While the spatial feature is obtained using a pretrained network, our temporal feature computation uses FlowNetS with a new training strategy that does not require ground truth. The features at the output of the DDN are used in a predictor based on autoregression and a moving average of the regression errors. The predictor's output estimates are then compared to the corresponding actual values for anomaly detection, which also involves block-level selection and a consistency check. Qualitative evaluation and quantitative comparison with several existing approaches on multiple standard datasets demonstrate the effectiveness of the proposed VAD approach. An ablation study highlighting the significance of the various components of our approach and a hyperparameter analysis are also provided.
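The pipeline the abstract describes (layered temporal differencing separated by nonlinearities, then a regression predictor whose errors serve as anomaly scores) can be sketched minimally as follows. This is an illustrative stand-in rather than the paper's implementation: the difference orders, the tanh nonlinearity, the plain least-squares AR(p) predictor, and the synthetic one-dimensional feature signal are all assumptions made for demonstration.

```python
import numpy as np

def differencing_network(x, orders=(1, 2), nonlinearity=np.tanh):
    """Toy temporally recursive differencing stack.

    x: array of shape (T, F), per-frame block features over time.
    Each layer applies an n-th order temporal difference followed by a
    nonlinearity, mimicking the layered structure described in the
    abstract. The orders and tanh are illustrative choices, not the
    paper's optimized values.
    """
    out = x.astype(float)
    for n in orders:
        out = np.diff(out, n=n, axis=0)  # n-th order difference along time
        out = nonlinearity(out)
    return out

def ar_predict_and_score(series, p=3):
    """Fit a simple AR(p) predictor by least squares and return the
    per-step absolute prediction errors as anomaly scores. This is a
    one-dimensional stand-in for the autoregression-plus-moving-average
    predictor used in the paper."""
    T = len(series)
    # Lagged design matrix: row t holds the p values preceding series[t+p].
    X = np.column_stack([series[i:T - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(y - X @ coef)

# Usage: a smooth "normal" signal with one injected spike standing in
# for an anomalous event; the score should peak near the spike.
rng = np.random.default_rng(0)
t = np.arange(200)
feat = np.sin(0.1 * t) + 0.01 * rng.standard_normal(200)
feat[150] += 2.0  # injected anomaly
scores = ar_predict_and_score(differencing_network(feat[:, None]).ravel(), p=3)
```

Each order-n difference shortens the sequence by n samples, so the score index lags the original frame index slightly; a real detector would account for that offset when mapping scores back to frames and blocks.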