A Scalable and Generalised Deep Learning Framework for Anomaly Detection in Surveillance Videos

Impact Factor: 3.7 | CAS Tier 2 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Sabah Abdulazeez Jebur, Laith Alzubaidi, Ahmed Saihood, Khalid A. Hussein, Haider Kadhim Hoomod, YuanTong Gu
{"title":"A Scalable and Generalised Deep Learning Framework for Anomaly Detection in Surveillance Videos","authors":"Sabah Abdulazeez Jebur,&nbsp;Laith Alzubaidi,&nbsp;Ahmed Saihood,&nbsp;Khalid A. Hussein,&nbsp;Haider Kadhim Hoomod,&nbsp;YuanTong Gu","doi":"10.1155/int/1947582","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Anomaly detection in videos is challenging due to the complexity, noise, and diverse nature of activities such as violence, shoplifting, and vandalism. While deep learning (DL) has shown excellent performance in this area, existing approaches have struggled to apply DL models across different anomaly tasks without extensive retraining. This repeated retraining is time-consuming, computationally intensive, and unfair. To address this limitation, a new DL framework is introduced in this study, consisting of three key components: transfer learning to enhance feature generalization, model fusion to improve feature representation, and multitask classification to generalize the classifier across multiple tasks without training from scratch when a new task is introduced. The framework’s main advantage is its ability to generalize without requiring retraining from scratch for each new task. Empirical evaluations demonstrate the framework’s effectiveness, achieving an accuracy of 97.99% on the RLVS (violence detection), 83.59% on the UCF dataset (shoplifting detection), and 88.37% across both datasets using a single classifier without retraining. Additionally, when tested on an unseen dataset, the framework achieved an accuracy of 87.25% and 79.39% on violence and shoplifting datasets, respectively. The study also utilises two explainability tools to identify potential biases, ensuring robustness and fairness. This research represents the first successful resolution of the generalization issue in anomaly detection, marking a significant advancement in the field.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1947582","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/int/1947582","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Anomaly detection in videos is challenging due to the complexity, noise, and diverse nature of activities such as violence, shoplifting, and vandalism. While deep learning (DL) has shown excellent performance in this area, existing approaches have struggled to apply DL models across different anomaly tasks without extensive retraining. This repeated retraining is time-consuming, computationally intensive, and impractical. To address this limitation, a new DL framework is introduced in this study, consisting of three key components: transfer learning to enhance feature generalization, model fusion to improve feature representation, and multitask classification to generalize the classifier across multiple tasks without training from scratch when a new task is introduced. The framework’s main advantage is its ability to generalize without requiring retraining from scratch for each new task. Empirical evaluations demonstrate the framework’s effectiveness, achieving an accuracy of 97.99% on the RLVS dataset (violence detection), 83.59% on the UCF dataset (shoplifting detection), and 88.37% across both datasets using a single classifier without retraining. Additionally, when tested on an unseen dataset, the framework achieved an accuracy of 87.25% and 79.39% on violence and shoplifting datasets, respectively. The study also utilises two explainability tools to identify potential biases, ensuring robustness and fairness. This research represents the first successful resolution of the generalization issue in anomaly detection, marking a significant advancement in the field.
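
The abstract names the three components without implementation detail. The following is a minimal sketch, assuming PyTorch and torchvision, of how transfer learning (frozen pretrained backbones), model fusion (feature concatenation), and a single multitask classification head could be wired together; the backbone choices (ResNet-50, EfficientNet-B0), feature dimensions, fusion strategy, and three-class label set are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch (not the authors' implementation): frozen pretrained
# backbones for transfer learning, concatenation-based feature fusion,
# and one classification head spanning several anomaly tasks.
import torch
import torch.nn as nn
from torchvision import models


class FusedAnomalyClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # 1) Transfer learning: ImageNet-pretrained backbones, kept frozen.
        self.backbone_a = models.resnet50(weights="DEFAULT")
        self.backbone_a.fc = nn.Identity()          # expose 2048-d features
        self.backbone_b = models.efficientnet_b0(weights="DEFAULT")
        self.backbone_b.classifier = nn.Identity()  # expose 1280-d features
        for p in self.backbone_a.parameters():
            p.requires_grad = False
        for p in self.backbone_b.parameters():
            p.requires_grad = False
        # 3) Multitask classification: one head whose label set pools classes
        #    from several anomaly tasks (e.g. normal / violence / shoplifting),
        #    so a new task extends num_classes instead of forcing the feature
        #    extractors to be retrained from scratch.
        self.head = nn.Sequential(
            nn.Linear(2048 + 1280, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # 2) Model fusion: concatenate per-frame features from both backbones.
        feats = torch.cat([self.backbone_a(frames), self.backbone_b(frames)], dim=1)
        return self.head(feats)


if __name__ == "__main__":
    model = FusedAnomalyClassifier(num_classes=3).eval()
    with torch.no_grad():
        logits = model(torch.randn(4, 3, 224, 224))  # 4 frames, 224x224 RGB
    print(logits.shape)  # torch.Size([4, 3])
```

In this sketch only the fused classification head is trainable, which mirrors the generalization claim in the abstract: a single classifier covering both RLVS (violence) and UCF (shoplifting) classes, with new tasks added by extending the label set rather than retraining from scratch.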

Source Journal

International Journal of Intelligent Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Annual articles: 304
Review time: 9 months
Journal description: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis, creation, information retrieval, man-computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.