Data Efficient Video Transformer for Violence Detection

Al.maamoon Rasool Abdali
DOI: 10.1109/COMNETSAT53002.2021.9530829
Published in: 2021 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT)
Publication date: 2021-07-17
Citation count: 7

Abstract

In smart cities, violence event detection is critical to ensuring city safety. Several studies have addressed this topic, typically using a 2d-Convolutional Neural Network (2d-CNN) to extract spatial features from each frame, followed by a Recurrent Neural Network (RNN) variant to learn temporal features. Transformer networks, on the other hand, have achieved strong results in many areas, but their bottleneck is the need for large datasets to perform well. In this work, we propose a data-efficient video transformer (DeVTr): a transformer-based spatio-temporal learning method that uses a pre-trained 2d-CNN as an embedding layer for the input data. The model was trained and tested on the Real-life violence dataset (RLVS) and achieved an accuracy of 96.25%. A comparison with previous techniques shows that the proposed method yields the best result among prior studies on violence event detection.
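The data flow the abstract describes (a pre-trained 2d-CNN embedding each frame, then a transformer attending over the clip's frame embeddings before classification) can be sketched as follows. This is a minimal NumPy illustration of the tensor shapes involved, not the authors' implementation: the CNN backbone is stood in for by a fixed random projection, and the transformer by a single untrained self-attention layer; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W, C = 16, 224, 224, 3   # clip: 16 RGB frames (illustrative)
d_model = 64                   # embedding width (illustrative)

def cnn_embed(frames):
    """Stand-in for the pre-trained 2d-CNN embedding layer: maps each
    frame to a d_model-dim vector (here, a fixed random projection of
    the flattened pixels)."""
    flat = frames.reshape(frames.shape[0], -1)              # (T, H*W*C)
    w_proj = rng.standard_normal((flat.shape[1], d_model)) * 0.01
    return flat @ w_proj                                    # (T, d_model)

def self_attention(x):
    """One self-attention layer: each frame embedding attends over the
    whole clip, mixing in temporal context."""
    q, k, v = x, x, x                                       # untrained: shared projections
    scores = q @ k.T / np.sqrt(x.shape[1])                  # (T, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)           # row-wise softmax
    return weights @ v                                      # (T, d_model)

def classify(x):
    """Mean-pool over time, then a binary violence/non-violence head."""
    pooled = x.mean(axis=0)                                 # (d_model,)
    w = rng.standard_normal(d_model) * 0.01
    logit = pooled @ w
    return 1.0 / (1.0 + np.exp(-logit))                     # P(violence)

clip = rng.random((T, H, W, C)).astype(np.float32)
emb = cnn_embed(clip)        # per-frame spatial embeddings
ctx = self_attention(emb)    # temporally mixed embeddings
p = classify(ctx)
print(emb.shape, ctx.shape, float(p))
```

The key design point the paper exploits is that the spatial half of the problem is delegated to a backbone already trained on large image corpora, so the transformer only has to learn temporal relations, reducing the video data it needs.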