Authors: Ajay Waghumbare, Upasna Singh
Journal: Automatic Control and Computer Sciences, Vol. 59, No. 2, pp. 255–265
Published: July 4, 2025
DOI: 10.3103/S014641162570021X
URL: https://link.springer.com/article/10.3103/S014641162570021X
DIAT-DSCNN-GRU-HARNet: A Lightweight DCNN for Video Based Classification of Human Activities
Research in computer vision and pattern recognition focuses on detecting and classifying human actions in videos. Using standard convolution for spatial feature extraction introduces a large number of parameters, which leads to lower performance, overfitting, slow training, and poor prediction. For temporal feature extraction, recurrent neural networks (RNNs) are commonly used, but they suffer from the vanishing-gradient problem; the long short-term memory (LSTM) variant mitigates this at the cost of high computational complexity. To address these issues, a lightweight convolutional neural network model named "DIAT-DSCNN-GRU-HARNet" is proposed. The model classifies human activities in videos using separable convolution, dilated convolution, and a gated recurrent unit (GRU), balancing parameter count, model size, and floating-point operations. We conducted in-depth experiments on realistic videos from the UCF-ARG-Aerial, UCF-ARG-Ground, and HON4D datasets, comparing results with other approaches to demonstrate the effectiveness of the proposed technique.
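The efficiency argument in the abstract comes down to simple parameter counting: a depthwise-separable convolution splits a standard convolution into a per-channel spatial filter plus a 1×1 pointwise mix, a dilated convolution enlarges the receptive field without adding weights, and a GRU uses three gates where an LSTM uses four. The sketch below illustrates this arithmetic; the channel, kernel, and hidden sizes are assumed for illustration and are not the paper's actual configuration.

```python
# Illustrative parameter counts for the building blocks named in the abstract.
# All layer sizes here are assumptions, not the DIAT-DSCNN-GRU-HARNet config.

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

def dilated_receptive_field(k, d):
    """Effective receptive field of a k x k kernel with dilation d:
    same weight count as the dense kernel, larger spatial coverage."""
    return k + (k - 1) * (d - 1)

def gru_params(x, h):
    """GRU: 3 gates (update, reset, candidate), each with input
    weights (h*x), recurrent weights (h*h), and a bias (h)."""
    return 3 * (h * x + h * h + h)

def lstm_params(x, h):
    """LSTM: 4 gates (input, forget, cell, output) of the same shape."""
    return 4 * (h * x + h * h + h)

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3
    std = standard_conv_params(c_in, c_out, k)    # 294912
    sep = separable_conv_params(c_in, c_out, k)   # 33920
    print(f"standard {std} vs separable {sep} ({std / sep:.1f}x fewer)")
    print(f"3x3 kernel, dilation 2 -> {dilated_receptive_field(3, 2)}x"
          f"{dilated_receptive_field(3, 2)} receptive field")
    x, h = 256, 128
    print(f"GRU {gru_params(x, h)} vs LSTM {lstm_params(x, h)} parameters")
```

For these assumed sizes the separable layer uses roughly 8.7× fewer weights than the dense one, and the GRU carries 3/4 of the LSTM's parameters, which is the kind of trade-off the paper weighs against model size and floating-point operations.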
Journal description:
Automatic Control and Computer Sciences is a peer-reviewed journal that publishes articles on:
• Control systems, cyber-physical systems, real-time systems, robotics, smart sensors, embedded intelligence
• Network information technologies, information security, statistical methods of data processing, distributed artificial intelligence, complex systems modeling, knowledge representation, processing and management
• Signal and image processing, machine learning, machine perception, computer vision