{"title":"基于DCNN的微多普勒特征人体活动识别","authors":"A. Waghumbare, Upasna Singh, Nihit Singhal","doi":"10.1109/IBSSC56953.2022.10037310","DOIUrl":null,"url":null,"abstract":"In recent years, Deep Convolutional Neural Networks (DCNNs) have demonstrated some promising results in classification of micro-Doppler (m-D) radar data in human activity recognition. Compared with camera-based, radar-based human activity recognition is robust to low light conditions, adverse weather conditions, long-range operations, through wall imaging etc. An indigenously developed “DIAT-J.1RADHAR” human activity recognition dataset comprising micro-Doppler signature images of six different activites like (i) person fight punching (boxing) during the one-to-one attack, (ii) person intruding for pre-attack surveillance (army marching), (iii) person training (army jogging), (iv) person shooting (or escaping) with a rifle (jumping with holding a gun), (v) stone/hand-grenade throwing for damage/blasting (stone-pelting/grenades-throwing), and (vi) person hidden translation for attack execution or escape (army crawling and compared performance of this data on various DCNN models. To reduce variations in data, we have cleaned data and make it suitable for DCNN model by using preprocessing methods such as re-scaling, rotation, width shift range, height shift range, sheer range, zoom range and horizontal flip etc. We used different DCNN pre-trained models such as VGG-16, VGG-19, and Inception V3. 
These models are fine-tuned and the resultant models are performing efficiently for human activity recognition in DIAT-μRadHAR human activity dataset.","PeriodicalId":426897,"journal":{"name":"2022 IEEE Bombay Section Signature Conference (IBSSC)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"DCNN Based Human Activity Recognition Using Micro-Doppler Signatures\",\"authors\":\"A. Waghumbare, Upasna Singh, Nihit Singhal\",\"doi\":\"10.1109/IBSSC56953.2022.10037310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, Deep Convolutional Neural Networks (DCNNs) have demonstrated some promising results in classification of micro-Doppler (m-D) radar data in human activity recognition. Compared with camera-based, radar-based human activity recognition is robust to low light conditions, adverse weather conditions, long-range operations, through wall imaging etc. An indigenously developed “DIAT-J.1RADHAR” human activity recognition dataset comprising micro-Doppler signature images of six different activites like (i) person fight punching (boxing) during the one-to-one attack, (ii) person intruding for pre-attack surveillance (army marching), (iii) person training (army jogging), (iv) person shooting (or escaping) with a rifle (jumping with holding a gun), (v) stone/hand-grenade throwing for damage/blasting (stone-pelting/grenades-throwing), and (vi) person hidden translation for attack execution or escape (army crawling and compared performance of this data on various DCNN models. To reduce variations in data, we have cleaned data and make it suitable for DCNN model by using preprocessing methods such as re-scaling, rotation, width shift range, height shift range, sheer range, zoom range and horizontal flip etc. 
We used different DCNN pre-trained models such as VGG-16, VGG-19, and Inception V3. These models are fine-tuned and the resultant models are performing efficiently for human activity recognition in DIAT-μRadHAR human activity dataset.\",\"PeriodicalId\":426897,\"journal\":{\"name\":\"2022 IEEE Bombay Section Signature Conference (IBSSC)\",\"volume\":\"109 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Bombay Section Signature Conference (IBSSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IBSSC56953.2022.10037310\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Bombay Section Signature Conference (IBSSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IBSSC56953.2022.10037310","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DCNN Based Human Activity Recognition Using Micro-Doppler Signatures
In recent years, deep convolutional neural networks (DCNNs) have demonstrated promising results in classifying micro-Doppler (m-D) radar data for human activity recognition. Compared with camera-based approaches, radar-based human activity recognition is robust to low-light conditions, adverse weather, long-range operation, through-wall imaging, etc. We present an indigenously developed human activity recognition dataset, "DIAT-μRadHAR", comprising micro-Doppler signature images of six different activities: (i) punching during a one-to-one attack (boxing), (ii) intruding for pre-attack surveillance (army marching), (iii) training (army jogging), (iv) shooting (or escaping) with a rifle (jumping while holding a gun), (v) stone/hand-grenade throwing for damage or blasting (stone-pelting/grenade-throwing), and (vi) hidden translation for attack execution or escape (army crawling), and we compare the performance of this data on various DCNN models. To reduce variation in the data, we cleaned it and made it suitable for DCNN models using preprocessing methods such as re-scaling, rotation, width shift, height shift, shear, zoom, and horizontal flip. We used pre-trained DCNN models such as VGG-16, VGG-19, and Inception V3. These models were fine-tuned, and the resulting models perform efficiently for human activity recognition on the DIAT-μRadHAR dataset.
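The preprocessing steps the abstract lists (re-scaling, width/height shift, horizontal flip, etc.) correspond to standard image augmentations applied to the spectrogram images. A minimal NumPy sketch of a few of them is shown below; the function name, the fixed shift amounts, and the deterministic flip are illustrative assumptions — a real pipeline would randomize these per sample and add rotation, shear, and zoom.

```python
import numpy as np

def augment_spectrogram(img, rescale=1.0 / 255, width_shift=2,
                        height_shift=2, horizontal_flip=True):
    """Illustrative sketch of a few of the paper's preprocessing steps
    (re-scaling, width/height shift, horizontal flip) on a 2-D
    micro-Doppler spectrogram array. Shifts and the flip are fixed here
    for clarity; a training pipeline would sample them randomly."""
    out = img.astype(np.float32) * rescale          # re-scale intensities to [0, 1]
    out = np.roll(out, shift=width_shift, axis=1)   # width shift (time axis columns)
    out = np.roll(out, shift=height_shift, axis=0)  # height shift (Doppler axis rows)
    if horizontal_flip:
        out = out[:, ::-1]                          # mirror along the time axis
    return out

# Example: a toy 4x4 "spectrogram" of 8-bit intensities
spec = np.arange(16, dtype=np.uint8).reshape(4, 4)
aug = augment_spectrogram(spec)
```

In a Keras-based pipeline these same operations are commonly expressed via `ImageDataGenerator` parameters (`rescale`, `width_shift_range`, `height_shift_range`, `shear_range`, `zoom_range`, `horizontal_flip`), which match the terms used in the abstract.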
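The fine-tuning setup described — a pre-trained DCNN backbone with a new classifier head for the six activity classes — can be sketched in Keras roughly as follows. The head layers, their sizes, and the optimizer are assumptions for illustration, not the paper's reported configuration; `weights=None` is used here only so the sketch runs offline, whereas fine-tuning as in the paper would load `weights="imagenet"`.

```python
import tensorflow as tf

NUM_CLASSES = 6  # the six DIAT-μRadHAR activities

# VGG-16 backbone without its original classifier head. The paper would use
# ImageNet-pretrained weights (weights="imagenet"); weights=None keeps this
# sketch self-contained with no download.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for the first stage

# New classification head for the six activity classes (illustrative sizes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The same pattern applies to VGG-19 (`tf.keras.applications.VGG19`) and Inception V3 (`tf.keras.applications.InceptionV3`); after training the head, some top convolutional layers are typically unfrozen and trained at a low learning rate.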