An Approach to Recognize Human Activities based on ConvLSTM and LRCN
Shradha Bhatia, Tushar Chauhan, Sumita Gupta, S. Gambhir, Jitesh H. Panchal
2023 6th International Conference on Information Systems and Computer Networks (ISCON), 2023-03-03
DOI: 10.1109/ISCON57294.2023.10112060
Abstract
In recent years, approaches based on deep learning (DL) have been used effectively to predict a variety of human actions from time-series data collected by smartphones and wearable sensors. Although DL-based techniques perform well in activity detection, handling time-series data remains a challenge for them. Traditional pattern recognition techniques have also advanced significantly in recent years, but their heavy reliance on hand-crafted feature extraction can limit the generalization performance of the resulting models. As deep learning methods become increasingly successful, applying them to understand human behaviour in mobile and wearable computing settings, or through vision-based technologies, has attracted considerable interest. In this research we employ two machine learning methods: ConvLSTM and LRCN, the latter being a combination of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). A CNN-LSTM makes it possible to predict human actions more accurately while simplifying the model and removing the need for sophisticated feature engineering; the resulting network is deep in both space and time. Comparing the performance of all the models used in this paper against one another, the LRCN model achieves 92% accuracy.
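To illustrate the LRCN idea described above (per-frame CNN features followed by an LSTM over the frame sequence), the following is a minimal sketch in Keras. The input shape, layer widths, and number of activity classes are illustrative assumptions, not the configuration reported in the paper; a ConvLSTM variant would instead use Keras's ConvLSTM2D layer in place of the TimeDistributed CNN + LSTM stack.

```python
# Hypothetical LRCN-style sketch: a small CNN applied to every frame via
# TimeDistributed, with an LSTM modelling the temporal dependencies across
# the per-frame feature vectors. All sizes below are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQUENCE_LENGTH = 20   # frames per clip (assumed)
FRAME_HEIGHT = 64      # resized frame height (assumed)
FRAME_WIDTH = 64       # resized frame width (assumed)
NUM_CLASSES = 4        # number of activity classes (assumed)

def build_lrcn():
    model = models.Sequential([
        # CNN applied independently to each frame in the sequence
        layers.TimeDistributed(
            layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
            input_shape=(SEQUENCE_LENGTH, FRAME_HEIGHT, FRAME_WIDTH, 3)),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(
            layers.Conv2D(32, (3, 3), padding="same", activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D((4, 4))),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM aggregates the per-frame features over time
        layers.LSTM(32),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_lrcn().summary()
```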