Amit Kumar, Kristina Yordanova, T. Kirste, Mohit Kumar
{"title":"Combining off-the-shelf Image Classifiers with Transfer Learning for Activity Recognition","authors":"Amit Kumar, Kristina Yordanova, T. Kirste, Mohit Kumar","doi":"10.1145/3266157.3266219","DOIUrl":null,"url":null,"abstract":"Human Activity Recognition (HAR) plays an important role in many real world applications. Currently, various techniques have been proposed for sensor-based \"HAR\" in daily health monitoring, rehabilitative training and disease prevention. However, non-visual sensors in general and wearable sensors in specific have several limitations: acceptability and willingness to use wearable sensors; battery life; ease of use; size and effectiveness of the sensors. Therefore, adopting vision-based human activity recognition approach is more viable option since its diversity would enable the application to be deployed in wide range of domains. The most popular technique of vision based activity recognition, Deep Learning, however, requires huge domain-specific datasets for training which, is time consuming and expensive. To address this problem this paper proposes a Transfer Learning technique by adopting vision-based approach to \"HAR\" by using already trained Deep Learning models. A new stochastic model is developed by borrowing the concept of \"Dirichlet Alloaction\" from Latent Dirichlet Allocation (LDA) for an inference of the posterior distribution of the variables relating the deep learning classifiers predicted labels with the corresponding activities. 
Results show that an average accuracy of 95.43% is achieved during training the model as compared to 74.88 and 61.4% of Decision Tree and SVM respectively.","PeriodicalId":151070,"journal":{"name":"Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2018-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3266157.3266219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Human Activity Recognition (HAR) plays an important role in many real-world applications. Various techniques have been proposed for sensor-based HAR in daily health monitoring, rehabilitative training, and disease prevention. However, non-visual sensors in general, and wearable sensors in particular, have several limitations: user acceptance and willingness to wear the sensors, battery life, ease of use, and the size and effectiveness of the sensors. A vision-based approach to human activity recognition is therefore a more viable option, since its versatility allows applications to be deployed across a wide range of domains. Deep Learning, the most popular technique for vision-based activity recognition, however, requires huge domain-specific datasets for training, which are time-consuming and expensive to collect. To address this problem, this paper proposes a Transfer Learning technique that applies already trained Deep Learning models to vision-based HAR. A new stochastic model is developed by borrowing the concept of Dirichlet allocation from Latent Dirichlet Allocation (LDA) to infer the posterior distribution of the variables relating the labels predicted by the deep learning classifiers to the corresponding activities. Results show that an average accuracy of 95.43% is achieved during training of the model, compared to 74.88% and 61.4% for Decision Tree and SVM respectively.
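The core idea of relating a pretrained classifier's predicted labels to activities through a Dirichlet-style posterior can be illustrated with a minimal sketch. This is a hypothetical toy model, not the authors' implementation: the activity names, label names, and co-occurrence counts are made up, and a simple Dirichlet-multinomial posterior with a uniform activity prior stands in for the paper's full stochastic model.

```python
import numpy as np

# Hypothetical example: relate labels emitted by an off-the-shelf image
# classifier to activities via a Dirichlet-multinomial model, in the
# spirit of the Dirichlet-allocation idea borrowed from LDA.

activities = ["cooking", "cleaning"]
labels = ["frying pan", "stove", "broom", "bucket"]

# Made-up training counts: how often each classifier label co-occurred
# with each activity (rows: activities, columns: labels).
counts = np.array([[30.0, 25.0, 2.0, 1.0],
                   [1.0, 2.0, 28.0, 22.0]])

alpha = 0.1  # symmetric Dirichlet prior over labels per activity

# Posterior mean of P(label | activity) under the Dirichlet prior.
theta = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

def infer_activity(observed_labels, prior=None):
    """Posterior P(activity | observed labels), uniform activity prior by default."""
    if prior is None:
        prior = np.full(len(activities), 1.0 / len(activities))
    log_post = np.log(prior)
    for lbl in observed_labels:
        log_post += np.log(theta[:, labels.index(lbl)])
    post = np.exp(log_post - log_post.max())  # normalize in a stable way
    return post / post.sum()

# Labels predicted by the pretrained classifier on new frames:
post = infer_activity(["frying pan", "stove"])
print(dict(zip(activities, np.round(post, 3))))
```

With these toy counts, frames labelled "frying pan" and "stove" yield a posterior heavily concentrated on "cooking"; the Dirichlet prior alpha smooths the estimates so that labels never seen with an activity still receive non-zero probability.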