Authors: kar 2402565399 ku, Walter Gerych, Luke Buquicchio, Abdulaziz Alajaji, E. Agu, Elke A. Rundensteiner
Venue: 2021 IEEE International Conference on Smart Computing (SMARTCOMP), published 2021-08-01
DOI: 10.1109/SMARTCOMP52413.2021.00026
CARTMAN: Complex Activity Recognition Using Topic Models for Feature Generation from Wearable Sensor Data
The recognition of complex activities such as "having dinner" or "cooking" from wearable sensor data is an important problem in various healthcare, security, and context-aware mobile and ubiquitous computing applications. In contrast to simple activities such as walking, which involve single, indivisible repeated actions, recognizing complex activities such as "having dinner" is a harder sub-problem: a complex activity may be composed of multiple interleaved or concurrent simple activities with different orderings each time. Most prior work has focused on recognizing simple activities, used hand-crafted features, or did not perform classification using a state-of-the-art neural network model. In this paper, we propose CARTMAN, a complex activity recognition method that uses Latent Dirichlet Allocation (LDA) topic models to generate smartphone sensor features that capture the latent representation of complex activities. These LDA features are then classified using a DeepConvLSTM neural network with self-attention. DeepConvLSTM auto-learns the spatio-temporal features from the sensor data, while the self-attention layer identifies and focuses on the predictive points within the time-series sensor data. Our CARTMAN approach outperforms the current state-of-the-art complex activity models and baseline models by 6-23% in macro and weighted F1-scores.
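The abstract's core idea, using an LDA topic model to turn windows of sensor data into latent-topic features for a downstream classifier, can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual pipeline: the random count matrix stands in for sensor windows that have been quantized into a symbolic "vocabulary" (e.g. clustered accelerometer patterns), and the vocabulary size, window count, and number of topics are arbitrary.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 100 sensor windows, each summarized as counts
# over a 50-symbol "vocabulary" of quantized sensor patterns. In a real
# pipeline these counts would come from discretizing the raw signal.
window_counts = rng.integers(0, 5, size=(100, 50))

# Fit an LDA topic model: each topic is a distribution over symbols, meant
# to capture co-occurring simple-activity patterns within a complex activity.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_features = lda.fit_transform(window_counts)

# Each row is a probability distribution over the 8 topics; these vectors
# are the per-window features handed to the downstream classifier
# (a DeepConvLSTM with self-attention in the paper).
print(topic_features.shape)                      # (100, 8)
print(np.allclose(topic_features.sum(axis=1), 1))  # rows sum to 1
```

The design point is that the topic mixture is a compact, order-insensitive summary of which simple-activity patterns co-occur in a window, which is exactly the kind of representation the abstract argues complex activities need.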