CARTMAN: Complex Activity Recognition Using Topic Models for Feature Generation from Wearable Sensor Data

kar 2402565399 ku, Walter Gerych, Luke Buquicchio, Abdulaziz Alajaji, E. Agu, Elke A. Rundensteiner
{"title":"从可穿戴传感器数据中使用主题模型生成特征的复杂活动识别","authors":"kar 2402565399 ku, Walter Gerych, Luke Buquicchio, Abdulaziz Alajaji, E. Agu, Elke A. Rundensteiner","doi":"10.1109/SMARTCOMP52413.2021.00026","DOIUrl":null,"url":null,"abstract":"The recognition of complex activities such as \"having dinner\" or \"cooking\" from wearable sensor data is an important problem in various healthcare, security and context-aware mobile and ubiquitous computing applications. In contrast to simple activities such as walking that involve single, indivisible repeated actions, recognizing complex activities such as \"having dinner\" is a harder sub-problem that may be composed of multiple interleaved or concurrent simple activities with different orderings each time. Most of prior work has focused on recognizing simple activities, used hand-crafted features, or did not perform classification using a state-of-the-art neural networks model. In this paper, we propose CARTMAN, a complex activity recognition method that uses Latent Dirichlet allocation (LDA) topic models to generate smartphone sensor features that capture the latent representation of complex activities. These LDA features are then classified using a DeepConvLSTM neural network with self-attention. DeepConvLSTM auto-learns the spatio-temporal features from the sensor data while the self-attention layer identifies and focuses on the predictive points within the time-series sensor data. 
Our CARTMAN approach outperforms the current state-of-the-art complex activity models and baseline models by 6-23% in macro and weighted F1-scores.","PeriodicalId":330785,"journal":{"name":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CARTMAN: Complex Activity Recognition Using Topic Models for Feature Generation from Wearable Sensor Data\",\"authors\":\"kar 2402565399 ku, Walter Gerych, Luke Buquicchio, Abdulaziz Alajaji, E. Agu, Elke A. Rundensteiner\",\"doi\":\"10.1109/SMARTCOMP52413.2021.00026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The recognition of complex activities such as \\\"having dinner\\\" or \\\"cooking\\\" from wearable sensor data is an important problem in various healthcare, security and context-aware mobile and ubiquitous computing applications. In contrast to simple activities such as walking that involve single, indivisible repeated actions, recognizing complex activities such as \\\"having dinner\\\" is a harder sub-problem that may be composed of multiple interleaved or concurrent simple activities with different orderings each time. Most of prior work has focused on recognizing simple activities, used hand-crafted features, or did not perform classification using a state-of-the-art neural networks model. In this paper, we propose CARTMAN, a complex activity recognition method that uses Latent Dirichlet allocation (LDA) topic models to generate smartphone sensor features that capture the latent representation of complex activities. These LDA features are then classified using a DeepConvLSTM neural network with self-attention. 
DeepConvLSTM auto-learns the spatio-temporal features from the sensor data while the self-attention layer identifies and focuses on the predictive points within the time-series sensor data. Our CARTMAN approach outperforms the current state-of-the-art complex activity models and baseline models by 6-23% in macro and weighted F1-scores.\",\"PeriodicalId\":330785,\"journal\":{\"name\":\"2021 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SMARTCOMP52413.2021.00026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP52413.2021.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The recognition of complex activities such as "having dinner" or "cooking" from wearable sensor data is an important problem in various healthcare, security, and context-aware mobile and ubiquitous computing applications. In contrast to simple activities such as walking that involve single, indivisible repeated actions, recognizing complex activities such as "having dinner" is a harder sub-problem: a complex activity may be composed of multiple interleaved or concurrent simple activities with a different ordering each time. Most prior work has focused on recognizing simple activities, used hand-crafted features, or did not perform classification using a state-of-the-art neural network model. In this paper, we propose CARTMAN, a complex activity recognition method that uses Latent Dirichlet Allocation (LDA) topic models to generate smartphone sensor features that capture the latent representation of complex activities. These LDA features are then classified using a DeepConvLSTM neural network with self-attention. DeepConvLSTM auto-learns the spatio-temporal features from the sensor data, while the self-attention layer identifies and focuses on the predictive points within the time-series sensor data. Our CARTMAN approach outperforms the current state-of-the-art complex activity models and baseline models by 6-23% in macro and weighted F1-scores.
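To make the topic-model idea concrete, here is a minimal sketch of how LDA can turn activity windows into feature vectors. It assumes sensor readings have already been discretized into a small vocabulary of "sensor words" (e.g. quantized accelerometer-magnitude bins); the vocabulary size, topic count, hyperparameters, and the toy windows below are all illustrative assumptions, not details from the paper, and the collapsed Gibbs sampler is a generic LDA implementation rather than the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each "document" is one activity window, each token is a
# discretized sensor symbol. All values here are made up for illustration.
V = 8            # vocabulary size (number of discrete sensor symbols)
K = 2            # number of latent topics (roughly, simple sub-activities)
docs = [
    [0, 0, 1, 1, 0, 2],      # window dominated by low-motion symbols
    [5, 6, 5, 7, 6, 6],      # window dominated by high-motion symbols
    [0, 1, 5, 6, 0, 6],      # mixed window (e.g. a complex activity)
]

# Collapsed Gibbs sampling for LDA; alpha and beta are guessed priors.
alpha, beta = 0.5, 0.1
z = [[int(rng.integers(K)) for _ in d] for d in docs]   # topic assignments
ndk = np.zeros((len(docs), K))                          # doc-topic counts
nkw = np.zeros((K, V))                                  # topic-word counts
nk = np.zeros(K)                                        # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):                                    # Gibbs iterations
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # Conditional topic distribution for this token
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

# Per-window topic proportions: the LDA features fed to the classifier.
theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
print(theta.shape)   # one K-dimensional feature vector per window
```

Each window is thereby summarized as a K-dimensional topic mixture, which is a far more compact classifier input than the raw time series.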
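The self-attention step mentioned in the abstract can be sketched as scaled dot-product attention over the per-timestep outputs of the recurrent layer. This is a generic numpy sketch of that mechanism, not the paper's implementation: the window length, feature dimension, and random weight matrices are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 50, 16                     # timesteps per window, feature dimension
H = rng.standard_normal((T, D))   # stand-in for DeepConvLSTM layer outputs

# Learned projections in a real model; random here for illustration.
Wq = rng.standard_normal((D, D))
Wk = rng.standard_normal((D, D))
Wv = rng.standard_normal((D, D))
Q, K, V = H @ Wq, H @ Wk, H @ Wv

scores = Q @ K.T / np.sqrt(D)                  # (T, T) timestep affinities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # softmax over timesteps
context = weights @ V                          # (T, D) attended features

print(context.shape)
```

The softmax weights let the classifier emphasize the most predictive timesteps in the window, which is the role the abstract attributes to the self-attention layer.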