motif2vec: Semantic-aware Representation Learning for Wearables' Time Series Data

Suwen Lin, Xian Wu, N. Chawla
DOI: 10.1109/DSAA53316.2021.9564120
Published in: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), 2021-10-06
Citations: 1

Abstract

The proliferation of wearable sensors allows for the continuous collection of temporal characterizations of an individual's physical activity and physiological data. This enables an unprecedented opportunity to delve into a deeper analysis of the underlying patterns of such temporal data and to infer attributes associated with health, behaviors, and well-being. However, several challenges remain in fully discovering both structural and temporal patterns (motifs) in these data streams and in leveraging the semantic relationships among these motifs. These include: i) temporal data of variable length and high resolution lead to motifs of various sizes; ii) periodic occurrences and hierarchical overlaps of these motifs further challenge the modeling of their complex structural and semantic relations. We propose a semantic-aware unsupervised representation learning model, motif2vec, to learn the latent representation of time series data collected from wearable sensors. motif2vec consists of three major components: 1) transforming the time series into a set of variable-length motif sequences; 2) formalizing random walks to construct the neighborhood of motifs and thus extract structural and semantic relationships among motifs; 3) learning time series latent features that capture the motif neighborhood structure with a skip-gram model. Experiments on two real-world datasets, derived from two different wearables and population groups, show that motif2vec outperforms six state-of-the-art benchmarks on various tasks.
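The three-stage pipeline the abstract describes (motif sequences → random walks over a motif neighborhood → skip-gram pairs) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width binning stands in for the paper's variable-length motif discovery, the co-occurrence graph and uniform walks are a DeepWalk-style simplification, and all function names (`discretize`, `build_neighborhood`, etc.) are hypothetical.

```python
import random

def discretize(series, n_bins=4):
    """Map each value to a symbolic 'motif' label by equal-width binning.
    (A stand-in for the paper's variable-length motif discovery.)"""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0
    return [f"m{min(int((v - lo) / width), n_bins - 1)}" for v in series]

def build_neighborhood(sequences):
    """Adjacency of motifs that co-occur consecutively in any sequence."""
    graph = {}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return {k: sorted(v) for k, v in graph.items()}

def random_walks(graph, walk_len=5, walks_per_node=2, seed=0):
    """Uniform random walks over the motif graph (DeepWalk-style)."""
    rng = random.Random(seed)
    walks = []
    for start in sorted(graph):
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = graph[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window=2):
    """(center, context) training pairs a skip-gram model would consume."""
    pairs = []
    for walk in walks:
        for i, center in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((center, walk[j]))
    return pairs

# Toy wearable-style signal; in the paper this would be sensor time series.
series = [0.1, 0.3, 0.2, 0.9, 0.8, 0.85, 0.4, 0.5, 0.15]
seqs = [discretize(series)]
graph = build_neighborhood(seqs)
walks = random_walks(graph)
pairs = skipgram_pairs(walks)
print(len(graph), len(walks), len(pairs))
```

The resulting `(center, context)` pairs would then be fed to a skip-gram model (e.g. a standard Word2Vec implementation) to produce the motif embeddings that represent the original time series.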