Multi-DGI: Multi-head Pooling Deep Graph Infomax for Human Activity Recognition

Yifan Chen, Haiqi Zhu, Zhiyuan Chen
{"title":"Multi-DGI: Multi-head Pooling Deep Graph Infomax for Human Activity Recognition","authors":"Yifan Chen, Haiqi Zhu, Zhiyuan Chen","doi":"10.1007/s11036-024-02306-y","DOIUrl":null,"url":null,"abstract":"<p>Human Activity Recognition (HAR) is a crucial research domain with substantial real-world implications. Despite the extensive application of machine learning techniques in various domains, most traditional models neglect the inherent spatio-temporal relationships within time-series data. To address this limitation, we propose an unsupervised Graph Representation Learning (GRL) model named Multi-head Pooling Deep Graph Infomax (Multi-DGI), which is applied to reveal the spatio-temporal patterns from the graph-structured HAR data. By employing an adaptive Multi-head Pooling mechanism, Multi-DGI captures comprehensive graph summaries, furnishing general embeddings for downstream classifiers, thereby reducing dependence on graph constructions. Using the UCI WISDM dataset and three basic graph construction methods, Multi-DGI delivers a minimum enhancement of 2.9%, 1.0%, 7.5%, and 6.4% in Accuracy, Precision, Recall, and Macro-F1 scores, respectively. The demonstrated robustness of Multi-DGI in extracting intricate patterns from rudimentary graphs reduces the dependence of GRL on high-quality graphs, thereby broadening its applicability in time-series analysis. Our code and data are available at https://github.com/AnguoCYF/Multi-DGI.</p>","PeriodicalId":501103,"journal":{"name":"Mobile Networks and Applications","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mobile Networks and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11036-024-02306-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Human Activity Recognition (HAR) is a crucial research domain with substantial real-world implications. Despite the extensive application of machine learning techniques across domains, most traditional models neglect the inherent spatio-temporal relationships within time-series data. To address this limitation, we propose an unsupervised Graph Representation Learning (GRL) model named Multi-head Pooling Deep Graph Infomax (Multi-DGI), which reveals spatio-temporal patterns in graph-structured HAR data. By employing an adaptive Multi-head Pooling mechanism, Multi-DGI captures comprehensive graph summaries that furnish general-purpose embeddings for downstream classifiers, thereby reducing dependence on the choice of graph construction. Using the UCI WISDM dataset and three basic graph construction methods, Multi-DGI delivers minimum improvements of 2.9%, 1.0%, 7.5%, and 6.4% in Accuracy, Precision, Recall, and Macro-F1, respectively. The demonstrated robustness of Multi-DGI in extracting intricate patterns from rudimentary graphs reduces GRL's dependence on high-quality graphs, thereby broadening its applicability in time-series analysis. Our code and data are available at https://github.com/AnguoCYF/Multi-DGI.
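The abstract does not spell out the architecture, so the following is a minimal PyTorch sketch of the two components the method name implies: a multi-head attention-pooling readout that aggregates node embeddings into a graph summary, and the bilinear discriminator from standard Deep Graph Infomax that contrasts real and corrupted nodes against that summary. The class names, head count, and attention parameterization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiHeadPoolingReadout(nn.Module):
    """Hypothetical multi-head pooling readout: each head scores nodes with
    its own attention vector, pools a weighted summary, and the head
    summaries are averaged into a single graph embedding."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_heads))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) node embeddings from a GNN encoder.
        summaries = []
        for head in self.heads:
            alpha = torch.softmax(head(h), dim=0)      # attention over nodes
            summaries.append((alpha * h).sum(dim=0))   # weighted pooling
        return torch.sigmoid(torch.stack(summaries).mean(dim=0))

class DGIDiscriminator(nn.Module):
    """Bilinear discriminator scoring (node, summary) pairs, as in DGI."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, h: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        return h @ self.weight @ s   # (num_nodes,) logits

# Usage sketch: positives are real node embeddings, negatives come from a
# corrupted graph (here, row-shuffled features), following the DGI recipe.
h_pos = torch.randn(32, 64)                  # stand-in for GNN encoder output
h_neg = h_pos[torch.randperm(32)]
readout, disc = MultiHeadPoolingReadout(64), DGIDiscriminator(64)
s = readout(h_pos)
logits = torch.cat([disc(h_pos, s), disc(h_neg, s)])
labels = torch.cat([torch.ones(32), torch.zeros(32)])
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
```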

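The abstract mentions three basic graph construction methods for the WISDM time series but does not name them. As one plausible example of such a construction, the sketch below builds a k-nearest-neighbour graph over flattened sensor windows; the function and its parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def knn_graph_from_windows(windows: np.ndarray, k: int = 5) -> np.ndarray:
    """Build a symmetric k-NN adjacency matrix over time-series windows.

    windows: (num_windows, window_len * num_channels) flattened segments,
    e.g. fixed-length slices of 3-axis accelerometer readings.
    """
    diff = windows[:, None, :] - windows[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)             # exclude self-loops
    adj = np.zeros_like(dist)
    idx = np.argsort(dist, axis=1)[:, :k]      # k nearest neighbours per node
    rows = np.repeat(np.arange(len(windows)), k)
    adj[rows, idx.ravel()] = 1.0
    return np.maximum(adj, adj.T)              # symmetrize

# Example: 200 windows of 50 samples x 3 axes -> a 200-node graph.
windows = np.random.randn(200, 50 * 3)
adjacency = knn_graph_from_windows(windows, k=5)
```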
