Disentangled Deep Multivariate Hawkes Process for Learning Event Sequences

Xixun Lin, Jiangxia Cao, Peng Zhang, Chuan Zhou, Zhao Li, Jia Wu, Bin Wang
DOI: 10.1109/ICDM51629.2021.00047
Published in: 2021 IEEE International Conference on Data Mining (ICDM), December 2021
Citations: 3

Abstract

Multivariate Hawkes processes (MHPs) are classic methods for learning temporal patterns in event sequences of different entities. Traditional MHPs with explicit parametric intensity functions lend themselves to model interpretability. However, recent Deep MHPs, which employ various variants of recurrent neural networks, are hard to interpret, albeit more expressive for event sequences. The lack of model interpretability of Deep MHPs leads to a limited comprehension of the complicated dynamics between events. To this end, we present a new Disentangled Deep Multivariate Hawkes Process (D$^{2}$MHP) to enhance model expressiveness while maintaining model interpretability. D$^{2}$MHP achieves state disentanglement by disentangling the latent representation of an event sequence into static and dynamic latent variables, and matches these latent variables to interpretable factors in the intensity function. Moreover, considering that an entity typically has multiple identities, D$^{2}$MHP further splits these latent variables into factorized representations, each of which is associated with a corresponding identity. Experiments on real-world datasets show that D$^{2}$MHP yields significant and consistent improvements over state-of-the-art baselines. We also demonstrate model interpretability via detailed analysis.
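The "explicit parametric intensity functions" of traditional MHPs referred to in the abstract are commonly written with exponential excitation kernels. The following is a minimal sketch of that classic parametric form, not of the paper's D$^{2}$MHP model; the variable names, the exponential kernel choice, and the shared scalar decay rate `beta` are illustrative assumptions.

```python
import numpy as np

def mhp_intensity(t, history, mu, alpha, beta):
    """Conditional intensity of a classic parametric multivariate Hawkes
    process with exponential kernels:

        lambda_i(t) = mu_i + sum_{(t_j, u_j) in history, t_j < t}
                          alpha[i, u_j] * exp(-beta * (t - t_j))

    t       : current time
    history : list of (event_time, event_type) pairs observed so far
    mu      : base rates, shape (K,) for K event types
    alpha   : excitation matrix, alpha[i, j] = influence of type j on type i
    beta    : scalar decay rate of the excitation
    """
    lam = mu.astype(float).copy()          # start from the base rates
    for t_j, u_j in history:
        if t_j < t:                        # only past events excite lambda(t)
            lam += alpha[:, u_j] * np.exp(-beta * (t - t_j))
    return lam
```

Each entry of `alpha` is a directly readable quantity (how strongly one event type excites another), which is what the abstract means by these models being "friendly to model interpretability"; D$^{2}$MHP instead aims to attach comparable interpretable factors to a learned latent representation.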