Disentangled Deep Multivariate Hawkes Process for Learning Event Sequences
Xixun Lin, Jiangxia Cao, Peng Zhang, Chuan Zhou, Zhao Li, Jia Wu, Bin Wang
{"title":"事件序列学习的解纠缠深度多元Hawkes过程","authors":"Xixun Lin, Jiangxia Cao, Peng Zhang, Chuan Zhou, Zhao Li, Jia Wu, Bin Wang","doi":"10.1109/ICDM51629.2021.00047","DOIUrl":null,"url":null,"abstract":"Multivariate Hawkes processes (MHPs) are classic methods to learn temporal patterns in event sequences of different entities. Traditional MHPs with explicit parametric intensity functions are friendly to model interpretability. However, recent Deep MHPs which employ various variants of recurrent neural networks are hardly to understand, albeit more expressive towards event sequences. The lack of model interpretability of Deep MHPs leads to a limited comprehension of complicated dynamics between events. To this end, we present a new Disentangled Deep Multivariate Hawkes Process $(\\mathrm{D}^{2}$ MHP) to enhance model expressiveness and meanwhile maintain model interpretability. $\\mathrm{D}^{2}$ MHP achieves state disentanglement by disentangling the latent representation of an event sequence into static and dynamic latent variables, and matches these latent variables to interpretable factors in the intensity function. Moreover, considering that an entity typically has multiple identities, $\\mathrm{D}^{2}$ MHP further splits these latent variables into factorized representations, each of which is associated with a corresponding identity. Experiments on real-world datasets show that $\\mathrm{D}^{2}$ MHP yields significant and consistent improvements over state-of-the-art baselines. We also demonstrate model interpretability via the detailed analysis.","PeriodicalId":320970,"journal":{"name":"2021 IEEE International Conference on Data Mining (ICDM)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Disentangled Deep Multivariate Hawkes Process for Learning Event Sequences\",\"authors\":\"Xixun Lin, Jiangxia Cao, Peng Zhang, Chuan Zhou, Zhao Li, Jia Wu, Bin Wang\",\"doi\":\"10.1109/ICDM51629.2021.00047\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multivariate Hawkes processes (MHPs) are classic methods to learn temporal patterns in event sequences of different entities. Traditional MHPs with explicit parametric intensity functions are friendly to model interpretability. However, recent Deep MHPs which employ various variants of recurrent neural networks are hardly to understand, albeit more expressive towards event sequences. The lack of model interpretability of Deep MHPs leads to a limited comprehension of complicated dynamics between events. To this end, we present a new Disentangled Deep Multivariate Hawkes Process $(\\\\mathrm{D}^{2}$ MHP) to enhance model expressiveness and meanwhile maintain model interpretability. $\\\\mathrm{D}^{2}$ MHP achieves state disentanglement by disentangling the latent representation of an event sequence into static and dynamic latent variables, and matches these latent variables to interpretable factors in the intensity function. Moreover, considering that an entity typically has multiple identities, $\\\\mathrm{D}^{2}$ MHP further splits these latent variables into factorized representations, each of which is associated with a corresponding identity. Experiments on real-world datasets show that $\\\\mathrm{D}^{2}$ MHP yields significant and consistent improvements over state-of-the-art baselines. 
We also demonstrate model interpretability via the detailed analysis.\",\"PeriodicalId\":320970,\"journal\":{\"name\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM51629.2021.00047\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Data Mining (ICDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM51629.2021.00047","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Disentangled Deep Multivariate Hawkes Process for Learning Event Sequences
Multivariate Hawkes processes (MHPs) are classic methods for learning temporal patterns in the event sequences of different entities. Traditional MHPs with explicit parametric intensity functions lend themselves to model interpretability. However, recent Deep MHPs, which employ various recurrent neural network variants, are hard to interpret, albeit more expressive on event sequences. This lack of interpretability in Deep MHPs limits our comprehension of the complicated dynamics between events. To this end, we present a new Disentangled Deep Multivariate Hawkes Process ($\mathrm{D}^2$MHP) that enhances model expressiveness while maintaining model interpretability. $\mathrm{D}^2$MHP achieves state disentanglement by factoring the latent representation of an event sequence into static and dynamic latent variables, and matches these latent variables to interpretable factors in the intensity function. Moreover, considering that an entity typically has multiple identities, $\mathrm{D}^2$MHP further splits these latent variables into factorized representations, each associated with a corresponding identity. Experiments on real-world datasets show that $\mathrm{D}^2$MHP yields significant and consistent improvements over state-of-the-art baselines. We also demonstrate model interpretability through detailed analysis.
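For readers unfamiliar with the classic formulation the abstract contrasts against, here is a minimal sketch of a traditional parametric MHP intensity with an exponential excitation kernel, $\lambda_i(t) = \mu_i + \sum_{t_k < t} \alpha_{i,e_k} e^{-\beta (t - t_k)}$. This illustrates the standard model only, not the paper's $\mathrm{D}^2$MHP; the names `mu`, `alpha`, and `beta` are illustrative, not taken from the paper.

```python
import numpy as np

def hawkes_intensity(t, history, mu, alpha, beta):
    """Classic exponential-kernel multivariate Hawkes intensity.

    lambda_i(t) = mu[i] + sum over past events (t_k, e_k) of
                  alpha[i, e_k] * exp(-beta * (t - t_k))
    """
    lam = mu.copy()  # base rates of each event type
    for t_k, e_k in history:
        if t_k < t:
            # Each past event of type e_k excites all types, decaying over time.
            lam += alpha[:, e_k] * np.exp(-beta * (t - t_k))
    return lam

# Toy usage: 2 event types, history of (timestamp, type) pairs.
mu = np.array([0.1, 0.2])        # base rates
alpha = np.array([[0.5, 0.1],    # alpha[i, j]: how much type j excites type i
                  [0.2, 0.3]])
beta = 1.0                       # decay rate
history = [(0.5, 0), (1.2, 1)]
print(hawkes_intensity(2.0, history, mu, alpha, beta))
```

Because the parameters $\mu$ and $\alpha$ directly encode base rates and cross-type excitation, this parametric form is easy to interpret; the interpretable factors that $\mathrm{D}^2$MHP matches its latent variables to live in this kind of intensity function.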