Fixed-lag particle filter for continuous context discovery using Indian Buffet Process

Nguyen Cong Thuong, S. Gupta, S. Venkatesh, Dinh Q. Phung
{"title":"Fixed-lag particle filter for continuous context discovery using Indian Buffet Process","authors":"Nguyen Cong Thuong, S. Gupta, S. Venkatesh, Dinh Q. Phung","doi":"10.1109/PERCOM.2014.6813939","DOIUrl":null,"url":null,"abstract":"Exploiting context from stream data in pervasive environments remains a challenge. We aim to extract proximal context from Bluetooth stream data, using an incremental, Bayesian nonparametric framework that estimates the number of contexts automatically. Unlike current approaches that can only provide final proximal grouping, our method provides proximal grouping and membership of users over time. Additionally, it provides an efficient online inference. We construct co-location matrix over time using Bluetooth data. A Poisson-exponential model is used to factorize this matrix into a factor matrix, interpreted as proximal groups, and a coefficient matrix that indicates factor usage. The coefficient matrix follows the Indian Buffet Process prior, which estimates the number of factors automatically. The non-negativity and sparsity of factors are enforced by using the exponential distribution to generate the factors. We propose a fixed-lag particle filter algorithm to process data incrementally. We compare the incremental inference (particle filter) with full batch inference (Gibbs sampling) in terms of normalized factorization error and execution time. The normalized error obtained through our incremental inference is comparable to that of full batch inference, whilst the execution time is more than 100 times faster. The discovered factors have similar meaning to the results of the popular Louvain method for community detection.","PeriodicalId":263520,"journal":{"name":"2014 IEEE International Conference on Pervasive Computing and Communications (PerCom)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Conference on Pervasive Computing and Communications (PerCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PERCOM.2014.6813939","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Exploiting context from stream data in pervasive environments remains a challenge. We aim to extract proximal context from Bluetooth stream data using an incremental, Bayesian nonparametric framework that estimates the number of contexts automatically. Unlike current approaches that can only provide a final proximal grouping, our method provides proximal grouping and membership of users over time. Additionally, it provides efficient online inference. We construct a co-location matrix over time using Bluetooth data. A Poisson-exponential model is used to factorize this matrix into a factor matrix, interpreted as proximal groups, and a coefficient matrix that indicates factor usage. The coefficient matrix follows the Indian Buffet Process prior, which estimates the number of factors automatically. Non-negativity and sparsity of the factors are enforced by generating them from an exponential distribution. We propose a fixed-lag particle filter algorithm to process data incrementally. We compare the incremental inference (particle filter) with full batch inference (Gibbs sampling) in terms of normalized factorization error and execution time. The normalized error obtained through our incremental inference is comparable to that of full batch inference, whilst the execution time is more than 100 times faster. The discovered factors are similar in meaning to those obtained with the popular Louvain method for community detection.
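The abstract describes a Poisson-exponential factorization of the co-location matrix, with an Indian Buffet Process (IBP) prior on a binary coefficient matrix and exponentially distributed non-negative factors. Below is a minimal Python sketch of that generative structure only (not the authors' implementation); the IBP sampler, matrix sizes, hyperparameters (alpha, lam), and the normalized-error expression are illustrative assumptions.

```python
# Sketch of the generative model implied by the abstract:
# X ~ Poisson(Z W), Z binary with an IBP prior (number of factors unbounded),
# W non-negative and sparse with exponential entries.
# All names, sizes, and hyperparameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sample_ibp(n_rows, alpha, rng):
    """Draw a binary matrix from the Indian Buffet Process with concentration alpha."""
    n_new = rng.poisson(alpha)                    # dishes taken by the first customer
    counts = np.ones(n_new)                       # popularity of each existing dish
    rows = [np.ones(n_new, dtype=int)]
    for i in range(2, n_rows + 1):
        old = (rng.random(counts.shape[0]) < counts / i).astype(int)  # revisit popular dishes
        extra = rng.poisson(alpha / i)            # sample some brand-new dishes
        counts = np.concatenate([counts + old, np.ones(extra)])
        rows = [np.concatenate([r, np.zeros(extra, dtype=int)]) for r in rows]
        rows.append(np.concatenate([old, np.ones(extra, dtype=int)]))
    return np.vstack(rows)

# Toy setting: N users, X is an N x N co-location count matrix for one time window.
N, alpha, lam = 10, 2.0, 1.0                      # assumed sizes/hyperparameters
Z = sample_ibp(N, alpha, rng)                     # N x K binary coefficient matrix (K inferred)
K = Z.shape[1]
W = rng.exponential(scale=1.0 / lam, size=(K, N)) # non-negative, sparse factor matrix
X = rng.poisson(Z @ W)                            # observed co-location counts

# Normalized factorization error (the exact definition used in the paper is assumed here).
err = np.linalg.norm(X - Z @ W) / max(np.linalg.norm(X), 1e-12)
print(K, round(err, 3))
```

In the paper this model is inverted with a fixed-lag particle filter for incremental inference, against Gibbs sampling as the full-batch baseline; the sketch above only shows the forward (generative) direction.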