Incremental Learning of Temporally-Coherent Gaussian Mixture Models

Ognjen Arandjelovic, R. Cipolla
{"title":"时间相干高斯混合模型的增量学习","authors":"Ognjen Arandjelovic, R. Cipolla","doi":"10.5244/C.19.59","DOIUrl":null,"url":null,"abstract":"In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches which universally assume that new data comes in blocks representable by GMMs which are then merged with the current model estimate, our method works for the case when novel data points arrive oneby- one, while requiring little additional memory. We keep only two GMMs in the memory and no historical data. The current fit is updated with the assumption that the number of components is fixed, which is increased (or reduced) when enough evidence for a new component is seen. This is deduced from the change from the oldest fit of the same complexity, termed the Historical GMM, the concept of which is central to our method. The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and video sequences of faces acquired in realistic imaging conditions","PeriodicalId":196845,"journal":{"name":"Procedings of the British Machine Vision Conference 2005","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"77","resultStr":"{\"title\":\"Incremental Learning of Temporally-Coherent Gaussian Mixture Models\",\"authors\":\"Ognjen Arandjelovic, R. Cipolla\",\"doi\":\"10.5244/C.19.59\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches which universally assume that new data comes in blocks representable by GMMs which are then merged with the current model estimate, our method works for the case when novel data points arrive oneby- one, while requiring little additional memory. We keep only two GMMs in the memory and no historical data. The current fit is updated with the assumption that the number of components is fixed, which is increased (or reduced) when enough evidence for a new component is seen. This is deduced from the change from the oldest fit of the same complexity, termed the Historical GMM, the concept of which is central to our method. 
The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and video sequences of faces acquired in realistic imaging conditions\",\"PeriodicalId\":196845,\"journal\":{\"name\":\"Procedings of the British Machine Vision Conference 2005\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"77\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Procedings of the British Machine Vision Conference 2005\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5244/C.19.59\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedings of the British Machine Vision Conference 2005","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5244/C.19.59","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 77

Abstract

In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches, which universally assume that new data arrive in blocks representable by GMMs that are then merged with the current model estimate, our method works for the case when novel data points arrive one by one, while requiring little additional memory. We keep only two GMMs in memory and no historical data. The current fit is updated under the assumption that the number of components is fixed; this number is increased (or reduced) when enough evidence for a new component is seen. The evidence is deduced from the change relative to the oldest fit of the same complexity, termed the Historical GMM, a concept central to our method. The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and on video sequences of faces acquired in realistic imaging conditions.
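
The abstract only outlines the method, so the sketch below is illustrative rather than a reconstruction of the paper's algorithm. `OnlineGMM.update` performs a generic online-EM-style single-point update with the component count held fixed, and `change_score` shows one hypothetical way to quantify the drift between the current fit and the Historical GMM (here a Monte-Carlo estimate of KL divergence). All names (`OnlineGMM`, `change_score`, `gauss_pdf`), the update rule, and the divergence measure are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the two ingredients the abstract describes:
# (1) a one-by-one GMM update with a fixed number of components, and
# (2) a score measuring how far the current fit has drifted from the
# oldest fit of the same complexity (the "Historical GMM").
# The paper's actual update and change criteria are not given in the
# abstract; the online-EM-style update and Monte-Carlo KL score below
# are illustrative stand-ins.
import numpy as np


def gauss_pdf(x, mean, cov):
    """Density of a multivariate Gaussian at point x."""
    d = len(mean)
    diff = x - mean
    inv_cov = np.linalg.inv(cov)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ inv_cov @ diff) / norm)


class OnlineGMM:
    def __init__(self, means, covs, weights):
        self.means = np.asarray(means, dtype=float)      # shape (K, d)
        self.covs = np.asarray(covs, dtype=float)        # shape (K, d, d)
        self.weights = np.asarray(weights, dtype=float)  # shape (K,)
        self.counts = np.ones(len(self.weights))         # evidence per component

    def pdf(self, x):
        return sum(w * gauss_pdf(x, m, c)
                   for w, m, c in zip(self.weights, self.means, self.covs))

    def responsibilities(self, x):
        """Posterior p(component k | x) under the current fit."""
        lik = np.array([w * gauss_pdf(x, m, c)
                        for w, m, c in zip(self.weights, self.means, self.covs)])
        return lik / max(lik.sum(), 1e-300)

    def update(self, x):
        """Incorporate a single new point, keeping the number of
        components K fixed (the regime between complexity changes)."""
        r = self.responsibilities(x)
        self.counts += r
        for k, rk in enumerate(r):
            eta = rk / self.counts[k]        # per-component step size
            diff = x - self.means[k]
            self.means[k] += eta * diff
            self.covs[k] = (1 - eta) * self.covs[k] + eta * np.outer(diff, diff)
        self.weights = self.counts / self.counts.sum()


def change_score(current, historical, n_samples=500, seed=None):
    """Monte-Carlo estimate of KL(current || historical): one
    illustrative way to quantify the drift that, in the paper, triggers
    increasing (or reducing) the number of components."""
    rng = np.random.default_rng(seed)
    ks = rng.choice(len(current.weights), size=n_samples, p=current.weights)
    xs = [rng.multivariate_normal(current.means[k], current.covs[k]) for k in ks]
    log_ratio = [np.log(current.pdf(x) + 1e-300) - np.log(historical.pdf(x) + 1e-300)
                 for x in xs]
    return float(np.mean(log_ratio))
```

In this sketch a caller would maintain `current` and `historical` side by side, feed each incoming point to `current.update(x)`, and monitor `change_score(current, historical)`; when the score exceeds some threshold, the component count would be changed and the Historical GMM re-initialised from the new fit. The threshold and the exact split/merge rule are left unspecified here, as the abstract does not give them.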