Time-Varying Linear Autoregressive Models for Segmentation

Charles Florin, N. Paragios, G. Funka-Lea, James P. Williams
{"title":"Time-Varying Linear Autoregressive Models for Segmentation","authors":"Charles Florin, N. Paragios, G. Funka-Lea, James P. Williams","doi":"10.1109/ICIP.2007.4379003","DOIUrl":null,"url":null,"abstract":"Tracking highly deforming structures in space and time arises in numerous applications in computer vision. Static Models are often referred to as linear combinations of a mean model and modes of variation learned from training examples. In Dynamic Modeling, the shape is represented as a function of shapes at previous time steps. In this paper, we introduce a novel technique that uses the spatial and the temporal information on the object deformation. We reformulate tracking as a high order time series prediction mechanism that adapts itself on-line to the newest results. Samples (toward dimensionality reduction) are represented in an orthogonal basis, and are introduced in an auto-regressive model that is determined through an optimization process in appropriate metric spaces. Toward capturing evolving deformations as well as cases that have not been part of the learning stage, a process that updates on-line both the orthogonal basis decomposition and the parameters of the autoregressive model is proposed. 
Experimental results with a nonstationary dynamic system prove adaptive AR models give better results than both stationary models and models learned over the whole sequence.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE International Conference on Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP.2007.4379003","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Tracking highly deforming structures in space and time arises in numerous computer vision applications. Static models are typically defined as linear combinations of a mean model and modes of variation learned from training examples. In dynamic modeling, the shape is represented as a function of the shapes at previous time steps. In this paper, we introduce a novel technique that uses both spatial and temporal information about the object's deformation. We reformulate tracking as a high-order time-series prediction mechanism that adapts on-line to the newest results. Samples are represented in an orthogonal basis (toward dimensionality reduction) and introduced into an autoregressive model that is determined through an optimization process in appropriate metric spaces. Toward capturing evolving deformations, as well as cases that were not part of the learning stage, a process is proposed that updates on-line both the orthogonal basis decomposition and the parameters of the autoregressive model. Experimental results with a nonstationary dynamic system show that adaptive AR models give better results than both stationary models and models learned over the whole sequence.
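The pipeline the abstract describes can be illustrated with a minimal sketch: project shape samples onto an orthogonal (PCA) basis for dimensionality reduction, fit a linear autoregressive model of order p on the basis coefficients by least squares, and use it to predict the next shape. This is not the paper's implementation (which also updates the basis and AR parameters on-line and optimizes in appropriate metric spaces); all function and variable names below are illustrative assumptions.

```python
import numpy as np

def pca_basis(shapes, k):
    """shapes: (T, d) array of vectorized shapes; returns mean and top-k orthogonal basis."""
    mean = shapes.mean(axis=0)
    # SVD of the centered data yields the orthogonal modes of variation
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:k]                         # basis: (k, d)

def fit_ar(coeffs, p):
    """Least-squares fit of c_t = sum_j A_j c_{t-j}; coeffs: (T, k)."""
    T, _ = coeffs.shape
    # Stack the p lagged coefficient vectors as regressors, most recent first
    X = np.hstack([coeffs[p - j - 1:T - j - 1] for j in range(p)])   # (T-p, p*k)
    Y = coeffs[p:]                                                    # (T-p, k)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A                                    # (p*k, k)

def predict_next(coeffs, A, p):
    """Predict the coefficient vector at the next time step from the last p steps."""
    x = np.hstack([coeffs[-j - 1] for j in range(p)])
    return x @ A

# Toy demo: a drifting sinusoidal "shape" sequence stands in for a
# deforming structure; values and dimensions are arbitrary.
rng = np.random.default_rng(0)
T, d, k, p = 60, 10, 3, 2
t = np.arange(T)[:, None]
shapes = np.sin(0.1 * t + np.linspace(0, np.pi, d)) \
         + 0.01 * rng.standard_normal((T, d))

mean, basis = pca_basis(shapes, k)
coeffs = (shapes - mean) @ basis.T          # project into the orthogonal basis
A = fit_ar(coeffs[:-1], p)                  # learn AR(p) on all but the last step
c_pred = predict_next(coeffs[:-1], A, p)    # predict the held-out step
shape_pred = mean + c_pred @ basis          # reconstruct the predicted shape
err = np.linalg.norm(shape_pred - shapes[-1]) / np.linalg.norm(shapes[-1])
print(f"relative prediction error: {err:.3f}")
```

An on-line variant, as the abstract suggests, would refresh `A` (e.g. via recursive least squares) and the basis as each new tracked shape arrives, rather than fitting once over the training sequence.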