Title: Robust and fast visual tracking using constrained sparse coding and dictionary learning
Authors: Tianxiang Bai, Youfu Li, Xiaolong Zhou
Venue: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3824-3829
Publication date: 2012-12-24
DOI: 10.1109/IROS.2012.6385459
Citations: 6
Abstract
We present a novel appearance model that uses sparse coding with online sparse dictionary learning for robust visual tracking. In the proposed appearance model, the target appearance is modeled via an online sparse dictionary learning technique with an "elastic-net constraint". This scheme captures the characteristics of the target's local appearance and promotes robustness against partial occlusions during tracking. Additionally, we unify sparse coding and online dictionary learning by defining a "sparsity consistency constraint" that strengthens both the generative and discriminative capabilities of the appearance model. Moreover, we propose a robust similarity metric that eliminates outliers from corrupted observations. We then integrate the proposed appearance model with a particle filter framework to form a robust visual tracking algorithm. Experiments on publicly available benchmark video sequences demonstrate that the proposed appearance model improves tracking performance compared with other state-of-the-art approaches.
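To make the "elastic-net constraint" concrete: sparse coding with an elastic-net penalty finds a coefficient vector that reconstructs an observed target patch from a learned dictionary while combining an L1 (sparsity) and an L2 (grouping/stability) penalty. The sketch below is illustrative only, not the paper's implementation: it solves the elastic-net coding step with ISTA (proximal gradient), and the names `D`, `x`, `lam1`, `lam2` are our own hypothetical choices, not symbols from the paper.

```python
import numpy as np

def elastic_net_code(D, x, lam1=0.1, lam2=0.05, n_iter=500):
    """Sparse-code observation x over dictionary D (columns = atoms) under an
    elastic-net penalty:

        min_a 0.5*||x - D a||^2 + lam1*||a||_1 + 0.5*lam2*||a||^2

    Solved with ISTA: a gradient step on the smooth part followed by
    soft-thresholding (the proximal operator of the L1 term).
    Illustrative solver; the paper may use a different optimizer.
    """
    # Step size 1/L, where L bounds the Lipschitz constant of the smooth part.
    L = np.linalg.norm(D, 2) ** 2 + lam2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x) + lam2 * a       # gradient of smooth part
        z = a - grad / L                          # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft-threshold
    return a

# Toy usage: recover a sparse code for a synthetic patch.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.8, 0.5]
x = D @ a_true
a_hat = elastic_net_code(D, x, lam1=0.01, lam2=0.01)
```

In a tracking context such as this paper's, `x` would be a (vectorized) candidate patch and the dictionary `D` would be updated online as new target observations arrive; the L2 term stabilizes the codes when atoms are correlated, which is one motivation for elastic net over plain L1.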