{"title":"基于视觉注意的目标跟踪","authors":"Mingqiang Lin, Houde Dai","doi":"10.1109/ICINFA.2016.7832119","DOIUrl":null,"url":null,"abstract":"Humans have the capability to quickly prioritize external visual stimuli and localize their most interest in a scene. Inspired by this mechanism, we propose a robust object tracking algorithm based on visual attention. We fuse motion feature and color feature to estimate the target state under the guidance of saliency map. Principal Component Analysis method is used to compute saliency feature based on the dense appearance model generated from the background templates. Motion feature is extracted by using the method which is a Bayesian decision rule for classification of background and foreground. Numerous experiments demonstrate the proposed method performs well against state-of-the-art tracking methods when dealing with illumination change, pose variation, occlusion, and background clutter situations.","PeriodicalId":389619,"journal":{"name":"2016 IEEE International Conference on Information and Automation (ICIA)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Object tracking based on visual attention\",\"authors\":\"Mingqiang Lin, Houde Dai\",\"doi\":\"10.1109/ICINFA.2016.7832119\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Humans have the capability to quickly prioritize external visual stimuli and localize their most interest in a scene. Inspired by this mechanism, we propose a robust object tracking algorithm based on visual attention. We fuse motion feature and color feature to estimate the target state under the guidance of saliency map. Principal Component Analysis method is used to compute saliency feature based on the dense appearance model generated from the background templates. Motion feature is extracted by using the method which is a Bayesian decision rule for classification of background and foreground. Numerous experiments demonstrate the proposed method performs well against state-of-the-art tracking methods when dealing with illumination change, pose variation, occlusion, and background clutter situations.\",\"PeriodicalId\":389619,\"journal\":{\"name\":\"2016 IEEE International Conference on Information and Automation (ICIA)\",\"volume\":\"59 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE International Conference on Information and Automation (ICIA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICINFA.2016.7832119\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Information and Automation (ICIA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICINFA.2016.7832119","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Humans can quickly prioritize external visual stimuli and localize the regions of greatest interest in a scene. Inspired by this mechanism, we propose a robust object tracking algorithm based on visual attention. We fuse motion and color features to estimate the target state under the guidance of a saliency map. Principal Component Analysis (PCA) is used to compute the saliency feature from a dense appearance model generated from background templates. The motion feature is extracted using a Bayesian decision rule that classifies pixels as background or foreground. Extensive experiments demonstrate that the proposed method performs favorably against state-of-the-art tracking methods under illumination change, pose variation, occlusion, and background clutter.
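The abstract does not give implementation details, so the following is a minimal Python sketch of one plausible reading of the two feature steps: saliency computed as the PCA reconstruction residual of a candidate patch against a subspace fitted to background templates, and a toy Gaussian Bayes rule for foreground/background labeling. All function names, parameters, and the Gaussian pixel model are illustrative assumptions, not the authors' exact method.

```python
import numpy as np


def background_subspace(templates, n_components=8):
    """Fit a PCA subspace to vectorized background template patches.

    templates: (N, H, W) array of N background patches.
    Returns the mean patch and the top principal directions.
    """
    X = templates.reshape(templates.shape[0], -1).astype(np.float64)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal directions (rows of Vt).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]


def pca_saliency(patch, mean, components):
    """Score a patch by its reconstruction error under the background
    subspace: appearance the background model cannot explain is salient."""
    x = patch.reshape(-1).astype(np.float64) - mean
    residual = x - components.T @ (components @ x)
    return float(np.linalg.norm(residual))


def bayes_foreground(pixel, fg_mean, fg_var, bg_mean, bg_var, prior_fg=0.5):
    """Toy Bayesian decision rule on a scalar pixel feature: label the
    pixel foreground when the foreground posterior dominates."""
    def gauss(x, m, v):
        return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)
    return gauss(pixel, fg_mean, fg_var) * prior_fg > \
           gauss(pixel, bg_mean, bg_var) * (1.0 - prior_fg)


# Hypothetical demo on random data: 50 background patches of size 16x16.
rng = np.random.default_rng(0)
bg_templates = rng.normal(size=(50, 16, 16))
mean, comps = background_subspace(bg_templates)
print(pca_saliency(rng.normal(size=(16, 16)), mean, comps))
print(bayes_foreground(2.5, fg_mean=2.0, fg_var=1.0, bg_mean=0.0, bg_var=1.0))
```

In this reading, a high reconstruction residual marks a patch as unlike the background (hence salient), and the per-pixel Bayes test supplies the motion/foreground cue; how the paper actually fuses these scores with the color feature under the saliency map is not specified in the abstract.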