Long-term superpixel tracking using unsupervised learning and multi-step integration

Pierre-Henri Conze, F. Tilquin, M. Lamard, F. Heitz, G. Quellec

2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), 21 March 2018. DOI: 10.1109/ATSIP.2018.8364453
In this paper, we analyze how to accurately track superpixels over extended time periods for computer vision applications. A two-step video processing pipeline dedicated to long-term superpixel tracking is proposed, based on unsupervised learning and temporal integration. First, unsupervised learning-based matching provides superpixel correspondences between consecutive and distant frames using context-rich features extended from greyscale to multi-channel. The resulting elementary matches are then combined along multi-step paths running through the whole sequence with various inter-frame distances. This produces a large set of candidate long-term superpixel pairings, upon which majority voting is performed. Video object tracking experiments demonstrate the efficiency of this pipeline against state-of-the-art methods.
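The abstract describes the multi-step integration scheme only in words; the sketch below illustrates the general idea in Python. It is a minimal illustration, not the authors' implementation: it assumes a precomputed dictionary `matches[(i, j)]` of elementary superpixel correspondences between frames i and j (the output of the unsupervised matching stage, which is not reproduced here), and the function name, parameters, and default step sizes are hypothetical. Correspondences are chained along paths with different inter-frame distances, and the resulting candidate long-term pairings are combined by majority voting.

```python
from collections import Counter

def track_superpixels(matches, frame0_superpixels, num_frames, step_sizes=(1, 2, 5)):
    """Chain elementary matches along multi-step paths and vote on the result.

    matches[(i, j)] is assumed to map each superpixel ID of frame i to its
    matched superpixel ID in frame j (output of the elementary matching
    stage, not shown here). Returns, for every superpixel of frame 0, the
    majority-voted correspondence in the last frame.
    """
    last = num_frames - 1
    votes = {}  # superpixel in frame 0 -> Counter over candidate IDs in the last frame

    for step in step_sizes:
        # One multi-step path: frame 0 -> step -> 2*step -> ... -> last frame.
        frames = list(range(0, last, step)) + [last]
        current = {sp: sp for sp in frame0_superpixels}  # identity mapping at frame 0
        for a, b in zip(frames[:-1], frames[1:]):
            pair = matches.get((a, b), {})
            # Propagate each track through the elementary match a -> b;
            # tracks with no match at this hop are dropped for this path.
            current = {sp0: pair[sp_a] for sp0, sp_a in current.items() if sp_a in pair}
        for sp0, sp_last in current.items():
            votes.setdefault(sp0, Counter())[sp_last] += 1

    # Majority voting over the candidate long-term superpixel pairings.
    return {sp0: c.most_common(1)[0][0] for sp0, c in votes.items()}
```

In the paper, the elementary correspondences come from unsupervised learning-based matching with context-rich multi-channel features; the sketch treats them as an opaque input so that only the temporal integration and voting step is illustrated.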