{"title":"REFA3D:视频序列的鲁棒时空分析","authors":"M. Grand-Brochier, C. Tilmant, M. Dhome","doi":"10.5220/0003857203520357","DOIUrl":null,"url":null,"abstract":"This article proposes a generalization of our approach REFA (Grand-brochier et al., 2011) to spatio-temporal domain. Our new method REFA3D, is based mainly on hes-STIP detector and E-HOG3D. SIFT3D and HOG/HOF are the two must used methods for space-time analysis and give good results. So their studies allow us to understand their construction and to extract some components to improve our approach. The mask of analysis used by REFA is modified and therefore relies on the use of ellipsoids. The validation tests are based on video clips from synthetic transformations as well as real sequences from a simulator or an onboard camera. Our system (detection, description and matching) must be as invariant as possible for the image transformation (rotations, scales, time-scaling). We also study the performance obtained for registration of subsequence, a process often used for the location, for example. All the parameters (analysis shape, thresholds) and changes to the space-time generalization will be detailed in this article.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"133 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"REFA3D: Robust Spatio-temporal Analysis of Video Sequences\",\"authors\":\"M. Grand-Brochier, C. Tilmant, M. Dhome\",\"doi\":\"10.5220/0003857203520357\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article proposes a generalization of our approach REFA (Grand-brochier et al., 2011) to spatio-temporal domain. Our new method REFA3D, is based mainly on hes-STIP detector and E-HOG3D. SIFT3D and HOG/HOF are the two must used methods for space-time analysis and give good results. So their studies allow us to understand their construction and to extract some components to improve our approach. The mask of analysis used by REFA is modified and therefore relies on the use of ellipsoids. The validation tests are based on video clips from synthetic transformations as well as real sequences from a simulator or an onboard camera. Our system (detection, description and matching) must be as invariant as possible for the image transformation (rotations, scales, time-scaling). We also study the performance obtained for registration of subsequence, a process often used for the location, for example. 
All the parameters (analysis shape, thresholds) and changes to the space-time generalization will be detailed in this article.\",\"PeriodicalId\":411140,\"journal\":{\"name\":\"International Conference on Computer Vision Theory and Applications\",\"volume\":\"133 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Computer Vision Theory and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0003857203520357\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Computer Vision Theory and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0003857203520357","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
This article proposes a generalization of our approach REFA (Grand-Brochier et al., 2011) to the spatio-temporal domain. Our new method, REFA3D, is based mainly on the hes-STIP detector and the E-HOG3D descriptor. SIFT3D and HOG/HOF are the two most widely used methods for space-time analysis and give good results, so studying them allows us to understand their construction and to extract components that improve our approach. The analysis mask used by REFA is modified and now relies on ellipsoids. The validation tests are based on video clips produced by synthetic transformations as well as real sequences from a simulator or an onboard camera. Our system (detection, description and matching) must be as invariant as possible to image transformations (rotations, scale changes, time scaling). We also study the performance obtained for the registration of subsequences, a process often used for localization, for example. All the parameters (analysis shape, thresholds) and the changes made for the space-time generalization are detailed in this article.
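To make the descriptor side of the pipeline concrete, the sketch below shows how a HOG3D-style histogram of spatio-temporal gradient orientations can be computed over one video cell. It is a minimal illustration only, assuming a simple (azimuth, elevation) binning of 3D gradients; it is not the authors' E-HOG3D construction, and the function name and parameters are hypothetical.

```python
import numpy as np

def hog3d_cell_histogram(volume, n_bins=12):
    """Illustrative HOG3D-style descriptor for one spatio-temporal cell.

    `volume` is a (T, H, W) array of grayscale frames. This simplified sketch
    quantizes gradient orientation on an (azimuth, elevation) grid rather than
    the polyhedral quantization of HOG3D, and it is not E-HOG3D as used in
    REFA3D.
    """
    # Spatio-temporal gradients along t, y, x.
    gt, gy, gx = np.gradient(volume.astype(np.float64))

    magnitude = np.sqrt(gx**2 + gy**2 + gt**2)
    azimuth = np.arctan2(gy, gx)                         # in [-pi, pi]
    elevation = np.arctan2(gt, np.sqrt(gx**2 + gy**2))   # in [-pi/2, pi/2]

    # Quantize orientations into a 2D (azimuth x elevation) grid of bins.
    n_az = n_bins
    n_el = max(n_bins // 2, 1)
    az_idx = np.clip(((azimuth + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    el_idx = np.clip(((elevation + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)

    # Accumulate gradient magnitude into the orientation histogram.
    hist = np.zeros((n_az, n_el))
    np.add.at(hist, (az_idx.ravel(), el_idx.ravel()), magnitude.ravel())

    # L2-normalize so descriptors are comparable across cells.
    norm = np.linalg.norm(hist)
    return (hist / norm).ravel() if norm > 0 else hist.ravel()

# Example: one 8-frame, 16x16 cell (random data for illustration).
cell = np.random.rand(8, 16, 16)
descriptor = hog3d_cell_histogram(cell)
print(descriptor.shape)  # (n_az * n_el,) = (72,)
```

In a full system, such per-cell histograms would be computed inside the analysis mask around each detected interest point (ellipsoidal in REFA3D's case) and concatenated into one descriptor before matching.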