A Comparative Study between Spatial-Temporal Orthogonal Moments for Volumes Description

Manel Boutaleb, I. Lassoued, E. Zagrouba

2012 Fourth International Conference on Intelligent Networking and Collaborative Systems, 2012-09-19. DOI: 10.1109/iNCoS.2012.40
Citations: 0
Abstract
Object motion description in videos is one of the most active research topics in pattern recognition and computer vision. In this paper we study and compare Krawtchouk, Tchebytchev and Zernike spatial-temporal moments for volume description. A reconstruction process over sequence volumes is elaborated to select the best moment descriptor for spatial-temporal volumes. These moments capture both the structural and temporal information of a video sequence. The first step of the method is to segment the video into space-time volume images. Then, all object silhouettes are extracted from these images; this set of silhouettes defines the space-time shape. The next step is to apply the orthogonal space-time moments to the resulting shape, or only to silhouette patches defined around detected interest points. This approach yields a descriptor for each video in the database. These descriptors are then used to rebuild the silhouette volumes at different orders, so as to select the optimal order for the description process.
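As a rough illustration of the reconstruction-based comparison the abstract describes, the sketch below computes separable 3D discrete orthogonal moments of a binary silhouette volume and rebuilds the volume at a chosen order. It is a minimal sketch, not the paper's implementation: the basis is obtained by QR-orthonormalizing monomials on the sample grid (which coincides with the normalized discrete Tchebichef/Tchebytchev polynomials up to sign), and the function names and toy volume are illustrative assumptions.

```python
import numpy as np

def discrete_orthonormal_basis(n_points, max_order):
    # Orthonormalize the monomials x^0..x^max_order on the grid {0..n-1}.
    # QR on the Vandermonde matrix gives a discrete orthonormal polynomial
    # basis (the normalized discrete Tchebichef polynomials, up to sign).
    x = np.arange(n_points, dtype=float)
    V = np.vander(x, max_order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q  # shape (n_points, max_order + 1)

def moments_3d(volume, order):
    # Separable 3D moments: T_pqr = sum_{x,y,t} f(x,y,t) t_p(x) t_q(y) t_r(t)
    bx = discrete_orthonormal_basis(volume.shape[0], order)
    by = discrete_orthonormal_basis(volume.shape[1], order)
    bt = discrete_orthonormal_basis(volume.shape[2], order)
    T = np.einsum('xyt,xp,yq,tr->pqr', volume, bx, by, bt)
    return T, (bx, by, bt)

def reconstruct(moments, bases):
    # Inverse transform: rebuild the volume from the moments up to the
    # retained order; reconstruction error measures descriptor quality.
    bx, by, bt = bases
    return np.einsum('pqr,xp,yq,tr->xyt', moments, bx, by, bt)
```

With a full-order basis (order + 1 equal to the grid size) the transform is orthogonal and the reconstruction is exact; truncating the order gives the lossy reconstructions that the paper compares across moment families.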