{"title":"Modeling motion flow using tensor dynamic textures","authors":"Bingyin Zhou, Qingyun Ren, Ming Lu","doi":"10.1109/ICALIP.2016.7846642","DOIUrl":null,"url":null,"abstract":"As a family of visual patterns in moving scenes with certain temporal regularity, dynamic textures are powerful visual cues for people to understand things; hence, effective models are needed for relevant applications. Considering that image sequences are really tensor time series, this paper proposes a tensor dynamic texture model to represent dynamic texture videos, and a sub-optimal algorithm to estimate the model parameters. Our tensor-based method can capture multiple interactions and essential structures in videos. Experimental results on dynamic texture synthesis show that the proposed method not only achieved a better visual quality, but also a smaller model size and a less time cost. The maximum PSNR gain achieves 2.36 dB, and the maximum model size reduction achieves 49.68%.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICALIP.2016.7846642","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Dynamic textures are a family of visual patterns in moving scenes that exhibit a certain temporal regularity; they are powerful visual cues for scene understanding, so effective models are needed for the relevant applications. Observing that image sequences are naturally tensor time series, this paper proposes a tensor dynamic texture model to represent dynamic texture videos, together with a suboptimal algorithm for estimating the model parameters. The tensor-based method captures multiple interactions and essential structures in videos. Experimental results on dynamic texture synthesis show that the proposed method achieves not only better visual quality but also a smaller model size and a lower time cost. The maximum PSNR gain reaches 2.36 dB, and the maximum model size reduction reaches 49.68%.
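To make the modeling idea concrete, the sketch below fits the classical matrix-based dynamic texture (a linear dynamical system estimated with a truncated SVD and least squares) that the proposed tensor model generalizes. The paper's tensor-specific parameter estimation is not detailed in the abstract and is not reproduced here; the function names (fit_dynamic_texture, synthesize) and the state dimension n_states are illustrative choices, not the authors' code.

```python
import numpy as np

def fit_dynamic_texture(frames, n_states=20):
    """Fit a baseline linear-dynamical-system dynamic texture to a grayscale
    video given as a (T, H, W) array. Returns the observation matrix C, the
    state transition A, the initial state x0, and the mean frame."""
    T, H, W = frames.shape
    Y = frames.reshape(T, H * W).T.astype(np.float64)   # pixels x time
    mean = Y.mean(axis=1, keepdims=True)
    Y0 = Y - mean

    # Appearance subspace via truncated SVD: Y0 ~= C @ X
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    C = U[:, :n_states]                                  # observation matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states, :]         # hidden states, n x T

    # State transition via least squares: X[:, 1:] ~= A @ X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return C, A, X[:, 0], mean.ravel()

def synthesize(C, A, x0, mean, T, shape, noise=0.0, seed=None):
    """Roll the fitted model forward to synthesize T new frames."""
    rng = np.random.default_rng(seed)
    H, W = shape
    x = x0.copy()
    out = np.empty((T, H, W))
    for t in range(T):
        out[t] = (C @ x + mean).reshape(H, W)
        x = A @ x + noise * rng.standard_normal(x.shape)
    return out
```

In this baseline, each frame is flattened into a vector before the SVD, which discards the spatial arrangement of pixels; the paper's contribution is presumably to keep frames as multi-way arrays and estimate multilinear factors instead, which is what allows the reported reductions in model size.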