{"title":"Modeling textured motion : particle, wave and sketch","authors":"Yizhou Wang, Song-Chun Zhu","doi":"10.1109/ICCV.2003.1238343","DOIUrl":null,"url":null,"abstract":"We present a generative model for textured motion phenomena, such as falling snow, wavy river and dancing grass, etc. Firstly, we represent an image as a linear superposition of image bases selected from a generic and over-complete dictionary. The dictionary contains Gabor bases for point/particle elements and Fourier bases for wave-elements. These bases compete to explain the input images. The transform from a raw image to a base or a token representation leads to large dimension reduction. Secondly, we introduce a unified motion equation to characterize the motion of these bases and the interactions between waves and particles, e.g. a ball floating on water. We use statistical learning algorithm to identify the structure of moving objects and their trajectories automatically. Then novel sequences can be synthesized easily from the motion and image models. Thirdly, we replace the dictionary of Gabor and Fourier bases with symbolic sketches (also bases). With the same image and motion model, we can render realistic and stylish cartoon animation. In our view, cartoon and sketch are symbolic visualization of the inner representation for visual perception. 
The success of the cartoon animation, in turn, suggests that our image and motion models capture the essence of visual perception of textured motion.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"53","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Ninth IEEE International Conference on Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV.2003.1238343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 53
Abstract
We present a generative model for textured motion phenomena such as falling snow, wavy rivers and dancing grass. First, we represent an image as a linear superposition of image bases selected from a generic, over-complete dictionary. The dictionary contains Gabor bases for point/particle elements and Fourier bases for wave elements. These bases compete to explain the input images, and the transform from a raw image to a base (token) representation yields a large dimension reduction. Second, we introduce a unified motion equation to characterize the motion of these bases and the interactions between waves and particles, e.g. a ball floating on water. We use a statistical learning algorithm to automatically identify the structure of moving objects and their trajectories, after which novel sequences can be synthesized easily from the motion and image models. Third, we replace the dictionary of Gabor and Fourier bases with symbolic sketches (which are also bases). With the same image and motion model, we can render realistic and stylized cartoon animation. In our view, cartoon and sketch are symbolic visualizations of the inner representation underlying visual perception. The success of the cartoon animation, in turn, suggests that our image and motion models capture the essence of the visual perception of textured motion.
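The core representational idea — Gabor ("particle") and Fourier ("wave") bases competing to explain an image, yielding a sparse token representation — can be illustrated with a simple greedy matching pursuit on a 1-D signal. This is a minimal sketch of that idea under our own assumptions (dictionary sizes, atom parameters, and the greedy pursuit are illustrative), not the paper's actual algorithm or image model.

```python
import numpy as np

# Sketch of "competing bases": localized Gabor atoms (particles) and global
# Fourier atoms (waves) compete, via correlation with the residual, to
# explain a signal. All parameters below are illustrative assumptions.

def gabor_atom(n, center, freq, sigma):
    """Localized cosine under a Gaussian envelope (a 'particle' base)."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / sigma) ** 2) * np.cos(freq * t)
    return g / np.linalg.norm(g)

def fourier_atom(n, freq):
    """Global cosine wave (a 'wave' base)."""
    t = np.arange(n)
    f = np.cos(freq * t)
    return f / np.linalg.norm(f)

def matching_pursuit(signal, dictionary, n_iters):
    """At each step, the base most correlated with the residual wins."""
    residual = signal.copy()
    chosen = []
    for _ in range(n_iters):
        scores = dictionary @ residual        # all bases compete at once
        k = int(np.argmax(np.abs(scores)))
        coeff = scores[k]
        residual = residual - coeff * dictionary[k]
        chosen.append((k, coeff))
    return chosen, residual

n = 256
# Toy input: one global wave plus one localized blip (wave + particle).
signal = fourier_atom(n, 0.2) + 0.8 * gabor_atom(n, 60, 0.9, 6.0)

atoms = [fourier_atom(n, f) for f in np.linspace(0.05, 1.0, 20)]
atoms += [gabor_atom(n, c, f, 6.0)
          for c in range(10, n, 25) for f in (0.5, 0.9)]
D = np.stack(atoms)                           # rows are unit-norm bases

chosen, residual = matching_pursuit(signal, D, n_iters=8)
print("energy explained:",
      1 - np.linalg.norm(residual) ** 2 / np.linalg.norm(signal) ** 2)
```

A few atoms suffice to explain almost all of the signal's energy, which mirrors the abstract's point that moving from raw pixels to a base/token representation gives a large dimension reduction.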