{"title":"视频输入驱动动画(VIDA)","authors":"Mengqin Sun, A. Jepson, E. Fiume","doi":"10.1109/ICCV.2003.1238319","DOIUrl":null,"url":null,"abstract":"There are many challenges associated with the integration of synthetic and real imagery. One particularly difficult problem is the automatic extraction of salient parameters of natural phenomena in real video footage for subsequent application to synthetic objects. We can ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass will move consistently with the wind that moved that grass. The video footage can be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we approximate the inverse problem to harmonic oscillation, which we use to extract parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind speed parameters from its motion, and then use this to make synthetic objects move. We also extract water elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"41","resultStr":"{\"title\":\"Video input driven animation (VIDA)\",\"authors\":\"Mengqin Sun, A. Jepson, E. Fiume\",\"doi\":\"10.1109/ICCV.2003.1238319\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are many challenges associated with the integration of synthetic and real imagery. One particularly difficult problem is the automatic extraction of salient parameters of natural phenomena in real video footage for subsequent application to synthetic objects. We can ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass will move consistently with the wind that moved that grass. The video footage can be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we approximate the inverse problem to harmonic oscillation, which we use to extract parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind speed parameters from its motion, and then use this to make synthetic objects move. 
We also extract water elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.\",\"PeriodicalId\":131580,\"journal\":{\"name\":\"Proceedings Ninth IEEE International Conference on Computer Vision\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"41\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Ninth IEEE International Conference on Computer Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCV.2003.1238319\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Ninth IEEE International Conference on Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV.2003.1238319","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
There are many challenges associated with integrating synthetic and real imagery. One particularly difficult problem is automatically extracting salient parameters of natural phenomena from real video footage for subsequent application to synthetic objects. For example, we can ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass move consistently with the wind that moved that grass. The video footage can thus be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we solve an approximate inverse problem for harmonic oscillation, which we use to extract the parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind speed parameters from its motion, and then use these estimates to make synthetic objects move. We also extract water elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.
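To make the "inverse problem for harmonic oscillation" concrete, the sketch below fits a damped cosine to a tracked displacement signal (e.g., the horizontal sway of a tree tip extracted from video) and recovers its frequency, damping, and amplitude. This is a minimal illustration, not the authors' implementation: the damped-cosine model, the synthetic stand-in signal, and the quadratic-drag constant k_wind mapping sway amplitude to a wind-speed estimate are all assumptions introduced here for exposition.

```python
# Minimal sketch of an inverse harmonic-oscillation fit (illustrative only).
# Given a displacement track x(t) from video, recover oscillation parameters.
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, a, lam, omega, phi, c):
    """Damped harmonic oscillation: x(t) = a * exp(-lam*t) * cos(omega*t + phi) + c."""
    return a * np.exp(-lam * t) * np.cos(omega * t + phi) + c

# Synthetic stand-in for a displacement track extracted from video.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 300)                      # ~30 fps for 10 s
x_true = damped_cosine(t, a=12.0, lam=0.15, omega=2.4, phi=0.3, c=1.0)
x_obs = x_true + rng.normal(scale=0.5, size=t.size)  # tracking noise

# Initial frequency guess from the FFT peak of the de-meaned signal.
spectrum = np.abs(np.fft.rfft(x_obs - x_obs.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
omega0 = 2.0 * np.pi * freqs[spectrum[1:].argmax() + 1]  # skip the DC bin

params, _ = curve_fit(damped_cosine, t, x_obs,
                      p0=[np.ptp(x_obs) / 2, 0.1, omega0, 0.0, x_obs.mean()])
a_fit, lam_fit, omega_fit, _, _ = params

# Hypothetical calibration: sway amplitude ~ k_wind * wind_speed**2
# (quadratic drag). k_wind would have to come from the scene, so this
# mapping is purely illustrative.
k_wind = 0.5
wind_speed = np.sqrt(abs(a_fit) / k_wind)
print(f"omega = {omega_fit:.2f} rad/s, damping = {lam_fit:.3f}, "
      f"estimated wind speed ~ {wind_speed:.1f} (arbitrary units)")
```

The same fit applies, under the same caveats, to the water-wave case: running it on the vertical (heave) track of a boat yields the frequency and amplitude of the dominant wave, i.e., water elevation parameters in the abstract's sense, which can then drive synthetic boats.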