Coordination of appearance and motion data for virtual view generation of traditional dances

Y. Kamon, R. Yamane, Y. Mukaigawa, Takeshi Shakunaga

Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), 13 June 2005. DOI: 10.1109/3DIM.2005.28
A novel method is proposed for virtual view generation of traditional dances. In the proposed framework, a traditional dance is captured separately for appearance registration and motion registration. By coordinating the appearance and motion data, we can easily control virtual camera motion within a dancer-centered coordinate system. To this end, a coordination problem must be solved between the appearance and motion data, since they are captured separately and the dancer moves freely in the room. This paper presents a practical algorithm to solve it. A set of algorithms is also provided for appearance registration, motion registration, and virtual view generation from the archived data. In appearance registration, a 3D human shape is recovered at each time instant from a set of input images after suppressing their backgrounds. By combining the recovered 3D shape with the set of images at each time instant, we compose the archived dance data. In motion registration, stereoscopic tracking is performed for color markers placed on the dancer. Virtual view generation is formalized as color blending among multiple views, and a novel, efficient algorithm is proposed for composing a natural virtual view from a set of images. In the proposed method, the weights of the linear combination are calculated from both the assumed viewpoint and the surface normal.
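The abstract does not spell out how the coordination problem between the separately captured appearance and motion data is solved. As an illustration only (not the paper's algorithm), the sketch below shows one standard way to align two 3D coordinate systems given corresponding points, such as marker positions expressed in both the motion-capture frame and the appearance (camera) frame, using the Kabsch/Procrustes rigid-alignment solution; the function name and the example data are hypothetical.

```python
# Hypothetical sketch: align the motion-capture coordinate system with the
# appearance (camera) coordinate system from corresponding 3D points.
import numpy as np

def estimate_rigid_transform(P, Q):
    """Return rotation R and translation t such that R @ P[i] + t ~= Q[i].

    P, Q: (N, 3) arrays of corresponding 3D points (N >= 3, not collinear).
    Uses the Kabsch/Procrustes solution via SVD.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Example: markers_mocap holds marker positions in the motion-capture frame,
# markers_appearance the same markers located in the appearance frame.
markers_mocap = np.random.rand(10, 3)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
markers_appearance = markers_mocap @ R_true.T + np.array([0.5, 1.0, 0.0])
R, t = estimate_rigid_transform(markers_mocap, markers_appearance)
aligned = markers_mocap @ R.T + t              # now expressed in the appearance frame
```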
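The last sentence of the abstract describes the virtual view as a linear combination of colors from multiple real views, with weights derived from the assumed viewpoint and the surface normal. The sketch below illustrates one plausible weighting of that kind; the specific weight terms (cosine of the viewpoint agreement and of the surface visibility) are assumptions for illustration and are not claimed to be the paper's actual formula.

```python
# Hypothetical sketch: blend the colors seen by several real cameras into one
# virtual view, weighting each camera by how well its viewing direction agrees
# with the virtual viewpoint and with the surface normal at the point.
import numpy as np

def blend_colors(point, normal, virtual_cam, real_cams, colors):
    """Color of a surface point in the virtual view as a linear combination.

    point, normal : (3,) surface point and unit normal in world coordinates
    virtual_cam   : (3,) position of the virtual camera
    real_cams     : (M, 3) positions of the real cameras
    colors        : (M, 3) RGB color of the point as seen by each real camera
    """
    v_dir = virtual_cam - point
    v_dir = v_dir / np.linalg.norm(v_dir)
    weights = []
    for cam in real_cams:
        c_dir = cam - point
        c_dir = c_dir / np.linalg.norm(c_dir)
        view_agreement = max(np.dot(v_dir, c_dir), 0.0)  # favors cameras near the virtual viewpoint
        visibility = max(np.dot(normal, c_dir), 0.0)     # favors cameras facing the surface
        weights.append(view_agreement * visibility)
    weights = np.asarray(weights)
    if weights.sum() == 0.0:
        return colors.mean(axis=0)                       # fallback: plain average
    weights = weights / weights.sum()                    # normalize the linear combination
    return weights @ colors
```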