Cage-based performance capture
Yann Savoye
ACM SIGGRAPH 2018 Courses
DOI: 10.1145/3214834.3214836 · Published: 2013-09-17
Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has received considerable attention in visual media production. This course addresses a new paradigm for performance capture using cage-based shapes in motion. We define cage-based performance capture as the non-invasive process of capturing the non-rigid surfaces of actors from multiple views, in the form of sparse trajectories of control deformation handles together with a laser-scanned static template shape. In this course, we address the hard problem of extracting or acquiring, and then reusing, a non-rigid parametrization for video-based animation in four steps: (1) cage-based inverse kinematics, (2) conversion of surface performance capture into cage-based deformation, (3) cage-based cartoon surface exaggeration, and (4) cage-based registration of time-varying reconstructed point clouds. The key objective is to interest game programmers, digital artists, and filmmakers in employing purely geometric, animator-friendly tools to capture and reuse surfaces in motion. A variety of advanced animation techniques and vision-based graphics applications could benefit from the animatable coordinate-based subspaces presented in this course.
A crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible, and reusable parameters. Abandoning the classical articulated skeleton as the underlying structure, we show that cage-based deformers offer a flexible design-space abstraction for dynamic non-rigid surface motion through learning space-time shape variability. Registered cage-handle trajectories allow the reconstruction of complex mesh sequences by deforming an enclosed fine-detail mesh.
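The deformation of an enclosed mesh by cage handles can be illustrated with a minimal 2D sketch using mean value coordinates, one common choice of generalized barycentric coordinates for cages. This is a generic illustration rather than code from the course, and the square cage and sample points are invented for the example; weights are computed once against the rest cage and then reused for every cage pose.

```python
import math

def mean_value_coords(x, cage):
    """Mean value coordinates of 2D point x w.r.t. a closed polygon cage
    given in counter-clockwise vertex order (Floater's construction)."""
    n = len(cage)
    w = []
    for i in range(n):
        r = math.dist(x, cage[i])
        a_prev = _angle(x, cage[(i - 1) % n], cage[i])      # angle of triangle [x, v_{i-1}, v_i] at x
        a_next = _angle(x, cage[i], cage[(i + 1) % n])      # angle of triangle [x, v_i, v_{i+1}] at x
        w.append((math.tan(a_prev / 2.0) + math.tan(a_next / 2.0)) / r)
    total = sum(w)
    return [wi / total for wi in w]                          # normalized: sums to 1

def _angle(x, a, b):
    # signed angle at x between the directions toward a and b
    ax, ay = a[0] - x[0], a[1] - x[1]
    bx, by = b[0] - x[0], b[1] - x[1]
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

def deform(x, rest_cage, deformed_cage):
    # bind x to the rest cage once, then re-evaluate at the deformed handles
    lam = mean_value_coords(x, rest_cage)
    return tuple(sum(l * v[d] for l, v in zip(lam, deformed_cage))
                 for d in range(2))

# invented example: a unit-square cage translated by (1, 2)
rest = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = [(1, 2), (2, 2), (2, 3), (1, 3)]
p = deform((0.5, 0.5), rest, moved)   # the enclosed point follows the cage
```

Because the coordinates sum to one and reproduce linear functions, any affine motion of the cage (such as the translation above) is transferred exactly to the enclosed geometry; non-affine handle motions produce the smooth boneless deformations the abstract refers to.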
Finally, cage-based performance capture techniques offer suitable and reusable outputs for animation transfer by decoupling the motion from the geometry.
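Step (1), cage-based inverse kinematics, can be sketched as a linear least-squares problem: with the generalized barycentric weights of the surface points held fixed in a matrix A (one row per point), recovering cage-handle positions C that best reproduce target point positions P amounts to minimizing |AC - P|^2. The sketch below solves the normal equations (AᵀA)C = AᵀP with plain Gaussian elimination; the weight matrix and target cage are illustrative placeholders, not values from the course.

```python
def solve_cage_ik(A, P):
    """Least-squares cage handles C minimizing |A C - P|^2,
    via the normal equations (A^T A) C = A^T P."""
    m, n = len(A), len(A[0])           # points x cage handles
    dims = len(P[0])                   # 2D here
    AtA = [[sum(r[i] * r[j] for r in A) for j in range(n)] for i in range(n)]
    AtP = [[sum(A[k][i] * P[k][d] for k in range(m)) for d in range(dims)]
           for i in range(n)]
    # Gaussian elimination with partial pivoting on the augmented system
    M = [AtA[i] + AtP[i] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + dims):
                M[r][k] -= f * M[c][k]
    C = [[0.0] * dims for _ in range(n)]
    for r in range(n - 1, -1, -1):
        for d in range(dims):
            s = sum(M[r][k] * C[k][d] for k in range(r + 1, n))
            C[r][d] = (M[r][n + d] - s) / M[r][r]
    return C

# hypothetical binding weights for 5 surface points against a 4-handle cage
A = [[0.7, 0.1, 0.1, 0.1],
     [0.1, 0.7, 0.1, 0.1],
     [0.1, 0.1, 0.7, 0.1],
     [0.1, 0.1, 0.1, 0.7],
     [0.25, 0.25, 0.25, 0.25]]
C_true = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]   # a stretched square
# synthesize targets from the known cage, then recover the handles
P = [[sum(a[i] * C_true[i][d] for i in range(4)) for d in range(2)] for a in A]
C_est = solve_cage_ik(A, P)
```

Solving one such system per frame against tracked surface points yields the sparse handle trajectories the abstract describes; because the weights, not the handles, carry the binding, the same trajectories can later be replayed on a different enclosed mesh, which is the decoupling of motion from geometry mentioned above.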