Anastasia Moutafidou, Vasileios Toulatzis, Ioannis Fudos
{"title":"动画序列的深度可熔蒙皮","authors":"Anastasia Moutafidou, Vasileios Toulatzis, Ioannis Fudos","doi":"10.1007/s00371-023-03130-3","DOIUrl":null,"url":null,"abstract":"Abstract Animation compression is a key process in replicating and streaming animated 3D models. Linear Blend Skinning (LBS) facilitates the compression of an animated sequence while maintaining the capability of real-time streaming by deriving vertex to proxy bone assignments and per frame bone transformations. We introduce a innovative deep learning approach that learns how to assign vertices to proxy bones with persistent labeling. This is accomplished by learning how to correlate vertex trajectories to bones of fully rigged animated 3D models. Our method uses these pretrained networks on dynamic characteristics (vertex trajectories) of an unseen animation sequence (a sequence of meshes without skeleton or rigging information) to derive an LBS scheme that outperforms most previous competent approaches by offering better approximation of the original animation sequence with fewer bones, therefore offering better compression and smaller bandwidth requirements for streaming. This is substantiated by a thorough comparative performance evaluation using several error metrics, and compression/bandwidth measurements. In this paper, we have also introduced a persistent bone labeling scheme that (i) improves the efficiency of our method in terms of lower error values and better visual outcome and (ii) facilitates the fusion of two (or more) LBS schemes by an innovative algorithm that combines two arbitrary LBS schemes. To demonstrate the usefulness and potential of this fusion process, we have combined the outcome of our deep skinning method with that of Rignet—which is a state-of-the-art method that performs rigging on static meshes—with impressive results.","PeriodicalId":227044,"journal":{"name":"The Visual Computer","volume":"9 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep fusible skinning of animation sequences\",\"authors\":\"Anastasia Moutafidou, Vasileios Toulatzis, Ioannis Fudos\",\"doi\":\"10.1007/s00371-023-03130-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Animation compression is a key process in replicating and streaming animated 3D models. Linear Blend Skinning (LBS) facilitates the compression of an animated sequence while maintaining the capability of real-time streaming by deriving vertex to proxy bone assignments and per frame bone transformations. We introduce a innovative deep learning approach that learns how to assign vertices to proxy bones with persistent labeling. This is accomplished by learning how to correlate vertex trajectories to bones of fully rigged animated 3D models. Our method uses these pretrained networks on dynamic characteristics (vertex trajectories) of an unseen animation sequence (a sequence of meshes without skeleton or rigging information) to derive an LBS scheme that outperforms most previous competent approaches by offering better approximation of the original animation sequence with fewer bones, therefore offering better compression and smaller bandwidth requirements for streaming. This is substantiated by a thorough comparative performance evaluation using several error metrics, and compression/bandwidth measurements. 
In this paper, we have also introduced a persistent bone labeling scheme that (i) improves the efficiency of our method in terms of lower error values and better visual outcome and (ii) facilitates the fusion of two (or more) LBS schemes by an innovative algorithm that combines two arbitrary LBS schemes. To demonstrate the usefulness and potential of this fusion process, we have combined the outcome of our deep skinning method with that of Rignet—which is a state-of-the-art method that performs rigging on static meshes—with impressive results.\",\"PeriodicalId\":227044,\"journal\":{\"name\":\"The Visual Computer\",\"volume\":\"9 6\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Visual Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00371-023-03130-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-023-03130-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract  Animation compression is a key process in replicating and streaming animated 3D models. Linear Blend Skinning (LBS) facilitates the compression of an animated sequence, while preserving the capability of real-time streaming, by deriving vertex-to-proxy-bone assignments and per-frame bone transformations. We introduce an innovative deep learning approach that learns how to assign vertices to proxy bones with persistent labeling. This is accomplished by learning how to correlate vertex trajectories with the bones of fully rigged animated 3D models. Our method applies these pretrained networks to the dynamic characteristics (vertex trajectories) of an unseen animation sequence (a sequence of meshes without skeleton or rigging information) to derive an LBS scheme that outperforms most previous competitive approaches by offering a better approximation of the original animation sequence with fewer bones, thereby yielding better compression and lower bandwidth requirements for streaming. This is substantiated by a thorough comparative performance evaluation using several error metrics and compression/bandwidth measurements. In this paper, we also introduce a persistent bone labeling scheme that (i) improves the efficiency of our method, in terms of lower error values and better visual outcomes, and (ii) facilitates the fusion of two (or more) LBS schemes through an innovative algorithm that combines two arbitrary LBS schemes. To demonstrate the usefulness and potential of this fusion process, we have combined the outcome of our deep skinning method with that of RigNet, a state-of-the-art method that performs rigging on static meshes, with impressive results.
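For context, the compression the abstract refers to comes from storing only a small set of skinning weights plus per-frame bone transforms instead of every vertex position in every frame. The following minimal Python/NumPy sketch illustrates the standard LBS reconstruction step that such a scheme enables; the function and variable names (lbs_reconstruct, rest_pose, weights, bone_transforms) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of standard Linear Blend Skinning (LBS) reconstruction.
# Names and array layouts are illustrative assumptions, not from the paper.
import numpy as np

def lbs_reconstruct(rest_pose, weights, bone_transforms):
    """Reconstruct one animation frame with LBS.

    rest_pose:       (V, 3) rest-pose vertex positions.
    weights:         (V, B) vertex-to-bone weights; each row sums to 1.
    bone_transforms: (B, 3, 4) per-bone affine transforms [R | t] for this frame.
    Returns:         (V, 3) deformed vertex positions.
    """
    V = rest_pose.shape[0]
    # Homogeneous rest-pose coordinates, shape (V, 4).
    rest_h = np.hstack([rest_pose, np.ones((V, 1))])
    # Apply every bone transform to every vertex: result shape (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, rest_h)
    # Blend the per-bone results with the skinning weights: shape (V, 3).
    return np.einsum('vb,bvi->vi', weights, per_bone)
```

Under this formulation, streaming a sequence only requires transmitting the (V, B) weight matrix once and a (B, 3, 4) transform set per frame, which is where the bandwidth savings over raw per-vertex animation come from when B is small.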