Stefano Esposito, Anpei Chen, Christian Reiser, Samuel Rota Bulò, Lorenzo Porzi, Katja Schwarz, Christian Richardt, Michael Zollhöfer, Peter Kontschieder, Andreas Geiger
arXiv:2409.02482 · arXiv - CS - Graphics · Published 2024-09-04
Volumetric Surfaces: Representing Fuzzy Geometries with Multiple Meshes
High-quality real-time view synthesis methods are based on volume rendering,
splatting, or surface rendering. While surface-based methods are generally the
fastest, they cannot faithfully model fuzzy geometry such as hair. In turn,
alpha-blending techniques excel at representing fuzzy materials but require an
unbounded number of samples per ray (P1). Further overheads are induced by
empty space skipping in volume rendering (P2) and sorting input primitives in
splatting (P3). These problems are exacerbated on low-performance graphics
hardware, e.g. on mobile devices. We present a novel representation for
real-time view synthesis where the (P1) number of sampling locations is small
and bounded, (P2) sampling locations are efficiently found via rasterization,
and (P3) rendering is sorting-free. We achieve this by representing objects as
semi-transparent multi-layer meshes, rendered in fixed layer order from
outermost to innermost. We model mesh layers as SDF shells with optimal spacing
learned during training. After baking, we fit UV textures to the corresponding
meshes. We show that our method can represent challenging fuzzy objects while
achieving higher frame rates than volume-based and splatting-based methods on
low-end and mobile devices.
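The core rendering idea described above, blending a small, fixed number of semi-transparent mesh layers in a known order, can be sketched as simple front-to-back alpha compositing. The snippet below is a minimal illustration, not the authors' implementation: it assumes one RGB/alpha sample per layer, with layers already ordered outermost to innermost along the ray (which is what makes the blend sorting-free and bounds the sample count at K).

```python
import numpy as np

def composite_fixed_order(rgbs, alphas):
    """Front-to-back alpha compositing over a fixed layer order.

    rgbs:   (K, 3) per-layer RGB samples for one pixel, outermost layer first
    alphas: (K,)   per-layer opacities in [0, 1]

    Because the layer order is fixed by construction, no per-primitive
    depth sort is needed, and the loop runs at most K times per pixel.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light still unblocked
    for rgb, a in zip(rgbs, alphas):
        color += transmittance * a * np.asarray(rgb, dtype=float)
        transmittance *= (1.0 - a)
    return color, transmittance

# Example: a half-transparent white outer shell over an opaque red inner shell.
rgbs = np.array([[1.0, 1.0, 1.0],
                 [1.0, 0.0, 0.0]])
alphas = np.array([0.5, 1.0])
color, T = composite_fixed_order(rgbs, alphas)
# color = [1.0, 0.5, 0.5], T = 0.0
```

In a real renderer the per-layer samples would come from rasterizing each mesh shell and fetching its UV texture; the function above only shows why the blend itself needs neither unbounded sampling nor sorting.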