Title: Enhancing 3D implicit shape representation by leveraging periodic activation functions
Authors: Kanika Singla, Parmanand Astya
Published in: 2021 6th International Conference on Signal Processing, Computing and Control (ISPCC), 2021-10-07
DOI: 10.1109/ISPCC53510.2021.9609352
Citations: 1
Abstract
Conventional discrete representations of 3D objects have been replaced by representations that are implicitly described and continuously differentiable. With the rise of deep neural networks, parameterizing these continuous functions with a network has emerged as a powerful paradigm. Many machine learning problems, such as inferring information from 3D images and videos or reconstructing scenes, benefit from continuous parameterization because it is memory-efficient and allows the model to capture finer detail. In this paper, we propose to improve implicit shape representation by investigating the architecture of networks built on periodic activation functions. To demonstrate the effect of network size and depth on shape quality and detail, we conduct both qualitative and quantitative experiments.
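The "periodic activation functions-based networks" referenced here follow the SIREN idea: an MLP whose hidden layers apply a sine nonlinearity, mapping continuous coordinates (e.g. 3D points) to an implicit shape value such as a signed distance. Below is a minimal NumPy sketch of such a network; the layer widths, the frequency factor omega_0 = 30, and the uniform initialization bounds are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(n_in, n_out, is_first=False, omega_0=30.0):
    # SIREN-style initialization (assumed here): the first layer draws weights
    # from U[-1/n_in, 1/n_in]; later layers from U[-sqrt(6/n_in)/omega_0, +...].
    bound = 1.0 / n_in if is_first else np.sqrt(6.0 / n_in) / omega_0
    W = rng.uniform(-bound, bound, size=(n_in, n_out))
    b = np.zeros(n_out)
    return W, b, omega_0

def siren_forward(layers, x):
    # Hidden layers use the periodic activation sin(omega_0 * (x @ W + b));
    # the final layer is linear so the output range is unconstrained.
    h = x
    for W, b, omega_0 in layers[:-1]:
        h = np.sin(omega_0 * (h @ W + b))
    W, b, _ = layers[-1]
    return h @ W + b

# A small network mapping 3D coordinates to one implicit shape value per point
# (e.g. a signed-distance-like scalar).
layers = [
    siren_layer(3, 64, is_first=True),
    siren_layer(64, 64),
    siren_layer(64, 1),
]
coords = rng.uniform(-1.0, 1.0, size=(5, 3))  # five query points in [-1, 1]^3
values = siren_forward(layers, coords)
print(values.shape)  # (5, 1)
```

Because sine is smooth and periodic, such a network is continuously differentiable in its input coordinates, which is what makes this family attractive for the implicit, differentiable shape representations the abstract describes.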