Graphical Models (IF 1.7), vol. 126, Article 101170. Pub Date: 2023-04-01. DOI: 10.1016/j.gmod.2023.101170
Procedural generation of semantically plausible small-scale towns
Abdullah Bulbul

Abstract: Procedural techniques have been used successfully to generate many kinds of 3D models. In this study, we propose a procedural method to build 3D towns that can be manipulated through a set of high-level semantic principles: security, privacy, sustainability, social life, economy, and beauty. Based on user-defined weights for these principles, our method generates a 3D settlement that accommodates a desired population over a given terrain. Our approach first determines where on the terrain to establish the settlement and then iteratively constructs the town. The principles guide the decisions in both steps, and the method produces natural-looking small-scale 3D residential regions similar to the towns of the pre-industrial era. We demonstrate the effectiveness of the proposed approach for building semantically plausible town models by presenting sample results over terrains based on real-world data.
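The abstract describes scoring terrain by user-weighted semantic principles before placing the settlement. A minimal sketch of that idea, with entirely hypothetical per-cell scores standing in for real terrain analysis (the paper's actual scoring functions are not given here):

```python
import numpy as np

# Hypothetical sketch: choose a settlement site by scoring terrain cells
# with user-defined weights over the six semantic principles. The random
# per-cell scores are stand-ins for real terrain analysis (e.g.,
# defensibility, water access, view quality).
rng = np.random.default_rng(0)
H = W = 16

principles = {
    "security": rng.random((H, W)),
    "privacy": rng.random((H, W)),
    "sustainability": rng.random((H, W)),
    "social_life": rng.random((H, W)),
    "economy": rng.random((H, W)),
    "beauty": rng.random((H, W)),
}

def pick_site(weights):
    """Return the (row, col) of the cell maximizing the weighted score."""
    total = sum(w * principles[name] for name, w in weights.items())
    return np.unravel_index(np.argmax(total), total.shape)

site = pick_site({"security": 0.5, "economy": 0.3, "beauty": 0.2})
print(site)
```

Changing the weights shifts the chosen site, which mirrors how the principle weights steer both site selection and the iterative construction step.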
Graphical Models, vol. 126, Article 101171. Pub Date: 2023-04-01. DOI: 10.1016/j.gmod.2023.101171
Learning-based 3D imaging from single structured-light image
Andrew-Hieu Nguyen, Olivia Rees, Zhaoyang Wang

Abstract: Integrating the structured-light technique with deep learning for single-shot 3D imaging has recently gained enormous attention due to its robustness. This paper presents a supervised learning-based technique for 3D imaging from a single grayscale structured-light image. The proposed approach uses a single-input, double-output convolutional neural network to transform a regular fringe-pattern image into two intermediate quantities that facilitate the subsequent 3D reconstruction with high accuracy. Experiments demonstrate the validity and robustness of the proposed technique.
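The abstract does not specify the two intermediate quantities. A common choice in the fringe-projection literature (assumed here, not confirmed by the abstract) is the numerator and denominator of the arctangent phase formula; this sketch shows why predicting that pair makes the downstream reconstruction robust:

```python
import numpy as np

# Hedged sketch: with a fringe intensity model I = A + B*cos(phi), a
# network that predicts the pair N = B*sin(phi) and D = B*cos(phi)
# (an assumption about the paper's "two intermediate quantities") lets
# the wrapped phase be recovered with a single arctangent, because the
# unknown modulation B cancels in the ratio.
x = np.linspace(0, 4 * np.pi, 512)
phi = 0.8 * x + 0.5 * np.sin(x)      # synthetic phase to recover
B = 1.0 + 0.1 * np.cos(0.3 * x)      # spatially varying fringe modulation

N = B * np.sin(phi)                  # what output branch 1 would predict
D = B * np.cos(phi)                  # what output branch 2 would predict

phi_wrapped = np.arctan2(N, D)       # modulation-independent wrapped phase
# Compare modulo 2*pi: the recovery is exact up to floating point.
err = np.abs(np.angle(np.exp(1j * (phi_wrapped - phi))))
print(err.max())
```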
Graphical Models, vol. 126, Article 101172. Pub Date: 2023-04-01. DOI: 10.1016/j.gmod.2023.101172
An improved semi-synthetic approach for creating visual-inertial odometry datasets
Sam Schofield, Andrew Bainbridge-Smith, Richard Green

Abstract: Capturing outdoor visual-inertial datasets is a challenging yet vital part of developing robust visual-inertial odometry (VIO) algorithms. A significant hurdle is that high-accuracy ground-truth systems (e.g., motion capture) are not practical for outdoor use. One solution is a "semi-synthetic" approach that combines rendered images with real IMU data. This approach can produce sequences containing challenging imagery and accurate ground truth, with less simulated data than a fully synthetic sequence. Existing methods (used by popular tools and datasets) record IMU measurements from a visual-inertial system while measuring its trajectory with motion capture, then render images along that trajectory. This work identifies a major flaw in that approach: using motion capture alone to estimate the pose of the robot/system produces inconsistent visual-inertial data that is unsuitable for evaluating VIO algorithms. We show, however, that it is possible to generate high-quality semi-synthetic data for VIO evaluation. We do so using an open-source full-batch optimisation tool to incorporate both mocap and IMU measurements when estimating the IMU's trajectory. We demonstrate that this improved trajectory yields better consistency between the IMU data and the rendered images, and that the resulting data reduces VIO trajectory error by 79% compared to existing methods. Furthermore, we examine the effect of visual-inertial data inconsistency (caused by trajectory noise) on VIO performance, providing a foundation for future work targeting real-time applications.
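The key idea, fusing mocap and IMU measurements in one full-batch estimate rather than trusting mocap alone, can be illustrated with a toy 1-D linear least-squares problem (this is a simplified stand-in, not the paper's actual estimator, which operates on SE(3) poses):

```python
import numpy as np

# Toy 1-D "full-batch" sketch (assumed, not the paper's estimator):
# estimate a position trajectory by jointly fitting noisy mocap positions
# and noisy IMU accelerations in one linear least-squares problem.
rng = np.random.default_rng(1)
T, dt = 100, 0.01
t = np.arange(T) * dt
p_true = np.sin(2 * np.pi * t)                       # ground-truth positions
a_true = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)   # analytic acceleration

z_mocap = p_true + 0.01 * rng.standard_normal(T)     # noisy mocap measurements
a_imu = a_true + 0.1 * rng.standard_normal(T)        # noisy IMU measurements

# Mocap residuals (p_k - z_k) plus IMU residuals relating the second
# difference of position to measured acceleration.
A_mocap = np.eye(T)
A_accel = np.zeros((T - 2, T))
for k in range(1, T - 1):
    A_accel[k - 1, k - 1 : k + 2] = np.array([1.0, -2.0, 1.0]) / dt**2
w_imu = 0.01                                         # relative IMU weight (chosen ad hoc)
A = np.vstack([A_mocap, w_imu * A_accel])
b = np.concatenate([z_mocap, w_imu * a_imu[1:-1]])
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(p_est - p_true).max())
```

The fused estimate is consistent with the IMU data by construction, which is exactly the property the paper argues mocap-only trajectories lack.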
Graphical Models, vol. 125, Article 101168. Pub Date: 2023-01-01. DOI: 10.1016/j.gmod.2022.101168
Volume reconstruction based on the six-direction cubic box-spline
Hyunjun Kim, Minho Kim

Abstract: We propose a new volume reconstruction technique based on the six-direction cubic box-spline M_6. M_6 is C^1 continuous and possesses an approximation order of three, the same as that of the tri-quadratic B-spline but with a much lower degree. In fact, M_6 has the lowest degree among the symmetric box-splines on Z^3 with at least C^1 continuity. We analyze the polynomial structure induced by the shifts of M_6 and propose an efficient analytic evaluation algorithm for splines and their derivatives (gradient and Hessian) based on the high symmetry of M_6. To verify the evaluation algorithm, we implement a real-time GPU (graphics processing unit) isosurface raycaster that achieves interactive performance (54.5 fps with a 241^3 dataset on a 512^2 framebuffer) on modern graphics hardware. Moreover, we analyze M_6 as a reconstruction filter and show that it is comparable to the tri-cubic B-spline, which possesses a higher approximation order.
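A univariate analogue (illustration only, not the box-spline M_6 itself): the quadratic B-spline is the lowest-degree symmetric spline on Z with C^1 continuity, mirroring the degree-versus-smoothness tradeoff the abstract claims for M_6 on Z^3. The sketch checks partition of unity and C^1 continuity numerically:

```python
import numpy as np

# Univariate illustration of "lowest degree with C^1 continuity".
def b2(x):
    """Centered quadratic B-spline, support [-1.5, 1.5]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 0.5
    m2 = (x >= 0.5) & (x < 1.5)
    out[m1] = 0.75 - x[m1] ** 2
    out[m2] = 0.5 * (1.5 - x[m2]) ** 2
    return out

# Partition of unity: integer shifts sum to 1 (needed for approximation
# order >= 1).
xs = np.linspace(-0.5, 0.5, 101)
pu = sum(b2(xs - k) for k in range(-2, 3))
print(np.allclose(pu, 1.0))

# C^1 continuity at the knot x = 0.5: one-sided slopes agree (both -1).
h = 1e-6
left = float((b2([0.5 - h]) - b2([0.5 - 2 * h])) / h)
right = float((b2([0.5 + 2 * h]) - b2([0.5 + h])) / h)
print(left, right)
```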
Graphical Models, vol. 124, Article 101167. Pub Date: 2022-11-01. DOI: 10.1016/j.gmod.2022.101167
SharpNet: A deep learning method for normal vector estimation of point cloud with sharp features
Zhaochen Zhang, Jianhui Nie, Mengjuan Yu, Xiao Liu

Abstract: The normal vector is a basic attribute of point clouds. Traditional estimation methods are susceptible to noise and outliers. Recently, it has been reported that estimation robustness can be greatly improved by introducing deep neural networks (DNNs), but accurately obtaining the normal vectors of sharp features still requires further study. This paper proposes SharpNet, a DNN framework specializing in the sharp features of CAD-like models, which transforms the estimation problem into feature classification by discretizing the normal-vector space. To eliminate the discretization error, a normal-vector refining method is presented that uses the differences between the initial normal vectors to distinguish neighborhood points belonging to different local surface patches. Finally, the normal vector is estimated accurately from the refined neighborhood points. Experiments show that our algorithm accurately estimates the normal vectors of sharp features of CAD-like models in challenging situations and is superior to other DNN-based methods in terms of efficiency.
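Discretizing the normal-vector space into class labels can be sketched as follows; the spherical-angle bin layout is an assumption for illustration (the paper's actual discretization is not given in the abstract), and the round-trip error shown is exactly the discretization error that SharpNet's refinement step is designed to remove:

```python
import numpy as np

# Hedged sketch: quantize unit normals into (theta, phi) angular bins so
# that normal estimation becomes classification. Bin layout is assumed.
N_THETA, N_PHI = 18, 36            # 10-degree bins

def normal_to_class(n):
    """Map a unit normal to a discrete class index."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))     # polar angle, [0, pi]
    phi = np.arctan2(n[1], n[0]) % (2 * np.pi)      # azimuth, [0, 2*pi)
    i = min(int(theta / np.pi * N_THETA), N_THETA - 1)
    j = min(int(phi / (2 * np.pi) * N_PHI), N_PHI - 1)
    return i * N_PHI + j

def class_to_normal(c):
    """Bin-center normal for a class index."""
    i, j = divmod(c, N_PHI)
    theta = (i + 0.5) * np.pi / N_THETA
    phi = (j + 0.5) * 2 * np.pi / N_PHI
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
n_hat = class_to_normal(normal_to_class(n))
angle_err = np.degrees(np.arccos(np.clip(n @ n_hat, -1.0, 1.0)))
print(angle_err)
```

The round-trip angular error is bounded by the bin size, which is why a classification-only estimate needs refinement for exact normals.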
Graphical Models, vol. 123, Article 101166. Pub Date: 2022-09-01. DOI: 10.1016/j.gmod.2022.101166
A data driven approach to generate realistic 3D tree barks
Aishwarya Venkataramanan, Antoine Richard, Cédric Pradalier

Abstract: 3D models of trees are ubiquitous in video games, movies, and simulators. Generating high-quality 3D models is of paramount importance for enhancing visual content and increasing the diversity of available models. In this work, we propose a methodology for creating realistic 3D models of tree barks from images taken with a consumer-grade hand-held camera. We present a pipeline that uses multi-view 3D reconstruction and generative adversarial networks (GANs) to generate the 3D bark models. We introduce a GAN, referred to as Depth-Reinforced-SPADE, that generates the bark surface and the bark color concurrently. This GAN gives extensive control over what is generated on the bark: moss, lichen, scars, etc. Finally, by testing our pipeline on different Northern-European trees whose barks exhibit radically different color patterns and surfaces, we show that it can generate a broad range of tree species' barks.
Graphical Models, vol. 123, Article 101165. Pub Date: 2022-09-01. DOI: 10.1016/j.gmod.2022.101165
ObjectFusion: Accurate object-level SLAM with neural object priors
Zi-Xin Zou, Shi-Sheng Huang, Tai-Jiang Mu, Yu-Ping Wang

Abstract: Previous object-level Simultaneous Localization and Mapping (SLAM) approaches still fail to create high-quality object-oriented 3D maps efficiently. The main challenges are how to represent object shape effectively and how to apply such a representation to accurate online camera tracking efficiently. In this paper, we present ObjectFusion, a novel object-level SLAM for static scenes that efficiently creates an object-oriented 3D map with high-quality object reconstruction by leveraging neural object priors. We propose a neural object representation with a single encoder-decoder network that effectively expresses object shape across various categories, which benefits high-quality reconstruction of object instances. More importantly, we propose to convert this neural object representation into precise measurements that jointly optimize the object shape, object pose, and camera pose for the final accurate 3D object reconstruction. With extensive evaluations on synthetic and real-world RGB-D datasets, we show that ObjectFusion outperforms previous approaches, with better object reconstruction quality, a much smaller memory footprint, and greater efficiency, especially at the object level.
Graphical Models, vol. 123, Article 101159. Pub Date: 2022-09-01. DOI: 10.1016/j.gmod.2022.101159
Construction of quasi-Bézier surfaces from boundary conditions
Yong-Xia Hao, Ting Li

Abstract: Quasi-Bézier surfaces are commonly used in CAGD/CAD systems. In this paper, we present a novel approach to constructing quasi-Bézier surfaces from boundary information based on a general second-order functional. This functional includes many common functionals as special cases, such as the Dirichlet, biharmonic, and quasi-harmonic functionals. The problem reduces to solving a simple linear system for the inner control points, and the internal control points of the resulting quasi-Bézier surface are obtained as linear combinations of the given boundary control points. Representative examples show the effectiveness of the presented method.
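A much-simplified stand-in for the paper's general second-order functional: impose a discrete harmonic condition on the control net of a bicubic patch, so each interior control point satisfies a small linear system and comes out as a linear combination of boundary control points, the same structural result the abstract describes:

```python
import numpy as np

# Simplified sketch (not the paper's functional): fill the 4 interior
# control points of a 4x4 control net by requiring each to satisfy the
# discrete Laplace condition 4*P_ij = sum of its 4 neighbors. Known
# boundary points move to the right-hand side, leaving a 4x4 solve.
rng = np.random.default_rng(2)
P = rng.random((4, 4, 3))                 # boundary ring is given
interior = [(1, 1), (1, 2), (2, 1), (2, 2)]
idx = {ij: k for k, ij in enumerate(interior)}

A = np.zeros((4, 4))
b = np.zeros((4, 3))
for (i, j), k in idx.items():
    A[k, k] = 4.0
    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if (ni, nj) in idx:
            A[k, idx[(ni, nj)]] -= 1.0    # unknown interior neighbor
        else:
            b[k] += P[ni, nj]             # known boundary neighbor -> rhs
X = np.linalg.solve(A, b)                 # interior points, linear in boundary
for (i, j), k in idx.items():
    P[i, j] = X[k]
print(X.shape)
```

Because A depends only on the net topology, X is literally a fixed linear combination of the boundary control points, mirroring the closed-form structure of the paper's solution.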
Graphical Models, vol. 123, Article 101164. Pub Date: 2022-09-01. DOI: 10.1016/j.gmod.2022.101164
A deep architecture for log-Euclidean Fisher vector end-to-end learning with application to 3D point cloud classification
Amira Chekir

Abstract: Point clouds are a widely used form of 3D data, which can be produced by depth sensors such as RGB-D cameras. The classification of common elements of 3D point clouds remains an open research problem. We propose a new deep network approach for the end-to-end training of log-Euclidean Fisher vectors (LE-FVs), applied to the classification of 3D point clouds. Our method uses a log-Euclidean (LE) metric to extend the concept of Fisher vectors (FVs) to LE-FV encoding. The LE-FV is computed on covariance matrices of local 3D point cloud descriptors representing multiple features. Our architecture is composed of two blocks: the first maps the covariance matrices representing the 3D point cloud descriptors to Euclidean space; the second allows joint, simultaneous learning of the LE-FV Gaussian mixture model (GMM) parameters, LE-FV dimensionality reduction, and multi-label classification. Our LE-FV deep learning model is more accurate than the FV deep learning architecture. Additionally, joint learning of 3D point cloud features in the log-Euclidean space, including the LE-FV GMM parameters, LE-FV dimensionality reduction, and multi-label classification, greatly improves classification accuracy. Our method has also been compared with the most popular 3D point cloud classification methods in the literature and achieves good performance; quantitative evidence is provided through a range of experiments.
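The first block's job, mapping SPD covariance matrices into a flat Euclidean space, is the standard log-Euclidean construction: apply the matrix logarithm, then use ordinary Euclidean operations. A minimal sketch (generic log-Euclidean machinery, not the paper's learned layers):

```python
import numpy as np

# Log-Euclidean mapping sketch: an SPD covariance matrix is sent to the
# Euclidean space of symmetric matrices via the matrix logarithm
# (computed here through an eigendecomposition), after which Euclidean
# tools such as FV/GMM encodings can operate on it.
def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def le_distance(C1, C2):
    """Log-Euclidean distance: Frobenius norm after the log map."""
    return np.linalg.norm(spd_log(C1) - spd_log(C2))

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))          # 100 local descriptors, 5 features
C1 = np.cov(X.T) + 1e-6 * np.eye(5)        # jitter keeps it positive-definite
C2 = np.cov((2.0 * X).T) + 1e-6 * np.eye(5)
print(le_distance(C1, C2))
```

Note the useful invariance: scaling the descriptors by s shifts the log map by log(s^2) times the identity, so the distance between C1 and C2 above is close to log(4) * sqrt(5) regardless of the data.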
Graphical Models, vol. 123, Article 101163. Pub Date: 2022-09-01. DOI: 10.1016/j.gmod.2022.101163
Deep functional maps for simultaneously computing direct and symmetric correspondences of 3D shapes
Hui Wang, Bitao Ma, Junjie Cao, Xiuping Liu, Hui Huang

Abstract: We introduce a novel method for computing isometric correspondences of 3D shapes, designed to address the multiple-solution problem that deep functional maps face when matching shapes with left-to-right reflectional intrinsic symmetries. Unlike existing methods, which find only the direct correspondences using a single Siamese network, our method detects both the direct and the symmetric correspondences among shapes simultaneously. Furthermore, it detects the reflectional intrinsic symmetry of each shape. Key to our method is the use of two Siamese networks that learn consistent direct descriptors and their symmetric counterparts, combined with carefully designed regularized functional maps and a supervised loss. This yields the first deep functional map capable of both producing two high-quality shape correspondences and detecting the left-to-right reflectional intrinsic symmetry of each shape. Extensive experiments demonstrate that the proposed method obtains more accurate results than state-of-the-art methods for shape correspondence and reflectional intrinsic symmetry detection.
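The functional-map estimation at the core of such pipelines can be sketched in a few lines (simplified: synthetic data, no network, no regularizers). Given descriptor coefficients A on the source shape and B on the target, both expressed in truncated Laplacian eigenbases, the map C minimizes the Frobenius norm of C A - B:

```python
import numpy as np

# Minimal functional-map sketch: recover the map C from descriptor
# coefficients via least squares. Here A and C_gt are synthetic; in a
# real pipeline A and B come from learned descriptors projected onto
# Laplace-Beltrami eigenbases of the two shapes.
rng = np.random.default_rng(4)
k, d = 20, 60                                        # basis size, descriptor count
A = rng.standard_normal((k, d))
C_gt = np.linalg.qr(rng.standard_normal((k, k)))[0]  # ground-truth (orthogonal) map
B = C_gt @ A

# C A = B  <=>  A^T C^T = B^T, solved column-wise by least squares.
Ct, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
C = Ct.T
print(np.allclose(C, C_gt))
```

In the ambiguous symmetric case the abstract targets, a second, equally plausible solution is C composed with the shape's reflectional self-map; the paper's two-network design recovers both maps instead of letting the optimizer pick one arbitrarily.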