Graphical Models: Latest Publications

Two-step techniques for accurate selection of small elements in VR environments
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-07-01 DOI: 10.1016/j.gmod.2023.101183
Elena Molina, Pere-Pau Vázquez
{"title":"Two-step techniques for accurate selection of small elements in VR environments","authors":"Elena Molina,&nbsp;Pere-Pau Vázquez","doi":"10.1016/j.gmod.2023.101183","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101183","url":null,"abstract":"<div><p>One of the key interactions in 3D environments is target acquisition, which can be challenging when targets are small or in cluttered scenes. Here, incorrect elements may be selected, leading to frustration and wasted time. The accuracy is further hindered by the physical act of selection itself, typically involving pressing a button. This action reduces stability, increasing the likelihood of erroneous target acquisition. We focused on molecular visualization and on the challenge of selecting atoms, rendered as small spheres. We present two techniques that improve upon previous progressive selection techniques. They facilitate the acquisition of neighbors after an initial selection, providing a more comfortable experience compared to using classical ray-based selection, particularly with occluded elements. We conducted a pilot study followed by two formal user studies. The results indicated that our approaches were highly appreciated by the participants. These techniques could be suitable for other crowded environments as well.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"128 ","pages":"Article 101183"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49875410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
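The paper's code is not reproduced here; the following is a minimal NumPy sketch of the general two-step idea (a coarse ray pick over atoms rendered as spheres, then a neighbor-refinement step among nearby candidates), not the authors' exact techniques. The function names, the fixed atom radius, and the neighbor count k are illustrative assumptions.

```python
import numpy as np

def ray_sphere_hits(origin, direction, centers, radius):
    """Indices of spheres (atoms) intersected by a unit-length ray, nearest first."""
    oc = centers - origin                      # vectors from ray origin to sphere centers
    t = oc @ direction                         # projection of each center onto the ray
    closest = origin + np.outer(t, direction)  # closest point on the ray to each center
    d2 = np.sum((centers - closest) ** 2, axis=1)
    hit = (d2 <= radius ** 2) & (t > 0)        # in front of the viewer and within radius
    return np.flatnonzero(hit)[np.argsort(t[hit])]

def two_step_select(origin, direction, centers, radius, k=6):
    """Step 1: coarse ray pick. Step 2: offer the hit atom plus its k nearest
    neighbours as candidates for a stable, refined confirmation."""
    hits = ray_sphere_hits(origin, direction, centers, radius)
    if hits.size == 0:
        return None, []
    first = hits[0]
    d = np.linalg.norm(centers - centers[first], axis=1)
    neighbours = np.argsort(d)[1:k + 1]        # skip index 0, the atom itself
    return first, neighbours.tolist()
```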
Efficient collision detection using hybrid medial axis transform and BVH for rigid body simulation
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-07-01 DOI: 10.1016/j.gmod.2023.101180
Xingxin Li, Shibo Song, Junfeng Yao, Hanyin Zhang, Rongzhou Zhou, Qingqi Hong
{"title":"Efficient collision detection using hybrid medial axis transform and BVH for rigid body simulation","authors":"Xingxin Li,&nbsp;Shibo Song,&nbsp;Junfeng Yao,&nbsp;Hanyin Zhang,&nbsp;Rongzhou Zhou,&nbsp;Qingqi Hong","doi":"10.1016/j.gmod.2023.101180","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101180","url":null,"abstract":"<div><p>Medial Axis Transform (MAT) has been recently adopted as the acceleration structure of broad-phase collision detection. Compared to traditional BVH-based methods, MAT can provide a high-fidelity volumetric approximation of 3D complex objects, resulting in higher collision culling efficiency. However, due to MAT’s non-hierarchical structure, it may be outperformed in collision-light scenarios because several cullings at the top level of a BVH may take a large number of cullings with MAT. We propose a collision detection method that combines MAT and BVH to address the above problem. Our technique efficiently culls collisions between dynamic and static objects. Experimental results show that our method has higher culling efficiency than pure BVH or MAT methods.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"128 ","pages":"Article 101180"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49875411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
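As a rough illustration of the hybrid idea (a cheap BVH-style bounding-box test first, with MAT-style medial spheres only for pairs that survive), here is a hedged Python sketch. The dictionary layout, the single-level AABB standing in for a full BVH, and the brute-force sphere test are simplifying assumptions, not the paper's data structures.

```python
import numpy as np

def aabb_overlap(a_min, a_max, b_min, b_max):
    """Cheap top-level cull: axis-aligned bounding boxes (the BVH role)."""
    return bool(np.all(a_min <= b_max) and np.all(b_min <= a_max))

def medial_spheres_collide(centers_a, radii_a, centers_b, radii_b):
    """Finer cull using MAT-style medial spheres: does any pair overlap?"""
    d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=2)
    return bool(np.any(d <= radii_a[:, None] + radii_b[None, :]))

def broad_phase(obj_a, obj_b):
    """Hybrid test: box test first; medial spheres only if the boxes touch."""
    if not aabb_overlap(obj_a["aabb_min"], obj_a["aabb_max"],
                        obj_b["aabb_min"], obj_b["aabb_max"]):
        return False
    return medial_spheres_collide(obj_a["centers"], obj_a["radii"],
                                  obj_b["centers"], obj_b["radii"])
```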
A robust workflow for b-rep generation from image masks
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-07-01 DOI: 10.1016/j.gmod.2023.101174
Omar M. Hafez, Mark M. Rashid
{"title":"A robust workflow for b-rep generation from image masks","authors":"Omar M. Hafez,&nbsp;Mark M. Rashid","doi":"10.1016/j.gmod.2023.101174","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101174","url":null,"abstract":"<div><p>A novel approach to generating watertight, manifold boundary representations from noisy binary image masks of MRI or CT scans is presented. The method samples an input segmented image and locally approximates the material boundary. Geometric error metrics between the voxelated boundary and an approximating template surface are minimized, and boundary point/normals are correspondingly generated. Voronoi partitioning is employed to perform surface reconstruction on the resulting oriented point cloud. The method performs competitively against other approaches, both in comparisons of shape and volume error metrics to a canonical image mask, and in qualitative comparisons using noisy image masks from real scans. The framework readily admits enhancements for capturing sharp edges and corners. The approach robustly produces high-quality b-reps that may be inserted into an image-based meshing pipeline for purposes of physics-based simulation.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"128 ","pages":"Article 101174"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49875409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
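One plausible way to obtain the kind of oriented point cloud this workflow feeds into surface reconstruction is sketched below, approximated with a smoothed-gradient normal estimate rather than the paper's template-surface fitting. The smoothing width sigma and the erosion-based boundary extraction are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def boundary_points_and_normals(mask, sigma=1.5):
    """Sample boundary voxels of a binary mask (2D slice or 3D volume) and
    estimate outward normals from the gradient of a smoothed occupancy field."""
    smooth = ndimage.gaussian_filter(mask.astype(float), sigma)
    grads = np.gradient(smooth)                 # one gradient array per axis
    m = mask.astype(bool)
    boundary = m & ~ndimage.binary_erosion(m)   # voxels on the material boundary
    pts = np.argwhere(boundary)
    g = np.stack([gr[boundary] for gr in grads], axis=1)
    # The gradient points toward increasing occupancy (inward), so negate it.
    normals = -g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    return pts, normals    # oriented point cloud for surface reconstruction
```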
SPIDER: A framework for processing, editing and presenting immersive high-resolution spherical indoor scenes
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-07-01 DOI: 10.1016/j.gmod.2023.101182
M. Tukur, G. Pintore, E. Gobbetti, J. Schneider, M. Agus
{"title":"SPIDER: A framework for processing, editing and presenting immersive high-resolution spherical indoor scenes","authors":"M. Tukur ,&nbsp;G. Pintore ,&nbsp;E. Gobbetti ,&nbsp;J. Schneider ,&nbsp;M. Agus","doi":"10.1016/j.gmod.2023.101182","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101182","url":null,"abstract":"<div><p>Today’s Extended Reality (XR) applications that call for specific Diminished Reality (DR) strategies to hide specific classes of objects are increasingly using 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive-based image processing, editing and rendering system named <strong>SPIDER</strong>, that takes a spherical 360° indoor scene as input. The system is composed of a novel integrated deep learning architecture for extracting geometric and semantic information of full and empty rooms, based on gated and dilated convolutions, followed by a super-resolution module for improving the resolution of the color and depth signals. The obtained high resolution representations allow users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: (i) rendering of the scene in various modalities (point cloud, polygonal, wireframe) (ii) refurnishing (transferring portions of rooms) (iii) deferred shading through the usage of precomputed normal maps. These kinds of scene editing and manipulations can be used for assessing the inference from deep learning models and enable several Mixed Reality applications in areas such as furniture retails, interior designs, and real estates. Moreover, it can also be useful in data augmentation, arts, designs, and paintings. We report on the performance improvement of the various processing components on public domain spherical image indoor datasets.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"128 ","pages":"Article 101182"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49875412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
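The SPIDER abstract mentions gated and dilated convolutions; below is a generic PyTorch sketch of a gated (and optionally dilated) convolution block as commonly formulated in the image-inpainting literature, a feature branch modulated by a learned soft gate. It is not SPIDER's actual layer definition, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: features multiplied by a learned sigmoid gate.
    A dilation > 1 widens the receptive field without extra parameters."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2   # keep spatial size for odd kernels
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size,
                                 padding=pad, dilation=dilation)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=pad, dilation=dilation)

    def forward(self, x):
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

# Usage: a dilated gated block over a 4-channel spherical RGB-D input.
block = GatedConv2d(4, 32, dilation=2)
out = block(torch.randn(1, 4, 256, 512))
```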
HMDO: Markerless multi-view hand manipulation capture with deformable objects
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-05-01 DOI: 10.1016/j.gmod.2023.101178
Wei Xie, Zhipeng Yu, Zimeng Zhao, Binghui Zuo, Yangang Wang
{"title":"HMDO : Markerless multi-view hand manipulation capture with deformable objects","authors":"Wei Xie,&nbsp;Zhipeng Yu,&nbsp;Zimeng Zhao,&nbsp;Binghui Zuo,&nbsp;Yangang Wang","doi":"10.1016/j.gmod.2023.101178","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101178","url":null,"abstract":"<div><p>We construct the first markerless deformable interaction dataset recording interactive motions of the hands and deformable objects, called HMDO (Hand Manipulation with Deformable Objects). With our built multi-view capture system, it captures the deformable interactions with multiple perspectives, various object shapes, and diverse interactive forms. Our motivation is the current lack of hand and deformable object interaction datasets, as 3D hand and deformable object reconstruction is challenging. Mainly due to mutual occlusion, the interaction area is difficult to observe, the visual features between the hand and the object are entangled, and the reconstruction of the interaction area deformation is difficult. To tackle this challenge, we propose a method to annotate our captured data. Our key idea is to collaborate with estimated hand features to guide the object global pose estimation, and then optimize the deformation process of the object by analyzing the relationship between the hand and the object. Through comprehensive evaluation, the proposed method can reconstruct interactive motions of hands and deformable objects with high quality. HMDO currently consists of 21600 frames over 12 sequences. In the future, this dataset could boost the research of learning-based reconstruction of deformable interaction scenes.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"127 ","pages":"Article 101178"},"PeriodicalIF":1.7,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49701216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated generation of floorplans with non-rectangular rooms
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-05-01 DOI: 10.1016/j.gmod.2023.101175
Krishnendra Shekhawat, Rohit Lohani, Chirag Dasannacharya, Sumit Bisht, Sujay Rastogi
{"title":"Automated generation of floorplans with non-rectangular rooms","authors":"Krishnendra Shekhawat,&nbsp;Rohit Lohani,&nbsp;Chirag Dasannacharya,&nbsp;Sumit Bisht,&nbsp;Sujay Rastogi","doi":"10.1016/j.gmod.2023.101175","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101175","url":null,"abstract":"<div><p>Existing approaches (in particular graph theoretic) for generating floorplans focus on constructing floorplans for given adjacencies without considering boundary layout or room shapes. With recent developments in designs, it is demanding to consider multiple constraints while generating floorplan layouts. In this paper, we study graph theoretic properties which guarantee the presence of different shaped rooms within the floorplans. Further, we present a graph-algorithms based application, developed in Python, for generating floorplans with given input room shapes. The proposed application is useful in creating floorplans for a given graph with desired room shapes mainly, L, T, F, C, staircase, and plus-shape. Here, the floorplan boundary is always rectangular. In future,we aim to extend this work to generate any (rectilinear) room shape and floor plan boundary for a given graph.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"127 ","pages":"Article 101175"},"PeriodicalIF":1.7,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49702992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
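As a small grounding example of the graph-theoretic setting (not the paper's algorithm): a floorplan that realizes every required adjacency through shared walls needs a planar room-adjacency graph, a necessary condition that can be checked with networkx. The room names below are illustrative.

```python
import networkx as nx

# Room-adjacency graph: nodes are rooms, edges are required adjacencies.
G = nx.Graph()
G.add_edges_from([
    ("living", "kitchen"), ("living", "bedroom"),
    ("living", "bath"), ("kitchen", "bath"),
])

# Planarity is necessary (though not sufficient) for realizing all
# adjacencies in a floorplan; check_planarity also returns an embedding
# that fixes a cyclic order of neighbours around each room.
is_planar, embedding = nx.check_planarity(G)
print("adjacency graph realizable in the plane:", is_planar)
```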
Camera distance helps 3D hand pose estimated from a single RGB image
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-05-01 DOI: 10.1016/j.gmod.2023.101179
Yuan Cui, Moran Li, Yuan Gao, Changxin Gao, Fan Wu, Hao Wen, Jiwei Li, Nong Sang
{"title":"Camera distance helps 3D hand pose estimated from a single RGB image","authors":"Yuan Cui ,&nbsp;Moran Li ,&nbsp;Yuan Gao ,&nbsp;Changxin Gao ,&nbsp;Fan Wu ,&nbsp;Hao Wen ,&nbsp;Jiwei Li ,&nbsp;Nong Sang","doi":"10.1016/j.gmod.2023.101179","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101179","url":null,"abstract":"<div><p>Most existing methods for RGB hand pose estimation use root-relative 3D coordinates for supervision. However, such supervision neglects the distance between the camera and the object (i.e., the hand). The camera distance is especially important under a perspective camera, which controls the depth-dependent scaling of the perspective projection. As a result, the same hand pose, with different camera distances can be projected into different 2D shapes by the same perspective camera. Neglecting such important information results in ambiguities in recovering 3D poses from 2D images. In this article, we propose a camera projection learning module (CPLM) that uses the scale factor contained in the camera distance to associate 3D hand pose with 2D UV coordinates, which facilities to further optimize the accuracy of the estimated hand joints. Specifically, following the previous work, we use a two-stage RGB-to-2D and 2D-to-3D method to estimate 3D hand pose and embed a graph convolutional network in the second stage to leverage the information contained in the complex non-Euclidean structure of 2D hand joints. Experimental results demonstrate that our proposed method surpasses state-of-the-art methods on the benchmark dataset RHD and obtains competitive results on the STB and D+O datasets.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"127 ","pages":"Article 101179"},"PeriodicalIF":1.7,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49702994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
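A short numeric sketch of the ambiguity the abstract describes: under a pinhole camera, the same root-relative pose projects to differently scaled 2D shapes at different camera distances. The intrinsics and joint coordinates below are made-up illustrative values, not from the paper.

```python
import numpy as np

def project(joints_xyz, f=500.0, cx=320.0, cy=240.0):
    """Pinhole perspective projection: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = joints_xyz[:, 0], joints_xyz[:, 1], joints_xyz[:, 2]
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

# The same root-relative hand pose placed at two camera distances:
pose = np.array([[0.00, 0.00, 0.00],    # wrist (root), metres
                 [0.03, 0.01, 0.02],    # a fingertip
                 [-0.02, 0.04, -0.01]]) # another joint
near = project(pose + np.array([0.0, 0.0, 0.4]))   # 0.4 m from the camera
far  = project(pose + np.array([0.0, 0.0, 0.8]))   # 0.8 m from the camera

# The projected shapes differ by more than a uniform shift: joint offsets
# shrink roughly with 1/Z, so ignoring the camera distance makes the
# 2D-to-3D lifting ambiguous.
print(near - near[0])   # root-relative 2D offsets, near case
print(far - far[0])     # noticeably smaller offsets, far case
```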
High-fidelity point cloud completion with low-resolution recovery and noise-aware upsampling
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-04-01 DOI: 10.1016/j.gmod.2023.101173
Ren-Wu Li, Bo Wang, Lin Gao, Ling-Xiao Zhang, Chun-Peng Li
{"title":"High-fidelity point cloud completion with low-resolution recovery and noise-aware upsampling","authors":"Ren-Wu Li ,&nbsp;Bo Wang ,&nbsp;Lin Gao ,&nbsp;Ling-Xiao Zhang ,&nbsp;Chun-Peng Li","doi":"10.1016/j.gmod.2023.101173","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101173","url":null,"abstract":"<div><p>Completing an unordered partial point cloud is a challenging task. Existing approaches that rely on decoding a latent feature to recover the complete shape, often lead to the completed point cloud being over-smoothing, losing details, and noisy. Instead of decoding a whole shape, we propose to decode and refine a low-resolution (low-res) point cloud first, and then perform a patch-wise noise-aware upsampling rather than interpolating the whole sparse point cloud at once, which tends to lose details. Regarding the possibility of lacking details of the initially decoded low-res point cloud, we propose an iterative refinement to recover the geometric details and a symmetrization process to preserve the trustworthy information from the input partial point cloud. After obtaining a sparse and complete point cloud, we propose a patch-wise upsampling strategy. Patch-based upsampling allows to recover fine details better rather than decoding a whole shape. The patch extraction approach is to generate training patch pairs between the sparse and ground-truth point clouds with an outlier removal step to suppress the noisy points from the sparse point cloud. Together with the low-res recovery, our whole pipeline can achieve high-fidelity point cloud completion. Comprehensive evaluations are provided to demonstrate the effectiveness of the proposed method and its components.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"126 ","pages":"Article 101173"},"PeriodicalIF":1.7,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49882826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
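To make the patch-wise idea concrete, here is a hedged NumPy sketch of extracting k-NN patches around farthest-point-sampled seeds, roughly the kind of input a patch-wise upsampling stage operates on. Seed count and patch size are assumptions, and the paper's actual extraction (with training-pair generation and outlier removal) is more involved.

```python
import numpy as np

def farthest_point_sample(points, n_seeds):
    """Pick well-spread seed indices for patch centres."""
    idx = [np.random.randint(len(points))]
    d = np.linalg.norm(points - points[idx[0]], axis=1)
    for _ in range(n_seeds - 1):
        idx.append(int(np.argmax(d)))            # farthest point so far
        d = np.minimum(d, np.linalg.norm(points - points[idx[-1]], axis=1))
    return np.array(idx)

def extract_patches(sparse_pts, k=64, n_seeds=32):
    """Split a sparse completed cloud into k-NN patches; each patch would be
    upsampled (and denoised) independently before merging the results."""
    seeds = farthest_point_sample(sparse_pts, n_seeds)
    patches = []
    for s in seeds:
        d = np.linalg.norm(sparse_pts - sparse_pts[s], axis=1)
        patches.append(sparse_pts[np.argsort(d)[:k]])
    return patches
```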
Procedural generation of semantically plausible small-scale towns
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-04-01 DOI: 10.1016/j.gmod.2023.101170
Abdullah Bulbul
{"title":"Procedural generation of semantically plausible small-scale towns","authors":"Abdullah Bulbul","doi":"10.1016/j.gmod.2023.101170","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101170","url":null,"abstract":"<div><p>Procedural techniques have been successfully utilized for generating various kinds of 3D models. In this study, we propose a procedural method to build 3D towns that can be manipulated by a set of high-level semantic principles namely security, privacy, sustainability, social-life, economy, and beauty. Based on the user defined weights of these principles, our method generates a 3D settlement to accommodate a desired population over a given terrain. Our approach firstly determines where to establish the settlement over the large terrain which is followed by iteratively constructing the town. In both steps, the principles guide the decisions and our method generates natural looking small-scale 3D residential regions similar to the cities of pre-industrial era. We demonstrate the effectiveness of the proposed approach to build semantically plausible town models by presenting sample results over real world based terrains.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"126 ","pages":"Article 101170"},"PeriodicalIF":1.7,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49882823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
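A toy sketch of the weighted-principle site selection the abstract describes: combine per-principle suitability maps with user-defined weights and pick the best cell. The maps here are random stand-ins, not the paper's scoring functions, and the weight values are arbitrary.

```python
import numpy as np

# Hypothetical per-cell terrain suitability for each principle, in [0, 1].
principles = ["security", "privacy", "sustainability",
              "social_life", "economy", "beauty"]
scores = {p: np.random.rand(64, 64) for p in principles}   # stand-in data

# User-defined weights steer the generator, as in the paper's high-level idea.
weights = {"security": 0.3, "privacy": 0.1, "sustainability": 0.2,
           "social_life": 0.1, "economy": 0.2, "beauty": 0.1}

# Weighted combination gives one suitability map; the settlement starts at
# its maximum, and an iterative construction step would grow the town from there.
suitability = sum(weights[p] * scores[p] for p in principles)
site = np.unravel_index(np.argmax(suitability), suitability.shape)
print("best settlement cell:", site)
```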
Learning-based 3D imaging from single structured-light image
IF 1.7, Q4, Computer Science
Graphical Models Pub Date: 2023-04-01 DOI: 10.1016/j.gmod.2023.101171
Andrew-Hieu Nguyen, Olivia Rees, Zhaoyang Wang
{"title":"Learning-based 3D imaging from single structured-light image","authors":"Andrew-Hieu Nguyen ,&nbsp;Olivia Rees ,&nbsp;Zhaoyang Wang","doi":"10.1016/j.gmod.2023.101171","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101171","url":null,"abstract":"<div><p>Integrating structured-light technique with deep learning for single-shot 3D imaging has recently gained enormous attention due to its unprecedented robustness. This paper presents an innovative technique of supervised learning-based 3D imaging from a single grayscale structured-light image. The proposed approach uses a single-input, double-output convolutional neural network to transform a regular fringe-pattern image into two intermediate quantities which facilitate the subsequent 3D image reconstruction with high accuracy. A few experiments have been conducted to demonstrate the validity and robustness of the proposed technique.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"126 ","pages":"Article 101171"},"PeriodicalIF":1.7,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49882824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
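A minimal PyTorch sketch of a single-input, double-output CNN in the spirit of this abstract: one grayscale fringe image in, two intermediate maps out. The layer sizes, names, and the interpretation of the two outputs are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FringeNet(nn.Module):
    """Single-input, double-output CNN: a shared encoder over a grayscale
    fringe image feeding two heads that predict the intermediate quantities
    (e.g. two phase-related maps) used for 3D reconstruction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head_a = nn.Conv2d(64, 1, 3, padding=1)   # first intermediate map
        self.head_b = nn.Conv2d(64, 1, 3, padding=1)   # second intermediate map

    def forward(self, fringe):                          # fringe: (B, 1, H, W)
        feat = self.encoder(fringe)
        return self.head_a(feat), self.head_b(feat)

# Usage with a dummy batch of two 128x128 fringe images.
net = FringeNet()
map_a, map_b = net(torch.randn(2, 1, 128, 128))
```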