Graphics and Visual Computing: Latest Publications

Editorial Note
Graphics and Visual Computing Pub Date : 2022-12-01 DOI: 10.1016/j.gvc.2022.200062
Joaquim Jorge (Editor-in-Chief)
Citations: 0
Efficient structuring of the latent space for controllable data reconstruction and compression
Graphics and Visual Computing Pub Date : 2022-12-01 DOI: 10.1016/j.gvc.2022.200059
Elena Trunz , Michael Weinmann , Sebastian Merzbach , Reinhard Klein
Abstract: Explainable neural models have gained considerable attention in recent years. However, conventional encoder–decoder models do not capture the importance of the individual latent variables and rely on a heuristic a-priori choice of the latent-space dimensionality, or on selecting it across multiple trainings. In this paper, we focus on efficiently structuring the latent space of encoder–decoder approaches for explainable data reconstruction and compression. For this purpose, we leverage the concept of Shapley values to determine each latent variable's contribution to the model's output and rank the variables in order of decreasing importance. Truncating the latent dimensions to those that contribute most to the overall reconstruction then allows trading model compactness (the dimensionality of the latent space) against representational power (reconstruction quality). In contrast to other recent autoencoder variants that incorporate a PCA-based ordering of the latent variables, our approach requires neither time-consuming training processes nor additional weights, which makes it particularly valuable for compact representation and compression. We validate our approach on the examples of representing and compressing images as well as high-dimensional reflectance data.
Citations: 1
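The truncation strategy this abstract describes can be illustrated with a toy sketch. Everything here is hypothetical scaffolding: the decoder is a fixed linear map rather than a trained network, and the per-dimension scores use a simple mean-ablation approximation instead of the exact Shapley values the paper computes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": a fixed linear map from an 8-dim latent space to 32-dim data,
# with column scales chosen so that the first four latent dims matter most.
scales = np.array([8.0, 4.0, 2.0, 1.0, 0.1, 0.05, 0.02, 0.01])
D = rng.normal(size=(32, 8)) * scales

def decode(z):
    return z @ D.T

Z = rng.normal(size=(100, 8))   # latent codes for 100 samples
X = decode(Z)                   # reference reconstructions

def ablation_scores(Z, X):
    """Reconstruction-error increase when each latent dim is replaced by its mean
    (a crude stand-in for the per-variable Shapley contributions)."""
    scores = []
    for j in range(Z.shape[1]):
        Zj = Z.copy()
        Zj[:, j] = Z[:, j].mean()
        scores.append(np.mean((decode(Zj) - X) ** 2))
    return np.array(scores)

scores = ablation_scores(Z, X)
order = np.argsort(-scores)     # most important latent dimension first

# Truncate: keep only the top-k dims, fixing the rest to their mean values.
k = 4
keep = np.isin(np.arange(8), order[:k])
Z_trunc = np.where(keep, Z, Z.mean(axis=0))
err = np.mean((decode(Z_trunc) - X) ** 2)
```

With this setup the four dominant dimensions survive the truncation and the residual error stays far below the signal energy, which is exactly the compactness-versus-quality trade-off the abstract describes.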
Geometric models for plant leaf area estimation from 3D point clouds: A comparative study
Graphics and Visual Computing Pub Date : 2022-12-01 DOI: 10.1016/j.gvc.2022.200057
Mélinda Boukhana , Joris Ravaglia , Franck Hétroy-Wheeler , Benoît De Solan
Abstract: Measuring leaf area is a critical task in plant biology. Meshing techniques, parametric surface modelling and implicit surface modelling all allow estimating plant leaf area from acquired 3D point clouds, but there is currently no consensus on the best approach because comparative evaluations are scarce. In this paper, we provide evidence about the performance of each approach through a comparative study of four meshing, three parametric-modelling and one implicit-modelling methods. All selected methods are freely available and easy to use. We also performed a parameter sensitivity analysis for each method in order to optimise its results and fully automate its use. We identified nine criteria affecting the robustness of the studied methods, related either to the leaf shape (length/width ratio, curviness, concavity) or to the acquisition process (e.g. sampling density, noise, misalignment, holes). We used synthetic data to quantitatively evaluate the robustness of the selected approaches with respect to each criterion, and additionally evaluated them on five tree and crop datasets acquired with laser scanners or photogrammetry. This study highlights the benefits and drawbacks of each method and its appropriateness in a given scenario. Our main conclusion is that fitting a Bézier surface is the most robust and accurate approach to estimating plant leaf area in most cases.
Citations: 4
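As a rough illustration of the Bézier-fitting approach the study favours, the sketch below fits a tensor-product Bézier patch to a point cloud by linear least squares and integrates its area over a triangulated parameter grid. The patch degree, the assumption of known (u, v) parameters per point, and the flat synthetic "leaf" are illustrative simplifications, not the paper's actual pipeline.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t), evaluated elementwise."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_design(u, v, n=2):
    """Design matrix of tensor-product Bernstein basis values at (u, v)."""
    cols = [bernstein(n, i, u) * bernstein(n, j, v)
            for i in range(n + 1) for j in range(n + 1)]
    return np.stack(cols, axis=-1)

def fit_patch(uv, pts, n=2):
    """Least-squares control points of a degree-(n,n) patch through pts."""
    A = bezier_design(uv[:, 0], uv[:, 1], n)
    ctrl, *_ = np.linalg.lstsq(A, pts, rcond=None)  # (n+1)^2 x 3
    return ctrl

def patch_area(ctrl, n=2, res=64):
    """Surface area via a dense triangulation of the parameter grid."""
    u, v = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    S = (bezier_design(u.ravel(), v.ravel(), n) @ ctrl).reshape(res, res, 3)
    # Split every grid cell into two triangles and sum their areas.
    a, b, c, d = S[:-1, :-1], S[:-1, 1:], S[1:, :-1], S[1:, 1:]
    t1 = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=-1)
    t2 = 0.5 * np.linalg.norm(np.cross(b - d, c - d), axis=-1)
    return t1.sum() + t2.sum()

# Sanity check on a flat unit-square "leaf": the recovered area should be ~1.
rng = np.random.default_rng(1)
uv = rng.uniform(size=(400, 2))
pts = np.column_stack([uv, np.zeros(400)])  # points on the z = 0 plane
area = patch_area(fit_patch(uv, pts))
```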
Locally-guided neural denoising
Graphics and Visual Computing Pub Date : 2022-12-01 DOI: 10.1016/j.gvc.2022.200058
Lukas Bode , Sebastian Merzbach , Julian Kaltheuner , Michael Weinmann , Reinhard Klein
Abstract: Noise-like artifacts are common in measured or fitted data across various domains, e.g. photography, geometric reconstructions in terms of point clouds or meshes, and reflectance measurements together with the fitting of commonly used reflectance models to them. State-of-the-art denoising approaches focus on specific noise characteristics usually observed in photography, and do not perform well when data is corrupted with location-dependent noise. A typical example is the acquisition of heterogeneous materials, whose components behave differently during acquisition or reconstruction and therefore exhibit different noise levels. We address this problem by first automatically determining location-dependent noise levels in the input data, and demonstrate that state-of-the-art denoising algorithms can usually benefit from this guidance with only minor modifications to their loss function or regularization mechanisms. To generate the guidance, we analyse patchwise variances and derive per-pixel importance values from them. We demonstrate the benefits of such locally-guided denoising on the examples of the Deep Image Prior method and the Self2Self method.
Citations: 1
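The guidance signal this abstract describes might be sketched roughly as follows: patchwise variances estimated with an integral image, inverted into per-pixel importance weights that a denoiser's loss could multiply. The window size and the inverse-variance weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def box_mean(img, k):
    """Mean over k x k windows (k odd) via an integral image, edge-padded."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero guard row/column for the sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2

def local_importance(img, k=7):
    """Per-pixel importance: low weight where the local variance is high."""
    mu = box_mean(img, k)
    var = np.maximum(box_mean(img * img, k) - mu * mu, 0.0)
    w = 1.0 / (var + 1e-6)
    return w / w.mean()  # normalize so the overall loss scale is unchanged

# Demo: a smooth gradient whose right half is corrupted by strong noise.
rng = np.random.default_rng(2)
img = np.tile(np.linspace(0, 1, 64), (64, 1))
img[:, 32:] += rng.normal(scale=0.3, size=(64, 32))
w = local_importance(img)
```

A denoiser's pixelwise loss would then be multiplied by `w`, so that the clean regions drive the optimization while the heavily corrupted ones contribute less.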
Multiresolution surface blending for detail reconstruction
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200043
Hono Salval, Andy Keane, David Toal
Abstract: While performing mechanical reverse engineering, 3D reconstruction processes often have difficulty capturing small, highly localized surface information. This can be the case when a physical part is 3D-scanned for life-cycle management or robust-design purposes, with interest in corroded areas or scratched coatings. The limitation is partly due to the lack of automated frameworks for handling localized surface information in the reverse-engineering pipeline. We have developed a tool for blending surface patches with arbitrary irregularities into a base body that can resemble a CAD design. The resulting routine preserves the shape of the transferred features and relies on the user only to set some positional references and parameter adjustments for partitioning the surface features.
Citations: 1
Precomputed fast rejection ray-triangle intersection
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200047
Thomas Alois Pichler , Andrej Ferko , Michal Ferko , Peter Kán , Hannes Kaufmann
Abstract: We propose a ray–triangle intersection algorithm with fast-rejection strategies. We intersect the ray with the triangle plane, then transform the intersection problem into 2D by applying a transformation matrix to the ray–plane intersection point. We study two approaches to this 2D transformation. The first uses a matrix that maps the triangle to a unit triangle, after which simple 2D tests are performed. The second maps the triangle to a 2D triangle while preserving similarity, which allows us to prune (i.e. clip away) areas surrounding the triangle when determining whether the transformed intersection point lies within it; we discuss several optimizations for this pruning approach. We implemented both approaches in the CPU-based ray-tracing framework PBRT, version 3, and compared their running times against PBRT's default intersection algorithm and Baldwin and Weber's algorithm. The results show that our algorithms are faster than the default algorithm and comparable to, or slightly slower than, Baldwin and Weber's algorithm; however, the pruning approach produces watertight results and may be optimized further. Moreover, additional CPU/GPU experiments outside of PBRT show promising speedups over the standard Möller–Trumbore algorithm in areas such as ray casting and collision detection.
Citations: 0
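For context, a minimal version of the Möller–Trumbore test that the paper benchmarks against. This is only the well-known baseline, not the paper's precomputed-transform method, which instead stores a per-triangle matrix to move the test into 2D.

```python
import numpy as np

def moller_trumbore(orig, d, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit, or None if the ray misses.
    Classic barycentric test: solve orig + t*d = v0 + u*e1 + v*e2."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:        # early rejection on first barycentric coord
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:    # rejection on second barycentric coord
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

# Unit right triangle in the z = 0 plane, rays shot straight down from z = 1.
v0, v1, v2 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
down = np.array([0.0, 0.0, -1.0])
hit = moller_trumbore(np.array([0.25, 0.25, 1.0]), down, v0, v1, v2)
miss = moller_trumbore(np.array([2.0, 2.0, 1.0]), down, v0, v1, v2)
```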
Inferring topological operations on generalized maps: Application to subdivision schemes
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200049
Romain Pascual , Hakim Belhaouari , Agnès Arnould , Pascale Le Gall
Abstract: Designing correct topological modeling operations is known to be a time-consuming and challenging task, yet these operations are intuitively understood from simple drawings of a representative object before and after modification. We propose to infer topological modeling operations from such an application example. Our algorithm exploits a compact and expressive graph-based language in which topological modeling operations on generalized maps are represented as rules from the theory of graph transformations. Most of the time, operations are generic up to a topological cell (vertex, face, volume), so the rules are parameterized with orbit types indicating which kind of cell is involved. Our main idea is to infer a generic rule by folding a graph comprising a copy of the object before modification, a copy after modification, and information about the modification, folding it according to the cell parametrization of the operation under design. We illustrate our approach on several subdivision schemes, whose symmetry simplifies the operation inference.
Citations: 0
GRSI Best Paper Award
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2021.200035
Mashhuda Glencross, Daniele Panozzo, Joaquim Jorge
Citations: 0
Editorial Note
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200052
Joaquim Jorge (Editor-in-Chief)
Citations: 0
A robotic system for images on carpet surface
Graphics and Visual Computing Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200045
Takumi Yamamoto, Yuta Sugiura
Abstract: In this study, we propose a system that uses a carpet as a non-luminescent display, exploiting the phenomenon that different shades of traces can be created by changing the direction of the carpet fibers with a motor. The advantages of this method are that it works with existing cloth, requires no ink for drawing, and can be rewritten many times. The contributions of this work are (1) the ability to display images in high resolution and grayscale, and (2) the ability to automatically draw large pictures.
Citations: 1