Visual Informatics: Latest Publications

Example-based large-scale marine scene authoring using Wang Cubes
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.004
Authors: Siyuan Zhu, Xinjie Wang, Ming Wang, Yucheng Wang, Zhiqiang Wei, Bo Yin, Xiaogang Jin
Abstract: Virtual marine scene authoring plays an important role in generating large-scale 3D scenes and has a wide range of applications in computer animation and simulation. Existing marine scene authoring methods either produce periodic patterns or generate unnatural group distributions when tiling marine entities such as schools of fish and groups of reefs. To this end, we propose a new large-scale marine scene authoring method based on real examples in order to create more natural and realistic results. Our method first extracts the distribution of multiple marine entities from real images to create Octahedral Blocks, and then uses a modified Wang Cubes algorithm to quickly tile the 3D marine scene. As a result, our method is able to generate aperiodic tiling results with diverse distributions of entity density and orientation. We validate the effectiveness of our method through extensive comparative experiments. User study results show that our method generates satisfactory results that accord with human preferences.
Volume 6, Issue 3, Pages 23-34 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000390/pdfft?md5=bf50cf17a37fe76c7b7f34f471917347&pid=1-s2.0-S2468502X22000390-main.pdf
Citations: 0
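The edge-matching idea behind Wang tiling, which the paper lifts to 3D "Wang Cubes", can be shown with a toy 2D sketch. The tile set and matching rule below are illustrative assumptions, not the authors' Octahedral Blocks construction; the point is only that random choice among edge-compatible tiles yields aperiodic layouts.

```python
import random

# Toy 2D Wang-tile sketch. Each tile's edges are colored (N, E, S, W);
# a tile is legal if its W edge matches the E edge of the left neighbor
# and its N edge matches the S edge of the tile above. This tile set is
# an assumption chosen so that a legal tile always exists.
TILES = [
    (0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1),
    (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 1), (1, 1, 0, 0),
]

def tile_grid(rows, cols, rng=random):
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            candidates = [
                t for t in TILES
                if (c == 0 or t[3] == grid[r][c - 1][1])   # W matches left's E
                and (r == 0 or t[0] == grid[r - 1][c][2])  # N matches upper's S
            ]
            grid[r][c] = rng.choice(candidates)  # random pick -> aperiodic result
    return grid

if __name__ == "__main__":
    for row in tile_grid(4, 8):
        print(row)
```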
FORSETI: A visual analysis environment enabling provenance awareness for the accountability of e-autopsy reports
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.005
Authors: Baoqing Wang, Noboru Adachi, Issei Fujishiro
Abstract: Autopsy reports play a pivotal role in forensic science. Medical examiners (MEs) and diagnostic radiologists (DRs) cross-reference autopsy results in the form of autopsy reports, while judicial personnel derive legal documents from final autopsy reports. In our prior study, we presented a visual analysis system called the forensic autopsy system for e-court instruments (FORSETI) with an extended legal medicine markup language (x-LMML) that enables MEs and DRs to author and review e-autopsy reports. In this paper, we present our extended work to incorporate provenance infrastructure with authority management into FORSETI for forensic data accountability, comprising two features. The first is a novel provenance management mechanism that combines the forensic autopsy workflow management system (FAWfMS) with a version control system called lmmlgit for x-LMML files. This mechanism allows provenance data on e-autopsy reports and their documented autopsy processes to be individually parsed. The second is provenance-supported immersive analytics, which ensures that the DRs' and MEs' autopsy provenances can be viewed, listed, and analyzed so that a principal ME can author their own report through accountable autopsy referencing in an augmented reality setting. A fictitious case with a synthetic wounded body is used to demonstrate the effectiveness of the provenance-aware FORSETI system in terms of data accountability through the experience of experts in legal medicine.
Volume 6, Issue 3, Pages 69-80 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000407/pdfft?md5=26d0d079fbe11ae06f644d2b72b8895e&pid=1-s2.0-S2468502X22000407-main.pdf
Citations: 0
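The version-control-plus-provenance mechanism can be illustrated with a small sketch: a hash-chained, append-only log of report revisions. The function name and record fields below are hypothetical assumptions for illustration, not FORSETI's or lmmlgit's actual data model.

```python
import hashlib
import json
import time

# Hypothetical sketch: an append-only, hash-chained revision log, loosely
# in the spirit of tracking x-LMML report edits for accountability.
def append_revision(log, author, xml_text):
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "author": author,                # who edited the e-autopsy report
        "time": time.time(),             # when the revision was made
        "content_sha1": hashlib.sha1(xml_text.encode("utf-8")).hexdigest(),
        "parent": prev_hash,             # chains revisions into a provenance trail
    }
    entry["hash"] = hashlib.sha1(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_revision(log, "ME-01", "<lmml>initial findings</lmml>")
append_revision(log, "DR-02", "<lmml>radiology annotations added</lmml>")
```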
P-Lite: A study of parallel coordinate plot literacy
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.002
Authors: Elif E. Firat, Alena Denisova, Max L. Wilson, Robert S. Laramee
Abstract: Visualization literacy, the ability to interpret and comprehend visual designs, is recognized as an essential skill by the visualization community. We identify and investigate barriers to comprehending parallel coordinates plots (PCPs), one of the advanced graphical representations for displaying multivariate and high-dimensional data. We develop a parallel coordinates literacy test with diverse images generated using popular PCP software tools. The test improves PCP literacy and evaluates the user's literacy skills. We introduce an interactive educational tool that assists the teaching and learning of parallel coordinates by offering a more active learning experience. Using this pedagogical tool, we aim to advance novice users' parallel coordinates literacy skills. Based on the hypothesis that a tool that interactively links traditional Cartesian coordinates with PCPs will enhance PCP literacy further than static slides, we compare the learning experience using traditional slides with our novel software tool and investigate the efficiency of the educational software in an online, crowdsourced user study. User-study results show that our pedagogical tool positively impacts a user's PCP comprehension.
Volume 6, Issue 3, Pages 81-99 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000377/pdfft?md5=260f2284f0a28077d7ff152561ef3e4a&pid=1-s2.0-S2468502X22000377-main.pdf
Citations: 3
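For readers unfamiliar with the chart type being taught, a minimal PCP can be drawn with pandas; each data row becomes one polyline crossing a set of parallel axes. The toy data below is an assumption for illustration (the study's stimuli were generated with several PCP tools, not this snippet).

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Toy multivariate data: each row is one observation, "species" is the class.
df = pd.DataFrame({
    "sepal_len": [5.1, 7.0, 6.3, 4.9, 6.4],
    "sepal_wid": [3.5, 3.2, 3.3, 3.0, 3.2],
    "petal_len": [1.4, 4.7, 6.0, 1.4, 4.5],
    "species":   ["setosa", "versicolor", "virginica", "setosa", "versicolor"],
})

# One polyline per row; lines are colored by class so clusters of similar
# rows become visible as bundles across the parallel axes.
parallel_coordinates(df, "species", colormap="tab10")
plt.show()
```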
New guidance for using t-SNE: Alternative defaults, hyperparameter selection automation, and comparative evaluation
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.003
Authors: Robert Gove, Lucas Cadalzo, Nicholas Leiby, Jedediah M. Singer, Alexander Zaitzeff
Abstract: We present new guidelines for choosing hyperparameters for t-SNE and an evaluation comparing these guidelines to current ones. These guidelines include a proposed empirically optimum guideline derived from a t-SNE hyperparameter grid search over a large collection of data sets. We also introduce a new method to featurize data sets using graph-based metrics called scagnostics; we use these features to train a neural network that predicts optimal t-SNE hyperparameters for the respective data set. This neural network has the potential to simplify the use of t-SNE by removing guesswork about which hyperparameters will produce the best embedding. We evaluate and compare our neural-network-derived and empirically optimum hyperparameters to several other t-SNE hyperparameter guidelines from the literature on 68 data sets. The hyperparameters predicted by our neural network yield embeddings with similar accuracy to the best current t-SNE guidelines. Using our empirically optimum hyperparameters is simpler than following previously published guidelines but yields more accurate embeddings, in some cases by a statistically significant margin. We find that the useful ranges for t-SNE hyperparameters are narrower and include smaller values than previously reported in the literature. Importantly, we also quantify the potential for future improvements in this area: using data from a grid search of t-SNE hyperparameters, we find that an optimal selection method could improve embedding accuracy by up to two percentage points over the methods examined in this paper.
Volume 6, Issue 2, Pages 87-97 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000201/pdfft?md5=d092541f65d22cc8dfb4e8ef46a1293b&pid=1-s2.0-S2468502X22000201-main.pdf
Citations: 13
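As a concrete starting point, this is how explicit hyperparameter choices look with scikit-learn's t-SNE. The values shown are conventional illustrations, not the paper's empirically optimum settings, which it derives from a grid search and a scagnostics-based neural network predictor.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X = load_digits().data  # 1797 samples, 64 features

# The paper's core message: perplexity and learning rate should be chosen
# per data set rather than left at defaults, and the useful ranges are
# narrower (and smaller) than commonly assumed. Values below are examples.
emb = TSNE(
    n_components=2,
    perplexity=30.0,      # candidate value to tune per data set
    learning_rate=200.0,  # likewise a tunable, data-set-dependent choice
    init="pca",
    random_state=0,
).fit_transform(X)

print(emb.shape)  # (1797, 2)
```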
Time analysis of regional structure of large-scale particle using an interactive visual system
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.004
Authors: Yihan Zhang, Guan Li, Guihua Shan
Abstract: N-body numerical simulation is an important tool in astronomy. Scientists use this method to simulate the formation of the structure of the universe, which is key to understanding how the universe formed. As this research develops further, astronomers require a more precise method that enables expansion of the simulation scale and an increase in the number of simulated particles. However, retaining all temporal information is infeasible due to limited computer storage. Under these circumstances, astronomers save temporal data only at intervals, producing rough, hard-to-follow animations of the evolution of the universe. In this study, we propose a deep-learning-assisted interpolation application to analyze the structure formation of the universe. First, we evaluate through an experiment the feasibility of applying interpolation to generate an animation of the universe's evolution. Then, we demonstrate the superiority of the deep convolutional neural network (DCNN) method by comparing its quality and performance against the actual results and against the results generated by other popular interpolation algorithms. In addition, we present PRSVis, an interactive visual analytics system that supports global volume rendering, local area magnification, and temporal animation generation. PRSVis allows users to visualize a global volume rendering, interactively select one cubic region from the rendering, and intelligently produce a time-series animation of the high-resolution region using the deep-learning-assisted method. In summary, we propose an interactive visual system, integrated with the experimentally validated DCNN interpolation method, to help scientists easily understand the evolution of the particle region structure.
Volume 6, Issue 2, Pages 14-24 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000171/pdfft?md5=d3e25d7a79a6452e30ca6c3511bd690a&pid=1-s2.0-S2468502X22000171-main.pdf
Citations: 0
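A naive baseline helps frame what the DCNN replaces: linearly interpolating the field between two stored snapshots. The numpy sketch below (toy data, assumed array shapes) is only that baseline; the paper's method instead synthesizes the intermediate time steps of a selected cubic region with a trained network.

```python
import numpy as np

def lerp_volumes(v0, v1, n_intermediate):
    """Yield n_intermediate linearly interpolated volumes between v0 and v1."""
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)
        yield (1.0 - t) * v0 + t * v1

# Two saved simulation snapshots (placeholder random fields); real data
# would be particle densities gridded over a selected cubic region.
v0 = np.random.rand(32, 32, 32)  # field at saved time step k
v1 = np.random.rand(32, 32, 32)  # field at saved time step k+1

frames = list(lerp_volumes(v0, v1, 3))  # 3 synthetic in-between frames
```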
VCNet: A generative model for volume completion
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.004
Authors: Jun Han, Chaoli Wang
Abstract: We present VCNet, a new deep learning approach for volume completion that synthesizes missing subvolumes. Our solution leverages a generative adversarial network (GAN) that learns to complete volumes using adversarial and volumetric losses. The core design of VCNet features a dilated residual block and long-term connection. During training, VCNet first randomly masks basic subvolumes (e.g., cuboids, slices) from complete volumes and learns to recover them. Moreover, we design a two-stage algorithm for stabilizing and accelerating network optimization. Once trained, VCNet takes an incomplete volume as input and automatically identifies and fills in the missing subvolumes with high quality. We quantitatively and qualitatively test VCNet on volumetric data sets of various characteristics to demonstrate its effectiveness. We also compare VCNet against a diffusion-based solution and two GAN-based solutions.
Volume 6, Issue 2, Pages 62-73 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000213/pdfft?md5=2cafa6586ad2e597b6694ededebdd295&pid=1-s2.0-S2468502X22000213-main.pdf
Citations: 4
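The training-data step described in the abstract, randomly masking basic subvolumes from complete volumes, can be sketched as follows. The zero-fill and boolean-mask conventions are assumptions for illustration, not the authors' code.

```python
import numpy as np

def mask_random_cuboid(volume, max_size=16, rng=np.random.default_rng()):
    """Cut a random cuboid out of a complete volume so a network can learn
    to recover it; returns the masked volume and the missing-region mask."""
    masked = volume.copy()
    mask = np.zeros(volume.shape, dtype=bool)
    size = rng.integers(4, max_size + 1, size=3)                 # cuboid extent
    corner = [rng.integers(0, s - d + 1) for s, d in zip(volume.shape, size)]
    region = tuple(slice(c, c + d) for c, d in zip(corner, size))
    masked[region] = 0.0   # zero out the "missing" subvolume
    mask[region] = True    # record where data is missing
    return masked, mask

vol = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder complete volume
masked_vol, missing = mask_random_cuboid(vol)
```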
VETA: Visual eye-tracking analytics for the exploration of gaze patterns and behaviours
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.02.004
Authors: Sarah Goodwin, Arnaud Prouzeau, Ryan Whitelock-Jones, Christophe Hurter, Lee Lawrence, Umair Afzal, Tim Dwyer
Abstract: Eye tracking is growing in popularity across multiple application areas, yet analysing and exploring the large volume of complex data remains difficult for most users. We present a comprehensive eye-tracking visual analytics system that enables efficient exploration and presentation of eye-tracking data across time and space. The application allows the user to gain an overview of general patterns and perform deep visual analysis of local gaze exploration. The ability to link directly to the video of the underlying scene allows visualisation insights to be verified on the fly. The system was motivated by the need to analyse eye-tracking data collected from an 'in the wild' study with energy network operators and has been further evaluated via interviews with 14 eye-tracking experts in multiple domains. Results suggest that, thanks to state-of-the-art visualisation techniques and by providing context with videos, our system could enable improved analysis of eye-tracking data through interactive exploration, facilitating comparison between different participants or conditions and enhancing the presentation of complex data analysis to non-experts. This research paper provides three contributions: (1) analysis of a motivational use case demonstrating the need for rich visual-analytics workflow tools for eye-tracking data; (2) a highly dynamic system to visually explore and present complex eye-tracking data; and (3) insights from our applied use-case evaluation and interviews with experienced users demonstrating the potential of the system and of visual analytics for the wider eye-tracking community.
Volume 6, Issue 2, Pages 1-13 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000122/pdfft?md5=61f32cb9f0d63c98d7bd5bb3f5a44b85&pid=1-s2.0-S2468502X22000122-main.pdf
Citations: 7
Trinary tools for continuously valued binary classifiers
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.002
Authors: Michael Gleicher, Xinyi Yu, Yuheng Chen
Abstract: Classification methods for binary (yes/no) tasks often produce a continuously valued score. Machine learning practitioners must perform model selection, calibration, discretization, performance assessment, tuning, and fairness assessment. Such tasks involve examining classifier results, typically using summary statistics and manual examination of details. In this paper, we provide an interactive visualization approach to support such continuously-valued classifier examination tasks. Our approach addresses the three phases of these tasks: calibration, operating point selection, and examination. We enhance standard views and introduce task-specific views so that they can be integrated into a multi-view coordination (MVC) system. We build on an existing comparison-based approach, extending it to continuous classifiers by treating the continuous values as trinary (positive, unsure, negative), even if the classifier will not ultimately use the 3-way classification. We provide use cases that demonstrate how our approach enables machine learning practitioners to accomplish key tasks.
Volume 6, Issue 2, Pages 74-86 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000195/pdfft?md5=6a0480389b3bd0b919007d8d1decc35d&pid=1-s2.0-S2468502X22000195-main.pdf
Citations: 1
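The core trinary treatment is easy to state in code: two operating thresholds split a continuous score into positive, unsure, and negative bands. The threshold values below are illustrative assumptions, not values from the paper.

```python
def trinary(score, low=0.4, high=0.6):
    """Map a continuous classifier score to a trinary label using two
    operating thresholds; scores between them are treated as 'unsure'."""
    if score >= high:
        return "positive"
    if score <= low:
        return "negative"
    return "unsure"

scores = [0.05, 0.45, 0.55, 0.92]
print([trinary(s) for s in scores])
# ['negative', 'unsure', 'unsure', 'positive']
```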
Color and Shape efficiency for outlier detection from automated to user evaluation
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.001
Authors: Loann Giovannangeli, Romain Bourqui, Romain Giot, David Auber
Abstract: The design of efficient representations is well established as a fruitful way to explore and analyze complex or large data. In these representations, data are encoded with various visual attributes depending on the needs of the representation itself. To make coherent design choices about visual attributes, the visual search field proposes guidelines based on the human brain's perception of features. However, information visualization representations frequently need to depict more data than these guidelines have been validated on. Since then, the information visualization community has extended these guidelines to a wider parameter space.
This paper contributes to this theme by extending visual search theories to an information visualization context. We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid-out distractors. Stimuli are defined by color and shape features for the purpose of visually encoding categorical data. The experimental protocol comprises a parameter-space reduction step (i.e., sub-sampling) based on a machine learning model, and a user evaluation to validate hypotheses and measure capacity limits. The results show that the major difficulty factor is the number of visual attributes used to encode the outlier. When redundantly encoded, display heterogeneity has no effect on the task. When encoded with one attribute, the difficulty depends on that attribute's heterogeneity until its capacity limit (7 for color, 5 for shape) is reached. Finally, when encoded with two attributes simultaneously, performance drops drastically even with minor heterogeneity.
Volume 6, Issue 2, Pages 25-40 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X22000146/pdfft?md5=3a4ee1c7cac8f90eeb5e72a02337dd27&pid=1-s2.0-S2468502X22000146-main.pdf
Citations: 4
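The stimulus design can be sketched programmatically: a grid of distractors drawn from restricted color and shape pools, plus one outlier that differs in the chosen attribute(s). The pool sizes, attribute names, and selection logic below are illustrative assumptions, not the study's generator.

```python
import random

# Pool orderings are arbitrary; the abstract's capacity limits (7 colors,
# 5 shapes) motivate the pool sizes used here.
COLORS = ["red", "green", "blue", "orange", "purple", "cyan", "yellow"]
SHAPES = ["circle", "square", "triangle", "diamond", "star"]

def make_stimulus(n=25, heterogeneity=(3, 1), outlier_attrs=("color",)):
    """Build n items: n-1 distractors from restricted pools and one outlier
    differing in the attributes named in outlier_attrs ('color', 'shape')."""
    n_colors, n_shapes = heterogeneity           # display heterogeneity levels
    pool_c, pool_s = COLORS[:n_colors], SHAPES[:n_shapes]
    items = [(random.choice(pool_c), random.choice(pool_s)) for _ in range(n - 1)]
    # The outlier takes a value outside the distractor pool for each
    # outlier attribute, and a pooled value otherwise.
    out_c = COLORS[n_colors] if "color" in outlier_attrs else random.choice(pool_c)
    out_s = SHAPES[n_shapes] if "shape" in outlier_attrs else random.choice(pool_s)
    pos = random.randrange(n)
    items.insert(pos, (out_c, out_s))
    return items, pos

grid, outlier_index = make_stimulus(heterogeneity=(3, 2), outlier_attrs=("color", "shape"))
```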
MDISN: Learning multiscale deformed implicit fields from single images
IF 3.0 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.003
Authors: Yujie Wang, Yixin Zhuang, Yunzhe Liu, Baoquan Chen
Abstract: We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by adapting the implicit surface of the target object to the input image from coarse to fine. The basic idea is to optimize the implicit surface according to the change of consecutive feature maps from the input image. With multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components and higher resolutions reveal fine-grained geometric details. To better explore the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps to refine the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method compared to state-of-the-art methods.
Volume 6, Issue 2, Pages 41-49 | Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X2200016X/pdfft?md5=7a2c3ab7456139b67e5be7c06fdac2f5&pid=1-s2.0-S2468502X2200016X-main.pdf
Citations: 4
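Structurally, the method refines an implicit field coarse-to-fine by adding residuals tied to feature-map resolutions. The numpy sketch below mirrors only that control flow; the random residuals are placeholders for what MDISN predicts from consecutive image feature maps, and all shapes and scales are assumptions.

```python
import numpy as np

def sphere_sdf(pts, r=0.5):
    """Signed distance to a sphere: a stand-in for the coarse initial field."""
    return np.linalg.norm(pts, axis=-1) - r

pts = np.random.uniform(-1, 1, size=(1024, 3))  # 3D query points
field = sphere_sdf(pts)                         # coarse implicit field

# Coarse-to-fine refinement: one residual per feature-map resolution.
# In MDISN the residual comes from a deformation module comparing two
# consecutive feature maps; here it is a shrinking random placeholder.
for level in range(3):
    residual = 0.01 / (level + 1) * np.random.randn(len(pts))
    field = field + residual  # finer levels contribute finer-grained detail
```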