IEEE Transactions on Visualization and Computer Graphics: Latest Articles

The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 Epub Date: 2019-02-21 DOI: 10.1109/TVCG.2019.2898748
Sergi Pujades, Betty Mohler, Anne Thaler, Joachim Tesch, Naureen Mahmood, Nikolas Hesse, Heinrich H Bulthoff, Michael J Black
Abstract: Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging, however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating "The Virtual Caliper", which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.
Volume 25(5), pp. 1887-1897.
Citations: 36
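The central technical claim in this abstract is that a few reliable distance measurements are linearly related to SMPL shape parameters. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' implementation: it fits a linear measurement-to-shape map on synthetic data with ordinary least squares. All sizes, names, and data are invented.

```python
import numpy as np

# Hypothetical training data: each subject provides a handful of
# controller-based distance measurements (e.g., height, arm span) plus SMPL
# shape coefficients ("betas") recovered from a ground-truth 3D scan.
rng = np.random.default_rng(0)
n_subjects, n_measurements, n_betas = 200, 6, 10

true_W = rng.normal(size=(n_measurements + 1, n_betas))  # hidden linear map
X = rng.normal(size=(n_subjects, n_measurements))        # measurements
X1 = np.hstack([X, np.ones((n_subjects, 1))])            # append bias column
B = X1 @ true_W + 0.01 * rng.normal(size=(n_subjects, n_betas))  # noisy betas

# Ordinary least squares recovers the measurement-to-shape map.
W, *_ = np.linalg.lstsq(X1, B, rcond=None)

def measurements_to_betas(m):
    """Predict SMPL shape coefficients from one measurement vector."""
    return np.append(m, 1.0) @ W

print(np.allclose(measurements_to_betas(X[0]), B[0], atol=0.1))  # True
```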
Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 Epub Date: 2019-02-18 DOI: 10.1109/TVCG.2019.2898787
Anastassia Andreasen, Michele Geronazzo, Niels Christian Nilsson, Jelizaveta Zovnercuka, Kristian Konovalov, Stefania Serafin
Abstract: Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from reflections. This paper presents design patterns for VE simulators for both training and testing procedures, while classifying users' navigation strategies in the VE. Moreover, the paper presents features that increase users' performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task during four conditions, which were early reflections (RF), late reverberation (RV), early reflections-reverberation (RR), and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.
Volume 25(5), pp. 1876-1886.
Citations: 7
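As a toy illustration of the study's audio conditions (not the paper's renderer), the sketch below synthesizes early-reflection (RF) and late-reverberation (RV) stimuli from an emitted click: RF adds a few discrete delayed copies, RV adds a dense exponentially decaying noise tail. All delays, gains, and the RT60 value are invented.

```python
import numpy as np

fs = 44100
n = 4096                                # short buffer keeps the convolution cheap
click = np.zeros(n); click[:32] = 1.0   # pulse emitted by the user

def early_reflections(sig, delays_s, gains):
    # RF condition: a few discrete delayed, attenuated copies of the pulse.
    out = sig.copy()
    for d, g in zip(delays_s, gains):
        k = int(d * fs)
        out[k:] += g * sig[:len(sig) - k]
    return out

def late_reverb(sig, rt60=0.8):
    # RV condition: convolve with a noise tail decaying 60 dB over rt60 seconds.
    t = np.arange(len(sig)) / fs
    tail = np.random.default_rng(1).normal(size=len(sig)) * 10 ** (-3 * t / rt60)
    return sig + 0.2 * np.convolve(sig, tail)[:len(sig)]

rf = early_reflections(click, delays_s=[0.01, 0.025], gains=[0.6, 0.3])
rv = late_reverb(click)
```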
Light Attenuation Display: Subtractive See-Through Near-Eye Display via Spatial Color Filtering.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 DOI: 10.1109/TVCG.2019.2899229
Yuta Itoh, Tobias Langlotz, Daisuke Iwai, Kiyoshi Kiyokawa, Toshiyuki Amano
Abstract: We present a display for optical see-through near-eye displays based on light attenuation, a new paradigm that forms images by spatially subtracting colors of light. Existing optical see-through head-mounted displays (OST-HMDs) form virtual images in an additive manner: they optically combine the light from an embedded light source such as a microdisplay into the users' field of view (FoV). Instead, our light attenuation display filters the color of the real background light pixel-wise in the users' see-through view, resulting in an image as a spatial color filter. Our image formation is complementary to existing light-additive OST-HMDs. The core optical component in our system is a phase-only spatial light modulator (PSLM), a liquid crystal module that can control the phase of the light in each pixel. By combining PSLMs with polarization optics, our system realizes a spatially programmable color filter. In this paper, we introduce our optics design, evaluate the spatial color filter, consider applications including image rendering and FoV color control, and discuss the limitations of the current prototype.
Volume 25(5), pp. 1951-1960.
Citations: 31
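The image-formation model described, filtering rather than adding light, is easy to state numerically. Below is a minimal hypothetical sketch: the perceived image is the per-pixel product of the real background light and a programmable transmittance, in contrast to the additive combination of conventional OST-HMDs. The scene and filter values are made up.

```python
import numpy as np

# Real background light reaching the eye (hypothetical random scene).
background = np.random.default_rng(2).uniform(size=(480, 640, 3))

# A spatial color filter: pass red, attenuate green/blue in a box region,
# making that region appear red against the see-through view.
transmittance = np.ones_like(background)
transmittance[100:200, 200:400] = [1.0, 0.2, 0.2]

seen = background * transmittance           # subtractive (attenuation) display
additive = np.clip(background + 0.5, 0, 1)  # contrast: additive OST-HMD combiner
```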
Audio-Visual-Olfactory Resource Allocation for Tri-modal Virtual Environments.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 Epub Date: 2019-02-14 DOI: 10.1109/TVCG.2019.2898823
E Doukakis, K Debattista, T Bashford-Rogers, A Dhokia, A Asadipour, A Chalmers, C Harvey
Abstract: Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme in order to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N = 25) were asked, across a fixed number of budgets (M = 5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality compared to other sensory stimuli. However, as the budget size is increased, users prefer a balanced distribution of resources with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.
pp. 1865-1875.
Citations: 8
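To make the resource-balancing setup concrete, here is an invented toy version (not the paper's quality-prediction model): each modality gets a quality level, levels have costs, and an exhaustive search picks the affordable combination maximizing a concave perceived-quality score. Costs, weights, and the scoring function are placeholder assumptions.

```python
from itertools import product

LEVELS = range(5)                                    # quality level 0..4 per modality
COST = {"visual": 4, "audio": 2, "olfactory": 1}     # cost of one level step
WEIGHT = {"visual": 0.6, "audio": 0.25, "olfactory": 0.15}

def perceived_quality(levels):
    # Diminishing returns: each extra level adds less perceived quality.
    return sum(WEIGHT[m] * (1 - 0.5 ** levels[m]) for m in levels)

def allocate(budget):
    feasible = []
    for v, a, o in product(LEVELS, repeat=3):
        cand = {"visual": v, "audio": a, "olfactory": o}
        if sum(COST[m] * lv for m, lv in cand.items()) <= budget:
            feasible.append((perceived_quality(cand), cand))
    return max(feasible, key=lambda t: t[0])

print(allocate(budget=12))   # best affordable quality mix and its score
```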
Implementation and Evaluation of a 50 kHz, 28μs Motion-to-Pose Latency Head Tracking Instrument.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 Epub Date: 2019-03-04 DOI: 10.1109/TVCG.2019.2899233
Alex Blate, Mary Whitton, Montek Singh, Greg Welch, Andrei State, Turner Whitted, Henry Fuchs
Abstract: This paper presents the implementation and evaluation of a 50,000-pose-sample-per-second, 6-degree-of-freedom optical head tracking instrument with motion-to-pose latency of 28μs and dynamic precision of 1-2 arcminutes. The instrument uses high-intensity infrared emitters and two duo-lateral photodiode-based optical sensors to triangulate pose. This instrument serves two purposes: it is the first step towards the requisite head tracking component in sub-100μs motion-to-photon latency optical see-through augmented reality (OST AR) head-mounted display (HMD) systems; and it enables new avenues of research into human visual perception, including measuring the thresholds for perceptible real-virtual displacement during head rotation and other human research requiring high-sample-rate motion tracking. The instrument's tracking volume is limited to about 120×120×250 but allows for the full range of natural head rotation and is sufficient for research involving seated users. We discuss how the instrument's tracking volume is scalable in multiple ways and some of the trade-offs involved therein. Finally, we introduce a novel laser-pointer-based measurement technique for assessing the instrument's tracking latency and repeatability. We show that the instrument's motion-to-pose latency is 28μs and that it is repeatable within 1-2 arcminutes at mean rotational velocities (yaw) in excess of 500°/sec.
Volume 25(5), pp. 1970-1980.
Citations: 1
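The instrument triangulates pose from two optical sensors. As a heavily simplified hypothetical illustration (a 2D point position from two bearing angles, rather than 6-DoF pose from duo-lateral photodiodes), the intersection of two sensor rays can be found by solving a small linear system:

```python
import numpy as np

def triangulate(p0, theta0, p1, theta1):
    # Each sensor at p_i sees the emitter along direction d_i; the emitter
    # lies where the rays p0 + t0*d0 and p1 + t1*d1 intersect.
    d0 = np.array([np.cos(theta0), np.sin(theta0)])
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    A = np.column_stack([d0, -d1])
    t = np.linalg.solve(A, np.asarray(p1) - np.asarray(p0))
    return np.asarray(p0) + t[0] * d0

emitter = np.array([0.3, 1.2])
s0, s1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
th0 = np.arctan2(*(emitter - s0)[::-1])   # bearing angles measured by sensors
th1 = np.arctan2(*(emitter - s1)[::-1])
print(triangulate(s0, th0, s1, th1))      # recovers [0.3, 1.2]
```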
Table of Contents
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 DOI: 10.1109/tvcg.2019.2902969
Citations: 0
Motion Sickness Prediction in Stereoscopic Videos using 3D Convolutional Neural Networks.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-05-01 Epub Date: 2019-02-15 DOI: 10.1109/TVCG.2019.2899186
Tae Min Lee, Jong-Chul Yoon, In-Kwon Lee
Abstract: In this paper, we propose a three-dimensional (3D) convolutional neural network (CNN)-based method for predicting the degree of motion sickness induced by a 360° stereoscopic video. We consider the user's eye movement as a new feature, in addition to the motion velocity and depth features of a video used in previous work. For this purpose, we use saliency, optical flow, and disparity maps of an input video, which represent eye movement, velocity, and depth, respectively, as the input of the 3D CNN. To train our machine-learning model, we extend the dataset established in the previous work using two data augmentation techniques: frame shifting and pixel shifting. Consequently, our model can predict the degree of motion sickness more precisely than the previous method, and the results have a more similar correlation to the distribution of ground-truth sickness.
Volume 25(5), pp. 1919-1927.
Citations: 43
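A rough sketch of the input and architecture idea follows; the layer sizes and clip shape are guesses, not the paper's. Saliency, optical flow (two channels), and disparity maps are stacked into a four-channel clip, and a small 3D CNN regresses a scalar sickness score.

```python
import torch
import torch.nn as nn

class SicknessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),       # global pooling over time and space
        )
        self.head = nn.Linear(32, 1)       # scalar motion-sickness score

    def forward(self, x):                  # x: (batch, 4, frames, height, width)
        return self.head(self.features(x).flatten(1))

# Channels: saliency + flow-x + flow-y + disparity, stacked per frame.
clip = torch.randn(2, 4, 16, 112, 112)
print(SicknessNet()(clip).shape)           # torch.Size([2, 1])
```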
A Coloring Algorithm for Disambiguating Graph and Map Drawings.
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-02-01 Epub Date: 2018-01-25 DOI: 10.1007/978-3-662-45803-7_8
Yifan Hu, Lei Shi, Qingsong Liu
Abstract: Drawings of non-planar graphs always result in edge crossings. When there are many edges crossing at small angles, it is often difficult to follow these edges, because of the multiple visual paths resulting from the crossings, which slow down eye movements. In this paper we propose an algorithm that disambiguates the edges with automatic selection of distinctive colors. Our proposed algorithm computes a near optimal color assignment of a dual collision graph, using a novel branch-and-bound procedure applied to a space decomposition of the color gamut. We give examples demonstrating this approach in real-world graphs and maps, as well as a user study to establish its effectiveness and limitations.
Volume 25(2), pp. 1321-1335.
Citations: 0
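As a simplified stand-in for the paper's branch-and-bound assignment (greedy rather than near-optimal, and using Euclidean RGB distance instead of a proper gamut decomposition), one can color each node of the collision graph with the candidate color farthest from its already-colored neighbors:

```python
import itertools
import numpy as np

# Placeholder palette: 27 RGB candidates on a coarse grid.
PALETTE = np.array(list(itertools.product([0.1, 0.5, 0.9], repeat=3)))

def color_collision_graph(neighbors):
    """neighbors: dict node -> set of adjacent nodes in the collision graph
    (edges link elements that cross at small angles)."""
    assigned = {}
    for node in sorted(neighbors, key=lambda n: -len(neighbors[n])):  # hardest first
        taken = [assigned[m] for m in neighbors[node] if m in assigned]
        if not taken:
            assigned[node] = PALETTE[0]
            continue
        # Pick the palette color maximizing the minimum distance to neighbors.
        dists = np.linalg.norm(PALETTE[:, None] - np.array(taken)[None], axis=2)
        assigned[node] = PALETTE[int(dists.min(axis=1).argmax())]
    return assigned

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(color_collision_graph(graph))
```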
VIS Conference Committee
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2019-01-01 DOI: 10.1109/scivis.2015.7429479
H. Theisel, Petra Specht, H. Hege, B. Preim, G. Scheuermann, Remco Chang, Huamin Qu, T. Schreck, Tim Dwyer, S. Franconeri, Petra Isenberg, I. Fujishiro, Gunther H. Weber, D. Weiskopf, Wenwen Dou, T. V. Landesberger, M. Meyer, N. Riche, Jian Chen, A. Endert, Chaoli Wang, N. Andrienko, Peter Lindstrom, Berk Geveci, L. G. Nonato, T. Nagel, Jordan Crouser, G. Grinstein, M. Whiting, J. Patchett, T. Wischgoll, Torsten Möller, D. Staheli, C. Turkay, Daniela Oelke, M. Brehmer, B. Hentschel, Fanny Chevalier, T. Ropinski, K. Vrotsou, Z. Liu, Ayan Biswas, Aashish Chaudhary, Weiwei Cui, J. Woodring, Tim Gerrits, T. Luciani, John E. Wenskovitch, Virginia Tech, Fumeng Yang, M. Behrisch, D. Archambault, Katie Osterdahl, M. Borkin, K. Gaither, Lisa Avila, S. Miksch, Melanie Tory
Papers Chairs:
- Remco Chang, Tufts University (VAST)
- Huamin Qu, Hong Kong University of Science and Technology (VAST)
- Tobias Schreck, Graz University of Technology (VAST)
- Tim Dwyer, Monash University (InfoVis)
- Steve Franconeri, Northwestern University (InfoVis)
- Petra Isenberg, Inria (InfoVis)
- Issei Fujishiro, Keio University (SciVis)
- Gunther Weber, Lawrence Berkeley National Laboratory (SciVis)
- Daniel Weiskopf, University of Stuttgart (SciVis)
Citations: 0
Further Towards Unambiguous Edge Bundling: Investigating Power-Confluent Drawings for Network Visualization
IF 5.2 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2018-10-23 DOI: 10.1109/TVCG.2019.2944619
Jonathan X. Zheng, S. Pawar, Dan F. M. Goodman
Abstract: Bach et al. [1] recently presented an algorithm for constructing confluent drawings, by leveraging power graph decomposition to generate an auxiliary routing graph. We identify two issues with their method, which we call the node split and short-circuit problems, and solve both by modifying the routing graph to retain the hierarchical structure of power groups. We also classify the exact type of confluent drawings that the algorithm can produce as 'power-confluent', and prove that it is a subclass of the previously studied 'strict confluent' drawing. A description and source code of our implementation is also provided, which additionally includes an improved method for power graph construction.
Volume 27(1), pp. 2244-2249.
Citations: 4
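A loose sketch of the routing-graph idea behind power-graph-based confluent drawing follows; this is a simplification of the general approach, not Zheng et al.'s corrected construction. Each power group becomes a routing node, and members (leaves or nested groups) connect to their group's node, so edges routed through shared group nodes merge into bundles while the group hierarchy is retained.

```python
def build_routing_graph(groups):
    """groups: dict mapping group name -> set of members (leaf or group names)."""
    group_names = set(groups)
    edges = set()
    for g, members in groups.items():
        for m in members:
            # Nested groups attach to their parent group's routing node,
            # which keeps the power-group hierarchy in the routing graph.
            node = f"group:{m}" if m in group_names else m
            edges.add((f"group:{g}", node))
    return edges

# Two leaf clusters nested inside a common parent group.
groups = {"G0": {"G1", "G2"}, "G1": {"a", "b"}, "G2": {"c", "d"}}
print(sorted(build_routing_graph(groups)))
# An original edge a-c is then drawn along the routing path
# a -> group:G1 -> group:G0 -> group:G2 -> c, merging with other bundled edges.
```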