IEEE Transactions on Visualization and Computer Graphics: Latest Publications

Efficient Reflectance Capture with a Deep Gated Mixture-of-Experts
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-03-29 DOI: 10.48550/arXiv.2203.15258
Xiaohe Ma, Ya-Qi Yu, Hongzhi Wu, Kun Zhou
Abstract: We present a novel framework to efficiently acquire anisotropic reflectance in a pixel-independent fashion, using a deep gated mixture-of-experts. While existing work employs a unified network to handle all possible input, our network automatically learns to condition on the input for enhanced reconstruction. We train a gating module that takes photometric measurements as input and selects one out of a number of specialized decoders for reflectance reconstruction, essentially trading generality for quality. A common pre-trained latent-transform module is also appended to each decoder, to offset the burden of the increased number of decoders. In addition, the illumination conditions during acquisition can be jointly optimized. The effectiveness of our framework is validated on a wide variety of challenging near-planar samples with a lightstage. Compared with the state-of-the-art technique, our quality is improved with the same number of input images, and the number of input images can be reduced to about one third for equal-quality results. We further generalize the framework to enhance a state-of-the-art technique on non-planar reflectance scanning.

Citations: 0
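The gating idea described in the abstract fits in a few lines. The sketch below is a hypothetical toy (linear gate, linear decoders, made-up dimensions `m`, `k`, `latent`, and a 4-parameter reflectance output), not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: m photometric measurements per pixel, k specialized decoders.
m, k, latent = 8, 4, 16

W_gate = rng.normal(size=(k, m))                             # gating module (linear here)
decoders = [rng.normal(size=(latent, m)) for _ in range(k)]  # specialized decoders
W_shared = rng.normal(size=(4, latent))                      # shared latent-transform module

def reconstruct(x):
    """Route one pixel's measurements to the single best-scoring decoder."""
    expert = int(np.argmax(W_gate @ x))   # hard gating: pick exactly one expert
    z = decoders[expert] @ x              # expert-specific decoding to a latent code
    return expert, W_shared @ z           # common pre-trained latent transform
```

`reconstruct` returns the chosen expert index and a 4-vector of reflectance parameters; the generality-for-quality trade happens because each specialized decoder only ever handles the inputs its gate wins.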
Revisiting the Design Patterns of Composite Visualizations
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-03-20 DOI: 10.48550/arXiv.2203.10476
Dazhen Deng, Weiwei Cui, Xiyu Meng, Mengye Xu, Yu Liao, Haidong Zhang, Yingcai Wu
Abstract: Composite visualization is a popular design strategy that represents complex datasets by integrating multiple visualizations in a meaningful and aesthetic layout, such as juxtaposition, overlay, and nesting. With this strategy, numerous novel designs have been proposed in visualization publications to accomplish various visual analytic tasks. However, the design patterns of composite visualization remain poorly understood, so practitioners lack a holistic design space and concrete examples for practical use. In this paper, we revisit the composite visualizations in IEEE VIS publications and answer what visualizations of different types are composed together and how. To achieve this, we first constructed a corpus of composite visualizations from the publications and analyzed common practices, such as the pattern distributions and co-occurrence of visualization types. From the analysis, we obtained insights into different design patterns, their utilities, and their potential pros and cons. Furthermore, we discuss usage scenarios of our taxonomy and corpus and how future research on visualization composition can build on this study.

Citations: 6
DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-03-05 DOI: 10.48550/arXiv.2203.02762
Wanchao Su, Hui Ye, Shu-Yu Chen, Lin Gao, Hongbo Fu
Abstract: The research topic of sketch-to-portrait generation has witnessed a boost of progress with deep learning techniques. The recently proposed StyleGAN architectures achieve state-of-the-art generation ability, but the original StyleGAN is not well suited to sketch-based creation because of its unconditional generation nature. To address this issue, we propose a direct conditioning strategy to better preserve spatial information under the StyleGAN framework. Specifically, we introduce Spatially Conditioned StyleGAN (SC-StyleGAN for short), which explicitly injects spatial constraints into the original StyleGAN generation process. We explore two input modalities, sketches and semantic maps, which together allow users to express desired generation results more precisely and easily. Based on SC-StyleGAN, we present DrawingInStyles, a novel drawing interface for non-professional users to easily produce high-quality, photo-realistic face images with precise control, either from scratch or by editing existing ones. Qualitative and quantitative evaluations show the superior generation ability of our method over existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.

Citations: 9
Distance Perception in Virtual Reality: A Meta-Analysis of the Effect of Head-Mounted Display Characteristics
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-02-12 DOI: 10.31234/osf.io/6fps2
Jonathan W. Kelly
Abstract: Distances are commonly underperceived in virtual reality (VR), and this finding has been documented repeatedly over more than two decades of research. Yet, there is evidence that perceived distance is more accurate in modern compared to older head-mounted displays (HMDs). This meta-analysis of 131 studies describes egocentric distance perception across 20 HMDs, and also examines the relationship between perceived distance and technical HMD characteristics. Judged distance was positively associated with HMD field of view (FOV), positively associated with HMD resolution, and negatively associated with HMD weight. The effects of FOV and resolution were more pronounced among heavier HMDs. These findings suggest that future improvements in these technical characteristics may be central to resolving the problem of distance underperception in VR.

Citations: 13
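As background, the pooling step at the heart of a meta-analysis like this one is simple to state. The sketch below is a generic fixed-effect inverse-variance pool; the per-study numbers are purely illustrative and are not taken from the paper:

```python
import numpy as np

# Illustrative per-study effect sizes (judged/actual distance ratio) and variances.
ratios    = np.array([0.74, 0.82, 0.90, 0.96])
variances = np.array([0.010, 0.008, 0.012, 0.009])

# Fixed-effect meta-analytic estimate: inverse-variance weighted mean
# (precise studies count more) and its standard error.
w = 1.0 / variances
pooled = float(np.sum(w * ratios) / np.sum(w))
se = float(np.sqrt(1.0 / np.sum(w)))
```

The pooled ratio always lands inside the range of the study estimates, pulled toward the most precise ones; a moderator analysis such as the paper's FOV/resolution/weight effects would regress the per-study estimates on HMD characteristics with these same weights.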
How Does Automation Shape the Process of Narrative Visualization: A Survey on Tools
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-01-01 DOI: 10.48550/arXiv.2206.12118
Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao
Abstract: In recent years, narrative visualization has gained a lot of attention. Researchers have proposed different design spaces for various narrative visualization types and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, more and more tools have been designed and developed. In this paper, we surveyed 122 papers and tools to study how automation can progressively engage in the visualization design and narrative process. By investigating the narrative strengths and the drawing efforts of various visualizations, we created a two-dimensional coordinate system to map different visualization types. Our resulting taxonomy is organized by the seven types of narrative visualization on the x-axis of this coordinate system and the four automation levels (i.e., design space, authoring tool, AI-supported tool, and AI-generator tool) we identified from the collected work. The taxonomy aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.

Citations: 10
2021 VGTC Visualization Significant New Researcher Award—Michelle Borkin, Northeastern University and Benjamin Bach, University of Edinburgh
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2022-01-01 DOI: 10.1109/tvcg.2021.3114605
Citations: 0
Kine-Appendage: Enhancing Freehand VR Interaction Through Transformations of Virtual Appendages
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2021-12-13 DOI: 10.36227/techrxiv.17152460.v1
Hualong Bai, Yang Tian, Shengdong Zhao, Chi-Wing Fu, Qiong Wang, P. Heng
Abstract: Kinesthetic feedback, the feeling of restriction or resistance when hands contact objects, is essential for natural freehand interaction in VR. However, inducing kinesthetic feedback using mechanical hardware can be cumbersome and hard to control in commodity VR systems. We propose the kine-appendage concept to compensate for the loss of kinesthetic feedback in virtual environments: a virtual appendage is added to the user's avatar hand; when the appendage contacts a virtual object, it exhibits transformations (rotation and deformation); when it disengages from the contact, it recovers its original appearance. A proof-of-concept kine-appendage technique, BrittleStylus, was designed to enhance isomorphic typing. Our empirical evaluations demonstrated that (i) BrittleStylus significantly reduced the uncorrected error rate of naive isomorphic typing from 6.53% to 1.92% without compromising typing speed; (ii) BrittleStylus could induce a sense of kinesthetic feedback on par with that induced by pseudo-haptic (+ visual cue) methods; and (iii) participants preferred BrittleStylus over pseudo-haptic (+ visual cue) methods, not only for its good performance but also for its fluent hand movements.

Citations: 1
Remote Research on Locomotion Interfaces for Virtual Reality: Replication of a Lab-Based Study on Teleporting Interfaces
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2021-12-03 DOI: 10.31234/osf.io/wqcuf
Jonathan W. Kelly, Melynda Hoover, Taylor A. Doty, A. Renner, L. Cherep, Stephen B Gilbert
Abstract: The wide availability of consumer-oriented virtual reality (VR) equipment has enabled researchers to recruit existing VR owners to participate remotely using their own equipment. Yet, there are many differences between lab environments and home environments, as well as differences between participant samples recruited for lab studies and remote studies. This paper replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) using their own VR equipment in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface varying in availability of rotational self-motion cues. The size of the traveled path and the size of the surrounding virtual environment were also manipulated. Results from remote participants largely mirrored lab results, with overall better performance when rotational self-motion cues were available. Some differences also occurred, including a tendency for remote participants to rely less on nearby landmarks, perhaps due to increased competence with using the teleporting interface to update self-location. This replication study provides insight for VR researchers on aspects of lab studies that may or may not replicate remotely.

Pages: 2037-2046

Citations: 1
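The triangle-completion task has a closed-form ground truth, which makes a toy scorer easy to write. The geometry below is a generic sketch under stated conventions (leg 1 along +y, positive turns clockwise, angles relative to the final heading), not the authors' analysis code:

```python
import numpy as np

def signed_pointing_error(leg1, leg2, turn_deg, pointed_deg):
    """Error between a pointing response and the true direction to the path
    origin, after walking leg1, turning turn_deg clockwise, and walking leg2.
    Both pointed_deg and the result are relative to the final walking direction,
    counterclockwise positive."""
    p1 = np.array([0.0, leg1])                 # leg 1 goes along +y
    heading = np.radians(90.0 - turn_deg)      # heading after the clockwise turn
    p2 = p1 + leg2 * np.array([np.cos(heading), np.sin(heading)])
    to_origin = -p2                            # vector from end point back to start
    true_deg = np.degrees(np.arctan2(to_origin[1], to_origin[0]) - heading)
    # Signed difference, wrapped to (-180, 180].
    return (pointed_deg - true_deg + 180.0) % 360.0 - 180.0
```

For two equal legs and a 90° right turn, the origin lies 135° clockwise from the final heading, so `signed_pointing_error(2, 2, 90, -135)` is 0, and pointing straight ahead (`pointed_deg=0`) yields a 135° error.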
Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent, $(SGD)^2$
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2021-12-02 DOI: 10.1109/TVCG.2022.3155564
R. Ahmed, Felice De Luca, S. Devkota, S. Kobourov, Mingwei Li
Abstract: Readability criteria, such as distance or neighborhood preservation, are often used to optimize node-link representations of graphs to enable the comprehension of the underlying data. With few exceptions, graph drawing algorithms typically optimize one such criterion, usually at the expense of others. We propose a layout approach, Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent, $(SGD)^2$, that can handle multiple readability criteria. $(SGD)^2$ can optimize any criterion that can be described by a differentiable function. Our approach is flexible and can be used to optimize several criteria that have already been considered earlier (e.g., obtaining ideal edge lengths, stress, neighborhood preservation) as well as other criteria which have not yet been explicitly optimized in such fashion (e.g., node resolution, angular resolution, aspect ratio). The approach is scalable and can handle large graphs. A variation of the underlying approach can also be used to optimize many desirable properties in planar graphs, while maintaining planarity. Finally, we provide quantitative and qualitative evidence of the effectiveness of $(SGD)^2$: we analyze the interactions between criteria, measure the quality of layouts generated from $(SGD)^2$ as well as the runtime behavior, and analyze the impact of sample sizes. The source code is available on GitHub and we also provide an interactive demo for small graphs.

Pages: 2388-2399

Citations: 5
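The core idea, taking stochastic gradient steps on one differentiable criterion term at a time, fits in a short sketch. The toy below optimizes only the stress criterion on a 4-cycle, with made-up learning rate and iteration count; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Graph-theoretic distances for a 4-cycle: the layout targets.
D = np.array([[0., 1., 2., 1.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [1., 2., 1., 0.]])

X = rng.normal(size=(4, 2))        # random initial 2-D positions

def stress(X):
    """Sum over node pairs of squared (layout distance - target distance)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return float(np.sum(np.triu(d - D, 1) ** 2))

s0 = stress(X)                     # stress of the random initial layout

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
lr = 0.05
for _ in range(200):               # SGD: step on one pairwise stress term at a time
    for i, j in pairs:
        diff = X[i] - X[j]
        dist = np.linalg.norm(diff) + 1e-9
        g = 2.0 * (dist - D[i, j]) * diff / dist   # gradient of this term w.r.t. X[i]
        X[i] -= lr * g
        X[j] += lr * g             # opposite sign for the other endpoint
```

Because every criterion in the paper is expressed as a differentiable function, swapping in another criterion only changes the per-term gradient `g`; the descent loop stays the same.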
Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content
IF 5.2 · CAS Zone 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date : 2021-09-30 DOI: 10.1109/TVCG.2021.3114770
Alan Lundgard, Arvind Satyanarayan
Abstract: Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization.

Citations: 63