Latest Articles: IEEE Transactions on Visualization and Computer Graphics

Visualization-Driven Illumination for Density Plots.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-11 DOI: 10.1109/TVCG.2024.3495695
Xin Chen, Yunhai Wang, Huaiwei Bao, Kecheng Lu, Jaemin Jo, Chi-Wing Fu, Jean-Daniel Fekete
{"title":"Visualization-Driven Illumination for Density Plots.","authors":"Xin Chen, Yunhai Wang, Huaiwei Bao, Kecheng Lu, Jaemin Jo, Chi-Wing Fu, Jean-Daniel Fekete","doi":"10.1109/TVCG.2024.3495695","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3495695","url":null,"abstract":"<p><p>We present a novel visualization-driven illumination model for density plots, a new technique to enhance density plots by effectively revealing the detailed structures in high- and medium-density regions and outliers in low-density regions, while avoiding artifacts in the density field's colors. When visualizing large and dense discrete point samples, scatterplots and dot density maps often suffer from overplotting, and density plots are commonly employed to provide aggregated views while revealing underlying structures. Yet, in such density plots, existing illumination models may produce color distortion and hide details in low-density regions, making it challenging to look up density values, compare them, and find outliers. The key novelty in this work includes (i) a visualization-driven illumination model that inherently supports density-plot-specific analysis tasks and (ii) a new image composition technique to reduce the interference between the image shading and the color-encoded density values. To demonstrate the effectiveness of our technique, we conducted a quantitative study, an empirical evaluation of our technique in a controlled study, and two case studies, exploring twelve datasets with up to two million data point samples.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the Potential of Haptic Props for 3D Object Manipulation in Handheld AR.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-11 DOI: 10.1109/TVCG.2024.3495021
Jonathan Wieland, Maximilian Durr, Rebecca Frisch, Melissa Michalke, Dominik Morgenstern, Harald Reiterer, Tiare Feuchtner
{"title":"Investigating the Potential of Haptic Props for 3D Object Manipulation in Handheld AR.","authors":"Jonathan Wieland, Maximilian Durr, Rebecca Frisch, Melissa Michalke, Dominik Morgenstern, Harald Reiterer, Tiare Feuchtner","doi":"10.1109/TVCG.2024.3495021","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3495021","url":null,"abstract":"<p><p>The manipulation of virtual 3D objects is essential for a variety of handheld AR scenarios. However, the mapping of commonly supported 2D touch gestures to manipulations in 3D space is not trivial. As an alternative, our work explores the use of haptic props that facilitate direct manipulation of virtual 3D objects with 6 degrees of freedom. In an experiment, we instructed 20 participants to solve 2D and 3D docking tasks in AR, to compare traditional 2D touch gestures with prop-based interactions using three prop shapes (cube, rhombicuboctahedron, sphere). Our findings highlight benefits of haptic props for 3D manipulation tasks with respect to task performance, user experience, preference, and workload. For 2D tasks, the benefits of haptic props are less pronounced. Finally, while we found no significant impact of prop shape on task performance, this appears to be subject to personal preference.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
"where Did My Apps Go?" Supporting Scalable and Transition-Aware Access to Everyday Applications in Head-Worn Augmented Reality. "我的应用程序去哪儿了?在头戴式增强现实中支持可扩展和过渡感知的日常应用访问。
IEEE transactions on visualization and computer graphics Pub Date : 2024-11-08 DOI: 10.1109/TVCG.2024.3493115
Feiyu Lu, Leonardo Pavanatto, Shakiba Davari, Lei Zhang, Lee Lisle, Doug A Bowman
{"title":"\"where Did My Apps Go?\" Supporting Scalable and Transition-Aware Access to Everyday Applications in Head-Worn Augmented Reality.","authors":"Feiyu Lu, Leonardo Pavanatto, Shakiba Davari, Lei Zhang, Lee Lisle, Doug A Bowman","doi":"10.1109/TVCG.2024.3493115","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3493115","url":null,"abstract":"<p><p>Future augmented reality (AR) glasses empower users to view personal applications and services anytime and anywhere without being restricted by physical locations and the availability of physical screens. In typical everyday activities, people move around to carry out different tasks and need a variety of information on the go. Existing interfaces in AR do not support these use cases well, especially when the number of applications increases. We explore the usability of three world-referenced approaches that move AR applications with users as they transition among different locations, featuring different levels of AR app availability: (1) always using a menu to manually open an app when needed; (2) automatically suggesting a relevant subset of all apps; and (3) carrying all apps with the users to the new location. Through a controlled study and a relatively more ecologically-valid study in AR, we reached better understandings on the performance trade-offs and observed the impact of various everyday contextual factors on these interfaces in more realistic AR settings. Our results shed light on how to better support the mobile information needs of users in everyday life in future AR interfaces.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-07 DOI: 10.1109/TVCG.2024.3494046
Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang
{"title":"PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction.","authors":"Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang","doi":"10.1109/TVCG.2024.3494046","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3494046","url":null,"abstract":"<p><p>Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering, and ultra-fast training and rendering speed. However, due to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency simply by relying on image reconstruction loss. Although many studies on surface reconstruction based on 3DGS have emerged recently, the quality of their meshes is generally unsatisfactory. To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction while ensuring high-quality rendering. Specifically, we first introduce an unbiased depth rendering method, which directly renders the distance from the camera origin to the Gaussian plane and the corresponding normal map based on the Gaussian distribution of the point cloud, and divides the two to obtain the unbiased depth. We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy. We also propose a camera exposure compensation model to cope with scenes with large illumination variations. Experiments on indoor and outdoor scenes show that the proposed method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods. Our code will be made publicly available, and more information can be found on our project page (https://zju3dv.github.io/pgsr/).</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Dashboard Zoo to Census: A Case Study With Tableau Public.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-06 DOI: 10.1109/TVCG.2024.3490259
Arjun Srinivasan, Joanna Purich, Michael Correll, Leilani Battle, Vidya Setlur, Anamaria Crisan
{"title":"From Dashboard Zoo to Census: A Case Study With Tableau Public.","authors":"Arjun Srinivasan, Joanna Purich, Michael Correll, Leilani Battle, Vidya Setlur, Anamaria Crisan","doi":"10.1109/TVCG.2024.3490259","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3490259","url":null,"abstract":"<p><p>Dashboards remain ubiquitous tools for analyzing data and disseminating the findings. Understanding the range of dashboard designs, from simple to complex, can support development of authoring tools that enable end-users to meet their analysis and communication goals. Yet, there has been little work that provides a quantifiable, systematic, and descriptive overview of dashboard design patterns. Instead, existing approaches only consider a handful of designs, which limits the breadth of patterns that can be surfaced. More quantifiable approaches, inspired by machine learning (ML), are presently limited to single visualizations or capture narrow features of dashboard designs. To address this gap, we present an approach for modeling the content and composition of dashboards using a graph representation. The graph decomposes dashboard designs into nodes featuring content \"blocks'; and uses edges to model \"relationships\", such as layout proximity and interaction, between nodes. To demonstrate the utility of this approach, and its extension over prior work, we apply this representation to derive a census of 25,620 dashboards from Tableau Public, providing a descriptive overview of the core building blocks of dashboards in the wild and summarizing prevalent dashboard design patterns. We discuss concrete applications of both a graph representation for dashboard designs and the resulting census to guide the development of dashboard authoring tools, making dashboards accessible, and for leveraging AI/ML techniques. Our findings underscore the importance of meeting users where they are by broadly cataloging dashboard designs, both common and exotic.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Authoring Data-Driven Chart Animations.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-05 DOI: 10.1109/TVCG.2024.3491504
Yuancheng Shen, Yue Zhao, Yunhai Wang, Tong Ge, Haoyan Shi, Bongshin Lee
{"title":"Authoring Data-Driven Chart Animations.","authors":"Yuancheng Shen, Yue Zhao, Yunhai Wang, Tong Ge, Haoyan Shi, Bongshin Lee","doi":"10.1109/TVCG.2024.3491504","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3491504","url":null,"abstract":"<p><p>We present an authoring tool, called CAST+ (Canis Studio Plus), that enables the interactive creation of chart animations through the direct manipulation of keyframes. It introduces the visual specification of chart animations consisting of keyframes that can be played sequentially or simultaneously, and animation parameters (e.g., duration, delay). Building on Canis [1], a declarative chart animation grammar that leverages data-enriched SVG charts, CAST+ supports auto-completion for constructing both keyframes and keyframe sequences. It also enables users to refine the animation specification (e.g., aligning keyframes across tracks to play them together, adjusting delay) with direct manipulation. We report a user study conducted to assess the visual specification and system usability with its initial version. We enhanced the system's expressiveness and usability: CAST+ now supports the animation of multiple types of visual marks in the same keyframe group with new auto-completion algorithms based on generalized selection. This enables the creation of more expressive animations, while reducing the number of interactions needed to create comparable animations. We present a gallery of examples and four usage scenarios to demonstrate the expressiveness of CAST+. Finally, we discuss the limitations, comparison, and potentials of CAST+ as well as directions for future research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Iceberg Sensemaking: A Process Model for Critical Data Analysis.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-04 DOI: 10.1109/TVCG.2024.3486613
Charles Berret, Tamara Munzner
{"title":"Iceberg Sensemaking: A Process Model for Critical Data Analysis.","authors":"Charles Berret, Tamara Munzner","doi":"10.1109/TVCG.2024.3486613","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3486613","url":null,"abstract":"<p><p>We offer a new model of the sensemaking process for data analysis and visualization. Whereas past sensemaking models have been grounded in positivist assumptions about the nature of knowledge, we reframe data sensemaking in critical, humanistic terms by approaching it through an interpretivist lens. Our three-phase process model uses the analogy of an iceberg, where data is the visible tip of underlying schemas. In the Add phase, the analyst acquires data, incorporates explicit schemas from the data, and absorbs the tacit schemas of both data and people. In the Check phase, the analyst interprets the data with respect to the current schemas and evaluates whether the schemas match the data. In the Refine phase, the analyst considers the role of power, articulates what was tacit into explicitly stated schemas, updates data, and formulates findings. Our model has four important distinguishing features: Tacit and Explicit Schemas, Schemas First and Always, Data as a Schematic Artifact, and Schematic Multiplicity. We compare the roles of schemas in past sensemaking models and draw conceptual distinctions based on a historical review of schemas in different academic traditions. We validate the descriptive and prescriptive power of our model through four analysis scenarios: noticing uncollected data, learning to wrangle data, downplaying inconvenient data, and measuring with sensors. We conclude by discussing the value of interpretivism, the virtue of epistemic humility, and the pluralism this sensemaking model can foster.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Super-NeRF: View-consistent Detail Generation for NeRF Super-resolution.
IEEE transactions on visualization and computer graphics Pub Date: 2024-11-04 DOI: 10.1109/TVCG.2024.3490840
Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai
{"title":"Super-NeRF: View-consistent Detail Generation for NeRF Super-resolution.","authors":"Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai","doi":"10.1109/TVCG.2024.3490840","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3490840","url":null,"abstract":"<p><p>The neural radiance field (NeRF) achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus more on making full use of high-resolution images to generate high-resolution novel views, but less considering the generation of high-resolution details given only low-resolution images. In analogy to the extensive usage of image super-resolution, NeRF super-resolution is an effective way to generate low-resolution-guided high-resolution 3D scenes and holds great potential applications. Up to now, such an important topic is still under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a multi-view consistency-controlling super-resolution module to generate various view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each input view to control the generated reasonable high-resolution 2D images satisfying view consistency. The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to utilize the view consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and even AI-generated NeRFs. Super-NeRF achieves state-of-the-art NeRF super-resolution performance on high-resolution detail generation and cross-view consistency.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142574886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CATOM: Causal Topology Map for Spatiotemporal Traffic Analysis with Granger Causality in Urban Areas.
IEEE transactions on visualization and computer graphics Pub Date: 2024-10-31 DOI: 10.1109/TVCG.2024.3489676
Chanyoung Jung, Soobin Yim, Giwoong Park, Simon Oh, Yun Jang
{"title":"CATOM : Causal Topology Map for Spatiotemporal Traffic Analysis with Granger Causality in Urban Areas.","authors":"Chanyoung Jung, Soobin Yim, Giwoong Park, Simon Oh, Yun Jang","doi":"10.1109/TVCG.2024.3489676","DOIUrl":"10.1109/TVCG.2024.3489676","url":null,"abstract":"<p><p>The transportation network is an important element in an urban system that supports daily activities, enabling people to travel from one place to another. One of the key challenges is the network complexity, which is composed of many node pairs distributed over the area. This spatial characteristic results in the high dimensional network problem in understanding the 'cause' of problems such as traffic congestion. Recent studies have proposed visual analytics systems aimed at understanding these underlying causes. Despite these efforts, the analysis of such causes is limited to identified patterns. However, given the intricate distribution of roads and their mutual influence, new patterns continuously emerge across all roads within urban transportation. At this stage, a well-defined visual analytics system can be a good solution for transportation practitioners. In this paper, we propose a system, CATOM (Causal Topology Map), for the cause-effect analysis of traffic patterns based on Granger causality for extracting causal topology maps. CATOM discovers causal relationships between roads through the Granger causality test and quantifies these relationships through the causal density. During the design process, the system was developed to fully utilize spatial information with visualization techniques to overcome the previous problems in the literature. We also evaluate the usability of our approach by conducting a SUS(System Usability Scale) test and traffic cause analysis with the real-world data from two study sites in collaboration with domain experts.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-Fidelity and High-Efficiency Talking Portrait Synthesis With Detail-Aware Neural Radiance Fields.
IEEE transactions on visualization and computer graphics Pub Date: 2024-10-31 DOI: 10.1109/TVCG.2024.3488960
Muyu Wang, Sanyuan Zhao, Xingping Dong, Jianbing Shen
{"title":"High-Fidelity and High-Efficiency Talking Portrait Synthesis With Detail-Aware Neural Radiance Fields.","authors":"Muyu Wang, Sanyuan Zhao, Xingping Dong, Jianbing Shen","doi":"10.1109/TVCG.2024.3488960","DOIUrl":"10.1109/TVCG.2024.3488960","url":null,"abstract":"<p><p>In this paper, we propose a novel rendering framework based on neural radiance fields (NeRF) named HH-NeRF that can generate high-resolution audio-driven talking portrait videos with high fidelity and fast rendering. Specifically, our framework includes a detail-aware NeRF module and an efficient conditional super-resolution module. Firstly, a detail-aware NeRF is proposed to efficiently generate a high-fidelity low-resolution talking head, by using the encoded volume density estimation and audio-eye-aware color calculation. This module can capture natural eye blinks and high-frequency details, and maintain a similar rendering time as previous fast methods. Secondly, we present an efficient conditional super-resolution module on the dynamic scene to directly generate the high-resolution portrait with our low-resolution head. Incorporated with the prior information, such as depth map and audio features, our new proposed efficient conditional super resolution module can adopt a lightweight network to efficiently generate realistic and distinct high-resolution videos. Extensive experiments demonstrate that our method can generate more distinct and fidelity talking portraits on high resolution (900 × 900) videos compared to state-of-the-art methods. Our code is available at https://github.com/muyuWang/HHNeRF.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0