Latest Articles in IEEE Transactions on Visualization and Computer Graphics

Personalized Dual-Level Color Grading for 360-degree Images in Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-12 DOI: 10.1109/TVCG.2025.3549886
Lin-Ping Yuan, John J Dudley, Per Ola Kristensson, Huamin Qu
Abstract: The rising popularity of 360-degree images and virtual reality (VR) has spurred a growing interest among creators in producing visually appealing content through effective color grading processes. Although existing computational approaches have simplified the global color adjustment for entire images with Preferential Bayesian Optimization (PBO), they neglect local colors for points of interest and are not optimized for the immersive nature of VR. In response, we propose a dual-level PBO framework that integrates global and local color adjustments tailored for VR environments. We design and evaluate a novel context-aware preferential Gaussian Process (GP) to learn contextual preferences for local colors, taking into account the dynamic contexts of previously established global colors. Additionally, recognizing the limitations of desktop-based interfaces for comparing 360-degree images, we design three VR interfaces for color comparison. We conduct a controlled user study to investigate the effectiveness of the three VR interface designs and find that users prefer to be enveloped by one 360-degree image at a time and to compare two rather than four color-graded options.
Citations: 0
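The abstract describes extending Preferential Bayesian Optimization with a Gaussian Process surrogate that learns color preferences from pairwise choices. As a rough illustration of that general idea only (not the paper's dual-level, context-aware method), the sketch below runs a simplified preference loop over a toy three-parameter color grade: a numpy GP regression surrogate is fit to smoothed win rates from simulated pairwise choices, and an upper-confidence acquisition proposes the next comparison. The parameter space, the simulated user, and the win-rate surrogate are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-D color-grade parameters: (temperature, tint, saturation), each in [0, 1].
# Hidden "true" utility stands in for a real user's preference.
def true_utility(x):
    target = np.array([0.6, 0.4, 0.7])
    return -np.sum((x - target) ** 2, axis=-1)

def rbf_kernel(A, B, length=0.3, var=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Standard GP regression posterior mean/std at query points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - np.sum(v ** 2, axis=0), 1e-9, None)
    return mu, np.sqrt(var)

# Candidate color grades and duel statistics (wins / trials per candidate).
candidates = rng.uniform(0, 1, size=(200, 3))
wins = np.zeros(len(candidates))
trials = np.zeros(len(candidates))

a, b = rng.choice(len(candidates), size=2, replace=False)  # first comparison pair
for _ in range(30):
    # Simulated pairwise choice: the "user" picks the grade with higher utility.
    winner = a if true_utility(candidates[a]) > true_utility(candidates[b]) else b
    wins[winner] += 1
    trials[[a, b]] += 1

    # Surrogate utility = smoothed win rate of every candidate compared so far.
    seen = trials > 0
    y = (wins[seen] + 1.0) / (trials[seen] + 2.0)
    mu, sigma = gp_posterior(candidates[seen], y, candidates)

    # Next duel: current incumbent vs. the most promising challenger (UCB).
    a = int(np.argmax(np.where(seen, wins / np.maximum(trials, 1), -np.inf)))
    ucb = mu + 1.5 * sigma
    ucb[a] = -np.inf
    b = int(np.argmax(ucb))

best = candidates[int(np.argmax(np.where(trials > 0, wins / np.maximum(trials, 1), -np.inf)))]
print("preferred color grade (temperature, tint, saturation):", np.round(best, 2))
```

In the paper's setting the duel is a real side-by-side comparison in the headset and the surrogate is a preferential GP over pairwise outcomes; the win-rate shortcut above is only meant to make the outer loop concrete.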
TextIR: A Simple Framework for Text-based Editable Image Restoration.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-12 DOI: 10.1109/TVCG.2025.3550844
Yunpeng Bai, Cairong Wang, Shuzhao Xie, Chao Dong, Chun Yuan, Zhi Wang
Abstract: Many current image restoration approaches utilize neural networks to acquire robust image-level priors from extensive datasets, aiming to reconstruct missing details. Nevertheless, these methods often falter with images that exhibit significant information gaps. While incorporating external priors or leveraging reference images can provide supplemental information, these strategies are limited in their practical scope. Alternatively, textual inputs offer greater accessibility and adaptability. In this study, we develop a sophisticated framework enabling users to guide the restoration of deteriorated images via textual descriptions. Utilizing the text-image compatibility feature of CLIP enhances the integration of textual and visual data. Our versatile framework supports multiple restoration activities such as image inpainting, super-resolution, and colorization. Comprehensive testing validates our technique's efficacy.
Citations: 0
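The key idea, per the abstract, is to let a CLIP text embedding steer the restoration network. The sketch below shows one generic way such conditioning is commonly wired up (FiLM-style per-channel scale and shift); it is not the TextIR architecture, and the 512-dimensional text embedding is faked with a random tensor where a real system would use CLIP's text encoder.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Modulate image features with a per-channel scale/shift predicted from a text embedding."""
    def __init__(self, text_dim, channels):
        super().__init__()
        self.to_scale_shift = nn.Linear(text_dim, 2 * channels)

    def forward(self, feats, text_emb):
        scale, shift = self.to_scale_shift(text_emb).chunk(2, dim=-1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return feats * (1 + scale) + shift

class TextConditionedRestorer(nn.Module):
    """Tiny encoder -> text-modulated bottleneck -> decoder. Illustrative only."""
    def __init__(self, text_dim=512, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.film = FiLM(text_dim, channels)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, degraded, text_emb):
        feats = self.encoder(degraded)
        feats = self.film(feats, text_emb)
        return self.decoder(feats)

if __name__ == "__main__":
    degraded = torch.rand(1, 3, 64, 64)   # e.g. a masked or low-resolution input
    text_emb = torch.randn(1, 512)        # placeholder for a CLIP text embedding
    restored = TextConditionedRestorer()(degraded, text_emb)
    print(restored.shape)                 # torch.Size([1, 3, 64, 64])
```

Because the text and the image are embedded in CLIP's shared space, the same conditioning pathway can, in principle, serve inpainting, super-resolution, and colorization; the specific fusion used by the authors is not reproduced here.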
In Touch We Decide: Physical Touch by Embodied Virtual Agent Increases the Acceptability of Advice
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-12 DOI: 10.1109/TVCG.2025.3549559
Atsuya Matsumoto, Takashige Suzuki, Chi-Lan Yang, Takuji Narumi, Hideaki Kuzuoka
Abstract: Trust in agents within Virtual Reality is becoming increasingly important, as they provide advice and influence people's decision-making. However, previous studies show that encountering speech recognition errors can reduce users' trust in agents. Such errors lead users to ignore the agent's advice and make suboptimal decisions. While agents can offer an apology to repair trust, its effectiveness is often limited because it fails to fully repair the original level of trust. Therefore, we examined the use of social touch, a social interaction involving physical contact between users and the virtual agent, to enhance the effect of an apology on trust repair and to increase the acceptability of its advice. In a controlled experiment (N=24), participants experienced a robotic arm touching the back of their hands while interacting with the agent before decision-making. The results showed that social touch did not repair participants' trust in agents. However, participants were more likely to accept the agent's advice when they experienced touch with physical feedback, regardless of the level of trust in the agent. We discuss the role of presenting physical haptic feedback and its influence on human-agent interactions in VR.
Volume 31, Issue 5, Pages 2839-2847. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924669
Citations: 0
Reaction Time as a Proxy for Presence in Mixed Reality with Distraction.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-12 DOI: 10.1109/TVCG.2025.3549575
Yasra Chandio, Victoria Interrante, Fatima M Anwar
Abstract: Distractions in mixed reality (MR) environments can significantly influence user experience, affecting key factors such as presence, reaction time, cognitive load, and Break in Presence (BIP). Presence measures immersion, reaction time captures user responsiveness, cognitive load reflects mental effort, and BIP represents moments when attention shifts from the virtual to the real world, breaking immersion. While prior work has established that distractions impact these factors individually, the relationship between these constructs remains underexplored, particularly in MR environments where users engage with both real and virtual stimuli. To address this gap, we present a theoretical model to understand how congruent and incongruent distractions affect all these constructs. We conducted a within-subject study (N = 54) where participants performed image-sorting tasks under different distraction conditions. Our findings show that incongruent distractions significantly increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
Citations: 0
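The last sentence reports that presence mediates the effect of incongruent distraction on the outcome measures. As a reminder of what such a mediation claim typically involves statistically, the sketch below estimates the indirect effect a*b of a binary distraction condition on reaction time through presence, with a bootstrap confidence interval. It runs on simulated data and is only an illustrative recipe, not the authors' actual analysis, model, or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data in the spirit of 54 participants x 2 conditions (flattened).
n = 108
distraction = np.repeat([0, 1], n // 2)   # 0 = congruent, 1 = incongruent
presence = 5.0 - 1.2 * distraction + rng.normal(0, 0.8, n)               # distraction lowers presence
reaction = 0.6 - 0.15 * presence + 0.05 * distraction + rng.normal(0, 0.1, n)

def ols(y, X):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(distraction, presence, reaction):
    a = ols(presence, distraction[:, None])[1]                      # condition -> mediator
    b = ols(reaction, np.column_stack([presence, distraction]))[1]  # mediator -> outcome, condition controlled
    return a * b

point = indirect_effect(distraction, presence, reaction)

# Percentile bootstrap CI for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(distraction[idx], presence[idx], reaction[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```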
Minimalism or Creative Chaos? On the Arrangement and Analysis of Numerous Scatterplots in Immersive 3D Knowledge Spaces
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-12 DOI: 10.1109/TVCG.2025.3549546
Melanie Derksen, Torsten Kuhlen, Mario Botsch, Tim Weissker
Abstract: Working with scatterplots is a classic everyday task for data analysts, which gets increasingly complex the more plots are required to form an understanding of the underlying data. To help analysts retrieve relevant plots more quickly when they are needed, immersive virtual environments (iVEs) provide them with the option to freely arrange scatterplots in the 3D space around them. In this paper, we investigate the impact of different virtual environments on the users' ability to quickly find and retrieve individual scatterplots from a larger collection. We tested three different scenarios, all having in common that users were able to position the plots freely in space according to their own needs, but each providing them with varying numbers of landmarks serving as visual cues: an Empty scene as a baseline condition, a single-landmark condition with one prominent visual cue being a Desk, and a multiple-landmarks condition being a virtual Office. Results from a between-subject investigation with 45 participants indicate that the time and effort users invest in arranging their plots within an iVE had a greater impact on memory performance than the design of the iVE itself. We report on the individual arrangement strategies that participants used to solve the task effectively and underline the importance of an active arrangement phase for supporting the spatial memorization of scatterplots in iVEs.
Volume 31, Issue 5, Pages 3003-3013. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924456
Citations: 0
From Display to Interaction: Design Patterns for Cross-Reality Systems.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-11 DOI: 10.1109/TVCG.2025.3549893
Robbe Cools, Jihae Han, Augusto Esteves, Adalberto L Simeone
Abstract: Cross-reality is an emerging research area concerned with systems operating across different points on the reality-virtuality continuum. These systems are often complex, involving multiple realities and users, and thus there is a need for an overarching design framework which, despite growing interest, has yet to be developed. This paper addresses this need by presenting eleven design patterns for cross-reality applications across the following four categories: fundamental, origin, display, and interaction patterns. To develop these design patterns we analysed a sample of 60 papers, with the goal of identifying recurring solutions. These patterns were then described in the form of intent, solution, and application examples, accompanied by a diagram and an archetypal example. This paper provides designers with a comprehensive set of patterns that they can use and draw inspiration from when creating cross-reality systems.
Citations: 0
FocalSelect: Improving Occluded Objects Acquisition with Heuristic Selection and Disambiguation in Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-11 DOI: 10.1109/TVCG.2025.3549554
Duotun Wang, Linjie Qiu, Boyu Li, Qianxi Liu, Xiaoying Wei, Jianhao Chen, Zeyu Wang, Mingming Fan
Abstract: In recent years, various head-worn virtual reality (VR) techniques have emerged to enhance object selection for occluded or distant targets. However, many approaches focus solely on ray-casting inputs, restricting their use with other input methods, such as bare hands. Additionally, some techniques speed up selection by changing the user's perspective or modifying the scene context, which may complicate interactions when users plan to resume or manipulate the scene afterward. To address these challenges, we present FocalSelect, a heuristic selection technique that builds 3D disambiguation through head-hand coordination and scoring-based functions. Our interaction design adheres to the principle that the intended selection range is a small sector of the headset's viewing frustum, allowing optimal targets to be identified within this scope. We also introduce a density-aware adjustable occlusion plane for effective depth culling of rendered objects. Two experiments are conducted to assess the adaptability of FocalSelect across different input modalities and its performance against five selection techniques. The results indicate that FocalSelect enhances selection experiences in occluded and remote scenarios while preserving the spatial context among objects. This preservation helps maintain users' understanding of the original scene and facilitates further manipulation. We also explore potential applications and enhancements to demonstrate more practical implementations of FocalSelect.
Citations: 0
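The abstract outlines a scoring-based disambiguation that restricts candidates to a small sector of the viewing frustum, combines head and hand cues, and culls by an adjustable occlusion plane. The authors' actual scoring functions are not given here, so the sketch below is a generic stand-in: it scores objects by angular proximity to the head ray and to the hand ray, discards objects outside a cone or beyond an assumed occlusion depth, and picks the highest-scoring target. The cone angle, weights, and depth threshold are made-up parameters.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def select_target(object_positions, head_pos, head_dir, hand_pos, hand_dir,
                  cone_half_angle_deg=15.0, occlusion_depth=8.0,
                  w_head=0.6, w_hand=0.4):
    """Return the index of the best candidate, or None if nothing qualifies."""
    head_dir, hand_dir = normalize(head_dir), normalize(hand_dir)

    to_obj_head = object_positions - head_pos
    depth = to_obj_head @ head_dir                                    # distance along the view direction
    ang_head = np.arccos(np.clip(normalize(to_obj_head) @ head_dir, -1, 1))

    to_obj_hand = normalize(object_positions - hand_pos)
    ang_hand = np.arccos(np.clip(to_obj_hand @ hand_dir, -1, 1))

    # Keep only objects inside the head-centered cone, in front of the user,
    # and nearer than the adjustable occlusion plane.
    cone = np.deg2rad(cone_half_angle_deg)
    valid = (ang_head < cone) & (depth > 0) & (depth < occlusion_depth)
    if not np.any(valid):
        return None

    # Higher score = smaller angular offset from the head and hand rays.
    score = w_head * (1 - ang_head / cone) + w_hand * (1 - ang_hand / np.pi)
    score[~valid] = -np.inf
    return int(np.argmax(score))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    objects = rng.uniform(-5, 5, size=(50, 3)) + np.array([0, 0, 5.0])
    idx = select_target(objects,
                        head_pos=np.zeros(3), head_dir=np.array([0, 0, 1.0]),
                        hand_pos=np.array([0.2, -0.3, 0.0]), hand_dir=np.array([0, 0, 1.0]))
    print("selected object index:", idx)
```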
An Early Warning System Based on Visual Feedback for Light-Based Hand Tracking Failures in VR Head-Mounted Displays
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-11 DOI: 10.1109/TVCG.2025.3549544
Mohammad Raihanul Bashar, Anil Ufuk Batmaz
Abstract: State-of-the-art Virtual Reality (VR) Head-Mounted Displays (HMDs) enable users to interact with virtual objects using their hands via built-in camera systems. However, the accuracy of the hand movement detection algorithm is often affected by limitations in both camera hardware and software, including the image processing and machine learning algorithms used for hand skeleton detection. In this work, we investigated a visual feedback mechanism to create an early warning system that detects hand skeleton recognition failures in VR HMDs and warns users in advance. We conducted two user studies to evaluate the system's effectiveness. The first study involved a cup stacking task, where participants stacked virtual cups. In the second study, participants performed a ball sorting task, picking and placing colored balls into corresponding baskets. During both studies, we monitored the built-in hand tracking confidence of the VR HMD system and provided visual feedback to warn the user when the tracking confidence was 'low'. The results showed that warning users before the hand tracking algorithm fails improved the system's usability while reducing frustration. The impact of our results extends beyond VR HMDs: any system that uses hand tracking, such as robotics, can benefit from this approach.
Volume 31, Issue 5, Pages 3645-3656.
Citations: 0
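The described system monitors the HMD's built-in hand-tracking confidence and shows a visual warning while it is 'low'. How that confidence value is read is runtime-specific and not detailed in the abstract, so the sketch below keeps it behind a plain callback and only illustrates one plausible warning policy: a small debounce/hysteresis so the warning appears once confidence has been low for a short window and clears after it recovers. The timing constants and the helper read_confidence_is_low() are assumptions, not a real HMD API.

```python
import time

class HandTrackingWarning:
    """Show a visual warning when hand-tracking confidence stays low; clear it on recovery."""
    def __init__(self, warn_after_s=0.2, clear_after_s=0.5):
        self.warn_after_s = warn_after_s
        self.clear_after_s = clear_after_s
        self.low_since = None
        self.high_since = None
        self.warning_visible = False

    def update(self, confidence_is_low: bool, now: float) -> bool:
        if confidence_is_low:
            self.low_since = self.low_since or now
            self.high_since = None
            if not self.warning_visible and now - self.low_since >= self.warn_after_s:
                self.warning_visible = True    # e.g. tint the virtual hand or show an icon
        else:
            self.high_since = self.high_since or now
            self.low_since = None
            if self.warning_visible and now - self.high_since >= self.clear_after_s:
                self.warning_visible = False
        return self.warning_visible

if __name__ == "__main__":
    import random

    def read_confidence_is_low():
        # Hypothetical stand-in for whatever the HMD runtime exposes.
        return random.random() < 0.3

    monitor = HandTrackingWarning()
    for _ in range(20):
        visible = monitor.update(read_confidence_is_low(), time.monotonic())
        print("warning visible:", visible)
        time.sleep(0.05)
```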
TeamPortal: Exploring Virtual Reality Collaboration Through Shared and Manipulating Parallel Views
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-11 DOI: 10.1109/TVCG.2025.3549569
Xian Wang, Luyao Shen, Lei Chen, Mingming Fan, Lik-Hang Lee
Abstract: Virtual Reality (VR) offers a unique collaborative experience, with parallel views playing a pivotal role in Collaborative Virtual Environments by supporting the transfer and delivery of items. Sharing and manipulating partners' views provides users with a broader perspective that helps them identify the targets and partner actions. We proposed TeamPortal accordingly and conducted two user studies with 72 participants (36 pairs) to investigate the potential benefits of interactive, shared perspectives in VR collaboration. Our first study compared ShaView and TeamPortal against a baseline in a collaborative task that encompassed a series of searching and manipulation tasks. The results show that TeamPortal significantly reduced movement and increased collaborative efficiency and social presence in complex tasks. Following the results, the second study evaluated three variants: TeamPortal+, SnapTeamPortal+, and DropTeamPortal+. The results show that both SnapTeamPortal+ and DropTeamPortal+ improved task efficiency and willingness to further adopt these technologies, though SnapTeamPortal+ reduced co-presence. Based on the findings, we proposed three design implications to inform the development of future VR collaboration systems.
Volume 31, Issue 5, Pages 3314-3324.
Citations: 0
PwP: Permutating with Probability for Efficient Group Selection in VR.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-11 DOI: 10.1109/TVCG.2025.3549560
Jian Wu, Weicheng Zhang, Handong Chen, Wei Lin, Xuehuai Shi, Lili Wang
Abstract: Group selection in virtual reality is an important means of multi-object selection: it allows users to quickly group multiple objects and can significantly improve efficiency when operating on multiple types of objects. In this paper, we propose a group selection method based on multiple rounds of probability permutation, in which each round of interactive selection, object grouping probability computation, and position rearrangement makes the object layout of the next round easier to batch-select, substantially improving the efficiency of group selection. We conducted ablation experiments to determine the algorithm coefficients and validate the effectiveness of the algorithm. In addition, an empirical user study was conducted to evaluate the ability of our method to significantly improve the efficiency of the group selection task in an immersive virtual reality environment. The reduced number of operations also indirectly reduces the user task load and improves usability.
Citations: 0
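The abstract describes rounds of interactive selection, per-object grouping-probability computation, and position rearrangement so that the next round is easier to batch-select. The exact probability model and rearrangement rule are not specified in the abstract, so the sketch below uses stand-ins: each ungrouped object's probability of belonging to the group currently being built is a softmax over feature similarity to that group's centroid, and "rearrangement" is simulated by ordering objects by that probability so likely members end up adjacent. The feature model, threshold, and simulated user are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scene: 30 objects with 2-D appearance features, secretly belonging to 3 groups.
true_group = np.repeat([0, 1, 2], 10)
features = rng.normal(true_group[:, None] * 3.0, 1.0, size=(30, 2))

def grouping_probabilities(features, member_idx, ungrouped_idx, temperature=1.0):
    """P(object joins the current group), from similarity to the group's feature centroid."""
    centroid = features[member_idx].mean(axis=0)
    dist = np.linalg.norm(features[ungrouped_idx] - centroid, axis=1)
    logits = -dist / temperature
    p = np.exp(logits - logits.max())
    return p / p.sum()

ungrouped = list(range(len(features)))
groups = []
while ungrouped:
    # The "user" seeds a new group with one object (simulated here).
    seed = ungrouped[0]
    members = [seed]
    ungrouped.remove(seed)

    while ungrouped:
        probs = grouping_probabilities(features, members, ungrouped)
        # Rearrangement step: order remaining objects by probability so that
        # likely members sit next to each other for an easy batch selection.
        order = np.argsort(-probs)
        # Simulated batch selection: sweep up high-probability objects that
        # really belong with the seed (a real user would drag over a region).
        picked = [ungrouped[i] for i in order
                  if probs[i] > 0.05 and true_group[ungrouped[i]] == true_group[seed]]
        if not picked:
            break
        members.extend(picked)
        ungrouped = [i for i in ungrouped if i not in picked]
    groups.append(members)

print("recovered groups:", [sorted(g) for g in groups])
```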