{"title":"Visual Features Involved in Determining Apparent Elasticity Elicit Touch Desire.","authors":"Takahiro Kawabe, Yusuke Ujitoko","doi":"10.1109/TVCG.2025.3590469","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3590469","url":null,"abstract":"<p><p>Elastic materials often invite the direct touch of users. It is an open question how seeing elastic materials invokes touch desire. The present study proposes a novel visual feature that modulates apparent elasticity and touch desire. The stimulus for our experiment was a clip in which a computerrendered elastic surface was indented by a needle-like bar. The features of this stimulus that we focused on were spatial deformation range and indentation depth. Observers rated the following three impressions: apparent elasticity, touch desire, and anticipated touch pleasantness. The results showed that both apparent elasticity and touch desire peaked in the middle of the spatial deformation range. The two impressions also depended on indentation depth and were highly correlated with each other. Anticipated touch pleasantness showed a different peak tendency than the other two. An additional block showed that the deformation realism was not related to the above three impressions. The results suggest that apparent elasticity eliciting touch desire can be determined in the parameter space defined by the spatial deformation range and the indentation depth.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the Effectiveness of Mixed Reality as a Simulation Tool for Augmented Reality Office Applications.","authors":"Tianyu Liu, Weiping He, Mark Billinghurst","doi":"10.1109/TVCG.2025.3590002","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3590002","url":null,"abstract":"<p><p>Virtual content in augmented reality (AR) applications can be tailored to a designer's specifications. However, real-world environments are challenging to control precisely or replicate fully. Consequently, prototyping AR applications for specific environments is often difficult. One potential solution is employing mixed reality (MR) to simulate an AR system, enabling controlled experiments. Nevertheless, the effectiveness of using MR to simulate AR office work remains underexplored. In this paper, we report the results of a user study (N = 40) that investigated the impact of an MR simulation of an AR office on participants' task performance and cognitive workload (CWL). Participants completed several office tasks in both an AR scene featuring a virtual monitor and an MR-simulated AR scene. During these tasks, CWL was measured using electroencephalography (EEG) and a subjective questionnaire. The results show that the performance of the pass-through window is a major constraint on the effectiveness of the MR simulation office. Finally, we discuss the study's limitations and directions for future research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Working in Extended Reality in the Wild: Worker and Bystander Experiences of XR Virtual Displays in Public Real-World Settings.","authors":"Leonardo Pavanatto, Verena Biener, Jennifer Chandran, Snehanjali Kalamkar, Feiyu Lu, John J Dudley, Jinghui Hu, G Nikki Ramirez-Saffy, Per Ola Kristensson, Alexander Giovannelli, Luke Schlueter, Jorg Muller, Jens Grubert, Doug A Bowman","doi":"10.1109/TVCG.2025.3589283","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589283","url":null,"abstract":"<p><p>Although access to sufficient screen space is crucial to knowledge work, workers often find themselves with limited access to display infrastructure in remote or public settings. While virtual displays can be used to extend the available screen space through extended reality (XR) head-worn displays (HWD), we must better understand the implications of working with them in public settings from both users' and bystanders' viewpoints. To this end, we conducted two user studies. We first explored the usage of a hybrid AR display across real-world settings and tasks. We focused on how users take advantage of virtual displays and what social and environmental factors impact their usage of the system. A second study investigated the differences between working with a laptop, an AR system, or a VR system in public. We focused on a single location and participants performed a predefined task to enable direct comparisons between the conditions while also gathering data from bystanders. The combined results suggest a positive acceptance of XR technology in public settings and show that virtual displays can be used to accompany existing devices. We highlighted some environmental and social factors. We saw that previous XR experience and personality can influence how people perceive the use of XR in public. In addition, we confirmed that using XR in public still makes users stand out and that bystanders are curious about the devices, yet have no clear understanding of how they can be used.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Helveg: Diagrams for Software Documentation.","authors":"Adam Stepanek, David Kutak, Barbora Kozlikova, Jan Byska","doi":"10.1109/TVCG.2025.3589748","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589748","url":null,"abstract":"<p><p>Software developers often have to gain an understanding of a codebase. Be it programmers getting onboarded onto a team project or, for example, developers striving to grasp an external open-source library. In either case, they frequently turn to the project's documentation. However, documentation in its traditional textual form is ill-suited for this kind of highlevel exploratory analysis, since it is immutable from the readers' perspective and thus forces them to follow a predefined path. We have designed an approach bringing aspects of software architecture visualization to API reference documentation. It utilizes a highly interactive node-link diagram with expressive node glyphs and flexible filtering capabilities, providing a high-level overview of the codebase as well as details on demand. To test our design, we have implemented a prototype named Helveg, capable of automatically generating diagrams of C# codebases. User testing of Helveg confirmed its potential, but it also revealed problems with the readability, intuitiveness, and user experience of our tool. Therefore, in this paper, which is an extended version of our VISSOFT paper with DOI 10.1109/VISSOFT64034.2024.00012, we address many of these problems through major changes to the glyph design, means of interaction, and user interface of the tool. To assess the improvements, this new version of Helveg was evaluated again with the same group of participants as the previous version.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DCUDF2: Improving Efficiency and Accuracy in Extracting Zero Level Sets from Unsigned Distance Fields.","authors":"Xuhui Chen, Fugang Yu, Fei Hou, Wencheng Wang, Zhebin Zhang, Ying He","doi":"10.1109/TVCG.2025.3588659","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3588659","url":null,"abstract":"<p><p>Unsigned distance fields (UDFs) provide a flexible representation for models with complex topologies, but accurately extracting their zero level sets remains challenging, particularly in preserving topological correctness and fine geometric details. We present DCUDF2, an enhanced method that builds upon DCUDF to address these limitations. Our approach introduces an accuracy-aware loss function with self-adaptive weights, enabling precise geometric fitting while avoiding over-smoothing. To improve robustness, we propose a topology correction strategy that reduces the sensitivity to hyper-parameter settings. Furthermore, we develop new operations leveraging self-adaptive weights to accelerate convergence and improve runtime efficiency. Extensive experiments on diverse datasets demonstrate that DCUDF2 consistently outperforms DCUDF and existing methods in both geometric fidelity and topological accuracy.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144644374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differentiable Collision-Supervised Tooth Arrangement Network With a Decoupling Perspective.","authors":"Zhihui He, Chengyuan Wang, Shidong Yang, Li Chen, Yanheng Zhou, Shuo Wang","doi":"10.1109/TVCG.2025.3589215","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589215","url":null,"abstract":"<p><p>Tooth arrangement is an essential step in the digital orthodontic planning process. Existing learning-based methods use hidden teeth features to directly regress teeth motions, which couples target pose perception and motion regression. It could lead to poor perceptions of three-dimensional transformation. They also ignore the possible overlaps or gaps between teeth of predicted dentition, which is generally unacceptable. Therefore, we propose DTAN, a differentiable collision-supervised tooth arrangement network, decoupling predicting tasks and feature modeling. DTAN decouples the tooth arrangement task by first predicting the hidden features of the final teeth poses and then using them to assist in regressing the motions between the beginning and target teeth. To learn the hidden features better, DTAN also decouples the teeth-hidden features into geometric and positional features, which are further supervised by feature consistency constraints. Furthermore, we propose a novel differentiable collision loss function for point cloud data to constrain the related gestures between teeth, which can be easily extended to other 3D point cloud tasks. We propose an arch-width guided tooth arrangement network, named C-DTAN, to make the results controllable. We construct three different tooth arrangement datasets and achieve drastically improved performance on accuracy and speed compared with existing methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144644375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GeneticPrism: Multifaceted Visualization of Citation-based Scholarly Research Evolution.","authors":"Ye Sun, Zipeng Liu, Yuankai Luo, Lei Xia, Lei Shi","doi":"10.1109/TVCG.2025.3589485","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589485","url":null,"abstract":"<p><p>Understanding the evolution of scholarly research is essential for many real-life decision-making processes in academia, such as research planning, frontier exploration, and award selection. Popular platforms like Google Scholar and Web of Science rely on numerical indicators that are too abstract to convey the context and content of scientific research, while most existing visualization approaches on mapping science do not consider the presentation of individual scholars' research evolution using curated self-citation data. This paper builds on our previous work and proposes an integrated pipeline to visualize a scholar's research evolution from multiple topic facets. A novel 3D prism-shaped visual metaphor is introduced as the overview of a scholar's research profile, whilst their scientific evolution on each topic is displayed in a more structured manner. Additional designs by topic chord diagram, streamgraph visualization, and inter-topic flow map, optimized by an elaborate layout algorithm, assist in perceiving the scholar's scientific evolution across topics. A new six-degree-impact glyph metaphor highlights key interdisciplinary works driving the evolution. The proposed visualization methods are evaluated through case studies analyzing the careers of prestigious Turing award laureates, one major visualization venue, and a focused user study. The GeneticPrism web application is publicly available at https://genetic-flow.com.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144644387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of the Sense of Embodiment in Virtual Co-embodiment Rehabilitation: A Structural Equation Modeling Approach.","authors":"Chengjie Zhang, Suiran Yu","doi":"10.1109/TVCG.2025.3589111","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589111","url":null,"abstract":"<p><p>Sense of embodiment (SoE) refers to the participant's perception of a virtual avatar as an extension of their own body, involving both physical and functional aspects. Research has highlighted the importance of SoE for the effectiveness of virtual rehabilitation. Virtual co-embodiment technology, an emerging virtual reality (VR) application, has the potential to enhance users' engagement and SoE, which demonstrates significant promise for motor rehabilitation. However, the exploration of factors influencing embodiment in virtual co-embodiment is still limited, particularly regarding both internal and external factors, which constrains its rehabilitation applications. This study investigates factors influencing SoE changes in Virtual Co-embodiment Rehabilitation by developing a theoretical model, based on 859 valid trials collected from 40 healthy participants, and analyzing the data using Structural Equation Modeling (SEM) and Partial Least Squares Structural Equation Modeling (PLS-SEM). The results suggest that both \"visual consistency\" (external factor) and \"individual sensitivity\" (internal factor) may influence changes in SoE, with \"visual consistency\" appearing to have a predominant effect. These findings contribute to understanding changes in SoE in virtual co-embodiment and offer insights for optimizing the technology in rehabilitation.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144644373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Head-EyeK: Head-eye Coordination and Control Learned in Virtual Reality.","authors":"Yifang Pan, Ludwig Sidenmark, Karan Singh","doi":"10.1109/TVCG.2025.3589333","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3589333","url":null,"abstract":"<p><p>Human head-eye coordination is a complex behavior, shaped by physiological constraints, psychological context, and gaze intent. Current context-specific gaze models in both psychology and graphics fail to produce plausible head-eye coordination for general patterns of human gaze behavior. In this paper, we: 1) propose and validate an experimental protocol to collect head-eye motion data during sequential look-at tasks in Virtual Reality; 2) identify factors influencing head-eye coordination using this data; and 3) introduce a head-eye coordinated Inverse Kinematic gaze model Head-EyeK that integrates these insights. Our evaluation of Head-EyeK is three-fold: we show the impact of algorithmic parameters on gaze behavior; we show a favorable comparison to prior art both quantitatively against ground-truth data, and qualitatively using a perceptual study; and we show multiple scenarios of complex gaze behavior credibly animated using Head-EyeK.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144644388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOST: Motion Diffusion Model for Rare Text via Temporal Clip Banzhaf Interaction.","authors":"Yin Wang, Mu Li, Zhiying Leng, Frederick W B Li, Xiaohui Liang","doi":"10.1109/TVCG.2025.3588509","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3588509","url":null,"abstract":"<p><p>We introduce MOST, a novel MOtion diffuSion model via Temporal clip Banzhaf interaction, aimed at addressing the persistent challenge of generating human motion from rare language prompts. While previous approaches struggle with coarse-grained matching and overlook important semantic cues due to motion redundancy, our key insight lies in leveraging fine-grained clip relationships to mitigate these issues. MOST's retrieval stage presents the first formulation of its kind - temporal clip Banzhaf interaction - which precisely quantifies textualmotion coherence at the clip level. This facilitates direct, finegrained text-to-motion clip matching and eliminates prevalent redundancy. In the generation stage, a motion prompt module effectively utilizes retrieved motion clips to produce semantically consistent movements. Extensive evaluations confirm that MOST achieves state-of-the-art text-to-motion retrieval and generation performance by comprehensively addressing previous challenges, as demonstrated through quantitative and qualitative results highlighting its effectiveness, especially for rare prompts.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144612768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}