{"title":"An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality.","authors":"Loen Boban, Ronan Boulic, Bruno Herbelin","doi":"10.1109/TVCG.2025.3549123","DOIUrl":"10.1109/TVCG.2025.3549123","url":null,"abstract":"<p><p>The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. In Virtual Reality (VR), echoing such body judgment inaccuracies, avatars with strong appearance dissimilarities with respect to users' bodies can be successfully embodied. The present experimental work investigates, in the healthy population, the perception of the own body in immersive and embodied VR, as well as the impact of being co-present with virtual humans on such self-perception. Participants were successively presented with different avatars, corresponding to various upper- and lower-body proportions, and were asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in co-presence with virtual agents corresponding to various body appearances. Results show an overall overestimation of one's leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect such body morphology judgment inaccuracy, with participants reporting lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings suggest specifics of embodied body judgment methods, likely resulting from the experience of embodying the avatar as compared to visual appreciation only.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design.","authors":"Qipeng Wang, Rui Sheng, Shaolun Ruan, Xiaofu Jin, Chuhan Shi, Min Zhu","doi":"10.1109/TVCG.2025.3552134","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3552134","url":null,"abstract":"<p><p>Designing synthetic routes for novel molecules is pivotal in various fields like medicine and chemistry. In this process, researchers need to explore a set of synthetic reactions to transform starting molecules into intermediates step by step until the target novel molecule is obtained. However, designing synthetic routes presents challenges for researchers. First, researchers need to make decisions among numerous possible synthetic reactions at each step, considering various criteria (e.g., yield, experimental duration, and the count of experimental steps) to construct the synthetic route. Second, they must consider the potential impact of one choice at each step on the overall synthetic route. To address these challenges, we proposed SynthLens, a visual analytics system to facilitate the iterative construction of synthetic routes by exploring multiple possibilities for synthetic reactions at each step of construction. Specifically, we have introduced a tree-form visualization in SynthLens to compare and evaluate all the explored routes at various exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to consider their construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validated the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design process of synthetic routes. Finally, we discussed the insights of SynthLens to inspire other multi-criteria decision-making scenarios with visual analytics.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling.","authors":"Haotian Li, Yun Wang, Q Vera Liao, Huamin Qu","doi":"10.1109/TVCG.2025.3552017","DOIUrl":"10.1109/TVCG.2025.3552017","url":null,"abstract":"<p><p>This paper explores the potential for human-AI collaboration in the context of data storytelling for data workers. Data storytelling communicates insights and knowledge from data analysis. It plays a vital role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers need to spend tremendous effort on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies focus more on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To address this gap, we conducted an interview study with 18 data workers to explore their preferences for AI collaboration in the planning, implementation, and communication stages of their workflow. We propose a framework for expected AI collaborators' roles, categorize people's expectations for the level of automation for different tasks, and delve into the reasons behind them. Our research provides insights and suggestions for the design of future AI-powered data storytelling tools.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensitivity to Redirected Walking Considering Gaze, Posture, and Luminance.","authors":"Niall L Williams, Logan C Stevens, Aniket Bera, Dinesh Manocha","doi":"10.1109/TVCG.2025.3549908","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549908","url":null,"abstract":"<p><p>We study the correlations between redirected walking (RDW) rotation gains and patterns in users' posture and gaze data during locomotion in virtual reality (VR). To do this, we conducted a psychophysical experiment to measure users' sensitivity to RDW rotation gains and collect gaze and posture data during the experiment. Using multilevel modeling, we studied how different factors of the VR system and user affected their physiological signals. In particular, we studied the effects of redirection gain, trial duration, trial number (i.e., time spent in VR), and participant gender on postural sway, gaze velocity (a proxy for gaze stability), and saccade and blink rate. Our results showed that, in general, physiological signals were significantly positively correlated with the strength of redirection gain, the duration of trials, and the trial number. Gaze velocity was negatively correlated with trial duration. Additionally, we measured users' sensitivity to rotation gains in well-lit (photopic) and dimly-lit (mesopic) virtual lighting conditions. Results showed that there were no significant differences in RDW detection thresholds between the photopic and mesopic luminance conditions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Empathy for Visual Impairments: A Multi-Modal Approach in VR Serious Games","authors":"Yuexi Dong;Haonan Guo;Jingya Li","doi":"10.1109/TVCG.2025.3549900","DOIUrl":"10.1109/TVCG.2025.3549900","url":null,"abstract":"Visual impairments significantly impact individuals' ability to perceive their surroundings, affecting everyday tasks and spatial navigation. This study explores SEEK VR,s a multi-modal virtual reality game designed to foster empathy and raise awareness about the challenges faced by visually impaired individuals. By integrating visual feedback, 3D spatial audio, and haptic feedback, the game provides an immersive experience that helps participants understand the physical and emotional struggles of visual impairment. The paper includes a review of related work on empathy-driven VR games, a detailed description of the design and implementation of SEEK VR, and the technical aspects of its multimodal interactions. A user study with 24 participants demonstrated significant increases in empathy, particularly in empathy and willingness to help visually impaired individuals in real-world scenarios. These findings highlight the potential of VR serious games to promote social awareness and empathy through immersive, multi-modal interactions.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"2954-2963"},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LAPIG: Language Guided Projector Image Generation with Surface Adaptation and Stylization","authors":"Yuchen Deng;Haibin Ling;Bingyao Huang","doi":"10.1109/TVCG.2025.3549859","DOIUrl":"10.1109/TVCG.2025.3549859","url":null,"abstract":"We propose LAPIG, a language guided projector image generation method with surface adaptation and stylization. LAPIG consists of a projector-camera system and a target textured projection surface. LAPIG takes the user text prompt as input and aims to transform the surface style using the projector. LAPIG's key challenge is that due to the projector's physical brightness limitation and the surface texture, the viewer's perceived projection may suffer from color saturation and artifacts in both dark and bright regions, such that even with the state-of-the-art projector compensation techniques, the viewer may see clear surface texture-related artifacts. Therefore, how to generate a projector image that follows the user's instruction while also displaying minimum surface artifacts is an open problem. To address this issue, we propose projection surface adaptation (PSA) that can generate compensable surface stylization. We first train two networks to simulate the projector compensation and project-and-capture processes, this allows us to find a satisfactory projector image without real project-and-capture and utilize gradient descent for fast convergence. Then, we design content and saturation losses to guide the projector image generation, such that the generated image shows no clearly perceivable artifacts when projected. Finally, the generated image is projected for visually pleasing surface style morphing effects. The source code and more results are available on the project page: https://Yu-chen-Deng.github.io/LAPIG/.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"2515-2524"},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TraVIS: A User Trace Analyzer to Support User-Centered Design of Visual Analytics Solutions.","authors":"Matteo Filosa, Alexandra Plexousaki, Matteo Di Stadio, Francesco Bovi, Dario Benvenuti, Tiziana Catarci, Marco Angelini","doi":"10.1109/TVCG.2025.3546863","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3546863","url":null,"abstract":"<p><p>Visual Analytics (VA) has become a paramount discipline in supporting data analysis in many scientific domains, empowering the human user with automatic capabilities while keeping the lead in the analysis. At the same time, designing an effective VA solution is not a simple task, requiring its adaptation to the problem at hand and the intended user of the system. In this scenario, the User-Centered Design (UCD) methodology provides the framework to incorporate user needs into the design of a VA solution. On the other hand, its implementation mainly relies on qualitative feedback, with the designer missing tools supporting her in quantitatively reporting the user feedback and using it to hypothesize and test the successive changes to the VA solution. To overcome this limitation, we propose TraVIS, a Visual Analytics solution allowing the loading of a web-based VA system, collecting user traces, and analyzing them with respect to the system at hand. In this process, the designer can leverage the collected traces and relate them to the tasks the VA solution supports and how those can be achieved. Using TraVIS, the designer can identify ineffective interaction paths, analyze the user traces support to task completion, hypothesize corrections to the design, and evaluate the effect of changes. We evaluated TraVIS through experimentation with 11 VA systems from literature, a use case, and user evaluation with five experts. Results show the benefits that TraVIS provides in terms of identifying design problems and efficient support for UCD. TraVIS is available at: https://github.com/XAIber-lab/TraVIS.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception","authors":"Marie Luisa Fiedler;Mario Botsch;Carolin Wienrich;Marc Erich Latoschik","doi":"10.1109/TVCG.2025.3549851","DOIUrl":"10.1109/TVCG.2025.3549851","url":null,"abstract":"This paper investigates if and how self-similarity and having motor control impact sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). We conducted a 2x2 mixed design experiment involving 60 participants who interacted with either synchronously moving virtual humans or independently moving ones, each with self-similar or generic appearances, across two consecutive AR sessions. Participants evaluated their sense of embodiment, self-identification, and body weight perception of the virtual human. Our results show that self-similarity significantly enhanced sense of embodiment, self-identification, and the accuracy of body weight estimates with the virtual human. However, the effects of having motor control over the virtual human movements were notably weaker in these measures than in similar VR studies. Further analysis indicated that not only the virtual human itself but also the participants' body weight, self-esteem, and body shape concerns predict body weight estimates across all conditions. Our work advances the understanding of virtual human body weight perception in AR systems, emphasizing the importance of factors such as coherence with the real-world environment.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"2828-2838"},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10930320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ShiftingGolf: Gross Motor Skill Correction Using Redirection in VR","authors":"Chen-Chieh Liao;Zhihao Yu;Hideki Koike","doi":"10.1109/TVCG.2025.3549170","DOIUrl":"10.1109/TVCG.2025.3549170","url":null,"abstract":"Sports performance is often hindered by unintentional habits, particularly in golf, where achieving a consistent and correct swing is crucial yet challenging due to ingrained swing path habits. This study explores redirection approaches in virtual reality (VR) to correct golfers' swing paths through strategic ball shifting. By initiating a forward ball shift just before impact, we aim to prompt golfers to react and modify their swing motion, thereby eliminating undesirable swing habits. Building on recent research, our VR-based methods incorporate a gradual transformation of visuomotor associations to enhance motor skill learning. In this study, we develop three ball shift patterns, including a novel pattern that employs gradual ball shifts with interspersed normal conditions, designed to retain learning effects post-training. A preliminary study, including expert interviews, assesses the feasibility of various ball-shifting directions. Subsequently, a comprehensive user study measures the learning effects across different ball shift modes. The results indicate that our proposed redirection mode effectively corrects swing paths and yields a sustained learning effect.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"3429-3439"},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10930710","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization.","authors":"Severin Engert, Andreas Peetz, Konstantin Klamka, Pierre Surer, Tobias Isenberg, Raimund Dachselt","doi":"10.1109/TVCG.2025.3547432","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3547432","url":null,"abstract":"<p><p>We investigate the concept of Augmented Dynamic Data Physicalization, the combination of shape-changing physical data representations with high-resolution virtual content. Tangible data sculptures, for example using mid-air shape-changing interfaces, are aesthetically appealing and persistent, but also technically and spatially limited. Blending them with Augmented Reality overlays such as scales, labels, or other contextual information opens up new possibilities. We explore the potential of this promising combination and propose a set of essential visualization components and interaction principles. They facilitate sophisticated hybrid data visualizations, for example Overview & Detail techniques or 3D view aggregations. We discuss three implemented applications that demonstrate how our approach can be used for personal information hubs, interactive exhibitions, and immersive data analytics. Based on these use cases, we conducted hands-on sessions with external experts, resulting in valuable feedback and insights. They highlight the potential of combining dynamic physicalizations with dynamic AR overlays to create rich and engaging data experiences.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}