{"title":"Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception.","authors":"Marie Luisa Fiedler, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik","doi":"10.1109/TVCG.2025.3549851","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549851","url":null,"abstract":"<p><p>This paper investigates whether and how self-similarity and having motor control impact sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). We conducted a 2x2 mixed design experiment involving 60 participants who interacted with either synchronously moving virtual humans or independently moving ones, each with self-similar or generic appearances, across two consecutive AR sessions. Participants evaluated their sense of embodiment, self-identification, and body weight perception of the virtual human. Our results show that self-similarity significantly enhanced sense of embodiment, self-identification, and the accuracy of body weight estimates of the virtual human. However, the effects of having motor control over the virtual human's movements were notably weaker in these measures than in similar VR studies. Further analysis indicated that not only the virtual human itself but also the participants' body weight, self-esteem, and body shape concerns predict body weight estimates across all conditions. Our work advances the understanding of virtual human body weight perception in AR systems, emphasizing the importance of factors such as coherence with the real-world environment.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ShiftingGolf: Gross Motor Skill Correction using Redirection in VR.","authors":"Chen-Chieh Liao, Zhihao Yu, Hideki Koike","doi":"10.1109/TVCG.2025.3549170","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549170","url":null,"abstract":"<p><p>Sports performance is often hindered by unintentional habits, particularly in golf, where achieving a consistent and correct swing is crucial yet challenging due to ingrained swing path habits. This study explores redirection approaches in virtual reality (VR) to correct golfers' swing paths through strategic ball shifting. By initiating a forward ball shift just before impact, we aim to prompt golfers to react and modify their swing motion, thereby eliminating undesirable swing habits. Building on recent research, our VR-based methods incorporate a gradual transformation of visuomotor associations to enhance motor skill learning. In this study, we develop three ball shift patterns, including a novel pattern that employs gradual ball shifts with interspersed normal conditions, designed to retain learning effects post-training. A preliminary study, including expert interviews, assesses the feasibility of various ball-shifting directions. Subsequently, a comprehensive user study measures the learning effects across different ball shift modes. The results indicate that our proposed redirection mode effectively corrects swing paths and yields a sustained learning effect.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Redirection Detection Thresholds for Avatar Manipulation with Different Body Parts.","authors":"Ryutaro Watanabe, Azumi Maekawa, Michiteru Kitazaki, Yasuaki Monnai, Masahiko Inami","doi":"10.1109/TVCG.2025.3549161","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549161","url":null,"abstract":"<p><p>This study investigates how both the body part used to control a VR avatar and the avatar's appearance affect redirection detection thresholds. We conducted experiments comparing hand and foot manipulation of two types of avatars: a hand-shaped avatar and an abstract spherical avatar. Our results show that, irrespective of the body part used, the redirection detection threshold increased by 21% when using the hand avatar compared to the abstract avatar. Additionally, when the avatar's position was redirected toward the body midline, the detection threshold increased by 49% compared to redirection away from the midline. No significant differences in detection thresholds were observed between the hand and foot manipulations. These findings suggest that avatar appearance and redirection direction significantly influence user perception in VR environments, offering valuable insights for the design of full-body VR interactions and human augmentation systems.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DPCS: Path Tracing-Based Differentiable Projector-Camera Systems.","authors":"Jijiang Li, Qingyue Deng, Haibin Ling, Bingyao Huang","doi":"10.1109/TVCG.2025.3549890","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549890","url":null,"abstract":"<p><p>Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and associated scene parameters of a ProCams, and is crucial for spatial augmented reality (SAR) applications such as ProCams relighting and projector compensation. Recent advances use an end-to-end neural network to learn the project-and-capture process. However, these neural network-based methods often implicitly encapsulate scene parameters, such as surface material, gamma, and white balance, in the network parameters, and are less interpretable and difficult to use for novel scene simulation. Moreover, neural networks usually learn indirect illumination implicitly, in an image-to-image translation manner, which leads to poor performance in simulating complex projection effects such as soft shadows and interreflection. In this paper, we introduce DPCS, a novel path tracing-based differentiable ProCams simulation method that explicitly integrates multi-bounce path tracing. Our DPCS models the physical project-and-capture process using differentiable physically-based rendering (PBR), enabling the scene parameters to be explicitly decoupled and learned using much fewer samples. Moreover, our physically-based method not only enables high-quality downstream ProCams tasks, such as ProCams relighting and projector compensation, but also allows novel scene simulation using the learned scene parameters. In experiments, DPCS demonstrates clear advantages over previous approaches in ProCams simulation, offering better interpretability, more efficient handling of complex interreflections and shadows, and requiring fewer training samples. The code and dataset are available on the project page: https://jijiangli.github.io/DPCS/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
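The DPCS abstract's key idea, decoupling interpretable scene parameters by fitting them through an explicit differentiable forward model of project-and-capture, can be sketched in a drastically simplified form. This is not the paper's method: DPCS uses multi-bounce physically-based path tracing over full images, whereas the toy below models a single pixel as `c = albedo * p + ambient` and recovers both parameters by gradient descent. All names here are hypothetical illustration.

```python
# Toy illustration of fitting explicit scene parameters through a
# differentiable forward model of project-and-capture. The "scene" is a
# single pixel with unknown albedo and ambient term: c = albedo * p + ambient.

def fit_scene(samples, lr=0.1, steps=2000):
    """Recover (albedo, ambient) from (projector input, captured value) pairs
    by gradient descent on the mean squared error."""
    albedo, ambient = 0.5, 0.0          # initial guesses
    for _ in range(steps):
        g_alb = g_amb = 0.0
        for p, c in samples:            # accumulate analytic gradients
            err = albedo * p + ambient - c
            g_alb += 2 * err * p
            g_amb += 2 * err
        albedo -= lr * g_alb / len(samples)
        ambient -= lr * g_amb / len(samples)
    return albedo, ambient

# Synthetic project-and-capture pairs from ground truth albedo=0.8, ambient=0.1.
pairs = [(p / 4, 0.8 * (p / 4) + 0.1) for p in range(5)]
albedo, ambient = fit_scene(pairs)
# albedo converges to ~0.8 and ambient to ~0.1
```

Because the forward model is explicit, the recovered parameters stay interpretable and can be reused to simulate novel projector inputs, which is the property the abstract contrasts with black-box image-to-image networks.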
{"title":"Augmented Dynamic Data Physicalization: Blending Shape-changing Data Sculptures with Virtual Content for Interactive Visualization.","authors":"Severin Engert, Andreas Peetz, Konstantin Klamka, Pierre Surer, Tobias Isenberg, Raimund Dachselt","doi":"10.1109/TVCG.2025.3547432","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3547432","url":null,"abstract":"<p><p>We investigate the concept of Augmented Dynamic Data Physicalization, the combination of shape-changing physical data representations with high-resolution virtual content. Tangible data sculptures, for example using mid-air shape-changing interfaces, are aesthetically appealing and persistent, but also technically and spatially limited. Blending them with Augmented Reality overlays such as scales, labels, or other contextual information opens up new possibilities. We explore the potential of this promising combination and propose a set of essential visualization components and interaction principles. They facilitate sophisticated hybrid data visualizations, for example Overview & Detail techniques or 3D view aggregations. We discuss three implemented applications that demonstrate how our approach can be used for personal information hubs, interactive exhibitions, and immersive data analytics. Based on these use cases, we conducted hands-on sessions with external experts, resulting in valuable feedback and insights. They highlight the potential of combining dynamic physicalizations with dynamic AR overlays to create rich and engaging data experiences.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FlowHON: Representing Flow Fields Using Higher-Order Networks.","authors":"Nan Chen, Zhihong Li, Jun Tao","doi":"10.1109/TVCG.2025.3550130","DOIUrl":"10.1109/TVCG.2025.3550130","url":null,"abstract":"<p><p>Flow fields are often partitioned into data blocks for massively parallel computation and analysis based on blockwise relationships. However, most previous techniques only consider first-order dependencies among blocks, which are insufficient to describe complex flow patterns. In this work, we present FlowHON, an approach to construct higher-order networks (HONs) from flow fields. FlowHON captures the inherent higher-order dependencies in flow fields as nodes and estimates the transitions among them as edges. We formulate the HON construction as an optimization problem with three linear transformations: the first two correspond to node generation and the third to edge estimation. Our formulation allows the node generation and edge estimation to be solved in a unified framework. With FlowHON, the rich set of traditional graph algorithms can be applied without any modification to analyze flow fields, while leveraging the higher-order information to understand the inherent structure and manage flow data for efficiency. We demonstrate the effectiveness of FlowHON using a series of downstream tasks, including estimating the density of particles during tracing, partitioning flow fields for data management, and understanding flow fields using the node-link diagram representation of networks.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
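The higher-order-network idea the FlowHON abstract builds on (encoding a block's recent history in the node itself, so transitions can depend on more than the current block) can be illustrated with a toy counting-based sketch. This is only a generic HON illustration, not the paper's method: FlowHON learns node generation and edge estimation jointly via three linear transformations, which this sketch does not reproduce, and all names below are hypothetical.

```python
from collections import defaultdict

def build_hon(trajectories, order=2):
    """Toy higher-order network: each node is the tuple of the last `order`
    blocks a particle visited; edge weights are observed transition
    probabilities between such history nodes."""
    counts = defaultdict(int)
    for traj in trajectories:
        for i in range(len(traj) - order):
            src = tuple(traj[i:i + order])
            dst = tuple(traj[i + 1:i + 1 + order])
            counts[(src, dst)] += 1
    # normalize counts into per-source transition probabilities
    totals = defaultdict(int)
    for (src, _), c in counts.items():
        totals[src] += c
    return {edge: c / totals[edge[0]] for edge, c in counts.items()}

# Two flow patterns that a first-order model would conflate: particles
# reaching block B from A continue to C, while those from D continue to E.
trajs = [["A", "B", "C"], ["A", "B", "C"], ["D", "B", "E"]]
hon = build_hon(trajs, order=2)
# Second-order nodes ("A","B") and ("D","B") keep the two histories apart,
# so each transition is deterministic instead of a mixed distribution at B.
```

A first-order network would give block B a single mixed outgoing distribution; the history-augmented nodes recover the dependence on where particles came from, which is what makes ordinary graph algorithms usable on the richer structure.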
{"title":"Impact of Visual Virtual Scene and Localization Task on Auditory Distance Perception in Virtual Reality.","authors":"Sarah Roskopf, Andreas Mühlberger, Felix Starz, Steven van de Par, Matthias Blau, Leon O H Kroczek","doi":"10.1109/TVCG.2025.3549855","DOIUrl":"10.1109/TVCG.2025.3549855","url":null,"abstract":"<p><p>Virtual reality (VR) makes it possible to investigate auditory perception and cognition in realistic, controlled environments. However, when visual information is presented, sound localization results from multimodal integration. Additionally, using head-mounted displays leads to a distortion of visual egocentric distances. With two different paradigms, we investigated the extent to which different visual scenes influence auditory distance perception and, secondarily, presence and realism. Specifically, different room models were displayed via HMD while participants had to localize sounds emanating from real loudspeakers. In the first paradigm, we manipulated whether a room was congruent or incongruent with the physical room. In a second paradigm, we manipulated room visibility (displaying either an audiovisually congruent room or a scene containing almost no spatial information) and localization task. Participants indicated distances either by placing a virtual loudspeaker, walking, or verbal report. While audiovisual room incongruence had a detrimental effect on distance perception, no main effect of room visibility was found, but there was an interaction with task: overestimation of distances was greater with the placement task in the non-spatial scene. The results suggest an effect of visual scene on auditory perception in VR, implying a need for its consideration, e.g., in virtual acoustics research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MRUnion: Asymmetric Task-Aware 3D Mutual Scene Generation of Dissimilar Spaces for Mixed Reality Telepresence.","authors":"Michael Pabst, Linda Rudolph, Nikolas Brasch, Verena Biener, Chloe Eghtebas, Ulrich Eck, Dieter Schmalstieg, Gudrun Klinker","doi":"10.1109/TVCG.2025.3549878","DOIUrl":"10.1109/TVCG.2025.3549878","url":null,"abstract":"<p><p>In mixed reality (MR) telepresence applications, the differences between participants' physical environments can interfere with effective collaboration. For asymmetric tasks, users might need to access different resources (information, objects, tools) distributed throughout their room. Existing intersection methods do not support such interactions, because a large portion of the telepresence participants' rooms becomes inaccessible, along with the relevant task resources. We propose MRUnion, a Mixed Reality Telepresence pipeline for asymmetric task-aware 3D mutual scene generation. The key concept of our approach is to enable a user in an asymmetric telecollaboration scenario to access the entire room, while still being able to communicate with remote users in a shared space. For this purpose, we introduce a novel mutual room layout called Union. We quantitatively evaluated 882 space combinations involving two, three, and four combined remote spaces and compared our layout to a conventional Intersect room layout. The results show that our method outperforms existing intersection methods and enables a significant increase in space and accessibility to resources within the shared space. In an exploratory user study (N=24), we investigated the applicability of the synthetic mutual scene in both MR and VR setups, where users collaborated on an asymmetric remote assembly task. The study results showed that our method achieved comparable results to the intersect method but requires further investigation in terms of social presence, safety, and support for collaboration. From this study, we derived design implications for synthetic mutual spaces.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SensARy Substitution: Augmented Reality Techniques to Enhance Force Perception in Touchless Robot Control.","authors":"Tonia Mielke, Florian Heinrich, Christian Hansen","doi":"10.1109/TVCG.2025.3549856","DOIUrl":"10.1109/TVCG.2025.3549856","url":null,"abstract":"<p><p>The lack of haptic feedback in touchless human-robot interaction is critical in applications such as robotic ultrasound, where force perception is crucial to ensure image quality. Augmented reality (AR) is a promising tool to address this limitation by providing sensory substitution through visual or vibrotactile feedback. The implementation of visual force feedback requires consideration not only of feedback design but also of positioning. Therefore, we implemented two different visualization types at three different positions and investigated the effects of vibrotactile feedback on these approaches. Furthermore, we examined the effects of multimodal feedback compared to visual or vibrotactile output alone. Our results indicate that sensory substitution eases the interaction compared to a feedback-less baseline condition, with the presence of visual support reducing average force errors and being subjectively preferred by the participants. However, the more feedback was provided, the longer users needed to complete their tasks. Regarding visualization design, a 2D bar visualization reduced force errors compared to a 3D arrow concept. Additionally, visualizations displayed directly on the ultrasound screen were subjectively preferred. With findings regarding feedback modality and visualization design, our work represents an important step toward sensory substitution for touchless human-robot interaction.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of Haptic Feedback on Perception of Threat and Peripersonal Space in Social VR.","authors":"Vojtech Smekal, Jeanne Hecquard, Sophie Kuhne, Nicole Occidental, Anatole Lécuyer, Marc Macé, Béatrice de Gelder","doi":"10.1109/TVCG.2025.3549884","DOIUrl":"10.1109/TVCG.2025.3549884","url":null,"abstract":"<p><p>Humans experience social interactions partly through nonverbal communication, including proxemic behaviors and haptic sensations. Body language, facial expressions, personal spaces, and social touch are multiple factors influencing how a stranger's approach is experienced. Furthermore, the rise of virtual social platforms raises concerns about virtual harassment and the perception of personal space in VR: harassment is felt much more strongly in virtual spaces, and the psychological effects can be just as severe. While most virtual platforms have a 'personal bubble' feature that keeps strangers at a distance, it does not seem to suffice: personal space violations seem influenced by more than simply distance. With this paper, we aim to further clarify the variability of personal spaces. We focus on haptic stimulation, elaborating our hypotheses on the relationship between social touch and the perception of personal spaces. Users wore a haptic compression belt and were immersed in a virtual dark alley. Virtual agents approached them while exhibiting either neutral or threatening body language. In half of all trials, as the agent advanced, the compression belt tightened around the users' torsos with three different pressures. Participants could press a response button when uncomfortable with the agent's proximity. Peripersonal space violations occurred 31% earlier on average when the agent was visibly angry and the compression belt activated. A greater tightening pressure also slightly increased the personal sphere radius by up to 13%. Overall, our results are consistent with previous works on peripersonal spaces. They help further define our relationship to personal space boundaries and encourage using haptic devices during simulated social interactions in VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143627269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}