Latest articles in IEEE Transactions on Visualization and Computer Graphics

Immersion, Attention, and Collaboration in Spatial Computing: a Study on Work Performance with Apple Vision Pro.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549145
Carolin Wienrich, David Obremski
{"title":"Immersion, Attention, and Collaboration in Spatial Computing: a Study on Work Performance with Apple Vision Pro.","authors":"Carolin Wienrich, David Obremski","doi":"10.1109/TVCG.2025.3549145","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549145","url":null,"abstract":"<p><p>Spatial computing is set to change the way we work. It will enable both focused work through a higher degree of immersion and collaborative work through enhanced integration of shared interaction spaces or interaction partners. With the Apple Vision Pro, the level of immersion can be adjusted seamlessly. So far, there have been no systematic studies on how this adjustability affects work performance when working alone or together. The present empirical study fills this research gap by varying the level of immersion across three stages (high, medium, low) while solving various tasks with the Apple Vision Pro. The results show that selective attention improves significantly with increasing immersion levels. In contrast, social presence decreases with increasing immersion. In general, participants performed better in the individual task than in the collaborative task. However, the degree of immersion did not influence the collaborative performance. In addition, we could not determine any adverse effects on depth perception or user experience after use. The present study provides initial contributions to the future of spatial computing in professional settings and highlights the importance of balancing immersion and social interaction in a world where digital and physical spaces seamlessly coexist.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond Subjectivity: Continuous Cybersickness Detection Using EEG-based Multitaper Spectrum Estimation.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549132
Berken Utku Demirel, Adnan Harun Dogan, Juliete Rossie, Max Mobus, Christian Holz
{"title":"Beyond Subjectivity: Continuous Cybersickness Detection Using EEG-based Multitaper Spectrum Estimation.","authors":"Berken Utku Demirel, Adnan Harun Dogan, Juliete Rossie, Max Mobus, Christian Holz","doi":"10.1109/TVCG.2025.3549132","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549132","url":null,"abstract":"<p><p>Virtual reality (VR) presents immersive opportunities across many applications, yet the inherent risk of developing cybersickness during interaction can severely reduce enjoyment and platform adoption. Cybersickness is marked by symptoms such as dizziness and nausea, which previous work primarily assessed via subjective post-immersion questionnaires and motion-restricted controlled setups. In this paper, we investigate the dynamic nature of cybersickness while users experience and freely interact in VR. We propose a novel method to continuously identify and quantitatively gauge cybersickness levels from users' passively monitored electroencephalography (EEG) and head motion signals. Our method estimates multitaper spectrums from EEG, integrating specialized EEG processing techniques to counter motion artifacts, and, thus, tracks cybersickness levels in real-time. Unlike previous approaches, our method requires no user-specific calibration or personalization for detecting cybersickness. Our work addresses the considerable challenge of reproducibility and subjectivity in cybersickness research. In addition to our method's implementation, we release our dataset of 16 participants and approximately 2 hours of total recordings to spur future work in this domain. Source code: https://github.com/eth-siplab/EEG_Cybersickness_Estimation_VR-Beyond_Subjectivity.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
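A minimal Python sketch of the multitaper power spectral density estimate that the abstract names as its core spectral tool, applied to a single EEG window with Slepian (DPSS) tapers. The sampling rate, window length, time-bandwidth product, number of tapers, and the alpha-band readout are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.0, n_tapers=5):
    """Average the periodograms of several DPSS-tapered copies of one signal window."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=n_tapers)            # (n_tapers, n) Slepian sequences, unit energy
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                # average across tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Example: one 2-second synthetic EEG window at 256 Hz (10 Hz "alpha" component plus noise).
fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
freqs, psd = multitaper_psd(eeg, fs)
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"Mean alpha-band power: {alpha_power:.3f}")
```

A continuous detector in the spirit of the paper would slide such windows over the recording and feed the resulting band powers, together with head-motion features, into a regressor for the momentary cybersickness level.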
Look at the Sky: Sky-aware Efficient 3D Gaussian Splatting in the Wild.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549187
Yuze Wang, Junyi Wang, Ruicheng Gao, Yansong Qu, Wantong Duan, Shuo Yang, Yue Qi
{"title":"Look at the Sky: Sky-aware Efficient 3D Gaussian Splatting in the Wild.","authors":"Yuze Wang, Junyi Wang, Ruicheng Gao, Yansong Qu, Wantong Duan, Shuo Yang, Yue Qi","doi":"10.1109/TVCG.2025.3549187","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549187","url":null,"abstract":"<p><p>Photos taken in unconstrained tourist environments often present challenges for accurate 3D scene reconstruction due to variable appearances and transient occlusions, which can introduce artifacts in novel view synthesis. Recently, in-the-wild 3D scene reconstruction has been achieved realistic rendering with Neural Radiance Fields (NeRFs). With the advancement of 3D Gaussian Splatting (3DGS), some methods also attempt to reconstruct 3D scenes from unconstrained photo collections and achieve real-time rendering. However, the rapid convergence of 3DGS is misaligned with the slower convergence of neural network-based appearance encoder and transient mask predictor, hindering the reconstruction efficiency. To address this, we propose a novel sky-aware framework for scene reconstruction from unconstrained photo collection using 3DGS. Firstly, we observe that the learnable per-image transient mask predictor in previous work is unnecessary. By introducing a simple yet efficient greedy supervision strategy, we directly utilize the pseudo mask generated by a pre-trained semantic segmentation network as the transient mask, thereby achieving more efficient and higher quality in-the-wild 3D scene reconstruction. Secondly, we find that separately estimating appearance embeddings for the sky and building significantly improves reconstruction efficiency and accuracy. We analyze the underlying reasons and introduce a neural sky module to generate diverse skies from latent sky embeddings extract from unconstrained images. Finally, we propose a mutual distillation learning strategy to constrain sky and building appearance embeddings within the same latent space, further enhancing reconstruction efficiency and quality. Extensive experiments on multiple datasets demonstrate that the proposed framework outperforms existing methods in novel view and appearance synthesis, offering superior rendering quality with faster convergence and rendering speed.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
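A minimal PyTorch sketch of the "greedy supervision" idea described above: pixels that a pre-trained semantic segmentation network labels as transient are simply excluded from the photometric loss, instead of training a per-image mask predictor. The class IDs, the plain L1 loss, and the tensor shapes are illustrative assumptions, not the paper's exact setup.

```python
import torch

# Label IDs treated as transient (e.g. person, rider, car); these depend on the
# segmentation model's label map and are assumed here for illustration.
TRANSIENT_CLASSES = {11, 12, 13}

def transient_mask(seg_labels: torch.Tensor) -> torch.Tensor:
    """1 where a pixel is static and should be supervised, 0 where it is transient."""
    mask = torch.ones_like(seg_labels, dtype=torch.float32)
    for cls in TRANSIENT_CLASSES:
        mask[seg_labels == cls] = 0.0
    return mask

def masked_photometric_loss(rendered: torch.Tensor, target: torch.Tensor,
                            seg_labels: torch.Tensor) -> torch.Tensor:
    """L1 loss between the 3DGS rendering and the photo, restricted to static pixels."""
    mask = transient_mask(seg_labels).unsqueeze(0)      # (1, H, W)
    per_pixel = (rendered - target).abs()               # (3, H, W)
    return (per_pixel * mask).sum() / (mask.sum() * rendered.shape[0] + 1e-8)

# Usage with dummy tensors standing in for a rendered view, the photo, and the pseudo mask.
H, W = 64, 64
rendered = torch.rand(3, H, W, requires_grad=True)
target = torch.rand(3, H, W)
seg = torch.randint(0, 20, (H, W))
loss = masked_photometric_loss(rendered, target, seg)
loss.backward()
```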
The Impact of Airflow and Multisensory Feedback on Immersion and Cybersickness in a VR Surfing Simulation.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549125
Premankur Banerjee, Mia P Montiel, Lauren Tomita, Olivia Means, Jason Kutch, Heather Culbertson
{"title":"The Impact of Airflow and Multisensory Feedback on Immersion and Cybersickness in a VR Surfing Simulation.","authors":"Premankur Banerjee, Mia P Montiel, Lauren Tomita, Olivia Means, Jason Kutch, Heather Culbertson","doi":"10.1109/TVCG.2025.3549125","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549125","url":null,"abstract":"<p><p>Virtual Reality (VR) systems have increasingly leveraged multisensory feedback to enrich user experience and mitigate cybersickness. With a similar goal in focus, this paper presents an in-depth exploration of integrating airflow with visual and kinesthetic cues in a VR surfing simulation. Utilizing a custom-designed airflow system and a physical surfboard mounted on a 6-Degree of Freedom (DoF) motion platform, we present two studies that evaluate the effect of the different feedback modalities. The first study assesses the impact of variable airflow, which dynamically adjusts to the user's speed (wind speed) in VR, compared to constant airflow conditions, under both active and passive user engagement scenarios. Results demonstrate that variable airflow significantly enhances immersion and reduces cybersickness, particularly when users are actively engaged in the simulation. The second study evaluates the individual and combined effects of vision, motion, and airflow on acceleration perception, user immersion, and cybersickness, revealing that the integration of all feedback modalities yields the most immersive and comfortable VR experience. This study underscores the importance of synchronized multisensory feedback in dynamic VR environments and provides valuable insights for the design of more immersive and realistic virtual simulations, particularly in aquatic, interactive, and motion-intensive scenarios.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
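A minimal Python sketch of the "variable airflow" condition described above: each frame, the user's speed in the simulation is mapped to a fan output level, in contrast to the constant-airflow baseline. The speed range and duty-cycle limits are illustrative assumptions; the paper's airflow hardware and exact mapping are not specified here.

```python
def airflow_duty_cycle(speed_mps: float, v_min: float = 0.0, v_max: float = 10.0,
                       duty_min: float = 0.15, duty_max: float = 1.0) -> float:
    """Linearly map surfing speed (m/s) to a fan PWM duty cycle in [duty_min, duty_max]."""
    t = (speed_mps - v_min) / (v_max - v_min)
    t = max(0.0, min(1.0, t))                 # clamp to the calibrated speed range
    return duty_min + t * (duty_max - duty_min)

# Variable condition at two speeds; a constant-airflow baseline would ignore speed entirely.
print(airflow_duty_cycle(2.0))   # slow paddling -> gentle breeze
print(airflow_duty_cycle(9.0))   # riding a wave -> near-maximum airflow
```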
VASA-Rig: Audio-Driven 3D Facial Animation with 'Live' Mood Dynamics in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549168
Ye Pan, Chang Liu, Sicheng Xu, Shuai Tan, Jiaolong Yang
{"title":"VASA-Rig: Audio-Driven 3D Facial Animation with 'Live' Mood Dynamics in Virtual Reality.","authors":"Ye Pan, Chang Liu, Sicheng Xu, Shuai Tan, Jiaolong Yang","doi":"10.1109/TVCG.2025.3549168","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549168","url":null,"abstract":"<p><p>Audio-driven 3D facial animation is crucial for enhancing the metaverse's realism, immersion, and interactivity. While most existing methods focus on generating highly realistic and lively 2D talking head videos by leveraging extensive 2D video datasets these approaches work in pixel space and are not easily adaptable to 3D environments. We present VASA-Rig, which has achieved a significant advancement in the realism of lip-audio synchronization, facial dynamics, and head movements. In particular, we introduce a novel rig parameter-based emotional talking face dataset and propose the Latents2Rig model, which facilitates the transformation of 2D facial animations into 3D. Unlike mesh-based models, VASA-Rig outputs rig parameters, instantiated in this paper as 174 Metahuman rig parameters, making it more suitable for integration into industry-standard pipelines. Extensive experimental results demonstrate that our approach significantly outperforms existing state-of-the-art methods in terms of both realism and accuracy.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
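A minimal PyTorch sketch of the general idea behind Latents2Rig as the abstract describes it: regress per-frame rig parameters (174 here, matching the Metahuman instantiation the paper mentions) from 2D talking-head latents. The latent dimensionality, network depth, and output normalization are illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class Latents2RigSketch(nn.Module):
    def __init__(self, latent_dim: int = 512, n_rig_params: int = 174):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_rig_params), nn.Tanh(),   # rig controls assumed normalized to [-1, 1]
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, frames, latent_dim) -> rig curves: (batch, frames, n_rig_params)
        return self.net(latents)

model = Latents2RigSketch()
audio_driven_latents = torch.randn(1, 120, 512)    # e.g. 120 frames of generated facial latents
rig_curves = model(audio_driven_latents)
print(rig_curves.shape)                            # torch.Size([1, 120, 174])
```

The resulting per-frame parameter curves could then be streamed into an industry-standard rig, which is the integration advantage the abstract emphasizes over mesh-based outputs.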
Visual Presentation Method for Paranormal Phenomena through Binocular Rivalry Induced by Dichoptic Color Differences.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549172
Kai Guo, Juro Hosoi, Yuki Shimomura, Yuki Ban, Shinichi Warisawa
{"title":"Visual Presentation Method for Paranormal Phenomena through Binocular Rivalry Induced by Dichoptic Color Differences.","authors":"Kai Guo, Juro Hosoi, Yuki Shimomura, Yuki Ban, Shinichi Warisawa","doi":"10.1109/TVCG.2025.3549172","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549172","url":null,"abstract":"<p><p>Paranormal visual effects, such as spirits and miracles, are frequently depicted in visual games and media design. However, current methods do not express paranormal experiences as aspects of the sixth sense. We propose utilizing binocular rivalry to provide a new visual presentation method by displaying different images in each eye. In this study, we conducted two experiments. Experiment 1 assessed paranormal sensation, color perception controllability, and visual discomfort caused by mismatched colors in each eye in relation to color difference. Experiment 2 assessed our proposed visual presentation method in three application scenarios. The results indicate that our proposed method improves the visual experience of more realistic paranormal phenomena. Moreover, the sensation of paranormal activity, color perception controllability, and visual discomfort increase as the color difference between the colors displayed in the two eyes increases.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
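A minimal Python sketch of how a dichoptic color difference, the variable manipulated in Experiment 1, could be quantified: the two eyes' stimulus colors are converted to CIELAB and compared with the CIE76 Delta-E. The conversion below is the standard sRGB/D65 pipeline; the specific tint values are illustrative assumptions, not the colors used in the experiments.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])        # normalize by the D65 white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb_left, rgb_right):
    """CIE76 color difference between the left-eye and right-eye stimuli."""
    return float(np.linalg.norm(srgb_to_lab(rgb_left) - srgb_to_lab(rgb_right)))

# A neutral gray "ghost" warm-tinted in one eye and cool-tinted in the other.
print(delta_e((0.6, 0.5, 0.5), (0.4, 0.55, 0.55)))
```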
Adaptive Score Alignment Learning for Continual Perceptual Quality Assessment of 360-Degree Videos in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549179
Kanglei Zhou, Zikai Hao, Liyuan Wang, Xiaohui Liang
{"title":"Adaptive Score Alignment Learning for Continual Perceptual Quality Assessment of 360-Degree Videos in Virtual Reality.","authors":"Kanglei Zhou, Zikai Hao, Liyuan Wang, Xiaohui Liang","doi":"10.1109/TVCG.2025.3549179","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549179","url":null,"abstract":"<p><p>Virtual Reality Video Quality Assessment (VR-VQA) aims to evaluate the perceptual quality of 360-degree videos, which is crucial for ensuring a distortion-free user experience. Traditional VR-VQA methods trained on static datasets with limited distortion diversity struggle to balance correlation and precision. This becomes particularly critical when generalizing to diverse VR content and continually adapting to dynamic and evolving video distribution variations. To address these challenges, we propose a novel approach for assessing the perceptual quality of VR videos, Adaptive Score Alignment Learning (ASAL). ASAL integrates correlation loss with error loss to enhance alignment with human subjective ratings and precision in predicting perceptual quality. In particular, ASAL can naturally adapt to continually changing distributions through a feature space smoothing process that enhances generalization to unseen content. To further improve continual adaptation to dynamic VR environments, we extend ASAL with adaptive memory replay as a novel Continul Learning (CL) framework. Unlike traditional CL models, ASAL utilizes key frame extraction and feature adaptation to address the unique challenges of non-stationary variations with both the computation and storage restrictions of VR devices. We establish a comprehensive benchmark for VR-VQA and its CL counterpart, introducing new data splits and evaluation metrics. Our experiments demonstrate that ASAL outperforms recent strong baseline models, achieving overall correlation gains of up to 4.78% in the static joint training setting and 12.19% in the dynamic CL setting on various datasets. This validates the effectiveness of ASAL in addressing the inherent challenges of VR-VQA. Our code is available at https://github.com/ZhouKanglei/ASAL_CVQA.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
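A minimal PyTorch sketch of the loss combination the abstract describes: an error term (MSE) keeps predicted quality scores numerically precise, while a Pearson-correlation term keeps their ranking aligned with subjective ratings. The weighting factor and the example scores are illustrative assumptions, not ASAL's actual configuration.

```python
import torch

def score_alignment_loss(pred: torch.Tensor, target: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """alpha * MSE + (1 - alpha) * (1 - Pearson correlation) over a batch of scores."""
    mse = torch.mean((pred - target) ** 2)
    pred_c = pred - pred.mean()
    target_c = target - target.mean()
    corr = (pred_c * target_c).sum() / (pred_c.norm() * target_c.norm() + 1e-8)
    return alpha * mse + (1.0 - alpha) * (1.0 - corr)

# Predicted vs. ground-truth perceptual quality scores for a batch of 360-degree videos.
pred = torch.tensor([3.1, 4.2, 2.5, 3.8], requires_grad=True)
mos = torch.tensor([3.0, 4.5, 2.0, 4.0])
loss = score_alignment_loss(pred, mos)
loss.backward()
```

Training on either term alone tends to favor one goal over the other; the weighted sum is one straightforward way to balance correlation and precision, which is the trade-off the abstract highlights.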
Am I (Not) a Ghost? Leveraging Affordances to Study the Impact of Avatar/Interaction Coherence on Embodiment and Plausibility in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549136
Florian Dufresne, Charlotte Dubosc, Titouan Lefrou, Geoffrey Gorisse, Olivier Christmann
{"title":"Am I (Not) a Ghost? Leveraging Affordances to Study the Impact of Avatar/Interaction Coherence on Embodiment and Plausibility in Virtual Reality.","authors":"Florian Dufresne, Charlotte Dubosc, Titouan Lefrou, Geoffrey Gorisse, Olivier Christmann","doi":"10.1109/TVCG.2025.3549136","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549136","url":null,"abstract":"<p><p>The way users interact with Virtual Reality (VR) environments plays a crucial role in shaping their experience when embodying an avatar. How avatars are perceived by users significantly influences their behavior based on stereotypes, a phenomenon known as the Proteus effect. The psychological concept of affordances may also appear relevant when it comes to interact through avatars and is yet underexplored. Indeed, understanding how virtual representations suggest possibilities for action has attracted considerable attention in the human-computer interaction community, but only few studies clearly address the use of affordances. Of particular interest is the fact aesthetic features of avatars may signify false affordances, conflicting with users' expectations and impacting perceived plausibility of the depicted situations. Recent models of congruence and plausibility suggest altering the latter may result in unexpected consequences on other qualia like presence and embodiment. The proposed research initially aimed at exploring the operationalization of affordances as a tool to investigate the impact of congruence and plausibility manipulations on the sense of embodiment. In spite of a long and careful endeavor materialized by a preliminary assessment and two user studies, it appears our participants were primed by other internal processes that took precedence over the perception of the affordances we selected. However, we unexpectedly manipulated the internal congruence following repeated exposures (mixed design), causing a rupture in plausibility and significantly lowering scores of embodiment and task performance. The present research then constitutes a direct proof of a relationship between a break in plausibility and a break in embodiment.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549181
Tianyu Song, Felix Pabst, Ulrich Eck, Nassir Navab
{"title":"Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations.","authors":"Tianyu Song, Felix Pabst, Ulrich Eck, Nassir Navab","doi":"10.1109/TVCG.2025.3549181","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549181","url":null,"abstract":"<p><p>Robotic ultrasound systems have the potential to improve medical diagnostics, but patient acceptance remains a key challenge. To address this, we propose a novel system that combines an AI-based virtual agent, powered by a large language model (LLM), with three mixed reality visualizations aimed at enhancing patient comfort and trust. The LLM enables the virtual assistant to engage in natural, conversational dialogue with patients, answering questions in any format and offering real-time reassurance, creating a more intelligent and reliable interaction. The virtual assistant is animated as controlling the ultrasound probe, giving the impression that the robot is guided by the assistant. The first visualization employs augmented reality (AR), allowing patients to see the real world and the robot with the virtual avatar superimposed. The second visualization is an augmented virtuality (AV) environment, where the real-world body part being scanned is visible, while a 3D Gaussian Splatting reconstruction of the room, excluding the robot, forms the virtual environment. The third is a fully immersive virtual reality (VR) experience, featuring the same 3D reconstruction but entirely virtual, where the patient sees a virtual representation of their body being scanned in a robot-free environment. In this case, the virtual ultrasound probe, mirrors the movement of the probe controlled by the robot, creating a synchronized experience as it touches and moves over the patient's virtual body. We conducted a comprehensive agent-guided robotic ultrasound study with all participants, comparing these visualizations against a standard robotic ultrasound procedure. Results showed significant improvements in patient trust, acceptance, and comfort. Based on these findings, we offer insights into designing future mixed reality visualizations and virtual agents to further enhance patient comfort and acceptance in autonomous medical procedures.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
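A minimal Python sketch of the conversational layer described above: patient questions, together with the current state of the robotic scan, are passed to an LLM behind a reassuring system prompt. The prompt wording is illustrative, and query_llm is a hypothetical stand-in for whichever chat-completion API the system uses; no specific model or endpoint is implied.

```python
SYSTEM_PROMPT = (
    "You are a calm virtual assistant guiding a robotic ultrasound exam. "
    "Answer the patient's questions plainly, explain what the probe is doing, "
    "and reassure them. Never give a diagnosis."
)

def query_llm(messages):
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def answer_patient(question: str, scan_state: str, history: list) -> str:
    """Build the chat context from history plus the current scan status and ask the LLM."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": f"[scan status: {scan_state}] {question}"}]
    )
    reply = query_llm(messages)
    history += [{"role": "user", "content": question},
                {"role": "assistant", "content": reply}]
    return reply

# e.g. answer_patient("Why is the probe pressing harder now?", "scanning left kidney", [])
```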
Immersive Analytics as a Support Medium for Data-driven Monitoring in Hydropower.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-07 DOI: 10.1109/TVCG.2025.3549157
Marina Lima Medeiros, Hannes Kaufmann, Johanna Schmidt
{"title":"Immersive Analytics as a Support Medium for Data-driven Monitoring in Hydropower.","authors":"Marina Lima Medeiros, Hannes Kaufmann, Johanna Schmidt","doi":"10.1109/TVCG.2025.3549157","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549157","url":null,"abstract":"<p><p>Hydropower turbines are large-scale equipment essential to sustainable energy supply chains, and engineers have few opportunities to examine their internal structure. Our Immersive Analytics (IA) application is part of a research project that combines and compares simulated water turbine flows and sensor-measured data, looking for data-driven predictions of the lifetime of the mechanical parts of hydroelectric power plants. Our prototype combines spatial and abstract data in an immersive environment in which the user can navigate through a full-scale model of a water turbine, view simulated water flows of three different energy supply conditions, and visualize and interact with sensor-collected data situated at the reference position of the sensors in the actual turbine. In this paper, we detail our design process, which resulted from consultations with domain experts and a literature review, give an overview of our prototype, and present its evaluation, resulting from semi-structured interviews with experts and qualitative thematic analysis. Our findings confirm the current literature that IA applications add value to the presentation and analysis of situated data, as they show that we advance in the design directions for IA applications for domain experts that combine abstract and spatial data, with conclusions on how to avoid skepticism from such professionals.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0