Latest Articles in IEEE Transactions on Visualization and Computer Graphics

"Heart Flows with Zen": Exploring Multi-modal Mixed Reality to Promote the Inheritance and Experience of Cultural Heritage. “心随禅流”:探索多模态混合现实,促进文化遗产的传承与体验。
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616750
Wenchen Guo, Zhirui Chen, Guoyu Sun, Hailiang Wang
{"title":"\"Heart Flows with Zen\": Exploring Multi-modal Mixed Reality to Promote the Inheritance and Experience of Cultural Heritage.","authors":"Wenchen Guo, Zhirui Chen, Guoyu Sun, Hailiang Wang","doi":"10.1109/TVCG.2025.3616750","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616750","url":null,"abstract":"<p><p>The preservation of cultural heritage (CH) is a complex and promising field. Driven by technological advancements, digitization has emerged as a crucial approach for revitalizing tangible/intangible cultural heritage (TCH/ICH). However, current research and practice remain limited in their exploration of abstract forms of ICH, such as traditional philosophies and ideologies. In this study, utilizing Zen as a context, we designed an immersive mixed reality (MR) experience system, Flowing with Zen, based on formative study and cultural symbol analysis. The MR system integrates multi-modal interfaces, motion capture, environmental sensing, and generative computing, enabling users to engage with four scenarios through meditation, life appreciation, and experiential Zen practice, providing the embodied experience of Zen. Comparative user evaluation (N = 51) revealed that the MR system has significant advantages in eliciting engagement and interest from users, enhancing their aesthetic appreciation and cultural understanding, and increasing the accessibility of Zen. Our research proposes a novel approach and design inspiration for the digital inheritance of abstract ICH.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Behavioral and Symbolic Fillers as Delay Mitigation for Embodied Conversational Agents in Virtual Reality.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616865
Denmar Mojan Gonzales, Snehanjali Kalamkar, Sophie Jorg, Jens Grubert
{"title":"Behavioral and Symbolic Fillers as Delay Mitigation for Embodied Conversational Agents in Virtual Reality.","authors":"Denmar Mojan Gonzales, Snehanjali Kalamkar, Sophie Jorg, Jens Grubert","doi":"10.1109/TVCG.2025.3616865","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616865","url":null,"abstract":"<p><p>When communicating with embodied conversational agents (ECAs) in virtual reality, there might be delays in the responses of the agents lasting several seconds, for example, due to more extensive computations of the answers when large language models are used. Such delays might lead to unnatural or frustrating interactions. In this paper, we investigate filler types to mitigate these effects and lead to a more positive experience and perception of the agent. In a within-subject study, we asked 24 participants to communicate with ECAs in virtual reality, comparing four strategies displayed during the delays: a multimodal behavioral filler consisting of conversational and gestural fillers, a base condition with only idle motions, and two symbolic indicators with progress bars, one embedded as a badge on the agent, the other one external and visualized as a thinking bubble. Our results indicate that the behavioral filler improved perceived response time, three subscales of presence, humanlikeness, and naturalness. Participants looked away from the face more often when symbolic indicators were displayed, but the visualizations did not lead to a more positive impression of the agent or to increased presence. The majority of participants preferred the behavioral fillers, only 12.5% and 4.2% favored the symbolic embedded and external conditions, respectively.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
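The delay-mitigation pattern this abstract describes (start a filler behavior once the agent's answer takes longer than a threshold, stop it when the answer arrives) can be sketched in a few lines of asyncio. This is a minimal illustrative sketch, not the authors' implementation; the function names, timings, and threshold below are all assumptions.

```python
import asyncio
import random

FILLER_DELAY_S = 0.8  # assumed threshold before a filler is shown

async def generate_answer(prompt: str) -> str:
    """Stand-in for a slow LLM call (several seconds in the paper)."""
    await asyncio.sleep(random.uniform(0.2, 3.0))
    return f"Answer to: {prompt}"

async def play_behavioral_filler() -> None:
    """Stand-in for conversational/gestural fillers ('Hmm, let me think...')."""
    while True:
        print("[agent] *thinking gesture* Hmm...")
        await asyncio.sleep(1.0)

async def respond(prompt: str) -> str:
    answer_task = asyncio.create_task(generate_answer(prompt))
    # Only start the filler if the answer is not ready quickly.
    done, _ = await asyncio.wait({answer_task}, timeout=FILLER_DELAY_S)
    if not done:
        filler = asyncio.create_task(play_behavioral_filler())
        answer = await answer_task
        filler.cancel()  # stop filling as soon as the real answer arrives
    else:
        answer = answer_task.result()
    print(f"[agent] {answer}")
    return answer

if __name__ == "__main__":
    asyncio.run(respond("What is the weather in the virtual world?"))
```

A symbolic indicator (progress bar badge or thinking bubble) would slot into the same structure: the filler task would update a visualization instead of playing conversational gestures.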
Exploring and Modeling the Effects of Eye-Tracking Accuracy and Precision on Gaze-Based Steering in Virtual Environments.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616824
Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang
{"title":"Exploring and Modeling the Effects of Eye-Tracking Accuracy and Precision on Gaze-Based Steering in Virtual Environments.","authors":"Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3616824","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616824","url":null,"abstract":"<p><p>Recent advances in eye-tracking technology have positioned gaze as an efficient and intuitive input method for Virtual Reality (VR), offering a natural and immersive user experience. As a result, gaze input is now leveraged for fundamental interaction tasks such as selection, manipulation, crossing, and steering. Although several studies have modeled user steering performance across various path characteristics and input methods, our understanding of gaze-based steering in VR remains limited. This gap persists because the unique qualities of eye movements-involving rapid, continuous motions-and the variability in eye-tracking make findings from other input modalities nontransferable to a gaze-based context, underscoring the need for a dedicated investigation into gaze-based steering behaviors and performance. To bridge this gap, we present two user studies to explore and model gaze-based steering. In the first one, user behavior data are collected across various path characteristics and eye-tracking conditions. Based on this data, we propose four refined models that extend the classic Steering Law to predict users' movement time in gaze-based steering tasks, explicitly incorporating the impact of tracking quality. The best-performing model achieves an adjusted R<sup>2</sup> of 0.956, corresponding to a 16% improvement in movement time prediction. This model also yields a substantial reduction in AIC (from 1550 to 1132) and BIC (from 1555 to 1142), highlighting improved model quality and better balance between goodness of fit and model complexity. Finally, data from a second study with varied settings, such as a different eye-tracking sampling rate, illustrate the strong robustness and predictability of our models. Finally, we present scenarios and applications that demonstrate how our models can be used to design enhanced gaze-based interactions in VR systems.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
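For context, the classic Steering Law that this paper refines predicts movement time as T = a + b(A/W) for a straight path of length A and width W. Below is a minimal least-squares fit of that baseline form on made-up numbers, purely to show the shape of such a model; the paper's four refined models add eye-tracking accuracy/precision terms that are not reproduced here.

```python
import numpy as np

# Hypothetical trials: path length A, path width W (visual degrees), time T (s).
A = np.array([10.0, 20.0, 30.0, 10.0, 20.0, 30.0])
W = np.array([2.0, 2.0, 2.0, 4.0, 4.0, 4.0])
T = np.array([0.9, 1.5, 2.2, 0.6, 1.0, 1.4])

ID = A / W                                  # steering index of difficulty
X = np.column_stack([np.ones_like(ID), ID]) # design matrix for T = a + b*ID
(a, b), *_ = np.linalg.lstsq(X, T, rcond=None)

pred = X @ np.array([a, b])
ss_res = np.sum((T - pred) ** 2)
ss_tot = np.sum((T - T.mean()) ** 2)
print(f"T = {a:.3f} + {b:.3f} * (A/W),  R^2 = {1 - ss_res / ss_tot:.3f}")
```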
MetaRoundWorm: A Virtual Reality Escape Room Game for Learning the Lifecycle and Immune Response to Parasitic Infections.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616752
Xuanru Cheng, Xian Wang, Chi-Lok Tai, Lik-Hang Lee
{"title":"MetaRoundWorm: A Virtual Reality Escape Room Game for Learning the Lifecycle and Immune Response to Parasitic Infections.","authors":"Xuanru Cheng, Xian Wang, Chi-Lok Tai, Lik-Hang Lee","doi":"10.1109/TVCG.2025.3616752","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616752","url":null,"abstract":"<p><p>Promoting public health is challenging owing to its abstract nature, and individuals may be apprehensive about confronting it. Recently, there has been an increasing interest in using the metaverse and gamification as novel educational techniques to improve learning experiences related to the immune system. Thus, we present MetaRoundWorm, an immersive virtual reality (VR) escape room game designed to enhance the understanding of parasitic infections and host immune responses through interactive, gamified learning. The application simulates the lifecycle of Ascaris lumbricoides and corresponding immunological mechanisms across anatomically accurate environments within the human body. Integrating serious game mechanics with embodied learning principles, MetaRoundWorm offers players a task-driven experience combining exploration, puzzle-solving, and immune system simulation. To evaluate the educational efficacy and user engagement, we conducted a controlled study comparing MetaRoundWorm against a traditional approach, i.e., interactive slides. Results indicate that MetaRoundWorm significantly improves immediate learning outcomes, cognitive engagement, and emotional experience, while maintaining knowledge retention over time. Our findings suggest that immersive VR gamification holds promise as an effective pedagogical tool for communicating complex biomedical concepts and advancing digital health education.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Let's Do It My Way: Effects of Personality and Age of Virtual Characters.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616815
Minsoo Choi, Dixuan Cui, Siqi Guo, Dominic Kao, Christos Mousas
{"title":"Let's Do It My Way: Effects of Personality and Age of Virtual Characters.","authors":"Minsoo Choi, Dixuan Cui, Siqi Guo, Dominic Kao, Christos Mousas","doi":"10.1109/TVCG.2025.3616815","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616815","url":null,"abstract":"<p><p>Designing interactions between humans and virtual characters requires careful consideration of various human perceptions and user experiences. While numerous studies have explored the effects of several virtual characters' properties, the impacts of the virtual character's personality and age on human perceptions and experiences have yet to be thoroughly investigated. To address this gap, we conducted a within-group study (N = 28) following a 2 (personality: egoism vs. altruism) × 2 (age: child vs. adult) design to explore how the personality and age factors influence human perception and experience during interactions with virtual characters. In each condition of our study, our participants co-solved a jigsaw puzzle with a virtual character that embodied combinations of personality and age. After each condition, participants completed a survey. We also asked them to provide written feedback at the end of the study. Our statistical analyses revealed that the virtual character's personality and age significantly influenced participants' perceptions and experiences. The personality factor affected perceptions of altruism, anthropomorphism, likability, safety, and all aspects of user experience, including perceived collaboration, rapport, emotional reactivity, and the desire for future interaction. Additionally, the virtual character's age affected our participants' ratings of the uncanny valley and likability. We also identified an interaction effect between personality and age factors on the virtual character's anthropomorphism. Based on our findings, we offered guidelines and insights for researchers aiming to design collaborative experiences with virtual characters of different personalities and ages.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LiteAT: A Data-Lightweight and User-Adaptive VR Telepresence System for Remote Education.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616747
Yuxin Shen, Wei Liang, Jianzhu Ma
{"title":"LiteAT: A Data-Lightweight and User-Adaptive VR Telepresence System for Remote Education.","authors":"Yuxin Shen, Wei Liang, Jianzhu Ma","doi":"10.1109/TVCG.2025.3616747","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616747","url":null,"abstract":"<p><p>In educators' ongoing pursuit of enriching remote education, Virtual Reality (VR)-based telepresence has shown significant promise due to its immersive and interactive nature. Existing approaches often rely on point cloud or NeRF-based techniques to deliver realistic representations of teachers and classrooms to remote students. However, achieving low latency is non-trivial, and maintaining high-fidelity rendering under such constraints poses an even greater challenge. This paper introduces LiteAT, a data-lightweight and user-adaptive VR telepresence system, to enable real-time, immersive learning experiences. LiteAT employs a Gaussian Splatting-based reconstruction pipeline that integrates an SMPL-X-driven dynamic human model with a static classroom, supporting lightweight data transmission and high-quality rendering. To enable efficient and personalized exploration in the virtual classroom, we propose a user-adaptive viewpoint recommendation framework that dynamically suggests high-quality viewpoints tailored to user preferences. Candidate viewpoints are evaluated based on multiple visual quality factors and are continuously optimized based on recent user behavior and scene dynamics. Quantitative experiments and user studies validate the effectiveness of LiteAT across multiple evaluation metrics. LiteAT establishes a versatile and scalable foundation for immersive telepresence, potentially supporting real-time scenarios such as procedural teaching, multimodal instruction, and collaborative learning.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
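The viewpoint recommendation step described in this abstract amounts to scoring candidate viewpoints on several visual quality factors, weighting them by user preference, and suggesting the best one. The sketch below shows that general pattern only; the factor names, weights, and candidates are assumptions for illustration, not LiteAT's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    name: str
    visibility: float      # how unoccluded the teacher is, 0..1 (assumed factor)
    render_quality: float  # expected splat quality at this pose, 0..1 (assumed)
    distance_score: float  # closeness to a comfortable distance, 0..1 (assumed)

def score(vp: Viewpoint, pref: dict) -> float:
    """Weighted sum of quality factors; weights would adapt to user behavior."""
    return (pref["visibility"] * vp.visibility
            + pref["render_quality"] * vp.render_quality
            + pref["distance"] * vp.distance_score)

candidates = [
    Viewpoint("front row", 0.9, 0.8, 0.9),
    Viewpoint("side aisle", 0.7, 0.9, 0.6),
    Viewpoint("back of room", 0.5, 0.6, 0.4),
]
# In a user-adaptive system these weights would be re-estimated online
# from recent user behavior and scene dynamics.
prefs = {"visibility": 0.5, "render_quality": 0.3, "distance": 0.2}
best = max(candidates, key=lambda vp: score(vp, prefs))
print("recommended viewpoint:", best.name)
```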
Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616837
Nissi Otoo, Kailon Blue, G Nikki Ramirez, Evan Selinger, Shaun Foster, Brendan David-John
{"title":"Visceral Notices and Privacy Mechanisms for Eye Tracking in Augmented Reality.","authors":"Nissi Otoo, Kailon Blue, G Nikki Ramirez, Evan Selinger, Shaun Foster, Brendan David-John","doi":"10.1109/TVCG.2025.3616837","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616837","url":null,"abstract":"<p><p>Head-worn augmented reality (AR) continues to evolve through critical advancements in power optimizations, AI capabilities, and naturalistic user interactions. Eye-tracking sensors play a key role in these advancements. At the same time, eye-tracking data is not well understood by users and can reveal sensitive information. Our work contributes visualizations based on visceral notice to increase privacy awareness of eye-tracking data in AR. We also evaluated user perceptions towards privacy noise mechanisms applied to gaze data visualized through these visceral interfaces. While privacy mechanisms have been evaluated against privacy attacks, we are the first to evaluate them subjectively and understand their influence on data-sharing attitudes. Despite our participants being highly concerned with eye-tracking privacy risks, we found 47% of our participants still felt comfortable sharing raw data. When applying privacy noise, 70% to 76% felt comfortable sharing their gaze data for the Weighted Smoothing and Gaussian Noise privacy mechanisms, respectively. This implies that participants are still willing to share raw gaze data even though overall data-sharing sentiments decreased after experiencing the visceral interfaces and privacy mechanisms. Our work implies that increased access and understanding of privacy mechanisms are critical for gaze-based AR applications; further research is needed to develop visualizations and experiences that relay additional information about how raw gaze data can be used for sensitive inferences, such as age, gender, and ethnicity. We intend to open-source our codebase to provide AR developers and platforms with the ability to better inform users about privacy concerns and provide access to privacy mechanisms. A pre-print of this paper and all supplemental materials are available at https://bmdj-vt.github.io/project_pages/privacy_notice.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
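The two privacy mechanisms named in this abstract, Gaussian Noise and Weighted Smoothing, are transforms applied to the raw gaze stream before it is shared. Below is a plausible minimal sketch of each on a synthetic 2D gaze signal; the paper's exact parameters and weighting scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic raw gaze trace: a 2D random walk standing in for (x, y) in degrees.
gaze = np.cumsum(rng.normal(0, 0.1, size=(100, 2)), axis=0)

def gaussian_noise(samples: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Add i.i.d. Gaussian noise to every gaze sample (sigma is assumed)."""
    return samples + rng.normal(0.0, sigma, size=samples.shape)

def weighted_smoothing(samples: np.ndarray, window: int = 5) -> np.ndarray:
    """Replace each sample with a linearly weighted mean of its recent history,
    which blurs fine-grained eye movements while keeping the coarse path."""
    weights = np.arange(1, window + 1, dtype=float)  # newer samples weigh more
    weights /= weights.sum()
    out = samples.copy()
    for i in range(window - 1, len(samples)):
        out[i] = weights @ samples[i - window + 1 : i + 1]
    return out

noisy = gaussian_noise(gaze)
smooth = weighted_smoothing(gaze)
print("mean displacement, Gaussian Noise:    ", np.abs(noisy - gaze).mean().round(3))
print("mean displacement, Weighted Smoothing:", np.abs(smooth - gaze).mean().round(3))
```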
The Effect of Hand Visibility in AR: Comparing Dexterity and Interaction with Virtual and Real Objects.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616868
Jakob Hartbrich, Stephanie Arevalo Arboleda, Steve Goring, Alexander Raake
{"title":"The Effect of Hand Visibility in AR: Comparing Dexterity and Interaction with Virtual and Real Objects.","authors":"Jakob Hartbrich, Stephanie Arevalo Arboleda, Steve Goring, Alexander Raake","doi":"10.1109/TVCG.2025.3616868","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616868","url":null,"abstract":"<p><p>Hand-tracking technologies allow us to use our own hands to interact with real and virtual objects in Augmented Reality (AR) environments. This enables us to explore the interplay between hand-visualizations and hand-object interactions. We present a user study that examines the effect of different hand visualizations (invisible, transparent, opaque) on manipulation performance when interacting with real and virtual objects. For this, we implemented video-see-through (VST) AR-based virtual building blocks and hot wire tasks with real one-to-one counterparts that require participants to use gross and fine motor hand movements. To evaluate manipulation performance, we considered three measures: task completion time, number of collisions (hot wire task), and percentage of object displacement (building block task). Additionally, we explored the sense of agency and subjective impressions (preference, ease of interaction, successful and awkwardness) evoked by the different hand-visualizations. The results show that (1) manipulation performance is significantly higher when interacting with real objects compared to virtual ones, (2) invisible hands lead to fewer errors, higher agency, higher perceived success and ease of interaction during fine manipulation tasks with real objects, and (3) having some visualization of the virtual hands (transparent or opaque) overlayed on the real hands is preferred when manipulating virtual objects even when there are no significant performance improvements. Our empirical findings about the differences when interacting with real and virtual objects can aid hand visualization choices for manipulation tasks in AR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dynamic Eyebox Steering for Improved Pinlight AR Near-eye Displays.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616807
Xinxing Xia, Zheye Yu, Dongyu Qiu, Andrei State, Tat-Jen Cham, Frank Guan, Henry Fuchs
{"title":"Dynamic Eyebox Steering for Improved Pinlight AR Near-eye Displays.","authors":"Xinxing Xia, Zheye Yu, Dongyu Qiu, Andrei State, Tat-Jen Cham, Frank Guan, Henry Fuchs","doi":"10.1109/TVCG.2025.3616807","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616807","url":null,"abstract":"<p><p>An optical-see-through near-eye display (NED) for augmented reality (AR) allows the user to perceive virtual and real imagery simultaneously. Existing technologies for optical-see-through AR NEDs involve trade-offs between key metrics such as field of view (FOV), eyebox size, form factor, etc. We have enhanced an existing compact wide-FOV pinlight AR NED design with real-time 3D pupil localization in order to dynamically steer and thus effectively enlarge the usable eyebox. This is achieved with a dual-camera rig that captures stereoscopic views of the pupils. The 3D pupil location is used to dynamically calculate a display pattern that spatio-temporally modulates the light entering the wearer's eyes. We have built a demonstrable compact prototype and have conducted a user study that indicates the effectiveness of our eyebox steering method (e.g., without eyebox steering, in 10.5% of our tests, users were unable to perceive the test pattern correctly before experiment timeout; with eyebox steering, that fraction decreased dramatically to 1.25%). This is a small yet crucial step in making simple wide-FOV pinlight NEDs usable for human users and not just as demonstration prototypes filmed with a precisely positioned camera standing in for the user's eye. Further contributions of this paper include a detailed description of display design, calibration technique, and user study design, all of which may benefit other NED research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
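Real-time 3D pupil localization from a dual-camera rig rests on stereo triangulation. The sketch below shows only the textbook rectified-stereo case, where depth follows from Z = f * B / d for focal length f (pixels), baseline B, and disparity d; the prototype's actual calibration and detection pipeline are more involved, and every constant here is an illustrative assumption.

```python
import numpy as np

FOCAL_PX = 800.0   # focal length in pixels (assumed)
BASELINE_M = 0.02  # separation of the two eye cameras in meters (assumed)

def triangulate_pupil(left_px: tuple, right_px: tuple,
                      cx: float = 320.0, cy: float = 240.0) -> np.ndarray:
    """Back-project matched left/right pupil detections to a 3D point in the
    left camera's frame, assuming rectified, undistorted images."""
    disparity = left_px[0] - right_px[0]
    if disparity <= 0:
        raise ValueError("pupil must lie in front of the camera rig")
    z = FOCAL_PX * BASELINE_M / disparity       # depth from disparity
    x = (left_px[0] - cx) * z / FOCAL_PX        # back-project image x
    y = (left_px[1] - cy) * z / FOCAL_PX        # back-project image y
    return np.array([x, y, z])

# Example: pupil detected at different columns in the two images
# (disparity of 320 px puts the pupil ~5 cm from the rig).
print(triangulate_pupil((480.0, 236.0), (160.0, 236.0)))
```

The resulting 3D point would then drive the recomputation of the pinlight display pattern each frame, which is the eyebox-steering step the paper contributes.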
Entering Your Space: How Agent Entrance Styles Shape Social Presence in AR.
IF 6.5
IEEE Transactions on Visualization and Computer Graphics, Pub Date: 2025-10-03, DOI: 10.1109/TVCG.2025.3616757
Junyeong Kum, Seungwon Kim, Myungho Lee
{"title":"Entering Your Space: How Agent Entrance Styles Shape Social Presence in AR.","authors":"Junyeong Kum, Seungwon Kim, Myungho Lee","doi":"10.1109/TVCG.2025.3616757","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616757","url":null,"abstract":"<p><p>Embodied conversational agents (ECAs) capable of non-verbal behaviors have been developed to address the limitations of voice-only assistants, with research exploring their use in mixed and augmented reality (AR), suggesting they may soon interact with us more naturally in physical spaces. Traditionally, AI voice assistants are activated through wake-up keywords, and since they are invisible, their method of appearance has not been a concern. However, for ECAs in AR, the question of how they should enter the user's space when summoned remains underexplored. In this paper, we focused on the plausibility of ECAs' entering action into the user's field of view in AR. We analyzed its impact on user experience, concentrating on perceived social presence and co-presence of the agent. Three entrance styles were chosen for comparison: an obviously impossible one, a possible one, and an intermediate one, alongside a voice-only condition. We designed and conducted a within-subjects study with 38 participants. Our results indicated that while the plausibility of the action had less impact on functionality compared to the embodiment itself, it significantly affected social/co-presence. These findings highlight the importance of entrance design for future AR agent experiences.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0