IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Redirection Detection Thresholds for Avatar Manipulation with Different Body Parts
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-17. DOI: 10.1109/TVCG.2025.3549161
Ryutaro Watanabe, Azumi Maekawa, Michiteru Kitazaki, Yasuaki Monnai, Masahiko Inami
Abstract: This study investigates how both the body part used to control a VR avatar and the avatar's appearance affect redirection detection thresholds. We conducted experiments comparing hand and foot manipulation of two types of avatars: a hand-shaped avatar and an abstract spherical avatar. Our results show that, irrespective of the body part used, the redirection detection threshold increased by 21% when using the hand avatar compared to the abstract avatar. Additionally, when the avatar's position was redirected toward the body midline, the detection threshold increased by 49% compared to redirection away from the midline. No significant differences in detection thresholds were observed between the hand and foot manipulations. These findings suggest that avatar appearance and redirection direction significantly influence user perception in VR environments, offering valuable insights for the design of full-body VR interactions and human augmentation systems.
Citations: 0
DPCS: Path Tracing-Based Differentiable Projector-Camera Systems
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-17. DOI: 10.1109/TVCG.2025.3549890
Jijiang Li;Qingyue Deng;Haibin Ling;Bingyao Huang
Abstract: Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and the associated scene parameters of a ProCams, and is crucial for spatial augmented reality (SAR) applications such as ProCams relighting and projector compensation. Recent advances use an end-to-end neural network to learn the project-and-capture process. However, these neural network-based methods often implicitly encapsulate scene parameters, such as surface material, gamma, and white balance, in the network parameters, making them less interpretable and poorly suited to novel scene simulation. Moreover, neural networks usually learn indirect illumination implicitly, in an image-to-image translation way, which leads to poor performance in simulating complex projection effects such as soft shadows and interreflection. In this paper, we introduce DPCS, a novel path tracing-based differentiable projector-camera system, offering a differentiable ProCams simulation method that explicitly integrates multi-bounce path tracing. Our DPCS models the physical project-and-capture process using differentiable physically-based rendering (PBR), enabling the scene parameters to be explicitly decoupled and learned using far fewer samples. Moreover, our physically-based method not only enables high-quality downstream ProCams tasks, such as ProCams relighting and projector compensation, but also allows novel scene simulation using the learned scene parameters. In experiments, DPCS demonstrates clear advantages over previous approaches in ProCams simulation, offering better interpretability, more efficient handling of complex interreflection and shadows, and requiring fewer training samples. The code and dataset are available on the project page: https://jijiangli.github.io/DPCS/
Vol. 31, No. 5, pp. 3666-3676. Citations: 0
FlowHON: Representing Flow Fields Using Higher-Order Networks
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-14. DOI: 10.1109/TVCG.2025.3550130
Nan Chen, Zhihong Li, Jun Tao
Abstract: Flow fields are often partitioned into data blocks for massively parallel computation and analysis based on blockwise relationships. However, most previous techniques only consider the first-order dependencies among blocks, which is insufficient for describing complex flow patterns. In this work, we present FlowHON, an approach to construct higher-order networks (HONs) from flow fields. FlowHON captures the inherent higher-order dependencies in flow fields as nodes and estimates the transitions among them as edges. We formulate the HON construction as an optimization problem with three linear transformations: the first two layers correspond to node generation, and the third corresponds to edge estimation. Our formulation allows the node generation and edge estimation to be solved in a unified framework. With FlowHON, the rich set of traditional graph algorithms can be applied without any modification to analyze flow fields, while leveraging the higher-order information to understand the inherent structure and manage flow data for efficiency. We demonstrate the effectiveness of FlowHON using a series of downstream tasks, including estimating the density of particles during tracing, partitioning flow fields for data management, and understanding flow fields using the node-link diagram representation of networks.
Citations: 0
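The higher-order dependency idea behind FlowHON can be illustrated with a minimal counting sketch. This is not the paper's method (which learns node generation and edge estimation via three linear layers in a unified optimization); it only shows why higher-order nodes matter: a first-order network forgets where a particle came from, while conditioning on the preceding block distinguishes the two flow histories.

```python
from collections import Counter, defaultdict

def transition_network(trajectories, order=1):
    """Estimate a transition network over flow-field blocks.

    trajectories: lists of block IDs visited by traced particles.
    order=1 gives the usual first-order network; order=2 uses
    (previous, current) block pairs as nodes, the simplest form
    of higher-order dependency.
    """
    counts = defaultdict(Counter)
    for traj in trajectories:
        # Slide a window: the last `order` blocks form the source node.
        for i in range(order, len(traj)):
            src = tuple(traj[i - order:i])
            counts[src][traj[i]] += 1
    # Normalize counts into transition probabilities per source node.
    return {
        src: {dst: c / sum(ctr.values()) for dst, c in ctr.items()}
        for src, ctr in counts.items()
    }

trajs = [["A", "B", "C"], ["D", "B", "D"]]
p1 = transition_network(trajs, order=1)
# First-order: from B, particles appear to go to C or D with equal probability.
p2 = transition_network(trajs, order=2)
# Second-order: ("A","B") -> C and ("D","B") -> D are kept distinct.
```

Either network can then be fed to standard graph algorithms (partitioning, PageRank-style density estimation), which is the point the abstract makes about reusing traditional graph tooling.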
Impact of Visual Virtual Scene and Localization Task on Auditory Distance Perception in Virtual Reality
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-14. DOI: 10.1109/TVCG.2025.3549855
Sarah Roßkopf;Andreas Mühlberger;Felix Stärz;Steven Van De Par;Matthias Blau;Leon O.H. Kroczek
Abstract: Investigating auditory perception and cognition in realistic, controlled environments is made possible by virtual reality (VR). However, when visual information is presented, sound localization results from multimodal integration. Additionally, using head-mounted displays leads to a distortion of visual egocentric distances. Using two different paradigms, we investigated the extent to which different visual scenes influence auditory distance perception and, secondarily, presence and realism. More precisely, different room models were displayed via HMD while participants localized sounds emanating from real loudspeakers. In the first paradigm, we manipulated whether a room was congruent or incongruent with the physical room. In the second paradigm, we manipulated room visibility (displaying either an audiovisually congruent room or a scene containing almost no spatial information) and the localization task. Participants indicated distances either by placing a virtual loudspeaker, by walking, or by verbal report. While audiovisual room incongruence had a detrimental effect on distance perception, no main effect of room visibility was found, but there was an interaction with the task. Overestimation of distances was greater with the placement task in the non-spatial scene. The results suggest an effect of visual scene on auditory perception in VR, implying a need to take it into consideration, e.g., in virtual acoustics research.
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10926875
Vol. 31, No. 5, pp. 2464-2474. Citations: 0
MRUnion: Asymmetric Task-Aware 3D Mutual Scene Generation of Dissimilar Spaces for Mixed Reality Telepresence
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-14. DOI: 10.1109/TVCG.2025.3549878
Michael Pabst;Linda Rudolph;Nikolas Brasch;Verena Biener;Chloe Eghtebas;Ulrich Eck;Dieter Schmalstieg;Gudrun Klinker
Abstract: In mixed reality (MR) telepresence applications, the differences between participants' physical environments can interfere with effective collaboration. For asymmetric tasks, users might need to access different resources (information, objects, tools) distributed throughout their room. Existing intersection methods do not support such interactions, because a large portion of the telepresence participants' rooms becomes inaccessible, along with the relevant task resources. We propose MRUnion, a mixed reality telepresence pipeline for asymmetric task-aware 3D mutual scene generation. The key concept of our approach is to enable a user in an asymmetric telecollaboration scenario to access the entire room, while still being able to communicate with remote users in a shared space. For this purpose, we introduce a novel mutual room layout called Union. We quantitatively evaluated 882 space combinations involving two, three, and four combined remote spaces and compared it to a conventional Intersect room layout. The results show that our method outperforms existing intersection methods and enables a significant increase in space and accessibility to resources within the shared space. In an exploratory user study (N=24), we investigated the applicability of the synthetic mutual scene in both MR and VR setups, where users collaborated on an asymmetric remote assembly task. The study results showed that our method achieved results comparable to the intersect method but requires further investigation in terms of social presence, safety, and support of collaboration. From this study, we derived design implications for synthetic mutual spaces.
Vol. 31, No. 5, pp. 3354-3364. Citations: 0
SensARy Substitution: Augmented Reality Techniques to Enhance Force Perception in Touchless Robot Control
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-14. DOI: 10.1109/TVCG.2025.3549856
Tonia Mielke, Florian Heinrich, Christian Hansen
Abstract: The lack of haptic feedback in touchless human-robot interaction is critical in applications such as robotic ultrasound, where force perception is crucial to ensure image quality. Augmented reality (AR) is a promising tool to address this limitation by providing sensory substitution through visual or vibrotactile feedback. The implementation of visual force feedback requires consideration not only of feedback design but also of positioning. We therefore implemented two different visualization types at three different positions and investigated the effects of vibrotactile feedback on these approaches. Furthermore, we examined the effects of multimodal feedback compared to visual or vibrotactile output alone. Our results indicate that sensory substitution eases the interaction relative to a feedback-less baseline condition, with the presence of visual support reducing average force errors and being subjectively preferred by the participants. However, the more feedback was provided, the longer users needed to complete their tasks. Regarding visualization design, a 2D bar visualization reduced force errors compared to a 3D arrow concept. Additionally, visualizations displayed directly on the ultrasound screen were subjectively preferred. With findings regarding feedback modality and visualization design, our work represents an important step toward sensory substitution for touchless human-robot interaction.
Citations: 0
Influence of Haptic Feedback on Perception of Threat and Peripersonal Space in Social VR
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-13. DOI: 10.1109/TVCG.2025.3549884
Vojtěch Smekal;Jeanne Hecquard;Sophie Kühne;Nicole Occidental;Anatole Lécuyer;Marc Macé;Beatrice de Gelder
Abstract: Humans experience social interactions partly through nonverbal communication, including proxemic behaviors and haptic sensations. Body language, facial expressions, personal spaces, and social touch are multiple factors influencing how a stranger's approach is experienced. Furthermore, the rise of virtual social platforms raises concerns about virtual harassment and the perception of personal space in VR: harassment is felt much more strongly in virtual spaces, and the psychological effects can be just as severe. While most virtual platforms have a "personal bubble" feature that keeps strangers at a distance, it does not seem to suffice: personal space violations seem influenced by more than simply distance. With this paper, we aim to further clarify the variability of personal spaces. We focus on haptic stimulation, elaborating our hypotheses on the relationship between social touch and the perception of personal spaces. Users wore a haptic compression belt and were immersed in a virtual dark alley. Virtual agents approached them while exhibiting either neutral or threatening body language. In half of all trials, as the agent advanced, the compression belt tightened around the users' torsos with three different pressures. Participants could press a response button when uncomfortable with the agent's proximity. Peripersonal space violations occurred 31% earlier on average when the agent was visibly angry and the compression belt activated. A greater tightening pressure also slightly increased the personal sphere radius, by up to 13%. Overall, our results are consistent with previous work on peripersonal spaces. They help further define our relationship to personal space boundaries and encourage using haptic devices during simulated social interactions in VR.
Vol. 31, No. 5, pp. 2986-2994. Citations: 0
Tap into Reality: Understanding the Impact of Interactions on Presence and Reaction Time in Mixed Reality
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-12. DOI: 10.1109/TVCG.2025.3549580
Yasra Chandio, Victoria Interrante, Fatima M Anwar
Abstract: Enhancing presence in mixed reality (MR) relies on precise measurement and quantification. While presence has traditionally been measured through subjective questionnaires, recent research links presence with objective metrics like reaction time. Past studies examined this correlation with varying technical factors (object realism and behavior) and human conditioning, but the impact of interaction remains unclear. To address this gap, we conducted a within-subjects study (N = 50) exploring the correlation between presence and reaction time across two interaction scenarios (direct and symbolic) with two tasks (selection and manipulation). We found that presence scores and reaction times are correlated (correlation coefficient of -0.54), suggesting that the impact of interaction on reaction time correlates with its effect on presence.
Citations: 0
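The reported figure is a standard Pearson correlation between per-participant presence scores and reaction times. As a minimal sketch with hypothetical data (not the study's data or analysis code), higher presence pairing with faster reactions yields a negative coefficient like the r = -0.54 the abstract cites:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: higher presence scores paired with faster reactions.
presence = [3.1, 4.5, 2.8, 5.0, 3.9, 4.2]   # questionnaire scores
reaction_ms = [620, 480, 700, 430, 510, 500]  # reaction times in ms
r = pearson(presence, reaction_ms)  # negative, matching the reported direction
```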
ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-12. DOI: 10.1109/TVCG.2025.3549147
Yanming Xiu, Tim Scargill, Maria Gorlatova
Abstract: In augmented reality (AR), virtual content enhances user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can impair users' ability to accurately interpret real-world information. In this paper, we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages vision language models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation results demonstrate that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and 82.46% information manipulation content detection accuracy with a latency of 9.62 s.
Citations: 0
It's My Fingers' Fault: Investigating the Effect of Shared Avatar Control on Agency and Responsibility Attribution
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-03-12. DOI: 10.1109/TVCG.2025.3549868
Xiaotong Li;Yuji Hatada;Takuji Narumi
Abstract: Previous studies introduced an avatar body control sharing system known as "virtual co-embodiment," where control over bodily movements and external events (agency) of a single avatar is shared among multiple individuals. However, how this virtual co-embodiment experience influences users' perception of agency, both explicitly and implicitly, and the extent to which they are willing to take responsibility for successful or failed outcomes, remains an open problem. In this research, we addressed this issue using (1) explicit agency questionnaires, (2) the implicit intentional binding (IB) effect, (3) responsibility attribution measured through financial gain/loss distribution, and (4) interviews, to evaluate an experience in which agency over the right hand's fingers was fully transferred to a human partner. Given the distinction between two layers of agency (body agency: control over actions; external agency: an action's effect on external events), we also investigated the impact of sharing only the body level of agency. In a ball-throwing task involving 24 participants, results showed that sharing body agency over the fingers negatively affected the feeling of having control over both the fingers and the entire right upper limb, as measured by the questionnaire. However, sharing external agency did not significantly diminish the participants' perceived control over the ball-throwing, as indicated by IB. Interestingly, while IB demonstrated that participants felt greater causality for failed ball-throwing attempts, they were reluctant to take responsibility and accept financial penalties. Additionally, responsibility attribution was found to be linked to the participants' personal trait of Locus of Control.
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10923684
Vol. 31, No. 5, pp. 2859-2869. Citations: 0