IEEE Transactions on Visualization and Computer Graphics — Latest Articles

Visuo-Tactile Feedback with Hand Outline Styles for Modulating Affective Roughness Perception.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616805
Minju Baeck, Yoonseok Shin, Dooyoung Kim, Hyunjin Lee, Sang Ho Yoon, Woontack Woo
{"title":"Visuo-Tactile Feedback with Hand Outline Styles for Modulating Affective Roughness Perception.","authors":"Minju Baeck, Yoonseok Shin, Dooyoung Kim, Hyunjin Lee, Sang Ho Yoon, Woontack Woo","doi":"10.1109/TVCG.2025.3616805","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616805","url":null,"abstract":"<p><p>We propose a visuo-tactile feedback method that combines virtual hand visualization and fingertip vibrations to modulate affective roughness perception in VR. While prior work has focused on object-based textures and vibrotactile feedback, the role of visual feedback on virtual hands remains underexplored. Our approach introduces affective visual cues including line shape, motion, and color applied to hand outlines, and examines their influence on both affective responses (arousal, valence) and perceived roughness. Results show that sharp contours enhanced perceived roughness, increased arousal, and reduced valence, intensifying the emotional impact of haptic feedback. In contrast, color affected valence only, with red consistently lowering emotional positivity. These effects were especially noticeable at lower haptic intensities, where visual cues extended affective modulation into mid-level perceptual ranges. Overall, the findings highlight how integrating expressive visual cues with tactile feedback can enrich affective rendering and offer flexible emotional tuning in immersive VR interactions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing Learning and Knowledge Retention of Abstract Physics Concepts with Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616826
M Akif Akdag, Jean Botev, Steffen Rothkugel
{"title":"Enhancing Learning and Knowledge Retention of Abstract Physics Concepts with Virtual Reality.","authors":"M Akif Akdag, Jean Botev, Steffen Rothkugel","doi":"10.1109/TVCG.2025.3616826","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616826","url":null,"abstract":"<p><p>Virtual reality (VR) is increasingly recognized as a powerful tool for science education, offering interactive environments to explore intangible concepts. Traditional teaching methods often struggle to convey abstract concepts in science, where many phenomena are not directly observable. VR can address this issue by modeling and visualizing complex and unobservable entities and processes, allowing learners to dynamically interact with what would otherwise not be directly perceptible. However, relatively few controlled studies have compared immersive VR learning with equivalent hands-on laboratory learning in physics education, particularly for more abstract topics. In this work, we designed a VR-based physics lab that is capable of visualizing electrons and electromagnetic fields to teach fundamental concepts of electronics and magnetism, closely replicating a traditional electronics learning kit used as a baseline for comparison. We evaluated the impact of the two conditions (VR versus traditional) on students' learning outcomes, motivation, engagement, and cognitive load. Our results show significantly higher knowledge retention in the VR group compared to the traditional group. Also, while there were no significant differences in immediate comprehension between the two groups, participants in the VR group spent substantially more time engaged with the learning content. These findings highlight the potential of visually enriched virtual environments to enhance the learning experience and improve knowledge retention of intangible scientific concepts.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Ada: A Distributed, Power-Aware, Real-Time Scene Provider for XR.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616835
Yihan Pang, Sushant Kondguli, Shenlong Wang, Sarita Adve
{"title":"Ada: A Distributed, Power-Aware, Real-Time Scene Provider for XR.","authors":"Yihan Pang, Sushant Kondguli, Shenlong Wang, Sarita Adve","doi":"10.1109/TVCG.2025.3616835","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616835","url":null,"abstract":"<p><p>Real-time scene provisioning-reconstructing and delivering scene data to requesting XR applications during runtime-is central to enabling spatial computing in modern XR systems. However, existing solutions struggle to balance latency, power and scene fidelity under XR device constraints, and often rely on designs that are either closed, application-specific designs, or both. We present Ada, the first open distributed, power-aware, application-agnostic real-time scene provisioning system. Through computation offloading along with algorithmic and system innovations, Ada provides high-fidelity scenes with stable performance across all evaluated scene sizes and with low power consumption. To isolate the benefits of Ada's algorithmic and design innovations over the closest prior work [82], which is on-device and CPU-based, we configure a comparable on-device, CPU-based variant of Ada (AdaLocal- CPU). We show this variant achieves up to 6.8× lower scene request latency and higher scene fidelity compared to the prior work. Furthermore, Ada's final distributed GPU-accelerated implementation reduces latency by an additional 2×, highlighting the benefits of GPU acceleration and distributed computing. Additionally, Ada also lowers the incremental power cost of scene provisioning by 24% compared to the best on-device variant (AdaLocal-GPU). Finally, Ada flexibly adapts to diverse latency, power, scene fidelity, and network bandwidth requirements.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
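The abstract describes offloading scene reconstruction while balancing latency and power budgets. As a rough illustration of what a power-aware path choice could look like, here is a minimal Python sketch; the `Budget` and `PathEstimate` types, the thresholds, and the tie-breaking policy are invented assumptions for illustration, not Ada's actual design.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    max_latency_ms: float   # application's scene-request deadline
    max_power_mw: float     # incremental power the device can spare

@dataclass
class PathEstimate:
    latency_ms: float       # predicted end-to-end latency for this path
    power_mw: float         # predicted on-device power draw for this path

def choose_path(on_device: PathEstimate, remote: PathEstimate,
                budget: Budget) -> str:
    """Hypothetical policy: prefer whichever provisioning path fits both
    budgets; break ties toward lower on-device power."""
    candidates = [("on-device", on_device), ("remote", remote)]
    feasible = [(name, est) for name, est in candidates
                if est.latency_ms <= budget.max_latency_ms
                and est.power_mw <= budget.max_power_mw]
    if not feasible:
        # Degrade gracefully: take the path that misses the deadline least.
        return min(candidates, key=lambda p: p[1].latency_ms)[0]
    return min(feasible, key=lambda p: p[1].power_mw)[0]

# Offloading wins when the radio costs less than local reconstruction.
print(choose_path(PathEstimate(42.0, 900.0), PathEstimate(18.0, 350.0),
                  Budget(max_latency_ms=33.0, max_power_mw=500.0)))  # remote
```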
Comparison of User Performance and Experience between Light Field and Conventional AR Glasses.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3617940
Wei-An Teng, Su-Ling Yeh, Homer H Chen
{"title":"Comparison of User Performance and Experience between Light Field and Conventional AR Glasses.","authors":"Wei-An Teng, Su-Ling Yeh, Homer H Chen","doi":"10.1109/TVCG.2025.3617940","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3617940","url":null,"abstract":"<p><p>Light field AR glasses can provide better visual comfort than conventional AR glasses; however, studies on user performance comparison between them are notably scarce. In this paper, we present a systematic method employing a serial visual search task without confounding factors to quantify and compare the user performance and experience between these two types of AR glasses at two different viewing distances, 30 cm and 60 cm, and in two modes, purely virtual VR mode and virtualreal integration AR mode. The results show that the light field AR glasses led to a significantly faster reaction speed and higher accuracy than the conventional AR glasses at 30 cm in the AR mode. The participant feedback also shows that the former led to better virtual-real integration. User performance and experience of the light field AR glasses remained consistent across different viewing distances. Although the conventional AR glasses had a better search efficiency than the light field AR glasses at 60 cm in both AR and VR modes, it had more negative feedback from the participants. Overall, the design of this experiment successfully allows us to quantify the effect of VAC and underscores the strength of the evaluation method.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LITFORAGER: Exploring Multimodal Literature Foraging Strategies in Immersive Sensemaking.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616732
Haoyang Yang, Elliott H Faa, Weijian Liu, Shunan Guo, Duen Horng Chau, Yalong Yang
{"title":"LITFORAGER: Exploring Multimodal Literature Foraging Strategies in Immersive Sensemaking.","authors":"Haoyang Yang, Elliott H Faa, Weijian Liu, Shunan Guo, Duen Horng Chau, Yalong Yang","doi":"10.1109/TVCG.2025.3616732","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616732","url":null,"abstract":"<p><p>Exploring and comprehending relevant academic literature is a vital yet challenging task for researchers, especially given the rapid expansion in research publications. This task fundamentally involves sensemaking-interpreting complex, scattered information sources to build understanding. While emerging immersive analytics tools have shown cognitive benefits like enhanced spatial memory and reduced mental load, they predominantly focus on information synthesis (e.g., organizing known documents). In contrast, the equally important information foraging phase-discovering and gathering relevant literature-remains underexplored within immersive environments, hindering a complete sensemaking workflow. To bridge this gap, we introduce LITFORAGER, an interactive literature exploration tool designed to facilitate information foraging of research literature within an immersive sensemaking workflow using network-based visualizations and multimodal interactions. Developed with WebXR and informed by a formative study with researchers, LITFORAGER supports exploration guidance, spatial organization, and seamless transition through a 3D literature network. An observational user study with 15 researchers demonstrated LITFORAGER's effectiveness in supporting fluid foraging strategies and spatial sensemaking through its multimodal interface.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
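LITFORAGER surfaces related work through a 3D literature network. A speculative sketch of the kind of similarity graph that could back such a view, using TF-IDF cosine similarity over abstracts; the corpus, the threshold, and the layout call are assumptions for illustration, not details from the paper.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {  # toy corpus; a real tool would ingest fetched metadata
    "p1": "immersive analytics sensemaking in virtual reality",
    "p2": "information foraging and literature exploration",
    "p3": "spatial memory and immersive document organization",
}

ids = list(papers)
tfidf = TfidfVectorizer().fit_transform([papers[i] for i in ids])
sim = cosine_similarity(tfidf)  # pairwise abstract similarity

G = nx.Graph()
G.add_nodes_from(ids)
THRESHOLD = 0.1  # assumed cutoff; tune for graph density
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):
        if sim[a, b] >= THRESHOLD:
            G.add_edge(ids[a], ids[b], weight=float(sim[a, b]))

# 3D node positions for an immersive network view.
pos = nx.spring_layout(G, dim=3, weight="weight")
print(G.edges(data=True))
```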
GS-ProCams: Gaussian Splatting-Based Projector-Camera Systems.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616841
Qingyue Deng, Jijiang Li, Haibin Ling, Bingyao Huang
{"title":"GS-ProCams: Gaussian Splatting-Based Projector-Camera Systems.","authors":"Qingyue Deng, Jijiang Li, Haibin Ling, Bingyao Huang","doi":"10.1109/TVCG.2025.3616841","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616841","url":null,"abstract":"<p><p>We present GS-ProCams, the first Gaussian Splatting-based framework for projector-camera systems (ProCams). GSProCams is not only view-agnostic but also significantly enhances the efficiency of projection mapping (PM) that requires establishing geometric and radiometric mappings between the projector and the camera. Previous CNN-based ProCams are constrained to a specific viewpoint, limiting their applicability to novel perspectives. In contrast, NeRF-based ProCams support view-agnostic projection mapping, however, they require an additional co-located light source and demand significant computational and memory resources. To address this issue, we propose GS-ProCams that employs 2D Gaussian for scene representations, and enables efficient view-agnostic ProCams applications. In particular, we explicitly model the complex geometric and photometric mappings of ProCams using projector responses, the projection surface's geometry and materials represented by Gaussians, and the global illumination component. Then, we employ differentiable physically-based rendering to jointly estimate them from captured multi-view projections. Compared to state-of-the-art NeRF-based methods, our GS-ProCams eliminates the need for additional devices, achieving superior ProCams simulation quality. It also uses only 1/10 of the GPU memory for training and is 900 times faster in inference speed. Please refer to our project page for the code and dataset: https://realqingyue.github.io/GS-ProCams/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
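Gaussian-splatting renderers composite per-Gaussian colors and opacities front to back along each pixel. A minimal NumPy sketch of that standard compositing step (the generic splatting formulation, not GS-ProCams' specific renderer); the depth sorting and 2D covariance handling are simplified assumptions.

```python
import numpy as np

def composite_pixel(mus, inv_covs, opacities, colors, depths, x):
    """Alpha-composite 2D Gaussians at pixel location x.
    mus: (N,2) projected centers; inv_covs: (N,2,2) inverse 2D covariances;
    opacities: (N,); colors: (N,3); depths: (N,) for front-to-back order."""
    order = np.argsort(depths)            # nearest Gaussian first
    out = np.zeros(3)
    transmittance = 1.0
    for i in order:
        d = x - mus[i]
        # Gaussian falloff: exp(-0.5 * d^T Sigma^{-1} d)
        alpha = opacities[i] * np.exp(-0.5 * d @ inv_covs[i] @ d)
        out += transmittance * alpha * colors[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:          # early termination
            break
    return out

# Two overlapping splats: a red one in front of a green one.
mus = np.array([[0.0, 0.0], [0.2, 0.0]])
inv_covs = np.stack([np.eye(2) * 4.0] * 2)
print(composite_pixel(mus, inv_covs, np.array([0.8, 0.8]),
                      np.array([[1, 0, 0], [0, 1, 0]], dtype=float),
                      np.array([1.0, 2.0]), np.array([0.1, 0.0])))
```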
Facilitating the Exploration of Linearly Aligned Objects in Controller-Free 3D Environment with Gaze and Microgestures.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-03 DOI: 10.1109/TVCG.2025.3616833
Jihyeon Lee, Jinwook Kim, Jeongmi Lee
{"title":"Facilitating the Exploration of Linearly Aligned Objects in Controller-Free 3D Environment with Gaze and Microgestures.","authors":"Jihyeon Lee, Jinwook Kim, Jeongmi Lee","doi":"10.1109/TVCG.2025.3616833","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616833","url":null,"abstract":"<p><p>As Extended Reality (XR) environments increasingly involve large amounts of data and content, designing effective exploration techniques has become critical. Depth-based object exploration is a common but underexplored task in XR environments, especially in settings without physical devices. Prior studies have largely focused on horizontal or planar interactions, leaving depthoriented exploration relatively overlooked. To bridge this gap, we propose three linearly aligned layer transition techniques (Continuous Push, Push&Return, and Tilt&Return) specifically designed to support efficient, precise, and continuous object exploration along the depth axis within depth-based UIs. In a user study with 30 participants, we compared their performance, usability, and user preference across two different layer configurations (8-layer vs. 16-layer). The results highlight that Continuous Push enables faster exploration with lower effort, while Push&Return provides the highest accuracy and is most preferred by users. Based on these findings, we discuss design implications for depth-based interaction techniques in controller-free XR environments.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145226464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
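The techniques map a microgesture onto discrete depth layers. A speculative sketch of how a Continuous Push style mapping might quantize fingertip displacement along the depth axis into a layer index; the gain, layer spacing, and function name are invented for illustration, not taken from the paper.

```python
def layer_from_push(push_depth_m: float, n_layers: int,
                    gain: float = 2.0, layer_step_m: float = 0.05) -> int:
    """Map fingertip displacement along the depth axis to a layer index.
    push_depth_m: how far the pinched fingers moved forward (meters).
    gain: control-display gain amplifying small microgestures (assumed).
    layer_step_m: virtual distance between adjacent layers (assumed)."""
    raw = (push_depth_m * gain) / layer_step_m
    return max(0, min(n_layers - 1, int(raw)))

# A 6 cm push with gain 2 traverses two 5 cm layers in a 16-layer stack.
assert layer_from_push(0.06, 16) == 2
for d in (0.0, 0.03, 0.06, 0.50):
    print(d, "->", layer_from_push(d, 16))  # deep pushes clamp to layer 15
```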
The Role of Visual Augmentation on Embodied Skill Acquisition Across Perspectives and Body Representations.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616832
Ruochen Cao, Zequn Liang, Chenkai Zhang, Andrew Cunningham, James A Walsh, Rui Cao
{"title":"The Role of Visual Augmentation on Embodied Skill Acquisition Across Perspectives and Body Representations.","authors":"Ruochen Cao, Zequn Liang, Chenkai Zhang, Andrew Cunningham, James A Walsh, Rui Cao","doi":"10.1109/TVCG.2025.3616832","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616832","url":null,"abstract":"<p><p>Immersive embodiment holds great promise for motor skill acquisition, however the design and effect of real-time visual guidance across perspectives and body representations remain underexplored. This study introduces a puppet-inspired visual feedback framework that uses continuous visual linkages - line, color, and thickness cues - to externalize spatial deviation and scaffold embodied learning. To evaluate its effectiveness, we conducted a controlled virtual reality experiment (N = 40) involving gesture imitation tasks with fine (sign language) and gross (aviation marshalling) motor components, under first- and third-person viewpoints. Results showed that color-based guidance significantly improved imitation accuracy, short-term learning, and perceived embodiment, especially in finger-based and first-person settings. Subjective assessments (NASA-TLX, Motivation, IPQ, Embodiment) confirmed improvements in presence, agency, and task engagement.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145215030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
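The framework externalizes spatial deviation through line, color, and thickness cues. One plausible sketch of such a mapping, turning the distance between a learner's joint and the target pose into an interpolated color and line width; the deviation range and the green-to-red ramp are assumptions, not the authors' parameters.

```python
import numpy as np

def deviation_cues(joint_pos, target_pos, max_dev_m=0.30):
    """Map positional deviation to (rgb, thickness) for a guidance line.
    Assumed design: small error -> thin green link, large error -> thick red."""
    dev = float(np.linalg.norm(np.asarray(joint_pos) - np.asarray(target_pos)))
    t = min(dev / max_dev_m, 1.0)     # normalized deviation in [0, 1]
    rgb = (t, 1.0 - t, 0.0)           # interpolate green -> red
    thickness_mm = 1.0 + 4.0 * t      # interpolate 1 mm -> 5 mm
    return rgb, thickness_mm

print(deviation_cues([0.02, 1.10, 0.40], [0.00, 1.05, 0.40]))  # near target: thin, green
print(deviation_cues([0.30, 1.40, 0.10], [0.00, 1.05, 0.40]))  # far off: thick, red
```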
An Empirical Evaluation of How Virtual Hand Visibility Affects Near-Field Size Perception and Reporting of Tangible Objects in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616829
Chandni Murmu, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Wen-Chieh Lin, Andrew C Robb, Christopher Pagano, Sabarish V Babu
{"title":"An Empirical Evaluation of How Virtual Hand Visibility Affects Near-Field Size Perception and Reporting of Tangible Objects in Virtual Reality.","authors":"Chandni Murmu, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Wen-Chieh Lin, Andrew C Robb, Christopher Pagano, Sabarish V Babu","doi":"10.1109/TVCG.2025.3616829","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616829","url":null,"abstract":"<p><p>In immersive virtual environments (IVEs), accurate size perception is critical, especially in training simulations designed to mimic real-world tasks, such as, nuclear power plant control room or medical procedures. These simulations have dials or instruments of varying sizes. Visual information of the objects alone, often fails to capture subtle size differences in virtual reality (VR). However, integrating haptic and hand-avatars may potentially improve accuracy and performance. This improvement could be especially beneficial for real-world scenarios where hand(s) are intermittently visible or obscured. To investigate how this intermittent presence or absence of body-scaled hand-avatars affects size perception when integrated with haptic information, we conducted 2×2 mixed-factorial experiment design using a near-field, size-estimation task in VR. The experiment conditions compared size estimations with or without virtual hand visibility in the perception and reporting phases. The task involved 16 graspable objects of varying sizes and randomly repeated 3 times across 48 trials per participant (total 80 participants). We employed Linear Mixed Models (LMMs) analysis to objective measures: perceived size, residual error and proportional errors. Results revealed that as the tangible-graspable size increases, overestimation reduces if the hand-avatars are visible in the reporting phase. Also, overestimation reduces as the number of trials increases, if the hand-avatars are visible in the reporting phase. Thus, the presence of hand-avatars facilitated perceptual calibration. This novel study, with different combinations of hand-avatar visibility, taking perception and reporting of size as two separate phases, could open future research directions in more complex scenarios for refined integration of sensory modalities and consequently enhance real-world application performance.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145214953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
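The analysis fits linear mixed models to repeated measures nested within participants. A brief sketch of an equivalent random-intercept model in Python's statsmodels; the column names, formula, and synthetic data are assumptions about how such trial data might be organized, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for long-format trial data (one row per trial).
rng = np.random.default_rng(0)
n_participants, n_trials = 20, 48
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "actual_size": rng.uniform(2.0, 10.0, n_participants * n_trials),
    "hand_visible_reporting": rng.integers(0, 2, n_participants * n_trials),
})
# Toy response: overestimation that shrinks when the hand-avatar is visible.
df["perceived_size"] = (df["actual_size"]
                        * (1.15 - 0.10 * df["hand_visible_reporting"])
                        + rng.normal(0, 0.3, len(df)))

# Random intercept per participant, as in a repeated-measures LMM.
model = smf.mixedlm("perceived_size ~ actual_size * hand_visible_reporting",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```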
PACT: Modeling Coordination Dynamics in Scale-Asymmetric Virtual Reality Collaboration.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-10-02 DOI: 10.1109/TVCG.2025.3616831
Hayeon Kim, In-Kwon Lee
{"title":"PACT: Modeling Coordination Dynamics in Scale-Asymmetric Virtual Reality Collaboration.","authors":"Hayeon Kim, In-Kwon Lee","doi":"10.1109/TVCG.2025.3616831","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3616831","url":null,"abstract":"<p><p>In virtual reality (VR), collaborators often experience the same environment at different visual scales, disrupting shared attention and increasing coordination difficulty. While prior work has focused on preventing misalignment, less is known about how teams recover when alignment fails. We examine collaboration under scale asymmetry, a particularly disruptive form of perceptual divergence. In a study with 36 VR teams, we identify behavioral patterns that distinguish adaptive recovery from persistent breakdown. Successful teams flexibly shifted between user-driven and system-supported cues, while others repeated ineffective strategies. Based on these findings, we introduce the Perceptual Asymmetry Coordination Theory (PACT), a dual-pathway model that describes coordination as an evolving process shaped by cue integration and strategic responsiveness. PACT reframes recovery not as a return to alignment, but as a dynamic adaptation to misalignment. These insights inform the design of VR systems that support recovery through multi-channel, adaptive coordination in scale-divergent environments.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145214957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0