IEEE Transactions on Visualization and Computer Graphics: Latest Articles

PantographHaptics: A Technique for Large-Surface Passive Haptic Interactions using Pantograph Mechanisms
IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, pp. 2736-2745 · Pub Date: 2025-03-21 · DOI: 10.1109/TVCG.2025.3549869
Marcus K. E. Friedel; Zachary McKendrick; Ehud Sharlin; Ryo Suzuki
Abstract: In Virtual Reality (VR), existing hand-scale passive interaction techniques are unsuitable for continuous large-scale renders: room-scale proxies lack portability, and wearable robotic arms are energy-intensive and induce friction. This paper presents a technique for providing wall haptics in VR that supports portable, passive, and large-scale user interactions. We propose PantographHaptics, a technique that uses the scaling properties of a pantograph to passively render two-degree-of-freedom body-scale surfaces, overcoming the limitations of existing methods. We demonstrate PantographHaptics through two prototypes: HapticLever, a grounded system, and Feedbackpack, a wearable device. We evaluate both prototypes with technical and user evaluations: a first study (9 participants) compares HapticLever against traditional haptic modalities, while a second study (7 participants) verifies Feedbackpack's usability and interaction fidelity.
Citations: 0
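For readers unfamiliar with the mechanism: a pantograph is a linkage whose output joint traces the input joint's motion scaled by a fixed ratio about an anchor point, which is what lets a compact device render body-scale surfaces. A minimal geometric sketch of that scaling property, illustrative only and not the authors' implementation (the function name and the 5:1 ratio are hypothetical):

```python
import numpy as np

def pantograph_output(anchor: np.ndarray, input_joint: np.ndarray, ratio: float) -> np.ndarray:
    """Ideal planar pantograph: the output joint traces the input joint's
    path scaled by `ratio` about the fixed anchor point."""
    return anchor + ratio * (input_joint - anchor)

# A hand moving over a small 0.2 m device region maps onto a ~1 m virtual
# wall region when the linkage ratio is 5:1 (illustrative numbers).
anchor = np.array([0.0, 0.0])
hand = np.array([0.12, 0.08])                       # metres, device-scale input
print(pantograph_output(anchor, hand, ratio=5.0))   # -> [0.6 0.4], body-scale
```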
The Impact of Navigation on Proxemics in an Immersive Virtual Environment with Conversational Agents
IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, pp. 2787-2797 · Pub Date: 2025-03-20 · DOI: 10.1109/TVCG.2025.3550231
Rose Connolly; Lauren Buck; Victor Zordan; Rachel McDonnell
Abstract: As social VR grows in popularity, understanding how to optimise interactions becomes increasingly important. Interpersonal distance, the physical space people maintain between each other, is a key aspect of user experience. Previous work in psychology has shown that breaches of personal space cause stress and discomfort; effectively managing this distance is therefore crucial in social VR, where social interactions are frequent. Teleportation, a commonly used locomotion method in these environments, involves distinct cognitive processes and requires users to rely on their ability to estimate distance. Despite its widespread use, the effect of teleportation on proximity remains unexplored. To investigate this, we measured the interpersonal distance of 70 participants during interactions with embodied conversational agents, comparing teleportation to natural walking. Our findings revealed that participants maintained closer proximity to the agents when teleporting. Female participants kept greater distances from the agents than male participants, and natural walking was associated with higher agency and body ownership, though co-presence remained unchanged. We propose that differences in spatial perception and spatial cognitive load contribute to the reduced interpersonal distance observed with teleportation. These findings emphasise that proximity should be a key consideration when selecting locomotion methods in social VR, highlighting the need for further research on how locomotion impacts spatial perception and social dynamics in virtual environments.
Citations: 0
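The study's core measure is interpersonal distance derived from tracking data. A minimal sketch of how such a proxemics metric is typically computed from per-frame head and agent positions (the function names and the y-up coordinate convention are assumptions, not the authors' pipeline):

```python
import numpy as np

def interpersonal_distance(head_pos: np.ndarray, agent_pos: np.ndarray) -> float:
    """Horizontal (ground-plane) distance between user and agent, ignoring
    height, in a y-up coordinate system."""
    delta = head_pos[[0, 2]] - agent_pos[[0, 2]]
    return float(np.linalg.norm(delta))

# Summarise one interaction from per-frame tracking logs (placeholder data).
frames_head = np.random.rand(100, 3) * 2.0
agent = np.array([1.0, 0.0, 1.0])
dists = [interpersonal_distance(h, agent) for h in frames_head]
print(f"mean {np.mean(dists):.2f} m, min {np.min(dists):.2f} m")
```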
Single-View 3D Hair Modeling with Clumping Optimization
IEEE Transactions on Visualization and Computer Graphics · Pub Date: 2025-03-20 · DOI: 10.1109/TVCG.2025.3552919
Zhongsi Tang, Jiahao Geng, Yanlin Weng, Youyi Zheng, Kun Zhou
Abstract: Deep learning advancements have enabled the generation of visually plausible hair geometry from a single image, but the results still do not meet the realism required for further applications (e.g., high-quality hair rendering and simulation). One essential element missing from previous single-view hair reconstruction methods is the clumping effect of hair, which is influenced by scalp secretions and oils and is a key ingredient for high-quality hair rendering and simulation. Inspired by common practice in industrial production, where artists simulate realistic hair clumping by adjusting clumping parameters, we aim to integrate these clumping effects into single-view hair reconstruction. We introduce a hierarchical hair representation that incorporates a clumping modifier into guide-hair and skinning-based hair expressions. This representation uses guide strands and skinning weights to express the basic geometric structure of the hair, while the clumping modifier expresses more detailed and realistic clumping effects. Based on this representation, we design a fully differentiable framework that integrates a neural measurement of clumping and a line-based rasterization renderer to iteratively solve for guide strand positions and clumping parameters. Our method demonstrates superior performance both qualitatively and quantitatively compared to state-of-the-art techniques.
Citations: 0
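In artist-facing tools, a clumping modifier of the kind described typically pulls each strand's vertices toward its clump's guide strand with a weight that grows from root to tip, so roots stay attached to the scalp. A hedged sketch of that idea (the root-to-tip ramp and linear blend are common practice, not necessarily the paper's exact modifier):

```python
import numpy as np

def apply_clumping(strand: np.ndarray, guide: np.ndarray, clump: float) -> np.ndarray:
    """Pull a strand toward its clump's guide strand. `strand` and `guide`
    are (N, 3) polylines with matching vertex counts; `clump` in [0, 1]
    controls strength. The weight ramps from 0 at the root to `clump` at
    the tip, keeping roots fixed to the scalp."""
    t = np.linspace(0.0, 1.0, len(strand))[:, None]   # 0 at root, 1 at tip
    w = clump * t
    return (1.0 - w) * strand + w * guide
```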
How Collaboration Context and Personality Traits Shape the Social Norms of Human-to-Avatar Identity Representation
IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, pp. 3387-3396 · Pub Date: 2025-03-20 · DOI: 10.1109/TVCG.2025.3549904
Seoyoung Kang; Boram Yoon; Kangsoo Kim; Jonathan Gratch; Woontack Woo
Abstract: As avatars have evolved from simple digital representations into extensions of our identities, they offer unprecedented opportunities for self-expression and customization beyond the limitations of the physical world. While virtual platforms foster new forms of identity exploration, social norms still play a crucial role in defining what is considered appropriate in these environments. In this study, we surveyed 150 participants to investigate social norms surrounding avatar modifications, examining how perspectives, contexts, and personality traits influence attitudes toward appropriateness. Our findings reveal that avatar modifications are generally viewed as more appropriate when considered from a partner's perspective, especially for changeable attributes. However, these modifications are perceived as less acceptable in professional settings such as workplaces. Additionally, individuals with high self-monitoring tendencies tend to be more resistant to changes, while those scoring higher on Machiavellianism are more accepting of changes, particularly regarding unchangeable attributes and emotional expressions. These findings provide valuable insights for platform developers and designers, highlighting the importance of implementing context-aware customization options that balance core identity elements with personality-driven preferences, thereby enhancing user experiences while respecting social norms.
Citations: 0
Enhancing Obstacle Visibility with Augmented Reality Improves Mobility in People with Low Vision
IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, pp. 3336-3343 · Pub Date: 2025-03-19 · DOI: 10.1109/TVCG.2025.3549542
Lior Maman; Ilan Vol; Sarit F.A. Szpiro
Abstract: Avoiding obstacles while navigating is a challenge for people with low vision, who have impaired yet functional vision, which impacts their mobility, safety, and independence. This study investigates the impact of using Augmented Reality (AR) to enhance the visibility of obstacles for people with low vision. Twenty-five participants (14 with low vision and 11 typically sighted) wore smart glasses and completed a real-world obstacle course under two conditions: with obstacles enhanced using 3D AR markings and without any enhancement (i.e., passthrough only; control condition). Our results reveal that AR enhancements significantly decreased walking time, with the low vision group demonstrating a notable reduction in time. Additionally, the path length was significantly shorter with AR enhancements. The decrease in time and path length did not lead to more collisions, suggesting improved obstacle avoidance. Participants also reported a positive user experience with the AR system, highlighting its potential to enhance mobility for low vision users. These results suggest that AR technology can play a critical role in supporting the independence and confidence of low vision individuals in mobility tasks within complex environments. We discuss design guidelines for future AR systems to assist low vision people.
Citations: 0
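Walking time and path length are the study's primary mobility outcomes. Path length is conventionally computed as the cumulative floor-plane distance between consecutive tracked positions; a minimal sketch under that assumption (not taken from the paper):

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total distance walked: sum of distances between consecutive tracked
    positions, projected onto the floor plane (y-up). `positions` is (N, 3)."""
    xz = positions[:, [0, 2]]
    return float(np.linalg.norm(np.diff(xz, axis=0), axis=1).sum())
```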
A Comparison of the Effects of Older Age on Homing Performance in Real and Virtual Environments
IEEE Transactions on Visualization and Computer Graphics · Pub Date: 2025-03-18 · DOI: 10.1109/TVCG.2025.3549901
Maggie K McCracken, Corey S Shayman, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr
Abstract: Virtual reality (VR) has become a popular tool for studying navigation, providing the experimental control of a laboratory setting but also the potential for immersive and natural experiences that resemble the real world. For VR to be an effective tool to study navigation and be used for training or rehabilitation, it is important to establish whether performance is similar across virtual and real environments. Much of the existing navigation research has focused on young adult performance either in a virtual or a real environment, resulting in an open question regarding the validity of VR for studying age-related effects on spatial navigation. In this paper, young (18-30 years old) and older adults (60 years and older) performed the same navigation task in similar real and virtual environments. They completed a homing task, requiring walking along two legs of a triangle and returning to a home location, under three sensory conditions: visual cues (environmental landmarks present), body-based self-motion cues, and the combination of both cues. Our findings reveal that homing performance in VR demonstrates the same age-related differences as those observed in the real-world task. That said, within-age group differences arise when comparing cue use across environment types. In particular, young adults are less accurate and more variable with self-motion cues than visual cues in VR, while older adults show similar deficits with both cues. However, when both age groups can access multiple sensory cues, navigation performance does not differ between environment types. These results demonstrate that VR effectively captures age-related differences, with navigation performance most closely resembling performance in the real world when navigators can rely on an array of sensory information. Such findings have implications for future research on the aging population, highlighting that VR can be a valuable tool, particularly when multisensory cues are available.
Citations: 0
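Homing (triangle completion) performance is usually scored by the distance between the true home location and the participant's stopping point, with variability measured as the spread of stopping points across trials. A sketch of these standard measures from the homing literature (the paper's exact scoring may differ):

```python
import numpy as np

def homing_error(home: np.ndarray, response: np.ndarray) -> float:
    """Accuracy: absolute distance between the true home position and where
    the participant stopped."""
    return float(np.linalg.norm(response - home))

def homing_variability(responses: np.ndarray) -> float:
    """Variability: mean distance of per-trial stopping points (M, 2) from
    their centroid."""
    centroid = responses.mean(axis=0)
    return float(np.mean(np.linalg.norm(responses - centroid, axis=1)))
```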
HYPNOS: Interactive Data Lineage Tracing for Data Transformation Scripts
IEEE Transactions on Visualization and Computer Graphics · Pub Date: 2025-03-18 · DOI: 10.1109/TVCG.2025.3552091
Xiwen Cai, Xiaodong Ge, Kai Xiong, Shuainan Ye, Di Weng, Ke Xu, Datong Wei, Jiang Long, Yingcai Wu
Abstract: In a formal data analysis workflow, data validation is a necessary step that helps data analysts verify the quality of the data and ensure the reliability of the results. Analysts usually need to validate a result when it is unexpected, such as an abnormal record in a table. To understand how a specific record is derived, they backtrace it through the pipeline step by step: checking the code lines, exposing the intermediate tables, and finding the data records from which it is derived. However, manually reviewing code and backtracing data requires expertise, while inspecting the traced records in multiple tables and interpreting their relationships is tedious. In this work, we propose HYPNOS, a visualization system that supports interactive data lineage tracing for data transformation scripts. HYPNOS uses a lineage module that parses and adapts code to capture both schema-level and instance-level data lineage from data transformation scripts. It then provides users with a lineage view for an overview of the data transformation process and a detail view for tracing instance-level data lineage and inspecting details. HYPNOS reveals different levels of data relationships and helps users with data lineage tracing. We demonstrate the usability and effectiveness of HYPNOS through a use case, interviews with four expert users, and a user study.
Citations: 0
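Instance-level lineage of the kind HYPNOS traces can be pictured as provenance identifiers that survive each transformation step, so any output record can be traced back to its source rows. A toy pandas illustration of that idea with hypothetical column and table names; HYPNOS itself captures lineage by parsing and adapting the script rather than by tagging the data:

```python
import pandas as pd

# Carry source row ids through each step so any output record can be traced.
orders = pd.DataFrame(
    {"order_id": [1, 2, 3], "cust": ["a", "b", "a"], "total": [10, 250, 40]}
).assign(_src=lambda d: "orders:" + d.index.astype(str))
custs = pd.DataFrame(
    {"cust": ["a", "b"], "region": ["EU", "US"]}
).assign(_src2=lambda d: "custs:" + d.index.astype(str))

step1 = orders[orders["total"] > 20]   # filter: provenance column survives
step2 = step1.merge(custs, on="cust")  # join: provenance of both sides unions

suspicious = step2.iloc[0]             # an unexpected record to validate
print("derived from:", suspicious["_src"], "and", suspicious["_src2"])
```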
Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces
IEEE Transactions on Visualization and Computer Graphics · Pub Date: 2025-03-18 · DOI: 10.1109/TVCG.2025.3549895
Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega
Abstract: In Virtual Reality (VR), navigating small physical spaces often relies on controller-based techniques, such as teleportation and joystick movement, due to the limited space for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques (portals, corridors, and central hubs) that can be used in virtual environments (VEs) to create "impossible spaces". These spaces use overlapping areas to maximize the use of available physical space, making walking viable even in constrained spaces. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each offering distinct characteristics, and each positively impacts presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can benefit from these methods, providing users with a more natural and engaging experience even within the limited space typical of home settings.
Citations: 0
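The "impossible spaces" idea can be summarised as several virtual rooms sharing one physical footprint: each room carries its own rigid transform from tracking space into virtual coordinates, and crossing a portal, corridor, or hub swaps the active transform. A hypothetical sketch of that bookkeeping (room names and transforms are invented for illustration, not the paper's implementation):

```python
import numpy as np

# Each virtual room maps the same physical footprint into its own virtual
# coordinates via a (yaw, translation) rigid transform; a portal crossing
# just switches which transform is active.
ROOM_TRANSFORMS = {
    "gallery": (np.deg2rad(0.0),  np.array([0.0, 0.0])),
    "library": (np.deg2rad(90.0), np.array([10.0, 0.0])),  # overlaps physically
}

def physical_to_virtual(p: np.ndarray, room: str) -> np.ndarray:
    yaw, t = ROOM_TRANSFORMS[room]
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return R @ p + t

user = np.array([1.2, 0.8])                  # same physical spot...
print(physical_to_virtual(user, "gallery"))  # ...renders in the gallery
print(physical_to_virtual(user, "library"))  # ...or the library after a portal
```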
360° 3D Photos from a Single 360° Input Image
IEEE Transactions on Visualization and Computer Graphics · Pub Date: 2025-03-18 · DOI: 10.1109/TVCG.2025.3549538
Manuel Rey-Area, Christian Richardt
Abstract: 360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat. That is because they lack depth information and thus cannot create motion parallax when translating the head. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with 6 degrees of freedom. Alternative approaches reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields that produce high-quality novel views but too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to simultaneously achieve high visual quality and real-time rendering speed. To fill plausible content in previously unseen regions, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.
Citations: 0
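Seeding a scene representation from a single 360° image requires back-projecting each equirectangular pixel along its viewing ray by the estimated monocular depth. A sketch of that standard geometry, one plausible way to initialise the Gaussians; the authors' exact initialisation may differ:

```python
import numpy as np

def equirect_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project an equirectangular depth map (H, W) to one 3D point per
    pixel, by scaling each pixel's unit viewing ray by its depth."""
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi   # (+pi/2 .. -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),            # unit ray per pixel
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]                          # (H, W, 3) points
```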
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes
IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 5, pp. 2975-2985 · Pub Date: 2025-03-18 · DOI: 10.1109/TVCG.2025.3549576
Runze Fan; Jian Wu; Xuehuai Shi; Lizhi Zhao; Qixiang Ma; Lili Wang
Abstract: Rendering quality and performance greatly affect the user's immersion in VR experiences. 3D Gaussian Splatting-based methods can achieve photo-realistic rendering at speeds of over 100 fps in static scenes, but speed drops below 10 fps in monocular dynamic scenes. Foveated rendering offers a possible way to accelerate rendering without compromising visual perceptual quality; however, 3DGS and foveated rendering are not compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. We introduce a 3D Gaussian forest representation that represents the scene as a forest. To construct the 3D Gaussian forest, we propose an initialization method based on dynamic-static separation. We then propose a 3D Gaussian forest optimization method based on a deformation field and Gaussian decomposition to optimize the forest and the deformation field. To achieve real-time dynamic scene rendering, we present a 3D Gaussian forest rendering method based on HVS models. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions than the SOTA methods but also dramatically improves rendering performance, achieving up to 11.33X speedup. We also conducted a user study, and the results show that our method's perceptual quality is visually very similar to the ground truth.
Citations: 0
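Foveated renderers typically allocate detail with a human-visual-system acuity model in which the minimum angle of resolution (MAR) grows roughly linearly with eccentricity from the gaze direction. A sketch of such a model used to derive a per-fragment detail budget (the constants and normalisation are illustrative; Fov-GS's HVS models may differ):

```python
import numpy as np

def relative_detail(gaze_dir: np.ndarray, frag_dir: np.ndarray,
                    m: float = 0.022, w0: float = 1.0 / 48.0) -> float:
    """Detail budget for a fragment from a linear acuity-falloff model
    MAR(e) = w0 + m * e, with eccentricity e in degrees. Both direction
    arguments must be unit vectors. Returns 1.0 at the fovea, decaying
    toward 0 in the periphery."""
    cos_e = np.clip(np.dot(gaze_dir, frag_dir), -1.0, 1.0)
    ecc = np.degrees(np.arccos(cos_e))
    return w0 / (w0 + m * ecc)
```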