IEEE Transactions on Visualization and Computer Graphics: Latest Articles

A Survey of Deep Learning in Sports Applications: Perception, Comprehension, and Decision.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-26 DOI: 10.1109/TVCG.2025.3554801
Zhonghan Zhao, Wenhao Chai, Shengyu Hao, Wenhao Hu, Guanhong Wang, Shidong Cao, Mingli Song, Jenq-Neng Hwang, Gaoang Wang
Abstract: Deep learning has the potential to revolutionize sports performance, with applications ranging from perception and comprehension to decision-making. This paper presents a comprehensive survey of deep learning in sports performance, focusing on three main aspects: algorithms, datasets and virtual environments, and challenges. First, we discuss the hierarchical structure of deep learning algorithms in sports performance, which comprises perception, comprehension, and decision, and compare their strengths and weaknesses. Second, we list widely used existing sports datasets and highlight their characteristics and limitations. Finally, we summarize current challenges and point out future trends of deep learning in sports. Our survey provides valuable reference material for researchers interested in deep learning in sports applications.
Citations: 0
Audio-visual aware Foveated Rendering.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-26 DOI: 10.1109/TVCG.2025.3554737
Xuehuai Shi, Yucheng Li, Jiaheng Li, Jian Wu, Jieming Yin, Xiaobai Chen, Lili Wang
Abstract: With the increasing complexity of geometry and rendering effects in virtual reality (VR) scenes, existing foveated rendering methods for VR head-mounted displays (HMDs) struggle to meet users' demands for high frame rates (≥60 fps for rendering binocular foveated images in VR scenes containing over 50M triangles). Current research validates that auditory content affects the perception of the human visual system (HVS). However, existing foveated rendering methods model only the HVS's eccentricity-dependent perception of the visual content in VR, ignoring the impact of auditory content. In this paper, we introduce an auditory-content-based perceived rendering quality analysis to quantify the impact of different auditory conditions on visual perception in foveated rendering. Based on the analysis results, we propose an audio-visual aware foveated rendering method (AvFR). AvFR first constructs an audio-visual feature-driven perception model that predicts eccentricity-based visual perception in real time by combining the scene's audio-visual content, and then proposes a foveated rendering cost optimization algorithm that adaptively controls the shading rate of different regions under the guidance of the perception model. In complex scenes with visual and auditory content containing over 1.17M triangles, AvFR renders high-quality binocular foveated images at an average frame rate of 116 fps. The results of the main user study and performance evaluation validate that AvFR achieves significant performance improvement (up to 1.4× speedup) without lowering perceived visual quality compared with the state-of-the-art VR-HMD foveated rendering method.
Citations: 0
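The eccentricity-dependent shading-rate control that AvFR's cost optimization performs can be illustrated with a minimal sketch. The region thresholds, the 110° field of view, and the `audio_masking` factor below are illustrative assumptions for this sketch, not values or an algorithm from the paper:

```python
import math

def shading_rate(pixel, gaze, fov_deg=110.0, width=2048, audio_masking=1.0):
    """Illustrative eccentricity-based shading-rate falloff (not AvFR itself).

    Maps a pixel's angular distance from the gaze point to a coarse
    shading rate: 1 = full rate at the fovea, higher = coarser shading
    in the periphery. `audio_masking` > 1 is a hypothetical knob in the
    spirit of the paper's idea that auditory content can mask visual
    detail, permitting coarser shading.
    """
    deg_per_px = fov_deg / width
    ecc = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1]) * deg_per_px
    ecc *= audio_masking  # hypothetical audio-driven adjustment
    if ecc < 5.0:    # foveal region: full-rate shading
        return 1
    if ecc < 15.0:   # parafoveal region: 2x2 coarse shading
        return 2
    return 4         # periphery: 4x4 coarse shading
```

In a real renderer the returned rate would feed a variable-rate-shading API; here it simply selects 1×1, 2×2, or 4×4 shading blocks.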
Human Performance and Perception of Uncertainty Visualizations in Geospatial Applications: A Scoping Review.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-26 DOI: 10.1109/TVCG.2025.3554969
Ryan Tennant, Tania Randall
Abstract: Geospatial data are often uncertain due to measurement, spatial, or temporal limitations. A knowledge gap exists about how geospatial uncertainty visualization techniques influence human-factors measures. This comprehensive review synthesized the current literature on visual representations of uncertainty in geospatial data applications, identifying the breadth of techniques and the relationships between strategies and human performance and perception outcomes. Eligible articles described and evaluated, with participants, at least one method for representing uncertainty in geographical data, including land, ocean, weather, climate, and positioning data. Forty articles were included. Uncertainty was visualized using multivariate and univariate maps through colours, shapes, boundary regions, textures, symbols, grid noise, and text. Effects varied, and no definitively superior method was identified. The predominant user focus was on novices. Trends were observed in how different techniques and application contexts support users' understanding of uncertainty, preferences, confidence, decision-making performance, and response times. The findings highlight the impacts of different categorizations within colour and shape techniques, heterogeneity in perception and performance evaluation, mismatches between performance and perception, and differences and similarities between novices and experts. Contextual factors and user characteristics, including the decision-maker's tasks, user type, and desired outcomes for decision support, appear to be important influences on the design of effective uncertainty visualizations. Future research on geospatial applications of uncertainty visualizations can expand on the observed trends with consistent and standardized measurement and reporting, further explore human performance and perception impacts with 3-dimensional and interactive uncertainty visualizations, and perform real-world evaluations within various contexts.
Citations: 0
HO-NeRF: Radiance Fields Reconstruction for Two-Hand-Held Objects.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-25 DOI: 10.1109/TVCG.2025.3553975
Xinxin Liu, Qi Zhang, Xin Huang, Ying Feng, Guoqing Zhou, Qing Wang
Abstract: Our work aims to reconstruct the appearance and geometry of a two-hand-held object from a sequence of color images. In contrast to traditional single-hand-held manipulation, two-hand holding allows more flexible interaction, thereby providing back views of the object, which is particularly convenient for reconstruction but generates complex view-dependent occlusions. The recent development of neural rendering provides new potential for hand-held object reconstruction. In this paper, we propose a novel neural-representation-based framework, named HO-NeRF, to recover radiance fields of a two-hand-held object. We first design an object-centric semantic module based on geometric signed distance function cues to predict 3D object-centric regions, and develop a view-dependent visible module based on image-related cues to label 2D occluded regions. We then combine them to obtain a 2D visible mask that adaptively guides ray sampling on the object for optimization. We also provide a newly collected HO dataset to validate the proposed method. Experiments show that our method achieves superior performance on reconstruction completeness and view-consistent synthesis compared to the state-of-the-art methods.
Citations: 0
Improving Neural Volume Rendering via Learning View-Dependent Integral Approximation.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-24 DOI: 10.1109/TVCG.2025.3554692
Yifan Wang, Jun Xu, Yuan Zeng, Yi Gong
Abstract: Neural radiance fields (NeRFs) have achieved impressive view synthesis results by learning an implicit volumetric representation from multi-view images. To project the implicit representation into an image, NeRF employs volume rendering, which approximates the continuous integral along each ray as an accumulation of the colors and densities of sampled points. Although this approximation enables efficient rendering, it ignores direction information within point intervals, resulting in ambiguous features and limited reconstruction quality. In this paper, we propose a learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction. We model the volume rendering integral with a piecewise-constant volume density and spherical-harmonic-guided view-dependent features, facilitating ambiguity elimination while preserving rendering efficiency. In addition, we introduce a regularization term that restricts the anisotropic representation effect to be local, with negligible effect on geometry representations, and encourages recovery of the correct geometry. Our method is flexible and can be plugged into NeRF-based frameworks. Extensive experiments show that the proposed representation can boost the rendering quality of various NeRFs and achieves state-of-the-art rendering performance on both synthetic and real-world scenes.
Citations: 0
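The baseline this paper improves on is the standard NeRF volume-rendering quadrature: each sampled color is weighted by its opacity and by the transmittance accumulated in front of it. A minimal NumPy sketch of that standard baseline (not the paper's proposed view-dependent integral approximation):

```python
import numpy as np

def composite(colors, sigmas, deltas):
    """Standard NeRF volume-rendering quadrature along one ray.

    colors: (N, 3) RGB at the N samples
    sigmas: (N,) volume densities
    deltas: (N,) lengths of the sample intervals

    Weights each sample by alpha_i = 1 - exp(-sigma_i * delta_i) and by the
    transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j) of the samples
    in front of it, then accumulates the weighted colors.
    """
    optical = sigmas * deltas
    alphas = 1.0 - np.exp(-optical)
    # T_i depends only on samples strictly before i, hence the leading 0.
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], optical]))[:-1])
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

An opaque first sample occludes everything behind it, and zero density everywhere yields a black (zero) pixel, which is a quick sanity check on the accumulation.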
A Mixed Reality Car A-Pillar Design Support System Utilizing Projection Mapping.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-24 DOI: 10.1109/TVCG.2025.3554037
Ryotaro Yoshida, Toshihiro Hara, Yusaku Takeda, Kenji Murase, Daisuke Iwai, Kosuke Sato
Abstract: Projection mapping (PM) is useful in the product design process, since it seamlessly bridges a physical mockup and its digital twin by allowing designers to interactively explore new textures, colors, and shapes without creating new physical mockups. While PM has proven effective for car interior design, previous research focused solely on supporting the design of dashboards and instrument panels, neglecting evaluation in realistic driving scenarios. This paper introduces a self-contained car interior design support system that extends beyond the dashboard to include the A-pillars. Additionally, to enable designers to evaluate their designs under authentic driving conditions, we integrate a driving simulator, complete with a motion platform, into the PM system. Through the construction of a prototype, we demonstrate the feasibility of the proposed system. Finally, through user studies, we derive guidelines for PM-based car interior design that optimize the user experience.
Citations: 0
MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-21 DOI: 10.1109/TVCG.2025.3549160
Lorenzo Stacchio;Emanuele Balloni;Emanuele Frontoni;Marina Paolanti;Primo Zingaretti;Roberto Pierdicca
Abstract: The convergence of Artificial Intelligence (AI), Computer Vision (CV), Computer Graphics (CG), and Extended Reality (XR) is driving innovation in immersive environments. A key challenge in these environments is the creation of personalized 3D assets, traditionally achieved through manual modeling, a time-consuming process that often fails to meet individual user needs. More recently, Generative AI (GenAI) has emerged as a promising solution for automated, context-aware content generation. In this paper, we present MineVRA (Multimodal generative artificial iNtelligence for contExt-aware Virtual Reality Assets), a novel Human-In-The-Loop (HITL) XR framework that integrates GenAI to facilitate coherent and adaptive 3D content generation in immersive scenarios. To evaluate the effectiveness of this approach, we conducted a comparative user study analyzing the performance of, and user satisfaction with, GenAI-generated 3D objects compared to those sourced from Sketchfab in different immersive contexts. The results suggest that GenAI can significantly complement traditional 3D asset libraries, with valuable design implications for the development of human-centered XR environments.
Vol. 31, no. 5, pp. 3602-3612. Citations: 0
PantographHaptics: A Technique for Large-Surface Passive Haptic Interactions using Pantograph Mechanisms
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-21 DOI: 10.1109/TVCG.2025.3549869
Marcus K. E. Friedel;Zachary McKendrick;Ehud Sharlin;Ryo Suzuki
Abstract: In Virtual Reality (VR), existing hand-scale passive interaction techniques are unsuitable for continuous large-scale renders: room-scale proxies lack portability, and wearable robotic arms are energy-intensive and induce friction. This paper presents a technique for providing wall haptics in VR that supports portable, passive, large-scale user interactions. We propose PantographHaptics, a technique that uses the scaling properties of a pantograph to passively render two-degree-of-freedom, body-scale surfaces, overcoming the limitations of existing methods. We demonstrate PantographHaptics through two prototypes: HapticLever, a grounded system, and Feedbackpack, a wearable device, and evaluate them through technical and user evaluations. Our first study (9 participants) compares HapticLever against traditional haptic modalities, while our second study (7 participants) verifies Feedbackpack's usability and interaction fidelity.
Vol. 31, no. 5, pp. 2736-2745. Citations: 0
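The scaling property of a pantograph that the paper exploits is that the output end traces the input motion magnified by a fixed link-length ratio about a pivot. An idealized planar sketch of that property under the stated assumption of a rigid, frictionless linkage (not the actual HapticLever or Feedbackpack kinematics):

```python
def pantograph_output(anchor, input_point, ratio):
    """Idealized planar pantograph: the output end reproduces the input
    point's displacement from the anchor pivot, scaled by the link-length
    ratio. With ratio > 1 a hand-scale input motion passively renders a
    body-scale output surface.
    """
    ax, ay = anchor
    ix, iy = input_point
    return (ax + ratio * (ix - ax), ay + ratio * (iy - ay))
```

For example, with the pivot at the origin and a 3:1 linkage, a 1 cm input stroke maps to a 3 cm output stroke along the same direction, which is what lets a compact mechanism span a large virtual wall.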
The Impact of Navigation on Proxemics in an Immersive Virtual Environment with Conversational Agents
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-20 DOI: 10.1109/TVCG.2025.3550231
Rose Connolly;Lauren Buck;Victor Zordan;Rachel McDonnell
Abstract: As social VR grows in popularity, understanding how to optimise interactions becomes increasingly important. Interpersonal distance, the physical space people maintain between each other, is a key aspect of user experience. Previous work in psychology has shown that breaches of personal space cause stress and discomfort, so effectively managing this distance is crucial in social VR, where social interactions are frequent. Teleportation, a commonly used locomotion method in these environments, involves distinct cognitive processes and requires users to rely on their ability to estimate distance. Despite its widespread use, the effect of teleportation on proximity remains unexplored. To investigate this, we measured the interpersonal distance of 70 participants during interactions with embodied conversational agents, comparing teleportation to natural walking. Our findings revealed that participants maintained closer proximity to the agents when teleporting. Female participants kept greater distances from the agents than male participants, and natural walking was associated with higher agency and body ownership, though co-presence remained unchanged. We propose that differences in spatial perception and spatial cognitive load contribute to the reduced interpersonal distance with teleportation. These findings emphasise that proximity should be a key consideration when selecting locomotion methods in social VR, highlighting the need for further research on how locomotion affects spatial perception and social dynamics in virtual environments.
Vol. 31, no. 5, pp. 2787-2797. Citations: 0
Single-View 3D Hair Modeling with Clumping Optimization.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-20 DOI: 10.1109/TVCG.2025.3552919
Zhongsi Tang, Jiahao Geng, Yanlin Weng, Youyi Zheng, Kun Zhou
Abstract: Advances in deep learning have enabled the generation of visually plausible hair geometry from a single image, but the results still do not meet the realism required for downstream applications such as high-quality hair rendering and simulation. One essential element missing from previous single-view hair reconstruction methods is the clumping effect of hair, which is influenced by scalp secretions and oils and is a key ingredient of high-quality hair rendering and simulation. Inspired by common practice in industrial production, which simulates realistic hair clumping by letting artists adjust clumping parameters, we aim to integrate these clumping effects into single-view hair reconstruction. We introduce a hierarchical hair representation that incorporates a clumping modifier into guide-hair and skinning-based hair expressions. This representation uses guide strands and skinning weights to express the basic geometric structure of the hair, while the clumping modifier expresses more detailed and realistic clumping effects. Based on this representation, we design a fully differentiable framework, integrating a neural measurement of clumping and a line-based rasterization renderer, that iteratively solves for guide-strand positions and clumping parameters. Our method demonstrates superior performance both qualitatively and quantitatively compared to state-of-the-art techniques.
Citations: 0
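A clumping modifier of the kind common in production hair tools, which the paper cites as its inspiration, can be sketched as a weighted pull of each strand point toward the corresponding point on its guide strand. The linear-blend form below is an illustrative assumption, not the paper's exact operator:

```python
import numpy as np

def apply_clumping(strands, guide, clump):
    """Illustrative clumping modifier (a common production-style operator,
    not the paper's): blend each strand point toward the matching
    guide-strand point.

    strands: (S, P, 3) points of S strands, P points each
    guide:   (P, 3) points of the guide strand
    clump:   scalar or (P,) weight in [0, 1]; 0 leaves strands untouched,
             1 collapses them onto the guide (a per-point profile lets
             clumping tighten toward the strand tips)
    """
    clump = np.asarray(clump).reshape(-1, 1)  # broadcast over points
    return strands + clump * (guide[None] - strands)
```

Because the blend is linear in `clump`, an optimizer can differentiate through it, which is in the spirit of the paper's fully differentiable framework for solving clumping parameters.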