Latest Articles: IEEE Transactions on Visualization and Computer Graphics

Manual-Free Gaze Interaction Via Bayesian-Based Implicit Intention Prediction.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-29 DOI: 10.1109/TVCG.2025.3615198
Taewoo Jo, Ho Jung Lee, Sulim Chun, In-Kwon Lee
{"title":"Manual-Free Gaze Interaction Via Bayesian-Based Implicit Intention Prediction.","authors":"Taewoo Jo, Ho Jung Lee, Sulim Chun, In-Kwon Lee","doi":"10.1109/TVCG.2025.3615198","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3615198","url":null,"abstract":"<p><p>Eye gaze is regarded as a promising interaction modality in extended reality (XR) environments. However, to address the challenges posed by the Midas touch problem, the determination of selection intention frequently relies on the implementation of additional manual selection techniques, such as explicit gestures (e.g., controller/hand inputs or dwell), which are inherently limited in their functionality. We hereby present a machine learning (ML) model based on the Bayesian framework, which is employed to predict user selection intention in real-time, with the unique distinction that all data used for training and prediction are obtained from gaze data alone. The model utilizes a Bayesian approach to transform gaze data into selection probabilities, which are subsequently fed into an ML model to discern selection intentions. In Study 1, a high-performance model was constructed, enabling real-time inference using solely gaze data. This approach was found to enhance performance, thereby validating the efficacy of the proposed methodology. In Study 2, a user study was conducted to validate a manual-free technique based on the prediction model. 
The advantages of eliminating explicit gestures and potential applications were also discussed.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145194227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
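The abstract's core step, turning raw gaze samples into per-target selection probabilities, can be illustrated with a generic Bayesian inference sketch. This is our own illustration under an invented isotropic Gaussian gaze-noise model, not the paper's trained model:

```python
import math

def gaze_posteriors(gaze_points, targets, sigma=1.0, prior=None):
    """Posterior P(target | gaze samples) under an isotropic Gaussian
    gaze-noise model (a generic formulation, not the paper's model).

    Each gaze sample is treated as a noisy observation centred on the
    intended target; Bayes' rule accumulates log-evidence per target.
    """
    n = len(targets)
    log_post = [math.log(p) for p in (prior or [1.0 / n] * n)]
    for gx, gy in gaze_points:
        for i, (tx, ty) in enumerate(targets):
            d2 = (gx - tx) ** 2 + (gy - ty) ** 2
            log_post[i] += -d2 / (2.0 * sigma ** 2)  # Gaussian log-likelihood, constant dropped
    m = max(log_post)                                # normalise via log-sum-exp for stability
    weights = [math.exp(lp - m) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]
```

In the paper these probabilities feed a downstream ML classifier; here they are simply normalized posteriors over candidate targets.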
PICA: Physics-Integrated Clothed Avatar.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-26 DOI: 10.1109/TVCG.2025.3614642
Bo Peng, Yunfan Tao, Haoyu Zhan, Yudong Guo, Juyong Zhang
{"title":"PICA: Physics-Integrated Clothed Avatar.","authors":"Bo Peng, Yunfan Tao, Haoyu Zhan, Yudong Guo, Juyong Zhang","doi":"10.1109/TVCG.2025.3614642","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3614642","url":null,"abstract":"<p><p>We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-plausible dynamics, even for loose clothing. Previous neural rendering-based representations of animatable clothed humans typically employ a single model to represent both the clothing and the underlying body. While efficient, these approaches often fail to represent complex garment dynamics, leading to incorrect deformations and noticeable rendering artifacts, especially for sliding or loose garments. Furthermore, most previous works represent garment dynamics as pose-dependent deformations and facilitate novel pose animations in a data-driven manner. This often results in outcomes that do not faithfully represent the mechanics of motion and are prone to generating artifacts in out-of-distribution poses. To address these issues, we employ two individual 2D Gaussian Splatting (2DGS) models with different deformation characteristics, modeling the human body and clothing separately. This distinction allows for better handling of their respective motion characteristics. With this representation, we integrate a graph neural network (GNN)-based clothing physics simulation module to ensure a better representation of clothing dynamics. Our method, through its carefully designed features, achieves high-fidelity rendering of clothed human bodies in complex and novel driving poses, outperforming previous methods under the same settings. 
The source code will be available on our project page: https://ustc3dv.github.io/PICA/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145180861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Strange Familiars: Exploring the Design of Avatars and Virtual Environments for Reconnecting Dormant Ties in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-26 DOI: 10.1109/TVCG.2025.3614445
Yu-Ting Yen, Fang-Ying Liao, Chi-Lan Yang, Ruei-Che Chang, Fu-Yin Cherng, Bing-Yu Chen
{"title":"Strange Familiars: Exploring the Design of Avatars and Virtual Environments for Reconnecting Dormant Ties in Virtual Reality.","authors":"Yu-Ting Yen, Fang-Ying Liao, Chi-Lan Yang, Ruei-Che Chang, Fu-Yin Cherng, Bing-Yu Chen","doi":"10.1109/TVCG.2025.3614445","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3614445","url":null,"abstract":"<p><p>Rekindling old social bonds with individuals who were once a part of our lives but have since faded away is crucial for our well-being. Such connections with dormant ties help us overcome loneliness and provide social support. Recently, virtual reality (VR) emerged as a promising tool for facilitating social interactions, such as online gatherings for formal or casual activities. VR can offer immersive and shared experiences, facilitating genuine connections between people. This provides a unique advantage over traditional computer-mediated communication methods. However, while prior research has explored how VR can aid in forming new social connections, its potential to reconnect dormant ties is largely unexplored. This paper aims to bridge this gap by examining how different features of VR, specifically avatar appearance and virtual environments, influence reactivations of dormant ties. We conducted an experiment involving 24 dyads to investigate the effect of different avatar-self similarities and virtual environments on the perceptions and interactions between dormant ties. Our findings indicate that avatars resembling oneself and dormant ties promote social closeness. Familiar virtual environments evoke shared memories, while unfamiliar ones stimulate more conversations. 
We discuss the impact of VR features on reconnecting dormant ties and provide implications for re-connecting relationships in VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145180823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Latent Space Map for Visual Utilization of Generated Data.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-25 DOI: 10.1109/TVCG.2025.3614247
Yang Zhang, Jie Li, Wei Zeng
{"title":"Latent Space Map for Visual Utilization of Generated Data.","authors":"Yang Zhang, Jie Li, Wei Zeng","doi":"10.1109/TVCG.2025.3614247","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3614247","url":null,"abstract":"<p><p>Samples produced by generative models, called Generated Samples (GSs), have become a critical supplement to those collected from the real world in data-centric applications. Domain experts typically randomly collect many GSs and manually select a few of interest for applications. However, the methodology lacks guidance to locate desirable ones that exhibit specific features or adhere to application-oriented metrics among infinite generable candidates. These samples are generally concentrated in a few small regions of the generative model's latent space, called Generative Latent Space (GLS). This paper presents Latent Space Map that projects a GLS onto a plane to help users locate regions rich in desirable GSs. Our research revolves around two challenges in constructing the map. First, many GSs in a GLS are low-quality and useless for applications. Excluding them from the projection is challenging for their irregular distribution. We employ a Monte Carlo-based method to capture a manifold for projection, where high-quality GSs are mainly distributed. Second, the GLS is high-dimensional and unbounded, complicating the projection. We design a manifold projection method that endows the map with desirable characteristics to achieve high display accuracy and effective pattern perception for users freely observing the manifold. We further develop a system integrating Latent Space Map to aid in GS selection and refinement. 
Real-world cases, quantitative experiments, and feedback from domain experts confirm the usability and effectiveness of our approach.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Perceptual Model for Foveated Rendering With Illuminance Demodulation.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-25 DOI: 10.1109/TVCG.2025.3614349
Xiao Hu, Xiang Xu, JiuXing Zhang, YanNing Xu, Lu Wang
{"title":"Perceptual Model for Foveated Rendering With Illuminance Demodulation.","authors":"Xiao Hu, Xiang Xu, JiuXing Zhang, YanNing Xu, Lu Wang","doi":"10.1109/TVCG.2025.3614349","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3614349","url":null,"abstract":"<p><p>Foveated rendering exploits the non-uniform acuity of human vision to allocate computational resources more efficiently by reducing image fidelity in the peripheral field of view. While existing perceptual models for foveated rendering focus primarily on spatial resolution and contrast sensitivity, they overlook the perceptual asymmetry between direct and indirect illumination. In this work, we introduce a novel perceptual model that incorporates illuminance demodulation to account for this distinction. Our model adaptively modulates the foveation rate based on the relative contributions of direct and indirect illumination. Building on this model, we develop a practical rendering framework that separately applies tailored foveation strategies to direct and indirect illumination effects. Quantitative metrics and user studies confirm that our method maintains perceptual equivalence to full-resolution rendering. The sparse rendering stage achieves a $2.18times$ to $7.10times$ speedup, contributing to an overall acceleration of $1.71times$ to $3.26times$.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
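The idea of a foveation rate that depends on both eccentricity and the direct/indirect illumination split can be sketched as a simple rate schedule. All constants below are invented for illustration; they are not the thresholds from the paper's perceptual model:

```python
def shading_rate(ecc_deg, indirect_weight, base=1.0, slope=0.3, indirect_bonus=2.0):
    """Toy foveation-rate schedule (illustrative; constants are invented).

    Rate 1.0 = full resolution; larger values = coarser shading.
    Peripheral pixels, and pixels whose radiance is dominated by
    indirect illumination, are shaded more coarsely, echoing the
    perceptual asymmetry the paper exploits.
    """
    rate = base + slope * max(0.0, ecc_deg - 5.0)   # full detail inside a ~5 deg fovea
    rate *= 1.0 + indirect_bonus * indirect_weight  # indirect light tolerates more coarsening
    return rate
```

A renderer would evaluate such a schedule per pixel (or per tile) after demodulating illuminance into direct and indirect components.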
Parabolic Sphere Tracing Of Signed Distance Fields For Old Glass Modelling And Rendering.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-25 DOI: 10.1109/TVCG.2025.3613853
Quentin Huan, Francois Rousselle, Christophe Renaud
{"title":"Parabolic Sphere Tracing Of Signed Distance Fields For Old Glass Modelling And Rendering.","authors":"Quentin Huan, Francois Rousselle, Christophe Renaud","doi":"10.1109/TVCG.2025.3613853","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3613853","url":null,"abstract":"<p><p>We present a method for modeling and rendering irregular and heterogeneous glass objects, with a specific emphasis on stained glass windows and window works often encountered in architecture from middle age to 18th century. The artisanal production of sheet glass results in glass panels displaying a vast variety of surface and volume irregularities like bubbles, irregular surface or smoothly varying refractive index, all of which contribute to the specific visual aspect of old glass. We propose to account for all the aforementioned effects in a unified framework based on signed distance functions and an analytic solution of the ray tracing equations on tetrahedral volume elements. We demonstrate how to construct an unbiased estimator for the transmitted lighting produced by such panels by using Fermat's principle and results from seismic ray theory. 
We use texture coordinates to map arbitrary sections of a complex glass panel onto the individual faces of a mesh, allowing the modeling and rendering of complex 3-dimensional objects composed of colored glass facets such as stained glass windows.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
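The paper generalises sphere tracing to parabolic rays bent by a varying refractive index; the classical straight-ray algorithm it builds on is short enough to show in full. This sketch is the standard textbook version, not the paper's parabolic variant: each step advances the ray by the signed distance value, which is always a safe step size.

```python
import math

def sphere_trace(sdf, origin, direction, max_steps=128, eps=1e-4, t_max=100.0):
    """Classical sphere tracing of a signed distance field along a straight ray.

    Returns the hit distance t, or None on a miss. The SDF value at the
    current point bounds the distance to the nearest surface, so it can
    be used directly as the next march step.
    """
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t      # converged onto the surface
        t += d            # safe step: no surface closer than d
        if t > t_max:
            break
    return None           # left the scene or ran out of steps

# Example SDF: unit sphere at the origin.
unit_sphere = lambda p: math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - 1.0
```

A ray from (0, 0, -3) along +z hits the unit sphere at t = 2; a ray offset well above it misses.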
SketchRefiner: Text-Guided Sketch Refinement Through Latent Diffusion Models.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-23 DOI: 10.1109/TVCG.2025.3613388
Yingjie Tian, Minghao Liu, Haoran Jiang, Yunbin Tu, Duo Su
{"title":"SketchRefiner: Text-Guided Sketch Refinement Through Latent Diffusion Models.","authors":"Yingjie Tian, Minghao Liu, Haoran Jiang, Yunbin Tu, Duo Su","doi":"10.1109/TVCG.2025.3613388","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3613388","url":null,"abstract":"<p><p>Free-hand sketches serve as efficient tools for creativity and communication, yet expressing ideas clearly through sketches remains challenging for untrained individuals. Optimizing sketches through text guidance can enhance individuals' ability to effectively convey their ideas and improve overall communication efficiency. While recent advancements in Artificial Intelligence Generated Content (AIGC) have been notable, research on optimizing free-hand sketches remains relatively unexplored. In this paper, we introduce SketchRefiner, an innovative method designed to refine rough sketches from various categories into polished versions guided by text prompts. SketchRefiner utilizes a latent diffusion model with ControlNet to guide a differentiable rasterizer in optimizing a set of Bézier curves. We extend the score distillation sampling (SDS) loss and introduce a joint semantic loss to encourage sketches aligned with given text prompts and free-hand sketches. Additionally, we propose a fusion attention-map stroke initialization strategy to improve the quality of refined sketches. Furthermore, SketchRefiner provides users with fine-grained control over text guidance. 
Through extensive experiments, we demonstrate that our method can generate accurate and aesthetically pleasing refined sketches that closely align with input text prompts and sketches.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145133147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Scene-based Foveated Fluid Animation in Virtual Reality.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-22 DOI: 10.1109/TVCG.2025.3609904
Yue Wang, Yan Zhang, Xuanhui Yang, Hui Wang, Xubo Yang
{"title":"Scene-based Foveated Fluid Animation in Virtual Reality.","authors":"Yue Wang, Yan Zhang, Xuanhui Yang, Hui Wang, Xubo Yang","doi":"10.1109/TVCG.2025.3609904","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3609904","url":null,"abstract":"<p><p>Physically-based fluid animation in Virtual Reality (VR) significantly enhances the user experience through visually engaging flow motions. Nonetheless, such simulations are often limited by their substantial computational demands. A tailored adaptive simulation algorithm is important for high-performance VR fluid simulations, which dynamically allocate degrees of freedom (DoF) while accounting for user perception in VR. This paper proposes a novel scene-based gaze-contingent fluid simulation system for VR, featuring a highly adaptive fluid simulator integrated with a VR perceptual model that accounts for the foveation and geometry of fluid. Our method leverages an eccentricity and curvature-dependent perceptual model to dynamically allocate computational resources, improving the efficiency and maintaining spatio-temporal stability of fluid animation in VR. A user study was conducted to measure the simulation resolution thresholds for fluid animations in VR, considering various levels of eccentricity and curvature. Our findings indicate notable differences in perceptual thresholds based on these metrics. 
By incorporating these insights into our adaptive fluid simulator as a unified sizing function, we maintain perceptually optimal particle resolution, achieving up to a 3.62× performance improvement while delivering superior perceptual realism and user experience, as validated by a subjective evaluation study.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
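The "unified sizing function" idea, coarser particles in the periphery and on flat fluid regions, finer ones near the gaze point and on high-curvature features, can be sketched as a small function of eccentricity and curvature. The constants below are illustrative placeholders, not the thresholds measured in the paper's user study:

```python
def particle_size(ecc_deg, curvature, h_min=0.01, h_max=0.08,
                  ecc_scale=30.0, curv_scale=5.0):
    """Toy gaze-contingent sizing function for adaptive fluid particles.

    Returns a particle radius: h_min at the fovea, growing toward h_max
    in the periphery, and shrunk again on high-curvature surface detail.
    All constants are invented for illustration.
    """
    ecc_term = min(1.0, ecc_deg / ecc_scale)           # 0 at fovea, 1 far out
    curv_term = 1.0 / (1.0 + curv_scale * curvature)   # high curvature -> finer particles
    return h_min + (h_max - h_min) * ecc_term * curv_term
```

An adaptive simulator would query such a function per region to decide local particle resolution each frame.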
DFG-PCN: Point Cloud Completion with Degree-Flexible Point Graph.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-22 DOI: 10.1109/TVCG.2025.3612379
Zhenyu Shu, Jian Yao, Shiqing Xin
{"title":"DFG-PCN: Point Cloud Completion with Degree-Flexible Point Graph.","authors":"Zhenyu Shu, Jian Yao, Shiqing Xin","doi":"10.1109/TVCG.2025.3612379","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3612379","url":null,"abstract":"<p><p>Point cloud completion is a vital task focused on reconstructing complete point clouds and addressing the incompleteness caused by occlusion and limited sensor resolution. Traditional methods relying on fixed local region partitioning, such as k-nearest neighbors, which fail to account for the highly uneven distribution of geometric complexity across different regions of a shape. This limitation leads to inefficient representation and suboptimal reconstruction, especially in areas with fine-grained details or structural discontinuities. This paper proposes a point cloud completion framework called Degree-Flexible Point Graph Completion Network (DFG-PCN). It adaptively assigns node degrees using a detail-aware metric that combines feature variation and curvature, focusing on structurally important regions. We further introduce a geometry-aware graph integration module that uses Manhattan distance for edge aggregation and detail-guided fusion of local and global features to enhance representation. Extensive experiments on multiple benchmark datasets demonstrate that our method consistently outperforms state-of-the-art approaches.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
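The contrast between fixed-k graphs and degree-flexible ones is easy to sketch: map a per-point detail score to a neighbor count, then build a k-NN graph where k varies per node. The linear score-to-degree mapping here is our own simplification of the paper's detail-aware metric:

```python
def flexible_degrees(detail, k_min=4, k_max=16):
    """Map per-point detail scores in [0, 1] to neighbour counts.

    Fixed-k graphs give every point the same neighbourhood; here points
    with higher detail (e.g. a curvature/feature-variation mix, as in
    the abstract) receive larger degrees. Linear mapping is illustrative.
    """
    return [round(k_min + (k_max - k_min) * max(0.0, min(1.0, s))) for s in detail]

def knn_variable(points, degrees):
    """Adjacency lists where point i keeps its degrees[i] nearest neighbours.

    Brute-force O(n^2) search for clarity; a k-d tree would be used in practice.
    """
    adj = []
    for i, p in enumerate(points):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                       for j, q in enumerate(points) if j != i)
        adj.append([j for _, j in dists[:degrees[i]]])
    return adj
```

High-detail points end up with denser neighborhoods, concentrating graph capacity on structurally important regions.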
Textured Mesh Quality Assessment Using Geometry and Color Field Similarity.
IF 6.5
IEEE transactions on visualization and computer graphics Pub Date : 2025-09-22 DOI: 10.1109/TVCG.2025.3612942
Kaifa Yang, Qi Yang, Yiling Xu, Zhu Li
{"title":"Textured Mesh Quality Assessment Using Geometry and Color Field Similarity.","authors":"Kaifa Yang, Qi Yang, Yiling Xu, Zhu Li","doi":"10.1109/TVCG.2025.3612942","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3612942","url":null,"abstract":"<p><p>Textured mesh quality assessment (TMQA) is critical for various 3D mesh applications. However, existing TMQA methods often struggle to provide accurate and robust evaluations. Motivated by the effectiveness of fields in representing both 3D geometry and color information, we propose a novel point-based TMQA method called field mesh quality metric (FMQM). FMQM utilizes signed distance fields and a newly proposed color field named nearest surface point color field to realize effective mesh feature description. Four features related to visual perception are extracted from the geometry and color fields: geometry similarity, geometry gradient similarity, space color distribution similarity, and space color gradient similarity. Experimental results on three benchmark datasets demonstrate that FMQM outperforms state-of-the-art (SOTA) TMQA metrics. Furthermore, FMQM exhibits low computational complexity, making it a practical and efficient solution for real-world applications in 3D graphics and visualization. Our code is publicly available at: https://github.com/yyyykf/FMQM.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0