IEEE Transactions on Visualization and Computer Graphics: Latest Articles

IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-25 DOI: 10.1109/TVCG.2025.3544887
Front matter for the 2025 IEEE Conference on Virtual Reality and 3D User Interfaces issue (Vol. 31, No. 5, pp. i-ii). Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977055
Citations: 0
Intrinsic Decomposition with Robustly Separating and Restoring Colored Illumination.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-24 DOI: 10.1109/TVCG.2025.3564229
Hao Sha, Shining Ma, Tongtai Cao, Yu Han, Yu Liu, Yue Liu
Abstract: Intrinsic decomposition separates an image into reflectance and shading, which contributes to image editing, augmented reality, and other applications. Despite recent efforts in this field, effectively separating colored illumination from reflectance and correctly restoring it into shading remains a challenge. We propose a deep intrinsic decomposition method to address this issue. Specifically, by transforming the intrinsic decomposition process from the RGB image domain into a combination of intensity and chromaticity domains, we propose a novel macro intrinsic decomposition network framework. This framework enables the generation of finer intrinsic components through the propagation of more relevant features and the guidance of more detailed sub-constraints. To expand the macro network, we integrate multiple attention modules at key positions in the encoders, which enhances the extraction of distinct features. We also propose a skip-connection module guided by specific deep features, which can filter out features that are physically irrelevant to each intrinsic component. Our method not only outperforms state-of-the-art methods across multiple datasets, but also robustly separates illumination from reflectance and restores it into shading in various types of images. By leveraging our intrinsic images, we achieve visually superior image editing effects compared to other methods, while also being able to manipulate the inherent lighting of the original scene.
Citations: 0
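The abstract describes recasting the RGB-domain decomposition into intensity and chromaticity domains. The NumPy sketch below illustrates one common version of that split; the specific definitions used here (per-pixel channel mean for intensity, channel-wise ratio for chromaticity) are assumptions, not necessarily the paper's exact normalization.

```python
import numpy as np

def to_intensity_chromaticity(rgb, eps=1e-6):
    """Split an RGB image of shape (H, W, 3) in [0, 1] into an intensity map and
    a chromaticity map so that rgb ~= intensity[..., None] * chromaticity.
    The definitions (channel mean / channel ratio) are assumptions, not
    necessarily the normalization used in the paper."""
    intensity = rgb.mean(axis=-1)                       # (H, W)
    chromaticity = rgb / (intensity[..., None] + eps)   # (H, W, 3)
    return intensity, chromaticity

def recombine(intensity, chromaticity):
    """Inverse transform back to the RGB domain."""
    return intensity[..., None] * chromaticity

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
I, C = to_intensity_chromaticity(img)
assert np.allclose(recombine(I, C), img, atol=1e-4)
```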
Effects of User Perspective, Visual Context, and Feedback on Interactions with AR targets on Magic-lens Displays.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-23 DOI: 10.1109/TVCG.2025.3563609
Geert Lugtenberg, Isidro Butaslac, Taishi Sawabe, Yuichiro Fujimoto, Masayuki Kanbara, Hirokazu Kato
Abstract: Performing close-range tasks using augmented content or instructions visualized on a 2D display can be difficult because of missing visual information in the third dimension: the world on the screen is rendered from the perspective of a single camera, typically on the device itself. However, when performing tasks by hand, haptic feedback supports vision, and prior knowledge and visual context affect task performance. This study rendered the world on a display from the user's perspective to re-enable depth cues from motion parallax and compared it with the conventional device perspective during haptic interactions. We conducted a user study involving 20 subjects and two experiments. First, the accuracy of touchpoint and depth estimation was measured under varying conditions of visual context and perspective rendering on a magic-lens display. We found that user-perspective rendering slightly improved the touch accuracy of targets on a physical surface; however, it significantly improved interactions without tactile feedback. This effect is relatively large when contextual information from the environment is absent, and it diminishes with increased haptic interaction. In the second experiment, we used a user-perspective magic lens to validate the proposed method in a practical needle-injection scenario and confirmed that the initial injections toward virtual targets were more accurate. The results indicate that user-perspective rendering on magic lenses improves immediate performance in haptic tasks, suggesting it is particularly advantageous for frequently changing environments or short-duration tasks.
Citations: 0
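The study's central rendering idea, showing the scene from the tracked user's viewpoint so the handheld display behaves like a window, is commonly implemented with a generalized off-axis perspective projection. The sketch below follows that standard construction from screen corners and eye position; it is background for the technique rather than code from the paper, and the tablet dimensions and eye coordinates in the example are made up.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Generalized off-axis perspective projection (the classic 'screen as a
    window' construction). pa, pb, pc are the screen's lower-left, lower-right
    and upper-left corners in world space; pe is the tracked eye position.
    Returns a 4x4 world-to-clip matrix."""
    pa, pb, pc, pe = (np.asarray(v, dtype=float) for v in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal, toward the eye

    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Asymmetric view frustum through the screen rectangle.
    P = np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
    # Rotate the world into screen-aligned axes, then move the eye to the origin.
    M = np.eye(4); M[:3, :3] = np.stack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -pe
    return P @ M @ T

# Hypothetical 0.30 m x 0.20 m tablet lying in the z = 0 plane,
# with the tracked eye 0.4 m in front of it and slightly above center.
proj = off_axis_projection([-0.15, -0.10, 0.0], [0.15, -0.10, 0.0],
                           [-0.15, 0.10, 0.0], [0.00, 0.05, 0.40],
                           near=0.01, far=10.0)
```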
Delving into Invisible Semantics for Generalized One-shot Neural Human Rendering.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-22 DOI: 10.1109/TVCG.2025.3563229
Yihong Lin, Xuemiao Xu, Huaidong Zhang, Cheng Xu, Weijie Li, Yi Xie, Jing Qin, Shengfeng He
Abstract: Traditional human neural radiance fields often overlook crucial body semantics, resulting in ambiguous reconstructions, particularly in occluded regions. To address this problem, we propose the Super-Semantic Disentangled Neural Renderer (SSD-NeRF), which employs rich regional semantic priors to enhance human rendering accuracy. The approach begins with a Visible-Invisible Semantic Propagation module, ensuring coherent semantic assignment to occluded parts based on visible body segments. Furthermore, a Region-Wise Texture Propagation module independently extends textures from visible to occluded areas within semantic regions, thereby avoiding irrelevant texture mixtures and preserving semantic consistency. Additionally, a view-aware curricular learning approach is integrated to bolster the model's robustness and output quality across different viewpoints. Extensive evaluations confirm that SSD-NeRF surpasses leading methods, particularly in generating high-quality, structurally semantic reconstructions of unseen or occluded views and poses.
Citations: 0
Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-22 DOI: 10.1109/TVCG.2025.3562871
Yuhang Huang, Shilong Zou, Xinwang Liu, Kai Xu
Abstract: This paper introduces a novel latent 3D diffusion model for generating neural voxel fields with precise part-aware structures and high-quality textures. In comparison to existing methods, this approach incorporates two key designs to guarantee high-quality and accurate part-aware generation. On one hand, we introduce a latent 3D diffusion process for neural voxel fields, incorporating part-aware information into the diffusion process and allowing generation at significantly higher resolutions to accurately capture rich textural and geometric details. On the other hand, a part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding accurate part decomposition and producing high-quality rendering results. Importantly, part-aware learning establishes structural relationships to generate texture information for similar regions, thereby facilitating high-quality rendering. We evaluate our approach across eight data classes through extensive experimentation and comparison with state-of-the-art methods. The results demonstrate that the proposed method has superior generative capabilities in part-aware shape generation, outperforming existing state-of-the-art methods. Moreover, we have conducted image- and text-guided shape generation via the conditioned diffusion process, showcasing its potential for multi-modal guided shape generation.
Citations: 0
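The abstract centers on a latent 3D diffusion process over neural voxel fields. As background, the sketch below shows the standard DDPM-style forward-noising step applied to a hypothetical latent voxel grid; the noise schedule, grid shape, and omission of the part-code conditioning are all assumptions rather than the paper's actual design.

```python
import numpy as np

# Standard DDPM-style forward noising applied to a latent voxel grid of shape
# (C, D, H, W). This is generic background, not the paper's exact formulation;
# the part-code conditioning and the decoder into voxel fields are omitted.
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas_cumprod = np.cumprod(1.0 - betas)

def q_sample(z0, t, rng):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(abar_t) * z_0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(z0.shape)
    abar = alphas_cumprod[t]
    return np.sqrt(abar) * z0 + np.sqrt(1.0 - abar) * noise, noise

rng = np.random.default_rng(0)
z0 = rng.standard_normal((8, 16, 16, 16))    # hypothetical latent voxel grid
zt, eps = q_sample(z0, t=500, rng=rng)
# A denoiser eps_theta(z_t, t, part_codes) would be trained to regress `eps`
# with an L2 loss; generation then runs the learned reverse process from noise.
```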
TrustME: A Context-Aware Explainability Model to Promote User Trust in Guidance.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-21 DOI: 10.1109/TVCG.2025.3562929
Maath Musleh, Renata G Raidou, Davide Ceneda
Abstract: Guidance-enhanced approaches are used to support users in making sense of their data and overcoming challenging analytical scenarios. While recent literature underscores the value of guidance, a lack of clear explanations motivating system interventions may still negatively impact guidance effectiveness. Hence, guidance-enhanced VA approaches require meticulous design, demanding contextual adjustments for developing appropriate explanations. Our paper discusses the concept of explainable guidance and how it impacts the user-system relationship, specifically a user's trust in guidance within the VA process. We subsequently propose a model that supports the design of explainability strategies for guidance in VA. The model builds upon the flourishing literature on explainable AI, available guidelines for developing effective guidance in VA systems, and accrued knowledge on user-system trust dynamics. Our model responds to challenges concerning guidance adoption and context effectiveness by fostering trust through appropriately designed explanations. To demonstrate the model's value, we employ it to design explanations within two existing VA scenarios. We also describe a design walk-through with a guidance expert to showcase how our model supports designers in clarifying the rationale behind system interventions and designing explainable guidance.
Citations: 0
Virtualized Point Cloud Rendering.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-21 DOI: 10.1109/TVCG.2025.3562696
Jose A Collado, Alfonso Lopez, Juan M Jurado, J Roberto Jimenez
Abstract: Remote sensing technologies, such as LiDAR, produce billions of points that commonly exceed the storage capacity of the GPU, restricting their processing and rendering. Level-of-detail (LoD) techniques have been widely investigated, but building the LoD structures is also time-consuming. This study proposes a GPU-driven culling system focused on determining the number of points visible in every frame. It can manipulate point clouds of arbitrary size while maintaining a low memory footprint on both the CPU and GPU. Instead of organizing point clouds into hierarchical data structures, they are split into groups of points sorted using Hilbert encoding. This alternative alleviates the occurrence of anomalous groups found in Morton curves. Instead of keeping the entire point cloud in the GPU, points are transferred on demand to ensure real-time capability. Accordingly, our solution can manipulate huge point clouds even on commodity hardware with low memory capacity. Moreover, hole filling is implemented to cover the gaps caused by insufficient density and by our LoD system. Our proposal was evaluated with point clouds of up to 18 billion points, achieving an average of 80 frames per second (FPS) without perceptible quality loss. Relaxing memory constraints further enhances visual quality while maintaining an interactive frame rate. We assessed our method on real-world data, comparing it against three state-of-the-art methods and demonstrating its ability to handle significantly larger point clouds. The code is available at https://github.com/Krixtalx/Nimbus.
Citations: 0
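The abstract describes splitting the cloud into sorted groups of points and culling whole groups against the view each frame, streaming only the visible groups to the GPU. Below is a minimal CPU-side sketch of that group-and-cull idea; the fixed group size is arbitrary, and the sort key is a placeholder where the paper uses Hilbert codes.

```python
import numpy as np

GROUP_SIZE = 4096  # points per group; an arbitrary illustrative value

def build_groups(points, order_key):
    """Sort points by a spatial key and split them into fixed-size groups,
    keeping an axis-aligned bounding box per group. The paper orders points
    along a 3D Hilbert curve; `order_key` here is just a stand-in for it."""
    pts = points[np.argsort(order_key)]
    groups = []
    for i in range(0, len(pts), GROUP_SIZE):
        chunk = pts[i:i + GROUP_SIZE]
        groups.append((chunk, chunk.min(axis=0), chunk.max(axis=0)))
    return groups

def aabb_visible(bmin, bmax, planes):
    """Conservative box-vs-frustum test. Each plane is (normal, offset) with
    dot(normal, x) + offset >= 0 for points inside; a box is rejected only if
    it lies entirely behind some plane."""
    for n, d in planes:
        p = np.where(n >= 0, bmax, bmin)   # corner farthest along the normal
        if np.dot(n, p) + d < 0:
            return False
    return True

def visible_groups(groups, planes):
    """Per-frame culling: only the surviving groups would be uploaded/drawn."""
    return [g for g in groups if aabb_visible(g[1], g[2], planes)]

# Toy usage with a raster-order key standing in for Hilbert codes.
rng = np.random.default_rng(0)
pts = rng.random((100_000, 3))
cells = (pts * 1024).astype(np.int64)
key = cells[:, 0] + 1024 * cells[:, 1] + 1024 * 1024 * cells[:, 2]
groups = build_groups(pts, key)
```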
GaussianHead: High-Fidelity Head Avatars With Learnable Gaussian Derivation
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-17 DOI: 10.1109/TVCG.2025.3561794 (Vol. 31, No. 7, pp. 4141-4154)
Jie Wang; Jiu-Cheng Xie; Xianyan Li; Feng Xu; Chi-Man Pun; Hao Gao
Abstract: Creating lifelike 3D head avatars and generating compelling animations for diverse subjects remain challenging in computer vision. This paper presents GaussianHead, which models the active head based on anisotropic 3D Gaussians. Our method integrates a motion deformation field and a single-resolution tri-plane to capture the head's intricate dynamics and detailed texture. Notably, we introduce a customized derivation scheme for each 3D Gaussian, facilitating the generation of multiple "doppelgangers" through learnable parameters for precise position transformation. This approach enables efficient representation of diverse Gaussian attributes and ensures their precision. Additionally, we propose an inherited derivation strategy for newly added Gaussians to expedite training. Extensive experiments demonstrate GaussianHead's efficacy, achieving high-fidelity visual results with a remarkably compact model size (approximately 12 MB). Our method outperforms state-of-the-art alternatives in tasks such as reconstruction, cross-identity reenactment, and novel view synthesis.
Citations: 0
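GaussianHead builds on anisotropic 3D Gaussians. As a point of reference, the sketch below evaluates one such Gaussian using the scale/rotation covariance parameterization common in 3D Gaussian splatting (Sigma = R S S^T R^T); the paper's learnable derivation scheme, tri-plane features, and rendering pipeline are not shown, and the example numbers are made up.

```python
import numpy as np

def rotation_from_quaternion(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def gaussian_density(x, mean, scale, quat):
    """Unnormalized density of an anisotropic 3D Gaussian whose covariance is
    Sigma = R * diag(scale)^2 * R^T, the parameterization commonly used in
    3D Gaussian splatting. Color, opacity, and the paper's learnable derivation
    scheme are intentionally left out."""
    R = rotation_from_quaternion(np.asarray(quat, dtype=float))
    S = np.diag(np.asarray(scale, dtype=float))
    sigma = R @ S @ S.T @ R.T
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.solve(sigma, d)))

# Evaluate one Gaussian slightly off its center (all numbers are illustrative).
val = gaussian_density([0.01, 0.0, 0.0], mean=[0.0, 0.0, 0.0],
                       scale=[0.02, 0.01, 0.01], quat=[1.0, 0.0, 0.0, 0.0])
```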
Progressive Multi-Plane Images Construction for Light Field Occlusion Removal.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-16 DOI: 10.1109/TVCG.2025.3561374
Shuo Zhang, Song Chang, Zhuoyu Shi, Youfang Lin
Abstract: Light Field (LF) imaging has recently shown great potential for occlusion removal, since objects occluded in some views may be visible in other views. However, existing LF-based methods implicitly model each scene and can only remove objects that have positive disparities in the central view. In this paper, we propose a novel Progressive Multi-Plane Images (MPI) Construction method specifically designed for LF-based occlusion removal. Different from previous MPI construction methods, we progressively construct the MPIs layer by layer, in order from near to far. To accurately model the current layer, the positions of foreground occlusions in the nearer layers are taken as an occlusion prior. Specifically, we propose an Occlusion-Aware Attention Network to generate each layer of the MPIs with reliable information in occluded regions. For each layer, occlusions in the current layer are filtered out so that the background is recovered using only the visible views rather than the occluded views. Then, by simply removing the layers containing occlusions and rendering the MPIs at various viewpoints, occlusion removal results for different views are generated. Experiments on synthetic and real-world scenes show that our method outperforms state-of-the-art LF occlusion removal methods in quantitative and visual comparisons. Moreover, we also apply the proposed progressive MPI construction method to the view synthesis task. The occlusion edges in our synthesized views achieve significantly better quality, which further verifies that our method better models the occluded regions.
Citations: 0
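Multi-plane images are rendered by compositing RGBA layers back to front with the over operator, which is also why dropping the layers that contain an occluder yields an occlusion-free view. The sketch below shows only that generic compositing step, not the paper's Occlusion-Aware Attention Network; the layer ordering convention and toy shapes are assumptions.

```python
import numpy as np

def composite_mpi(layers):
    """Back-to-front 'over' compositing of multi-plane image layers.
    `layers` is a list of (rgb, alpha) tuples ordered far -> near, with rgb of
    shape (H, W, 3) and alpha of shape (H, W) in [0, 1]."""
    out = np.zeros_like(layers[0][0])
    for rgb, alpha in layers:                 # far -> near
        a = alpha[..., None]
        out = rgb * a + out * (1.0 - a)       # nearer layer over what is behind
    return out

def composite_without_occluders(layers, occluder_indices):
    """Occlusion removal as the abstract describes it: drop the layers that
    contain the occluder, then composite the remaining ones."""
    drop = set(occluder_indices)
    return composite_mpi([l for i, l in enumerate(layers) if i not in drop])

# Toy example: an opaque far background and a semi-transparent near occluder.
H, W = 4, 4
far_layer = (np.full((H, W, 3), 0.2), np.ones((H, W)))
near_layer = (np.full((H, W, 3), 0.9), np.full((H, W), 0.5))
with_occluder = composite_mpi([far_layer, near_layer])
occluder_removed = composite_without_occluders([far_layer, near_layer], [1])
```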
Modeling Wireframe Meshes with Discrete Equivalence Classes.
IEEE transactions on visualization and computer graphics Pub Date: 2025-04-16 DOI: 10.1109/TVCG.2025.3561370
Pengyun Qiu, Rulin Chen, Peng Song, Ying He
Abstract: We study the problem of modeling wireframe meshes whose vertices and edges fall into sets of discrete equivalence classes. This problem is motivated by the need to fabricate large wireframe structures at lower cost and faster speed, since both nodes (thickened vertices) and rods (thickened edges) can be mass-produced. Given a 3D shape represented as a wireframe mesh, our goal is to compute a set of template vertices and a set of template edges whose instances can be used to produce a fabricable wireframe mesh that approximates the input shape. To achieve this goal, we propose a computational approach that generates the template vertices and template edges by iteratively clustering and optimizing the mesh vertices and edges. At the clustering stage, we cluster mesh vertices and edges according to their shape and length, respectively. At the optimization stage, we first locally optimize the mesh to reduce the number of clusters of vertices and/or edges, and then globally optimize the mesh to reduce the intra-cluster variance for vertices and edges while facilitating fabricability of the wireframe mesh. We demonstrate that our approach is able to model wireframe meshes with various shapes and topologies, compare it with three state-of-the-art approaches to show its superiority, and validate the fabricability of our results by making three physical prototypes.
Citations: 0
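The clustering stage groups edges by length so that each cluster can be fabricated as a single rod type. A minimal 1-D k-means over edge lengths is sketched below as a stand-in for that stage; the paper's vertex (node) clustering by shape and its fabricability-aware optimization are not reproduced, and the example lengths are synthetic.

```python
import numpy as np

def cluster_edge_lengths(lengths, k, iters=50, seed=0):
    """Plain 1-D k-means over edge lengths. Each resulting center is a candidate
    template rod length and the labels define the discrete equivalence classes.
    This is only an illustrative stand-in for the paper's clustering stage."""
    lengths = np.asarray(lengths, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(lengths, size=k, replace=False)
    labels = np.zeros(len(lengths), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(lengths[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = lengths[labels == j].mean()
    return centers, labels

# Edge lengths that roughly fall into three rod types (synthetic numbers).
rng = np.random.default_rng(1)
lengths = np.concatenate([np.full(10, 1.0), np.full(8, 1.5), np.full(6, 2.2)])
lengths = lengths + rng.normal(0.0, 0.02, lengths.size)
centers, labels = cluster_edge_lengths(lengths, k=3)
```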