Computers & Graphics-UK — Latest Articles

GRPE: High-fidelity 3D Gaussian reconstruction for plant entities
IF 2.5 · CAS Quartile 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-06-20 · DOI: 10.1016/j.cag.2025.104277
Yanhao Ding, Yanyan Li, Xiangyou Li, Zihao Guo, Xiaomeng Li, Zhenbo Li
Abstract: 3D plant models hold significant importance for constructing virtual worlds, yet there is currently a lack of algorithms capable of achieving high-fidelity reconstruction of plant surfaces. We propose a unified architecture that reconstructs high-fidelity 3D surface models and renders realistic plant views, enhancing geometric accuracy during Gaussian densification and mesh extraction from 2D images. The algorithm first employs large vision models for semantic segmentation to extract plant objects from 2D RGB images, generating sparse mappings and camera poses. These images and point clouds are then processed to produce Gaussian ellipsoids and 3D textured models, with smooth regions detected during densification. To ensure precise alignment of the Gaussians with object surfaces, the algorithm incorporates a robust 3D Gaussian splatting method that includes an outlier removal step. Compared to traditional techniques, this approach yields models that are more accurate and exhibit less noise. Experimental results demonstrate that our method outperforms existing plant modeling approaches in terms of PSNR, LPIPS, and SSIM metrics. The high-precision annotated plant dataset and system code are available at https://github.com/DYH200009/GRPE.
Cited by: 0

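The PSNR metric reported in the entry above has a standard closed form, 10 log10(MAX² / MSE). As a minimal pure-Python sketch (illustrative only, not the authors' evaluation code; the pixel values below are made up):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a 4-pixel "image" and a slightly noisy copy.
reference = [100, 120, 140, 160]
rendered = [101, 119, 141, 158]
print(round(psnr(reference, rendered), 2))  # roughly 45.7 dB
```

Higher values indicate a closer match to the reference view; in practice libraries such as scikit-image provide vectorized PSNR and SSIM implementations.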
SciBlend: Advanced data visualization workflows within Blender
Computers & Graphics-UK · Pub Date: 2025-06-19 · DOI: 10.1016/j.cag.2025.104264
José Marín, Tiffany M.G. Baptiste, Cristobal Rodero, Steven E. Williams, Steven A. Niederer, Ignacio García-Fernández
Abstract: Scientific data visualization is essential for analysis, communication, and storytelling in research. While Blender offers a powerful rendering engine and a flexible 3D environment, its steep learning curve and general-purpose interface can hinder scientific workflows. To address this gap, we present SciBlend, a Python-based toolkit that extends Blender for data visualization. It provides specialized add-ons for importing multiple computational data formats, annotation, shading, and scene composition, enabling both photorealistic (Cycles) and real-time (EEVEE) rendering of large-scale and time-varying data. By combining a streamlined workflow with physically based rendering, SciBlend supports advanced visualization tasks while preserving essential scientific attributes. Comparative evaluations across multiple case studies show improvements in rendering performance, clarity, and reproducibility relative to traditional tools. This modular, user-oriented design offers a robust solution for creating publication-ready visuals of complex computational data.
Cited by: 0

Detecting anomalies in dense 3D crowds
Computers & Graphics-UK · Pub Date: 2025-06-16 · DOI: 10.1016/j.cag.2025.104267
Melania Prieto-Martín, Marc Comino-Trinidad, Dan Casas
Abstract: Estimating the behavior of dense 3D crowds is crucial for applications in security, surveillance, and planning. Detecting events in such crowds from a single video, the most common scenario, is challenging due to ambiguities, occlusions, and complex human behavior. To address this, we propose a method that overlays pixel-based labels on video data to highlight anomalies in dense 3D crowd movement. Our key contribution is a data-driven, image-based model trained on features derived from 3D virtual crowd animations of articulated characters that mimic real crowds at a micro level. By using training data based on captured dense crowd trajectories and realistic 3D motions, we can analyze and detect anomalies in complex real-world scenarios. Additionally, while acquiring ground-truth data from diverse viewpoints is difficult in real-world settings, our virtual simulator allows rendering scenes from multiple perspectives, enabling the training of models robust to viewpoint variations. We demonstrate qualitatively and quantitatively that our method can detect anomalies in much denser crowds than existing methods.
Cited by: 0

On-site single image SVBRDF reconstruction with active planar lighting
Computers & Graphics-UK · Pub Date: 2025-06-13 · DOI: 10.1016/j.cag.2025.104268
Lianghao Zhang, Ruya Sun, Li Wang, Fangzhou Gao, Zixuan Wang, Jiawan Zhang
Abstract: Recovering the spatially-varying bidirectional reflectance distribution function (SVBRDF) from a single image in uncontrolled environments is challenging yet essential for various applications. In this paper, we address this highly ill-posed problem using a convenient capture setup and a carefully designed reconstruction framework. Our proposed setup, which incorporates an active extended light source and a mirror hemisphere, is easy to implement even for non-expert users and requires no careful calibration. These devices can simultaneously capture uncontrolled lighting, real active lighting patterns, and material appearance in a single image. Based on all captured information, we solve the reconstruction problem by designing lighting clues that are semantically aligned with the input image to aid the network in understanding the captured lighting. We further embed lighting clue generation into the network's forward pass by introducing real-time rendering. This allows the network to render accurate lighting clues based on predicted normal variations while jointly learning to reconstruct high-quality SVBRDF. Moreover, we use the captured lighting patterns to model the noise of pattern display in real scenes, which significantly increases the robustness of our method on real data. With these innovations, our method demonstrates clear improvements over previous approaches on both synthetic and real-world data.
Cited by: 0

VectorMamba: Enhancing point cloud analysis through vector representations and state space modeling
Computers & Graphics-UK · Pub Date: 2025-06-11 · DOI: 10.1016/j.cag.2025.104255
Zhicheng Wen
Abstract: Point cloud data, despite its widespread adoption, poses significant challenges due to its sparsity and irregularity. Existing methods excel in capturing complex point cloud structures but struggle with local feature extraction and global modeling. To address these issues, we introduce VectorMamba, a novel 3D point cloud analysis network. VectorMamba employs a Vector-oriented Set Abstraction (VSA) method that integrates scalar, rotation, and scaling information into vector representations, enhancing local feature representation. Additionally, the Flash Residual MLP (FlaResMLP) module improves generalization and efficiency by leveraging anisotropic functions and explicit positional embeddings. To address global modeling challenges, we propose the PosMamba Block, a state-space-based module that incorporates positional encoding to preserve spatial information and mitigate the loss of geometric context in deeper layers. Experimental results on the ModelNet40 classification dataset, the ShapeNetPart part segmentation dataset, and the S3DIS semantic segmentation dataset demonstrate that VectorMamba outperforms baseline methods and achieves competitive performance compared to other approaches. The code and dataset are openly available at github.com/Shadow581/VectorMamba.
Cited by: 0

Real-time dynamic 3D geological visualization based on Octree-TEN
Computers & Graphics-UK · Pub Date: 2025-06-10 · DOI: 10.1016/j.cag.2025.104259
Yu Han, Weiduo Xu, Pingan Liu, Xinyu Xu, Xi Duan, Bo Qin, Xinjie Wang
Abstract: 3D geological visualization offers extensive support for geographic information systems, and grid space division is one of its fundamental techniques. However, to our knowledge, existing division structures give little consideration to the construction or simulation of physical processes; meanwhile, most physical systems focus only on the calculation of physical fields and fail to build indexes or control voxel units. To tackle these challenges, we propose a novel real-time dynamic 3D geological visualization method based on the Octree and Tetrahedral Network (Octree-TEN). Our method combines Octree-TEN with Position-based Dynamics (PBD) to achieve voxel-controllable PBD physical field calculations, making it suitable for data-driven visualization of fractured-grid physical field calculations such as landslide simulation. Furthermore, in the data pre-processing phase, i.e., generating voxelized grids from raw data, we employ an enhanced Delaunay triangulation method to improve efficiency. To build a practical visualization system, we optimize load balancing at both the engine rendering stage and the Delaunay simplification stage. In our experiments, we dynamically visualize geological information containing nearly 2 million voxels, reaching 118.5 FPS on an NVIDIA GeForce 3060 GPU, which indicates that the proposed method is both effective and feasible. Moreover, our method has potential applications in other fields, such as geological disaster prediction, mineral resource exploration, and popular science education.
Cited by: 0

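The octree space division at the core of Octree-TEN can be illustrated with a minimal toy sketch (an illustration only, not the paper's implementation, which couples the octree with tetrahedral networks and PBD): a leaf node splits into eight octants once it holds more than a fixed number of points.

```python
class OctreeNode:
    """Minimal octree over an axis-aligned cube: a leaf stores points
    until it exceeds `capacity`, then subdivides into 8 child octants."""

    def __init__(self, center, half, capacity=4):
        self.center, self.half, self.capacity = center, half, capacity
        self.points = []      # points held while this node is a leaf
        self.children = None  # list of 8 OctreeNode after subdivision

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._subdivide()
            return
        self._child_for(p).insert(p)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half / 2.0
        # Children ordered so that index = 4*(x high) + 2*(y high) + (z high).
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h, self.capacity)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        for q in self.points:  # push stored points down into the octants
            self._child_for(q).insert(q)
        self.points = []

    def _child_for(self, p):
        idx = (4 * (p[0] > self.center[0])
               + 2 * (p[1] > self.center[1])
               + (p[2] > self.center[2]))
        return self.children[idx]
```

A real geological grid would attach tetrahedral cells and physical-field state to each node; here the point payload merely demonstrates the adaptive subdivision.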
Mural image restoration with spatial geometric perception and progressive context refinement
Computers & Graphics-UK · Pub Date: 2025-06-09 · DOI: 10.1016/j.cag.2025.104266
Yumeng Zhou, Min Guo, Miao Ma
Abstract: Ancient murals, as invaluable cultural heritage, have long been a focal point and significant challenge in the field of cultural heritage preservation. Traditional restoration methods typically address texture and structural features separately, leading to inconsistencies between local details and the overall structure; this is insufficient for the complex demands of texture and structural restoration of ancient murals. To address this issue, this paper proposes a collaborative encoder–decoder architecture (MIR-SGPR) that restores texture and structural features of ancient mural images simultaneously. The generator extracts shallow texture features and deep structural features through the encoder and, in conjunction with the Spatial Geometric Awareness (SGA) module, achieves precise modeling of the spatial location and directional information of damaged areas. To resolve the imbalance between local details and global semantics, we introduce the Progressive Contextual Refinement (PCR) network, which progressively optimizes multi-scale features and effectively integrates texture and structural information, enhancing the collaborative modeling of local details and global structure. Furthermore, we propose the Mask Reverse-Focus mechanism (MRF), which leverages mask information to eliminate feature interference from undamaged areas, significantly improving the efficiency and accuracy of restoration. Finally, the generated images are optimized through global and local discriminators. Experimental results demonstrate that this method significantly outperforms existing state-of-the-art approaches across multiple evaluation metrics. The restored images exhibit superior visual consistency, detail authenticity, and overall structural recovery, providing an efficient and reliable solution for the digital preservation of ancient murals.
Cited by: 0

Special issue editorial: Recent advances in spatial user interaction
Computers & Graphics-UK · Pub Date: 2025-06-06 · DOI: 10.1016/j.cag.2025.104265
Hai-Ning Liang, Lingyun Yu, Weidong Huang, Ferran Argelaguet, Pedro Lopes, Mayra Barrera Machuca
(Editorial; no abstract.)
Cited by: 0

Exploration of interactive nuclide chart visualisations in virtual reality for physics education
Computers & Graphics-UK · Pub Date: 2025-06-05 · DOI: 10.1016/j.cag.2025.104258
Janine Zöllner, Bernhard Preim, Jan-Willem Vahlbruch, Vivien Pottgießer, Patrick Saalfeld
Abstract: Immersive virtual reality (VR) is used for various types of learning content. One fundamental but challenging part of VR applications is the design of suitable interaction techniques. In this work, we use the example of interactive nuclide charts to investigate interaction techniques in VR. For this purpose, we implemented four variants of visualising an interactive nuclide chart for decay rows in a VR environment: the floor-freehand variant offers the possibility to move freely on the chart, the floor-controller variant enables teleportation to nuclides using the controller, the wall-freehand variant uses hand gestures to select nuclides, and the wall-controller variant uses the controller to select nuclides on the wall. Our user study with 24 participants indicated that the wall-controller variant was favoured in terms of usability and user experience.
Cited by: 0

SR-CurvANN: Advancing 3D surface reconstruction through curvature-aware neural networks
Computers & Graphics-UK · Pub Date: 2025-06-04 · DOI: 10.1016/j.cag.2025.104260
Marina Hernández-Bautista, Francisco J. Melero
Abstract: Incomplete or missing data in three-dimensional (3D) models can lead to erroneous or flawed renderings, limiting their usefulness in applications such as visualization, geometric computation, and 3D printing. Conventional surface-repair techniques often fail to infer complex geometric details in missing areas, whereas neural networks successfully address hole-filling in 2D images through inpainting. Combining surface reconstruction algorithms guided by the model's curvature properties with the generative capacity of neural-network inpainting can therefore provide realistic results in the hole completion task. In this paper, we propose SR-CurvANN (Surface Reconstruction Based on Curvature-Aware Neural Networks), a novel method that incorporates neural network-based 2D inpainting to effectively reconstruct 3D surfaces. We train the neural networks on images that represent planar projections of the curvature at the vertices of hundreds of 3D models. Once the missing areas have been inferred, a coarse-to-fine surface deformation process ensures that the surface fits the reconstructed curvature image. Our proposal makes it possible to learn and generalize patterns from a wide variety of training 3D models, generating comprehensive inpainted curvature images and surfaces. Experiments conducted on 959 models with several holes demonstrate that SR-CurvANN excels in the shape completion process, filling holes with a remarkable level of realism and precision.
Cited by: 0