Computers & Graphics-Uk: Latest Articles

Design and evaluation of a virtual rehabilitation system integrating intangible cultural heritage: A case study of Baduanjin-based somatosensory training
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-21 DOI: 10.1016/j.cag.2025.104378
Yifan Zhang , Lanqi Xu , Xu Lang , Jianing Liu , Pengfei Leng , Xuanchi Gong , Xiao Song , Zhigeng Pan
{"title":"Design and evaluation of a virtual rehabilitation system integrating intangible cultural heritage: A case study of Baduanjin-based somatosensory training","authors":"Yifan Zhang ,&nbsp;Lanqi Xu ,&nbsp;Xu Lang ,&nbsp;Jianing Liu ,&nbsp;Pengfei Leng ,&nbsp;Xuanchi Gong ,&nbsp;Xiao Song ,&nbsp;Zhigeng Pan","doi":"10.1016/j.cag.2025.104378","DOIUrl":"10.1016/j.cag.2025.104378","url":null,"abstract":"<div><div>Motor coordination disorders significantly impair patients’ ability to perform daily activities. Traditional rehabilitation approaches often suffer from low adherence due to their monotony. This study presents The Wonderful Journey Based on Baduanjin (TWJB), a gamified virtual rehabilitation system that integrates somatosensory interaction with the traditional Chinese exercise Baduanjin. The system employs embodied narrative scenarios and avatar-guided mechanisms to facilitate culturally embedded rehabilitation training. By incorporating real-time motion capture, dynamic feedback, and interactive knowledge cards, the system enhances user immersion and rehabilitation motivation. A controlled experiment was conducted to evaluate its effectiveness. Results indicate that TWJB outperforms conventional rehabilitation methods in terms of user engagement, cultural knowledge acquisition, and functional recovery. These findings underscore the potential of combining traditional culture with intelligent interaction technologies in modern rehabilitation medicine.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104378"},"PeriodicalIF":2.8,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144908411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Analyzing singular patterns in discrete planar vector fields via persistent path homology
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-21 DOI: 10.1016/j.cag.2025.104354
Yu Chen, Hongwei Lin
{"title":"Analyzing singular patterns in discrete planar vector fields via persistent path homology","authors":"Yu Chen,&nbsp;Hongwei Lin","doi":"10.1016/j.cag.2025.104354","DOIUrl":"10.1016/j.cag.2025.104354","url":null,"abstract":"<div><div>Analyzing singular patterns in vector fields is a fundamental problem in theoretical and practical domains due to the ability of such patterns to detect the intrinsic characteristics of vector fields. In this study, we propose an approach for analyzing singular patterns from discrete planar vector fields. Our method involves converting the planar discrete vector field into a specialized digraph and computing its one-dimensional persistent path homology. By analyzing the persistence diagram, we can determine the location of singularities, and the variations of singular patterns can also be analyzed. The experimental results demonstrate the effectiveness of our method in analyzing the singular patterns of noisy real-world vector fields and measuring the variations between different vector fields.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104354"},"PeriodicalIF":2.8,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
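The abstract's first step, turning a sampled planar vector field into a digraph, can be illustrated with a small sketch. The rule used below (direct a grid edge from a node to a neighbour whenever the node's vector points toward that neighbour) and the function name `vector_field_to_digraph` are illustrative assumptions rather than the paper's exact construction; the subsequent persistent path homology computation requires a dedicated topology library and is omitted here.

```python
import numpy as np

def vector_field_to_digraph(vx, vy, dot_threshold=0.0):
    """Build a digraph from a vector field sampled on an H x W grid.

    An edge (p -> q) between 4-neighbours p and q is added when the vector
    at p points towards q (positive dot product with the offset p -> q).
    Illustrative construction, not the paper's exact one.
    """
    h, w = vx.shape
    edges = []
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # 4-neighbourhood
    for i in range(h):
        for j in range(w):
            v = np.array([vx[i, j], vy[i, j]])
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    direction = np.array([dj, di])  # x follows columns, y follows rows
                    if np.dot(v, direction) > dot_threshold:
                        edges.append(((i, j), (ni, nj)))
    return edges

# Toy example: a counter-clockwise vortex (a single singularity) on a 5 x 5 grid.
ys, xs = np.mgrid[-2:3, -2:3]
edges = vector_field_to_digraph(-ys.astype(float), xs.astype(float))
print(len(edges), "directed edges")
```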
Learning-based geometric framework for noise discernment and denoising of 2D point sets
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-21 DOI: 10.1016/j.cag.2025.104327
Minu Reghunath , Keerthiharan Ananth , Joms Antony , Keerthana Muralidharan , Ramanathan Muthuganapathy
{"title":"Learning-based geometric framework for noise discernment and denoising of 2D point sets","authors":"Minu Reghunath ,&nbsp;Keerthiharan Ananth ,&nbsp;Joms Antony ,&nbsp;Keerthana Muralidharan ,&nbsp;Ramanathan Muthuganapathy","doi":"10.1016/j.cag.2025.104327","DOIUrl":"10.1016/j.cag.2025.104327","url":null,"abstract":"<div><div>Denoising, the recovery of ground-truth point sets from noisy inputs, is essential for real-world applications like sketch-to-vector conversion, sketch skeletonization/thinning, curve reconstruction from images, and scanned point clouds, etc. In practice, the nature of noise in these applications varies significantly. However, the widely used noise modeling for 2D pointsets falls under two categories, viz. (a) as a perturbed one, (b) as an offset-based approach. Hence, a single denoising method is often inadequate. In this paper, we propose a novel approach based on Delaunay triangulation (DT) for discernment as well as denoising of a point set. Using a developed dataset, a learning-based framework that derives features from DT is proposed for the discernment of a point set. Further, we propose different denoising approaches based on the classification. Experimental results show that our method effectively classifies and denoises diverse point sets, including real data, outperforming state-of-the-art techniques.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104327"},"PeriodicalIF":2.8,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144908413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
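The core idea, deriving classification features from the Delaunay triangulation of a point set, can be sketched with scipy. The edge-length and triangle-area statistics below, and the function name `delaunay_features`, are placeholder choices for illustration; the paper's actual feature set and classifier are not specified here.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_features(points):
    """Summarise a 2D point set by statistics of its Delaunay triangulation.

    These statistics are illustrative stand-ins for DT-derived features used
    to discern the noise type (perturbation-like vs. offset-like).
    """
    tri = Delaunay(points)
    edge_lengths, areas = [], []
    for simplex in tri.simplices:              # each simplex is a triangle (3 indices)
        p = points[simplex]
        for a, b in [(0, 1), (1, 2), (2, 0)]:
            edge_lengths.append(np.linalg.norm(p[a] - p[b]))
        e1, e2 = p[1] - p[0], p[2] - p[0]
        areas.append(0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0]))  # triangle area
    edge_lengths, areas = np.array(edge_lengths), np.array(areas)
    return np.array([edge_lengths.mean(), edge_lengths.std(),
                     areas.mean(), areas.std()])

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
noisy_circle = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
print(delaunay_features(noisy_circle))
```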
Multi-scale cascaded network with high-low frequency for low-light image enhancement
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-21 DOI: 10.1016/j.cag.2025.104380
Jianxing Wu , Teng Ran , Wendong Xiao , Liang Yuan , Qing Tao
{"title":"Multi-scale cascaded network with high-low frequency for low-light image enhancement","authors":"Jianxing Wu ,&nbsp;Teng Ran ,&nbsp;Wendong Xiao ,&nbsp;Liang Yuan ,&nbsp;Qing Tao","doi":"10.1016/j.cag.2025.104380","DOIUrl":"10.1016/j.cag.2025.104380","url":null,"abstract":"<div><div>Low-light images affect human visual perception and computer vision downstream tasks because of low illumination, blurred details, and severe noise. Most existing methods optimize the illumination prior and reflectance of the image to accomplish low-light image enhancement. However, in these methods, the acquired illumination features cannot be effectively restored, and the spatial structure cannot be adequately rendered. To address the above issues, this paper proposes a high and low-frequency enhanced low-light image enhancement framework based on a cascaded UNet. To obtain high-quality illumination features, we design a UNet architecture to capture both local and global semantic priors, which are then used to illuminate low-light images. The second UNet module extracts local details and fine spatial structures to repair degraded image information using illumination-guided restoration with high and low-frequency enhancements. At the second UNet skip connections, we quote the channel reduction attention mechanism to enhance the interaction of feature channel information. Experiments on public datasets show that the proposed method achieves superior enhancement performance.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104380"},"PeriodicalIF":2.8,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144908414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
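The high/low-frequency split that the framework builds on can be illustrated with a generic decomposition: a Gaussian low-pass branch plus the residual as the high-frequency branch. This is an assumption for illustration only; the paper's cascaded UNet learns its own decomposition, and `split_frequencies` is a hypothetical name.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image, sigma=3.0):
    """Split an H x W x C image into low- and high-frequency components.

    A Gaussian blur serves as the low-frequency branch and the residual as
    the high-frequency branch; a generic decomposition used to illustrate
    the idea, not the network's learned one.
    """
    low = gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))
    high = image.astype(np.float32) - low
    return low, high

img = np.random.rand(64, 64, 3).astype(np.float32)   # stand-in for a low-light frame
low, high = split_frequencies(img)
print(low.shape, high.shape, float(np.abs(high).mean()))
```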
MonoPartNeRF: Human reconstruction from monocular video via part-based neural radiance fields
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-20 DOI: 10.1016/j.cag.2025.104385
Yao Lu , Jiawei Li , Ming Jiang
{"title":"MonoPartNeRF: Human reconstruction from monocular video via part-based neural radiance fields","authors":"Yao Lu ,&nbsp;Jiawei Li ,&nbsp;Ming Jiang","doi":"10.1016/j.cag.2025.104385","DOIUrl":"10.1016/j.cag.2025.104385","url":null,"abstract":"<div><div>In recent years, Neural Radiance Fields (NeRF) have achieved remarkable progress in dynamic human reconstruction and rendering. Part-based rendering paradigms, guided by human segmentation, allow for flexible parameter allocation based on structural complexity, thereby enhancing representational efficiency. However, existing methods still struggle with complex pose variations, often producing unnatural transitions at part boundaries and failing to reconstruct occluded regions accurately in monocular settings. We propose MonoPartNeRF, a novel framework for monocular dynamic human rendering that ensures smooth transitions and robust occlusion recovery. First, we build a bidirectional deformation model that combines rigid and non-rigid transformations to establish a continuous, reversible mapping between observation and canonical spaces. Sampling points are projected into a parameterized surface-time space (u, v, t) to better capture non-rigid motion. A consistency loss further suppresses deformation-induced artifacts and discontinuities. We introduce a part-based pose embedding mechanism that decomposes global pose vectors into local joint embeddings based on body regions. This is combined with keyframe pose retrieval and interpolation, along three orthogonal directions, to guide pose-aware feature sampling. A learnable appearance code is integrated via attention to model dynamic texture changes effectively. Experiments on the ZJU-MoCap and MonoCap datasets demonstrate that our method significantly outperforms prior approaches under complex pose and occlusion conditions, achieving superior joint alignment, texture fidelity, and structural continuity.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104385"},"PeriodicalIF":2.8,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144889478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
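The part-based pose embedding described above starts from splitting a global pose vector into per-region joint groups. A minimal sketch of that split for a 24-joint axis-angle pose (SMPL-style) follows; the `PART_JOINTS` grouping and the function name are hypothetical, since the paper's actual partition and embedding networks are not given in the abstract.

```python
import numpy as np

# Hypothetical grouping of 24 SMPL-style joint indices into body regions.
PART_JOINTS = {
    "torso": [0, 3, 6, 9, 12, 13, 14, 15],
    "left_arm": [16, 18, 20, 22],
    "right_arm": [17, 19, 21, 23],
    "left_leg": [1, 4, 7, 10],
    "right_leg": [2, 5, 8, 11],
}

def split_pose_by_part(global_pose):
    """Split a 72-dim (24 joints x 3 axis-angle) pose into per-part vectors."""
    pose = np.asarray(global_pose, dtype=np.float32).reshape(24, 3)
    return {part: pose[idx].ravel() for part, idx in PART_JOINTS.items()}

local = split_pose_by_part(np.zeros(72))
print({k: v.shape for k, v in local.items()})
```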
CDR-CARNet: Baggage re-identification based on cross-domain robust features and camera-aware re-ranking
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-19 DOI: 10.1016/j.cag.2025.104377
Yinghong Liu , Hongying Zhang , Xi Yang , Sijia Zhao , Jinhong Zhang
{"title":"CDR-CARNet: Baggage re-identification based on cross-domain robust features and camera-aware re-ranking","authors":"Yinghong Liu ,&nbsp;Hongying Zhang ,&nbsp;Xi Yang ,&nbsp;Sijia Zhao ,&nbsp;Jinhong Zhang","doi":"10.1016/j.cag.2025.104377","DOIUrl":"10.1016/j.cag.2025.104377","url":null,"abstract":"<div><div>To address the challenges of cross-domain distribution inconsistency, large intra-class appearance and viewpoint variations in airport baggage re-identification, this paper proposes CDR-CARNet, which integrates cross-domain robust feature learning, dynamic hard sample mining, and camera-aware re-ranking. Firstly, the integration of Instance-Batch Normalization and Global Context attention mechanisms is employed to alleviate inter-domain shifts. Secondly, Margin Sample Mining Loss is adopted to dynamically select the hardest positive and negative sample pairs, thereby optimizing the decision boundary between samples. Finally, the CA-Jaccard re-ranking strategy is introduced to suppress cross-camera noise interference. Experiments conducted on the MVB dataset demonstrate that CDR-CARNet achieves 87.0% mAP, 86.1% Rank-1, and 84.6% mINP, representing improvements of 4.6%, 4.5%, and 5.9% over the AGW baseline, respectively. The method also significantly outperforms existing mainstream approaches, verifying its practicality and robustness for cross-camera baggage matching in complex airport scenarios.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104377"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
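The hard sample mining step can be illustrated with a generic batch-hard formulation: for each anchor, take the farthest same-ID sample and the closest other-ID sample and apply a hinge with a fixed margin. This is a stand-in for Margin Sample Mining Loss, whose exact definition may differ; the function name and the margin value are assumptions.

```python
import numpy as np

def margin_sample_mining_loss(features, labels, margin=0.3):
    """Batch-hard hinge loss over the hardest positive/negative pairs.

    Generic batch-hard formulation used to illustrate dynamic hard sample
    mining; not necessarily the paper's exact Margin Sample Mining Loss.
    """
    f = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)  # pairwise distances
    same = labels[:, None] == labels[None, :]
    idx = np.arange(len(f))
    losses = []
    for i in range(len(f)):
        pos = dist[i][same[i] & (idx != i)]   # same identity, excluding the anchor itself
        neg = dist[i][~same[i]]               # all other identities
        if len(pos) == 0 or len(neg) == 0:
            continue
        losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses)) if losses else 0.0

feats = np.random.randn(8, 16)
ids = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(margin_sample_mining_loss(feats, ids))
```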
Canonical pose reconstruction from single depth image for 3D non-rigid pose recovery on limited datasets
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-19 DOI: 10.1016/j.cag.2025.104370
Fahd Alhamazani , Paul L. Rosin , Yu-Kun Lai
{"title":"Canonical pose reconstruction from single depth image for 3D non-rigid pose recovery on limited datasets","authors":"Fahd Alhamazani ,&nbsp;Paul L. Rosin ,&nbsp;Yu-Kun Lai","doi":"10.1016/j.cag.2025.104370","DOIUrl":"10.1016/j.cag.2025.104370","url":null,"abstract":"<div><div>3D reconstruction from 2D inputs, especially for non-rigid objects like humans, presents unique challenges due to the significant range of possible deformations. Traditional methods often struggle with non-rigid shapes, which require extensive training data to cover the entire deformation space. This study addresses these limitations by proposing a canonical pose reconstruction model that transforms single-view depth images of deformable shapes into a canonical form. This alignment facilitates shape reconstruction by enabling the application of rigid object reconstruction techniques, and supports recovering the input pose in voxel representation as part of the reconstruction task, utilising both the original and deformed depth images. Notably, our model achieves effective results with using a small dataset with 300 samples in total, containing variations in shape (obese, slim and fit bodies) and gender (female and male) and size (child and adult). Experimental results on animal and human datasets demonstrate that our model outperforms other state-of-the-art methods.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104370"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144903335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Real-time voxelized mesh fracture with Gram–Schmidt constraints
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-19 DOI: 10.1016/j.cag.2025.104382
Tim McGraw, Xinyi Zhou
{"title":"Real-time voxelized mesh fracture with Gram–Schmidt constraints","authors":"Tim McGraw,&nbsp;Xinyi Zhou","doi":"10.1016/j.cag.2025.104382","DOIUrl":"10.1016/j.cag.2025.104382","url":null,"abstract":"<div><div>Much previous research about fracture of deformable bodies has focused on physical principles (e.g. energy and mass conservation), leading to simulation methods that are very realistic, but not yet applicable in real-time. We present a stylized animation method for destruction of soft bodies that is visually plausible and capable of running at hundreds of frames per second by sacrificing visual realism and physical accuracy. Our method uses a new volume-preserving voxel constraint based on Gram–Schmidt orthonormalization which, when used in tandem with a breakable face-to-face voxel constraint, allows us to animate destructible models. We also describe optional LOD constraints which speed convergence and increase apparent stiffness of the models. The creation pipeline and constraints presented here are designed to minimize the number of partitions needed for parallel Gauss–Seidel iterations. We compare the proposed techniques with shape constraints and the state-of-the-art material point method on the basis of memory usage, computation time and visual results.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104382"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
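The volume-preserving idea behind a Gram–Schmidt voxel constraint can be sketched directly: orthonormalizing a voxel's three deformed edge vectors yields a pure rotation (determinant of magnitude 1), so rescaling by the rest edge length gives a volume-preserving target configuration. The sketch below shows only the orthonormalization itself, as an assumption about how the constraint projection looks; the paper's breakable face-to-face constraints and solver integration are not reproduced.

```python
import numpy as np

def gram_schmidt_frame(e1, e2, e3):
    """Orthonormalise three voxel edge vectors with Gram-Schmidt.

    The result is an orthonormal frame (a rotation up to handedness), so its
    volume is exactly 1; rescaling by the rest edge length then gives a
    volume-preserving target for the voxel. Illustrative sketch only.
    """
    u1 = e1 / np.linalg.norm(e1)
    u2 = e2 - np.dot(e2, u1) * u1
    u2 /= np.linalg.norm(u2)
    u3 = e3 - np.dot(e3, u1) * u1 - np.dot(e3, u2) * u2
    u3 /= np.linalg.norm(u3)
    return np.stack([u1, u2, u3], axis=1)   # columns are the frame axes

# A sheared voxel frame is projected back to an orthonormal (unit-volume) one.
deformed = np.array([[1.0, 0.4, 0.0],
                     [0.0, 1.2, 0.1],
                     [0.0, 0.0, 0.8]])
frame = gram_schmidt_frame(deformed[:, 0], deformed[:, 1], deformed[:, 2])
print(abs(np.linalg.det(frame)))   # ~1.0, i.e. volume preserved
```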
Semantics and perceptual coding for cloud-edge collaborative game streaming
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-19 DOI: 10.1016/j.cag.2025.104367
An Kang , KeXun Pu , ZeJun Lyu , Yanci Zhang
{"title":"Semantics and perceptual coding for cloud-edge collaborative game streaming","authors":"An Kang ,&nbsp;KeXun Pu ,&nbsp;ZeJun Lyu ,&nbsp;Yanci Zhang","doi":"10.1016/j.cag.2025.104367","DOIUrl":"10.1016/j.cag.2025.104367","url":null,"abstract":"<div><div>In cloud–edge collaborative game streaming, maintaining high perceptual quality under constrained bandwidth is challenging. To address this, we propose a perceptually-driven enhancement coding framework that enhances visual quality without increasing bitrate. Specifically, the enhancement represents the pixel-wise difference between full-quality and base-quality frames generated on the cloud server, which is transmitted to the client to refine the locally rendered baseline visuals. Our method introduces three key strategies: game-semantics-aware QP Control to dynamically guide QP distribution based on gameplay context, saliency and JND-guided QP allocation, and Non-ROI high-frequency suppression using variable rate shading with perceptually guided filtering to reduce irrelevant visual details before enhancement computation.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104367"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
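The saliency-guided QP allocation can be illustrated with a minimal sketch: salient blocks get a lower QP (finer quantization), non-salient blocks a higher QP. The block size, QP range, and function name below are assumptions; the paper's game-semantics and JND models are not reproduced.

```python
import numpy as np

def saliency_to_block_qp(saliency, base_qp=32, max_delta=6, block=16):
    """Map a per-pixel saliency map in [0, 1] to per-block QP values.

    Blocks with high mean saliency receive a lower QP (finer quantisation)
    and low-saliency blocks a higher QP. A simple stand-in for the paper's
    semantics- and JND-guided QP control.
    """
    h, w = saliency.shape
    qp = np.full((h // block, w // block), base_qp, dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            s = saliency[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block].mean()
            qp[by, bx] = base_qp - int(round((2.0 * s - 1.0) * max_delta))
    return qp

sal = np.random.rand(64, 64)          # stand-in saliency map in [0, 1]
print(saliency_to_block_qp(sal))
```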
SAM2Med3D: Leveraging video foundation models for 3D breast MRI segmentation
IF 2.8, CAS Tier 4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-08-19 DOI: 10.1016/j.cag.2025.104341
Ying Chen , Wenjing Cui , Xiaoyan Dong , Shuai Zhou , Zhongqiu Wang
{"title":"SAM2Med3D: Leveraging video foundation models for 3D breast MRI segmentation","authors":"Ying Chen ,&nbsp;Wenjing Cui ,&nbsp;Xiaoyan Dong ,&nbsp;Shuai Zhou ,&nbsp;Zhongqiu Wang","doi":"10.1016/j.cag.2025.104341","DOIUrl":"10.1016/j.cag.2025.104341","url":null,"abstract":"<div><div>Foundation models such as the Segment Anything Model 2 (SAM2) have demonstrated impressive generalization across natural image domains. However, their potential in volumetric medical imaging remains largely underexplored, particularly under limited data conditions. In this paper, we present SAM2Med3D, a novel multi-stage framework that adapts a general-purpose video foundation model for accurate and consistent 3D breast MRI segmentation by treating 3D MRI scan as a sequence of images. Unlike existing image-based approaches (e.g., MedSAM) that require large-scale medical data for fine-tuning, our method combines a lightweight, task-specific segmentation network with a video foundation model, achieving strong performance with only modest training data. To guide the foundation model effectively, we introduce a novel spatial filtering strategy that identifies reliable slices from the initial segmentation to serve as high-quality prompts. Additionally, we propose a confidence-driven fusion mechanism that adaptively integrates coarse and refined predictions across the volume, mitigating segmentation drift and ensuring both local accuracy and global volumetric consistency. We validate SAM2Med3D on two multi-center breast MRI datasets, including both public and self-collected datasets. Experimental results demonstrate that our method outperforms both task-specific segmentation networks and recent foundation-model-based methods, achieving superior accuracy and inter-slice consistency.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104341"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144865704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
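The two mechanisms the abstract names, prompt-slice selection and confidence-driven fusion, can be sketched with simple stand-in rules: score each slice of the coarse prediction by its mean foreground confidence, keep the top-k as prompts, and blend coarse and refined predictions per slice with a confidence weight. Both rules, and the function names, are hypothetical illustrations, not the paper's exact criteria.

```python
import numpy as np

def select_prompt_slices(coarse_probs, k=3, min_area=50):
    """Pick the k most reliable slices from a coarse (Z, H, W) probability volume.

    Reliability is scored by mean confidence over confident pixels; slices
    with too little foreground are skipped. Illustrative spatial-filtering rule.
    """
    scores = []
    for z in range(coarse_probs.shape[0]):
        fg = coarse_probs[z] > 0.5
        scores.append(coarse_probs[z][fg].mean() if fg.sum() >= min_area else 0.0)
    return np.argsort(scores)[::-1][:k]

def confidence_fusion(coarse_probs, refined_probs):
    """Blend coarse and refined predictions slice by slice, weighting the
    refined branch by its own mean confidence (hypothetical fusion rule)."""
    w = np.clip(refined_probs.reshape(refined_probs.shape[0], -1).mean(axis=1), 0.0, 1.0)
    return w[:, None, None] * refined_probs + (1.0 - w[:, None, None]) * coarse_probs

vol = np.random.rand(40, 64, 64)       # stand-in coarse probabilities for 40 slices
refined = np.clip(vol + 0.1 * np.random.randn(40, 64, 64), 0, 1)
print(select_prompt_slices(vol), confidence_fusion(vol, refined).shape)
```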