Computer Animation and Virtual Worlds: Latest Publications

An HBIM Framework and Virtual Reconstruction for the Preservation of Confucian Temple Heritage
IF 1.7 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-30 DOI: 10.1002/cav.70068
Jin Liu
Abstract: Current 3D modeling methods for the analysis, documentation, and preservation of cultural heritage sites are tedious, and the resulting models lack realism. There is a pressing need to remove the bottleneck that keeps non-professional users from creating professional-quality models and analyzing architectural layouts. Based on a study of the architectural regulations of Confucian temples, this paper presents an HBIM (Heritage Building Information Modeling) framework for the management and spatial analysis of the historical buildings within Confucian temple compounds. The framework addresses the low degree of automation in 3D modeling and the lack of detail in virtual architectural layout analysis for Confucian temples, and serves as a tool for analyzing architectural spatial form. Case studies of the Confucian temples in Qufu and Beijing show how the approach improves the modeling experience of non-professional users. The results strengthen the digital presentation of the structure and hierarchy of ancient Chinese society and thus provide a starting point for further research on Confucian temples worldwide.
Citations: 0
Yolov8-HAC: Safety Helmet Detection Model for Complex Underground Coal Mine Scene
IF 1.7 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-29 DOI: 10.1002/cav.70051
Rui Liu, Fangbo Lu, Wanchuang Luo, Tianjian Cao, Hailian Xue, Meili Wang
Abstract: Underground coal mine working environments are complex, and detecting whether safety helmets are worn is vital for worker safety. This article proposes YOLOv8-HAC, an improved YOLOv8n safety helmet detection model, to address the coexistence of strong light exposure and low illumination, equipment occlusions that cause partial target loss, and missed detections of small targets due to limited surveillance perspectives in underground coal mines. The model replaces the C2f module in YOLOv8n's backbone network with the proposed HAC-Net to improve feature extraction and detection for motion-blurred targets and low-resolution images, and adds the AGC-Block module for dynamic feature selection, improving detection stability in complex scenes and reducing background interference. A small-target detection layer is also included to raise the long-range detection rate of small safety helmets. Experiments show that the improved model outperforms existing popular object detection algorithms, with a mAP of 94.8% and a recall of 90.4%, demonstrating its effectiveness for helmet detection under complex lighting and with low-resolution imagery.
Citations: 0
A Real-Time Virtual-Real Fusion Rendering Framework in Cloud-Edge Environments
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-21 DOI: 10.1002/cav.70049
Yuxi Zhou, Bowen Gao, Hongxin Zhang, Wei Chen, Xiaoliang Luo, Lvchun Wang
Abstract: This paper introduces a cloud-edge collaborative framework for real-time virtual-real fusion rendering in augmented reality (AR). By integrating Visual Simultaneous Localization and Mapping (VSLAM) with Neural Radiance Fields (NeRF), the proposed method achieves high-fidelity virtual object placement and shadow synthesis in real-world scenes. The cloud server handles computationally intensive tasks, including offline NeRF-based 3D reconstruction and online illumination estimation, while edge devices perform real-time data acquisition, SLAM-based plane detection, and rendering. To enhance realism, the system employs an improved soft shadow generation technique that dynamically adjusts shadow parameters based on light source information. Experiments across diverse indoor environments demonstrate the system's effectiveness, with consistent real-time performance, accurate illumination estimation, and high-quality shadow rendering. The proposed method reduces the computational burden on edge devices, enabling immersive AR experiences on resource-constrained hardware such as mobile and wearable devices.
Citations: 0
A Retrieval-Augmented Generation System for Accurate and Contextual Historical Analysis: AI-Agent for the Annals of the Joseon Dynasty
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-20 DOI: 10.1002/cav.70048
Jeong Ha Lee, Ghazanfar Ali, Jae-In Hwang
Abstract: In this article, we propose an AI-agent that integrates a large language model (LLM) with a retrieval-augmented generation (RAG) system to deliver reliable historical information from the Annals of the Joseon Dynasty, offering both objective facts and contextual analysis and achieving significant performance improvements over existing models. For an AI-agent built on the Annals, clear source citations and systematic analysis are essential. The Annals, an official record spanning 472 years (1392–1897), offer a dense, chronological account of daily events and state administration that shaped Korea's cultural, political, and social foundations. By grounding the LLM's responses in passages retrieved from this extensive dataset, the system provides objective information about historical figures and events from specific periods as well as contextual analysis of the era, helping users gain a broader understanding. Our experiments demonstrate improvements of approximately 23 to 50 points on a 100-point scale compared with the GPT-4o and OpenAI AI-Assistant v2 models.
Citations: 0
Botanical-Based Simulation of Fruit Shape Change During Growth
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-16 DOI: 10.1002/cav.70064
Yixin Xu, Shiguang Liu
Abstract: Fruit growth is an interesting time-lapse process, and simulating it with computer graphics has many applications in film, games, agriculture, and beyond. Although methods exist to model fruit shape, accurately simulating the growth process, including shape change, remains challenging. We propose a botanically based framework to address this problem. Combining the growth pattern function with the exponential growth model from botany, we propose a mesh scaling method that accurately simulates the increase in fruit volume; the relative growth rate (RGR) in the exponential model is computed automatically from the user's input growth pattern function or from real size data. In addition, we model fruit shape change by integrating axial, longitudinal, and latitudinal shape parameters into the RGR function, and adjusting these parameters also allows a variety of defective fruits to be simulated. Inspired by the principle of root curvature, we combine a deformation technique with our volume-increase approach to simulate the bending growth of fruits such as cucumbers. Experiments show that our framework effectively simulates the growth of a wide range of fruits with shape change or bending.
Citations: 0
CoPadSAR: A Spatial Augmented Reality Interaction Approach for Collaborative Design via Pad-Based Cross-Device Interaction
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-16 DOI: 10.1002/cav.70065
Keming Chen, Qihao Yang, Qingshu Yuan, Jin Xu, Zhengwei Yao, Zhigeng Pan
Abstract: Augmented reality (AR) superimposes digital information onto the real world. As one of the three major forms of AR, spatial augmented reality (SAR) projects virtual content into shared physical space, making it visible to all collaborators. With its large shared display area, SAR has significant potential for collaborative design, but existing SAR interaction methods can be inefficient and offer a poor collaborative experience. To address this, we propose CoPadSAR, a Pad-based cross-device interaction method that maps 2D operations on each Pad onto 3D objects in the SAR environment, allowing users to collaborate with multiple Pads. A prototype supports collaborative painting, annotation, and object creation. A comparative study with 40 participants (20 pairs) shows that CoPadSAR yields better group performance than controller-based, gesture, and tangible interactions, with greater usability and a better collaborative experience; interviews further confirm users' preference for it. This study helps expand the application of SAR in collaborative design.
Citations: 0
LGNet: Local-And-Global Feature Adaptive Network for Single Image Two-Hand Reconstruction
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-09 DOI: 10.1002/cav.70021
Haowei Xue, Meili Wang
Abstract: Accurate 3D interacting hand mesh reconstruction from RGB images is crucial for applications such as robotics, augmented reality (AR), and virtual reality (VR); in robotics especially, it can significantly improve the accuracy and naturalness of human-robot interaction. The task requires an accurate understanding of the complex interactions between two hands and a plausible alignment of the hand mesh with the image. Recent Transformer-based methods directly use the features of the two hands as input tokens, ignoring the correlation between local and global features of the interacting hands, which leads to hand ambiguity, self-occlusion, and self-similarity problems. We propose LGNet, a local-and-global feature adaptive network, which separates hand mesh reconstruction into three stages: a joint stage that predicts hand joints, a mesh stage that predicts a rough hand mesh, and a refine stage that fine-tunes mesh-image alignment using an offset mesh. LGNet enables high-quality fingertip-level mesh-image alignment, effectively models the spatial relationship between the two hands, and supports real-time prediction. Quantitative and qualitative evaluations on benchmark datasets show that LGNet surpasses existing methods in mesh and alignment accuracy, and it generalizes robustly to in-the-wild images.
Citations: 0
Chinese Painting Generation With a Stroke-By-Stroke Renderer and a Semantic Loss
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-07 DOI: 10.1002/cav.70020
Yuan Ma, Zhixuan Wang, Yinghan Shi, Meili Wang
Abstract: Chinese painting is the traditional painting style of China, with distinctive artistic characteristics and a strong national style. Creating Chinese paintings is complex and difficult for non-experts, so computer-aided Chinese painting generation is a meaningful topic. In this paper, we propose a novel Chinese painting generation model that produces vivid Chinese paintings stroke by stroke. In contrast to previous neural renderers, we design a Chinese painting renderer that generates two classic stroke types of Chinese painting (the middle-tip stroke and the side-tip stroke) without the aid of any neural network. To capture subtle semantic representations of the input image, we design a semantic loss that measures the distance between the input image and the output Chinese painting. Experiments demonstrate that our method generates vivid and elegant Chinese paintings.
Citations: 0
Coarse-To-Fine 3D Craniofacial Landmark Detection via Heat Kernel Optimization
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-03 DOI: 10.1002/cav.70050
Xingfei Xue, Xuesong Wang, Weizhou Liu, Xingce Wang, Junli Zhao, Zhongke Wu
Abstract: Accurate 3D craniofacial landmark detection is critical for applications in medicine and computer animation, yet remains challenging due to the complex geometry of craniofacial structures. In this work, we propose a coarse-to-fine framework for anatomical landmark localization on 3D craniofacial models. First, we introduce a Diffused Two-Stream Network (DTS-Net) for heatmap regression, which captures both local and global geometric features by integrating pointwise scalar flow, tangent-space vector flow, and spectral features in the Laplace-Beltrami space; this design enables robust representation of complex anatomical structures. Second, we propose a heat kernel-based energy optimization method to extract landmark coordinates from the predicted heatmaps. The approach performs well across varied geometric regions, including boundaries, flat surfaces, and high-curvature areas, ensuring accurate and consistent localization. Our method achieves state-of-the-art results on both a 3D cranial dataset and the BU-3DFE facial dataset.
Citations: 0
Adaptive Sampling for Interactive Simulation of Granular Material
IF 0.9 · Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2025-07-03 DOI: 10.1002/cav.70062
Samraat Gupta, John Keyser
Abstract: We present a method for simulating granular materials faster within a position-based dynamics framework, combining an adaptive particle sampling scheme with an upsampling approach. This allows faster simulation in interactive applications while maintaining visual resolution. Particles are merged or split based on their distance from the boundary, preserving high detail in areas of importance such as the surface and edges. Merging particles into a single particle reduces the number of particles for which collisions must be simulated, reducing overall simulation time. The adaptive sampling technique is then combined with an upsampling scheme that gives the coarser particle simulation the appearance of much finer resolution.
Citations: 0