Computer Animation and Virtual Worlds: Latest Publications

Fast constrained optimization for cloth simulation parameters from static drapes
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-14 | DOI: 10.1002/cav.2265
Eunjung Ju, Eungjune Shim, Kwang-yun Kim, Sungjin Yoon, Myung Geol Choi
Abstract: We present a cloth simulation parameter estimation method that integrates the flexibility of global optimization with the speed of neural networks. While global optimization allows for varied designs of objective functions and for specifying the range of optimization variables, it requires thousands of objective function evaluations; each evaluation involves a cloth simulation, which is computationally demanding and impractically slow. Neural network methods, on the other hand, offer quick estimates but face challenges such as the need for data collection, re-training whenever the input data format changes, and difficulty in setting constraints on variable ranges. Our proposed method addresses these issues by replacing the simulation process, typically required for objective function evaluations in global optimization, with neural network inference. We demonstrate that, once an estimation model is trained, optimization for various objective functions becomes straightforward. Moreover, we show that optimization results reflecting the intentions of expert users can be achieved through visualization of a wide optimization space and the use of range constraints.
Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2265
Citations: 0
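As a concrete illustration of the surrogate idea, the sketch below runs a bounded global optimizer with a neural network standing in for the cloth simulator inside the objective. The tiny `drape_net`, the three-parameter bounds, and the zero target features are placeholders, not the authors' trained model or setup.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import differential_evolution

# Stand-in for a trained estimator: maps 3 cloth parameters
# (e.g., stretch stiffness, bend stiffness, density) to drape features.
drape_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 16)).eval()
target = torch.zeros(16)  # features of the target static drape (placeholder)

def objective(params):
    # One network inference replaces a full cloth simulation per evaluation.
    with torch.no_grad():
        pred = drape_net(torch.tensor(params, dtype=torch.float32))
    return float(((pred - target) ** 2).sum())

# Range constraints on the simulation parameters, as the paper's workflow allows.
bounds = [(0.1, 10.0), (0.01, 1.0), (0.05, 0.5)]
result = differential_evolution(objective, bounds, maxiter=100, seed=0)
print(result.x, result.fun)
```

Because each objective call is now a forward pass rather than a simulation, the thousands of evaluations a global optimizer needs become affordable.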
Toward comprehensive Chiroptera modeling: A parametric multiagent model for bat behavior
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-14 | DOI: 10.1002/cav.2251
Brendan Marney, Brandon Haworth
Abstract: Chiroptera behavior is complex and often unobserved, as bats are nocturnal, small, and elusive animals. Chiroptology has yielded significant insights into the behavior and environmental interactions of bats. Biology, ecology, and even digital media often benefit from mathematical models of animals, including humans. However, Chiroptera modeling has largely been limited to specific behaviors, species, or biological functions, and relies heavily on classical modeling methodologies that may not represent individuals or colonies well. This work proposes a continuous, parametric, multiagent Chiroptera behavior model that captures the latest research on the echolocation, hunting, and energetics of bats, including echolocation-based perception (or the lack thereof), hunting patterns, roosting behavior, and energy consumption rates. We propose integrating these mathematical models into a framework that affords the simulation of individual bats within large-scale colonies. Practitioners can adjust the model to account for different perceptual affordances or patterns among bat species, or even individuals (such as sickness or injury). We show that our model closely matches results from the literature, affords animated graphical simulation, and has utility in simulation-based studies.
Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2251
Citations: 0
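The behavioral ingredients above (echolocation-based perception, hunting, energetics) can be sketched as a per-agent update. The bat below senses prey inside a sonar cone, steers toward the nearest target, and pays a flight-energy cost each step; all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import math
from dataclasses import dataclass

@dataclass
class Bat:
    x: float
    y: float
    heading: float
    energy: float = 100.0
    fov: float = math.radians(60)   # echolocation beam half-angle (assumed)
    range_: float = 15.0            # effective sensing range in meters (assumed)

    def hears(self, px, py):
        """True if prey at (px, py) falls inside the sonar cone."""
        dx, dy = px - self.x, py - self.y
        if math.hypot(dx, dy) > self.range_:
            return False
        bearing = math.atan2(dy, dx) - self.heading
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
        return abs(bearing) <= self.fov

    def step(self, prey, dt, flight_cost=0.8, prey_gain=5.0):
        # Steer toward the nearest perceived prey; otherwise keep heading.
        heard = [p for p in prey if self.hears(*p)]
        if heard:
            tx, ty = min(heard, key=lambda p: math.hypot(p[0] - self.x, p[1] - self.y))
            self.heading = math.atan2(ty - self.y, tx - self.x)
            self.energy += prey_gain          # simplified hunting success
        self.x += math.cos(self.heading) * dt
        self.y += math.sin(self.heading) * dt
        self.energy -= flight_cost * dt       # energetics: metabolic cost of flight

bat = Bat(0.0, 0.0, 0.0)
bat.step(prey=[(3.0, 1.0)], dt=0.1)
print(bat.x, bat.y, bat.energy)
```

Per-agent parameters such as `fov` or `flight_cost` are where species- or individual-level differences (e.g., injury) would be plugged in.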
Extracting roads from satellite images via enhancing road feature investigation in learning
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-12 | DOI: 10.1002/cav.2275
Shiming Feng, Fei Hou, Jialu Chen, Wencheng Wang
Abstract: Extracting road maps from satellite images is a hot topic. However, achieving high-quality results remains very challenging for existing methods: the regions covered by satellite images are very large, while roads are slender, complex, and occupy only a small part of the image, making them difficult to distinguish from the background. In this article, we address this challenge by presenting two modules that learn road features more effectively, thereby improving road extraction. The first module exploits the differences between patches containing roads and patches containing no road to exclude as many background regions as possible, so that the small portion containing roads can be investigated more specifically. The second module enhances feature alignment in decoded feature maps by combining strip convolution with an attention mechanism. Both modules can be easily integrated into the networks of existing learning methods. Experimental results show that our modules help existing methods achieve high-quality results, superior to the state-of-the-art methods.
Citations: 0
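A minimal sketch of the second module's ingredients follows: horizontal and vertical strip convolutions, whose elongated receptive fields suit slender roads, weighted by a channel-attention gate. The exact block design is an assumption; the paper's architecture may differ.

```python
import torch
import torch.nn as nn

class StripConvAttention(nn.Module):
    def __init__(self, channels, k=9):
        super().__init__()
        # 1xk and kx1 kernels give elongated receptive fields along each axis.
        self.h_strip = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.v_strip = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))
        self.attn = nn.Sequential(                 # squeeze-and-excitation style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.h_strip(x) + self.v_strip(x)
        return x + feats * self.attn(feats)        # residual, attention-weighted

x = torch.randn(1, 64, 128, 128)
print(StripConvAttention(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```

A block like this can be dropped into the decoder of an existing segmentation network, matching the paper's claim that the modules integrate into existing methods.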
Identity-consistent transfer learning of portraits for digital apparel sample display
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-10 | DOI: 10.1002/cav.2278
Luyuan Wang, Yiqian Wu, Yong-Liang Yang, Chen Liu, Xiaogang Jin
Abstract: The rapid development of the online apparel shopping industry demands innovative solutions for high-quality digital apparel sample displays with virtual avatars. However, developing such displays is prohibitively expensive and prone to the well-known "uncanny valley" effect, where a nearly human-looking artifact arouses eeriness and repulsiveness, degrading the user experience. To effectively mitigate the "uncanny valley" effect and improve the overall authenticity of digital apparel sample displays, we present a novel photo-realistic portrait generation framework. Our key idea is to employ transfer learning to learn an identity-consistent mapping from the latent space of rendered portraits to that of real portraits. During inference, the input portrait of an avatar can be directly transferred to a realistic portrait by changing its appearance style while maintaining the facial identity. To this end, we collect a new dataset, Daz-Rendered-Faces-HQ (DRFHQ), specifically designed for rendering-style portraits. We leverage this dataset to fine-tune the StyleGAN2-FFHQ generator, using a carefully crafted framework that preserves the geometric and color features relevant to facial identity. We evaluate our framework on portraits with diverse gender, age, and race variations. Qualitative and quantitative evaluations, along with ablation studies, highlight our method's advantages over state-of-the-art approaches.
Citations: 0
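The identity-consistent mapping can be sketched as a fine-tuning step that balances a realism term against an identity term. Here `generator`, `encoder`, `id_enc`, and `realism_loss` are placeholders for a StyleGAN2-style generator, a GAN-inversion encoder, a frozen face-identity embedder, and an adversarial objective; none of this is the authors' exact framework.

```python
import torch
import torch.nn.functional as F

def finetune_step(generator, encoder, id_enc, realism_loss, rendered, opt, lam_id=1.0):
    """One identity-consistent transfer-learning step (illustrative)."""
    opt.zero_grad()
    w = encoder(rendered)        # invert the rendered portrait into latent space
    fake = generator(w)          # decode with the trainable, realistic generator
    # Realism: push outputs toward the real-portrait domain (e.g., adversarial).
    loss_real = realism_loss(fake)
    # Identity consistency: keep the face embedding of the rendered subject.
    loss_id = 1.0 - F.cosine_similarity(id_enc(fake), id_enc(rendered)).mean()
    loss = loss_real + lam_id * loss_id
    loss.backward()
    opt.step()
    return float(loss)
```

The key design choice the abstract implies is that only the appearance style moves toward the real domain, while the identity embedding anchors the face.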
Adaptive information fusion network for multi-modal personality recognition
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-10 | DOI: 10.1002/cav.2268
Yongtang Bao, Xiang Liu, Yue Qi, Ruijun Liu, Haojie Li
Abstract: Personality recognition is of great significance in deepening the understanding of social relations. While personality recognition methods have made significant strides in recent years, the heterogeneity between modalities during feature fusion remains an open challenge. This paper introduces an adaptive multi-modal information fusion network (AMIF-Net) capable of concurrently processing video, audio, and text data. First, using the AMIF-Net encoder, we process the extracted audio and video features separately, effectively capturing long-term relationships in the data. Then, adaptive elements in the fusion network alleviate the heterogeneity between modalities. Lastly, we feed the concatenated audio-video and text features into a regression network to obtain Big Five personality trait scores. Furthermore, we introduce a novel loss function to address training inaccuracies, exploiting its property of peaking at the critical mean. Tests on the ChaLearn First Impressions V2 multi-modal dataset show performance partially surpassing state-of-the-art networks.
Citations: 0
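One way to realize the adaptive fusion described above is a learned gate that mixes projected audio-video and text features before the Big Five regression head. The sketch below makes that concrete; the dimensions and gating design are assumptions, not the AMIF-Net architecture.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim_av, dim_txt, dim_out=128):
        super().__init__()
        self.proj_av = nn.Linear(dim_av, dim_out)    # align modality dimensions
        self.proj_txt = nn.Linear(dim_txt, dim_out)
        self.gate = nn.Sequential(nn.Linear(2 * dim_out, dim_out), nn.Sigmoid())
        self.head = nn.Linear(dim_out, 5)            # Big Five trait scores

    def forward(self, av, txt):
        a, t = self.proj_av(av), self.proj_txt(txt)
        g = self.gate(torch.cat([a, t], dim=-1))     # per-feature modality weight
        fused = g * a + (1 - g) * t                  # adaptive convex combination
        return self.head(fused)

scores = AdaptiveFusion(256, 768)(torch.randn(4, 256), torch.randn(4, 768))
print(scores.shape)  # torch.Size([4, 5])
```

Learning the gate per feature lets the network downweight whichever modality is less informative for a given sample, which is one standard remedy for modality heterogeneity.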
Enhancing doctor-patient communication in surgical explanations: Designing effective facial expressions and gestures for animated physician characters
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-06 | DOI: 10.1002/cav.2236
Hwang Youn Kim, Ghazanfar Ali, Jae-In Hwang
Abstract: Close attention to facial expressions, gestures, and communication techniques is essential when creating realistic, engaging animated physician characters that explain surgical procedures. This paper emphasizes the integration of appropriate emotions and co-speech gestures, informed by how medical experts explain procedures, into the design of animated characters. Depicting these components faithfully can foster healthy doctor-patient relationships and improve patients' understanding. We suggest two key scenarios for virtual medical experts that incorporate these elements. First, doctors can generate the content of the surgical explanation with a virtual doctor. Second, patients can listen to the virtual doctor describe the procedure and ask questions if anything is unclear. Our system supports patients by considering their psychology and incorporating the opinions of medical professionals. These improvements ensure the animated virtual agent is comforting, reassuring, and emotionally supportive. Through a user study, we evaluated our hypotheses and gained insights for further improvement.
Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2236
Citations: 0
A language-directed virtual human motion generation approach based on musculoskeletal models
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-05 | DOI: 10.1002/cav.2257
Libo Sun, Yongxiang Wang, Wenhu Qin
Abstract: Developing systems capable of synthesizing natural, life-like motions for virtual characters has long been a central focus of computer animation. Such a system must generate high-quality motion for characters and provide users with a convenient, flexible interface for guiding it. In this work, we propose a language-directed virtual human motion generation approach based on musculoskeletal models to achieve interactive, higher-fidelity virtual human motion, laying a foundation for language-directed controllers in physics-based character animation. First, we construct a simplified musculoskeletal dynamics model for the virtual character. We then propose a hierarchical control framework consisting of a trajectory tracking layer and a muscle control layer, and obtain, through training, an optimal control policy for imitating reference motions. We design a multi-policy aggregation controller based on large language models, which selects the motion policy with the highest similarity to the user's text command from an action-caption data pool, enabling natural-language control of virtual character motion. Experimental results demonstrate that the proposed approach not only generates high-quality motions that closely resemble the reference motions but also lets users effectively guide virtual characters to perform various motions via natural language instructions.
Citations: 0
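The multi-policy aggregation step (pick the policy whose caption best matches the user's command) can be sketched with any text-similarity backend. Below, TF-IDF similarity stands in for the large language model to keep the example self-contained; the caption pool and policy names are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_pool = {                      # caption -> trained policy identifier
    "walk forward at a steady pace": "policy_walk",
    "run quickly and then stop": "policy_run_stop",
    "jump over a low obstacle": "policy_jump",
}

def select_policy(command: str) -> str:
    """Return the policy whose caption is most similar to the command."""
    captions = list(policy_pool)
    vec = TfidfVectorizer().fit(captions + [command])
    sims = cosine_similarity(vec.transform([command]), vec.transform(captions))
    return policy_pool[captions[sims.argmax()]]

print(select_policy("please make the character jump across the box"))  # policy_jump
```

In the paper's setting, an LLM-based embedding would replace TF-IDF, but the selection logic over the action-caption pool is the same shape.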
HIDE: Hierarchical iterative decoding enhancement for multi-view 3D human parameter regression
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-05 | DOI: 10.1002/cav.2266
Weitao Lin, Jiguang Zhang, Weiliang Meng, Xianglong Liu, Xiaopeng Zhang
Abstract: Parametric human modeling methods are limited to either single-view frameworks or simple multi-view frameworks, failing to fully exploit both the easy trainability of single-view networks and the occlusion robustness of multi-view images. The prevalence of object occlusion and self-occlusion in real-world scenarios undermines the robustness and accuracy of human body parameter prediction. Additionally, many methods overlook the spatial connectivity of human joints when estimating model pose parameters globally, leading to cumulative errors in chains of joint parameters. To address these challenges, we propose a flexible and efficient iterative decoding strategy. By extending from single-view images to multi-view video inputs, we achieve local-to-global optimization. We use attention mechanisms to capture the rotational dependencies between any node of the human body and all of its ancestor nodes, enhancing pose decoding capability. We employ parameter-level iterative fusion of multi-view image data to flexibly integrate global pose information, rapidly obtaining appropriate projection features from different viewpoints and ultimately yielding precise parameter estimates. Experiments on the Human3.6M and 3DPW datasets validate the effectiveness of HIDE, demonstrating significantly improved visualization results compared with previous methods.
Citations: 0
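The ancestor-dependency idea can be sketched as attention masked to each joint's kinematic ancestors, so a joint's rotation attends only to itself and the chain above it. The toy five-joint skeleton and feature size below are illustrative; HIDE's actual decoder is more elaborate.

```python
import torch
import torch.nn.functional as F

parents = [-1, 0, 1, 1, 3]             # toy skeleton: joint index -> parent index

def ancestor_mask(parents):
    """Boolean mask where mask[j, a] is True iff a is j or an ancestor of j."""
    n = len(parents)
    mask = torch.eye(n, dtype=torch.bool)
    for j in range(n):
        p = parents[j]
        while p >= 0:                   # walk up the kinematic chain to the root
            mask[j, p] = True
            p = parents[p]
    return mask

def ancestor_attention(feats, mask):
    # feats: (n_joints, d). Scores to non-ancestor joints are masked out.
    scores = feats @ feats.t() / feats.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ feats

feats = torch.randn(5, 32)
out = ancestor_attention(feats, ancestor_mask(parents))
print(out.shape)  # torch.Size([5, 32])
```

Restricting attention to ancestors respects the skeleton's spatial connectivity, which is exactly the cumulative-error issue the abstract raises about global pose estimation.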
Augmenting collaborative interaction with shared visualization of eye movement and gesture in VR
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-06-04 | DOI: 10.1002/cav.2264
Yang Liu, Song Zhao, Shiwei Cheng
Abstract: Virtual Reality (VR)-enabled multi-user collaboration has gradually been applied in academic research and industrial applications, but key problems remain. First, it is often difficult for users to select or manipulate objects in complex three-dimensional spaces, which greatly reduces their operational efficiency. Second, supporting natural communication cues is crucial for cooperation in VR: in collaborative tasks especially, ambiguous verbal communication cannot effectively delegate object selection or manipulation to a partner. To address these issues, we propose a new interaction method, Eye-Gesture Combination Interaction in VR, which enhances the execution of collaborative tasks by sharing visualizations of eye movement and gesture data among partners. User experiments showed that representing eye gaze with dots and gestures with virtual hands helps users complete tasks faster than other visualization methods. Finally, we developed a VR multi-user collaborative assembly system. A user study shows that sharing gaze points and gestures among users significantly improves the productivity of collaborating users. Our work can effectively improve the efficiency of multi-user collaborative systems in VR and provides new design guidelines for such systems.
Citations: 0
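Sharing gaze and gesture visualizations requires little more than broadcasting each user's gaze hit point and hand pose to peers, which then render a gaze dot and virtual hands. The sketch below assumes a JSON-over-UDP transport; the study does not describe its networking layer, so the schema here is invented.

```python
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class SharedState:
    user_id: str
    gaze: tuple          # (x, y, z) gaze hit point in world space
    hand_joints: list    # flattened hand joint positions for the virtual hands

def broadcast(state: SharedState, peers, port=9999):
    """Send this user's gaze/gesture state to each collaborator's client."""
    payload = json.dumps(asdict(state)).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for ip in peers:
            sock.sendto(payload, (ip, port))

broadcast(SharedState("user-1", (0.2, 1.5, -3.0), [0.0] * 75),
          peers=["192.168.0.12"])
```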
A double-layer crowd evacuation simulation method based on deep reinforcement learning
IF 1.1 | CAS Q4 | Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2024-05-30 | DOI: 10.1002/cav.2280
Yong Zhang, Bo Yang, Jianlin Zhu
Abstract: Existing crowd evacuation simulation methods commonly suffer from inefficient path planning and insufficiently realistic pedestrian movement during evacuation. In this study, we propose a novel crowd evacuation path planning approach based on the learning curve-deep deterministic policy gradient (LC-DDPG) algorithm. The algorithm incorporates a dynamic experience pool and a prioritized experience sampling strategy, improving convergence speed and average reward, and thus enables efficient global path planning. Building on this foundation, we introduce a double-layer method for crowd evacuation using deep reinforcement learning. Within each group, individuals are categorized into leaders and followers. At the top layer, the LC-DDPG algorithm performs global path planning for the leaders; at the bottom layer, an enhanced social force model guides the followers to avoid obstacles and follow their leaders during evacuation. We implemented a crowd evacuation simulation platform. Experimental results show that the proposed method achieves high path-planning efficiency and generates more realistic pedestrian trajectories across different scenarios and crowd sizes.
Citations: 0
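The bottom layer's follower behavior can be sketched with a classic social-force update: a driving force toward the leader plus exponential repulsion from obstacles. The coefficients below are illustrative defaults in the spirit of Helbing-style models, not the paper's enhanced, calibrated version.

```python
import numpy as np

def follower_step(pos, vel, leader_pos, obstacles, dt=0.05,
                  k_leader=2.0, A=5.0, B=0.5, v_max=1.8):
    """One social-force update for a follower agent (2D, illustrative)."""
    # Driving force: steer toward the leader's current position.
    to_leader = leader_pos - pos
    force = k_leader * to_leader / (np.linalg.norm(to_leader) + 1e-6)
    # Repulsive forces: exponential falloff with distance to each obstacle.
    for obs in obstacles:
        away = pos - obs
        d = np.linalg.norm(away) + 1e-6
        force += A * np.exp(-d / B) * away / d
    vel = vel + force * dt
    speed = np.linalg.norm(vel)
    if speed > v_max:                  # clamp to a maximum walking speed
        vel = vel / speed * v_max
    return pos + vel * dt, vel

pos, vel = follower_step(np.array([0.0, 0.0]), np.zeros(2),
                         np.array([5.0, 2.0]), [np.array([1.0, 0.5])])
print(pos, vel)
```

The double-layer split keeps the expensive learned planner on a few leaders while followers run this cheap local model, which is what makes large crowds tractable.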