Computer Animation and Virtual Worlds: Latest Publications

Talking Face Generation With Lip and Identity Priors
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-28 | DOI: 10.1002/cav.70026
Jiajie Wu, Frederick W. B. Li, Gary K. L. Tam, Bailin Yang, Fangzhe Nan, Jiahao Pan
Abstract: Speech-driven talking face video generation has attracted growing interest in recent research. While person-specific approaches yield high-fidelity results, they require extensive training data from each individual speaker. In contrast, general-purpose methods often struggle with accurate lip synchronization, identity preservation, and natural facial movements. To address these limitations, we propose a novel architecture that combines an alignment model with a rendering model. The rendering model synthesizes identity-consistent lip movements by leveraging facial landmarks derived from speech, a partially occluded target face, multi-reference lip features, and the input audio. Concurrently, the alignment model estimates optical flow using the occluded face and a static reference image, enabling precise alignment of facial poses and lip shapes. This collaborative design enhances the rendering process, resulting in more realistic and identity-preserving outputs. Extensive experiments demonstrate that our method significantly improves lip synchronization and identity retention, establishing a new benchmark in talking face video generation.
Citations: 0
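
A minimal PyTorch sketch (not the authors' code) of the generic flow-warping step the alignment model implies: given a dense optical flow predicted from the occluded face and a static reference image, the reference is warped so its pose and lip shape match the target. The flow layout (B, 2, H, W), in pixels, is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def warp_by_flow(reference: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `reference` (B, C, H, W) by `flow` (B, 2, H, W); flow is given in pixels."""
    _, _, h, w = reference.shape
    # Base sampling grid in pixel coordinates (x in channel 0, y in channel 1).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(reference)  # (1, 2, H, W)
    coords = base + flow                                  # displaced sampling positions
    # Normalize to [-1, 1] as grid_sample expects.
    grid_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)          # (B, H, W, 2)
    return F.grid_sample(reference, grid, align_corners=True)

if __name__ == "__main__":
    ref = torch.rand(1, 3, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)                      # zero flow -> identity warp
    print(torch.allclose(warp_by_flow(ref, flow), ref, atol=1e-5))  # True
```
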
Precise Motion Inbetweening via Bidirectional Autoregressive Diffusion Models
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-28 | DOI: 10.1002/cav.70040
Jiawen Peng, Zhuoran Liu, Jingzhong Lin, Gaoqi He
Abstract: Conditional motion diffusion models have demonstrated significant potential in generating natural and plausible motions in response to constraints such as keyframes, which makes them suitable for the motion inbetweening task. However, most methods struggle to match the keyframe constraints accurately, resulting in unsmooth transitions between the keyframes and the generated motion. In this article, we propose Bidirectional Autoregressive Motion Diffusion Inbetweening (BAMDI) to generate seamless motion between start and target frames. The main idea is to transfer the motion diffusion model to an autoregressive paradigm, which predicts subsequences of motion adjacent to both the start and target keyframes to infill the missing frames over several iterations. This helps to improve the local consistency of the generated motion. Additionally, bidirectional generation ensures smoothness at both the start and target keyframes. Experiments show that our method achieves state-of-the-art performance compared with other diffusion-based motion inbetweening methods.
Citations: 0
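
A schematic sketch of the bidirectional autoregressive idea described above: short clips are predicted adjacent to the start and to the target keyframe and grown inward until they meet. This is not the BAMDI implementation; the `predict_clip` stand-in simply interpolates toward the opposite endpoint, whereas the paper uses a conditional motion diffusion model, and the clip length is an arbitrary assumption.

```python
import numpy as np

def predict_clip(context: np.ndarray, goal: np.ndarray, n: int) -> np.ndarray:
    """Stand-in for the diffusion model: n poses continuing `context` toward `goal`."""
    last = context[-1]
    steps = np.linspace(0.0, 1.0, n + 2)[1:-1]            # exclude the two endpoints
    return np.stack([last + t * (goal - last) for t in steps])

def bidirectional_inbetween(start, target, missing: int, clip_len: int = 4):
    front, back = [start[None]], [target[None]]            # frames known so far on each side
    remaining, forward = missing, True
    while remaining > 0:
        n = min(clip_len, remaining)
        if forward:                                         # extend from the start side
            clip = predict_clip(np.concatenate(front), np.concatenate(back)[0], n)
            front.append(clip)
        else:                                               # extend from the target side, in reverse time
            clip = predict_clip(np.concatenate(back)[::-1], np.concatenate(front)[-1], n)
            back.insert(0, clip[::-1])
        remaining -= n
        forward = not forward
    return np.concatenate(front + back)                     # start, in-between frames, target

if __name__ == "__main__":
    start, target = np.zeros(3), np.ones(3)                 # toy 3-DoF "poses"
    motion = bidirectional_inbetween(start, target, missing=10)
    print(motion.shape)                                      # (12, 3)
```
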
PG-VTON: Front-And-Back Garment Guided Panoramic Gaussian Virtual Try-On With Diffusion Modeling
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-27 | DOI: 10.1002/cav.70054
Jian Zheng, Shengwei Sang, Yifei Lu, Guojun Dai, Xiaoyang Mao, Wenhui Zhou
Abstract: Virtual try-on (VTON) technology enables the rapid creation of realistic try-on experiences, which makes it highly valuable for the metaverse and e-commerce. However, 2D VTON methods struggle to convey depth and immersion, while existing 3D methods require multi-view garment images and face challenges in generating high-fidelity garment textures. To address these limitations, this paper proposes a panoramic Gaussian VTON framework guided solely by front-and-back garment information, named PG-VTON, which uses an adapted locally controllable diffusion model to generate virtual dressing effects in specific regions. Specifically, PG-VTON adopts a coarse-to-fine architecture consisting of two stages. The coarse editing stage employs the locally controllable diffusion model with a score distillation sampling (SDS) loss to generate coarse garment geometries with high-level semantics. The refinement stage applies the same diffusion model with a photometric loss not only to enhance garment details and reduce artifacts but also to correct unwanted noise and distortions introduced during the coarse stage, thereby effectively enhancing realism. To improve training efficiency, we further introduce a dynamic noise scheduling (DNS) strategy, which ensures stable training and high-fidelity results. Experimental results demonstrate the superiority of our method, which achieves geometrically consistent and highly realistic 3D virtual try-on generation.
Citations: 0
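
The coarse stage relies on a score distillation sampling (SDS) loss, a generic technique for distilling a pretrained diffusion model into a differentiable renderer. Below is a minimal PyTorch sketch of one SDS update, not the PG-VTON code: `noise_pred_fn` is a placeholder epsilon-predictor and the weighting choice w(t) = 1 - alpha_bar_t is an assumption.

```python
import torch

def sds_loss(rendered, noise_pred_fn, alphas_cumprod, t, cond):
    """Surrogate loss whose gradient w.r.t. `rendered` is the SDS gradient."""
    a_t = alphas_cumprod[t]                               # alpha_bar_t in (0, 1)
    noise = torch.randn_like(rendered)
    with torch.no_grad():                                 # the diffusion model stays frozen
        noisy = a_t.sqrt() * rendered + (1 - a_t).sqrt() * noise   # forward diffusion q(x_t | x_0)
        eps_pred = noise_pred_fn(noisy, t, cond)          # predicted noise
        grad = (1.0 - a_t) * (eps_pred - noise)           # common SDS weighting
    return (grad * rendered).sum()                        # d(loss)/d(rendered) == grad

if __name__ == "__main__":
    torch.manual_seed(0)
    alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
    stand_in_unet = lambda x, t, c: torch.zeros_like(x)   # placeholder epsilon-predictor
    img = torch.rand(1, 3, 32, 32, requires_grad=True)    # stands in for a rendered try-on view
    sds_loss(img, stand_in_unet, alphas_cumprod, t=500, cond=None).backward()
    print(img.grad.shape)                                 # (1, 3, 32, 32)
```
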
A Robust 3D Mesh Segmentation Algorithm With Anisotropic Sparse Embedding
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-27 | DOI: 10.1002/cav.70042
Mengyao Zhang, Wenting Li, Yong Zhao, Xin Si, Jingliang Zhang
Abstract: 3D mesh segmentation, a very challenging problem in computer graphics, has attracted considerable interest. The most popular methods in recent years are data-driven, but such methods require a large amount of accurately labeled data, which is difficult to obtain. In this article, we propose a novel mesh segmentation algorithm based on anisotropic sparse embedding. We first over-segment the input mesh to obtain a collection of patches. These patches are then embedded into a latent space via an anisotropic L1-regularized optimization problem. In the new space, patches that belong to the same part of the mesh are closer, while those belonging to different parts are farther apart. Finally, we can easily generate the segmentation result by clustering. Experimental results on the PSB and COSEG datasets show that our algorithm produces perception-aware results and is superior to state-of-the-art algorithms. In addition, the proposed algorithm robustly handles meshes with different poses, different triangulations, noise, missing regions, or missing parts.
Citations: 0
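
The shape of this pipeline (over-segmented patches, a sparse embedding, then clustering) can be illustrated with generic tools. The sketch below substitutes plain Lasso-based dictionary learning from scikit-learn for the paper's anisotropic L1 formulation and k-means for the final clustering; the patch descriptors are random placeholders, so this only shows the data flow, not the paper's method.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patch_features = rng.normal(size=(200, 32))              # 200 patches, 32-d descriptors

# Learn a dictionary and L1-sparse codes (the "latent space" for patches).
dict_learner = DictionaryLearning(n_components=16, alpha=0.5, max_iter=200,
                                  transform_algorithm="lasso_lars", random_state=0)
codes = dict_learner.fit_transform(patch_features)        # (200, 16) sparse embedding

# Cluster patches in the embedded space; clusters play the role of mesh parts.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
print(labels[:20])
```
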
UTMCR: 3U-Net Transformer With Multi-Contrastive Regularization for Single Image Dehazing
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-26 | DOI: 10.1002/cav.70029
HangBin Xu, ChangJun Zou, ChuChao Lin
Abstract: Convolutional neural networks have a long history in single-image dehazing, but they have gradually been overtaken by Transformer frameworks owing to their limited global modeling capability and large parameter counts. However, existing Transformer-based networks adopt a single U-Net structure, which limits multi-level, multi-scale feature fusion and modeling capability. We therefore propose an end-to-end dehazing network (UTMCR-Net). The network consists of two parts: (1) the UT module, which connects three U-Net networks in series, with the backbone replaced by the Dehazeformer block; connecting three U-Nets in series improves global modeling capability and captures multi-scale information at different levels, achieving multi-level, multi-scale feature fusion. (2) The MCR module, which improves the original contrastive regularization method by splitting the output of the UT module into four equal blocks, each of which is then compared and learned with its own contrastive regularization term. In short, the three U-Net networks enhance the global modeling and multi-scale feature fusion capability of UTMCR, and the MCR module further strengthens the dehazing quality. Experimental results show that our method achieves better results on most datasets.
Citations: 0
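
The block split and the pull/push structure of contrastive regularization are easy to illustrate. The sketch below is a generic version, not the UTMCR loss: the output is cut into four equal blocks, and each block gets a simple contrastive term that pulls toward the clear image and pushes away from the hazy input; the L1 distance in pixel space stands in for the feature-space distances typically used.

```python
import torch
import torch.nn.functional as F

def split_into_quadrants(x: torch.Tensor):
    """Split (B, C, H, W) into four equal blocks: top-left, top-right, bottom-left, bottom-right."""
    _, _, h, w = x.shape
    assert h % 2 == 0 and w % 2 == 0, "H and W must be even"
    top, bottom = x[..., : h // 2, :], x[..., h // 2 :, :]
    return top[..., : w // 2], top[..., w // 2 :], bottom[..., : w // 2], bottom[..., w // 2 :]

def contrastive_reg(pred, clear, hazy, eps=1e-6):
    """Pull the prediction toward the clear image, push it away from the hazy input."""
    pos = F.l1_loss(pred, clear)
    neg = F.l1_loss(pred, hazy)
    return pos / (neg + eps)

if __name__ == "__main__":
    pred, clear, hazy = (torch.rand(2, 3, 64, 64) for _ in range(3))
    loss = sum(contrastive_reg(p, c, z)
               for p, c, z in zip(*(split_into_quadrants(t) for t in (pred, clear, hazy))))
    print(float(loss))                                    # one contrastive term per block
```
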
Decoupling Density Dynamics: A Neural Operator Framework for Adaptive Multi-Fluid Interactions
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-26 | DOI: 10.1002/cav.70027
Yalan Zhang, Yuhang Xu, Xiaokun Wang, Angelos Chatzimparmpas, Xiaojuan Ban
Abstract: The dynamic interface prediction of multi-density fluids presents a fundamental challenge across computational fluid dynamics and graphics, rooted in nonlinear momentum transfer. We present Density-Conditioned Dynamic Convolution, a novel neural operator framework that establishes a differentiable density-dynamics mapping through decoupled operator response. The core theoretical advancement lies in continuously adaptive neighborhood kernels that transform local density distributions into tunable filters, enabling a unified representation from homogeneous media to multi-phase fluids. Experiments demonstrate autonomous evolution of physically consistent interface separation patterns in density-contrast scenarios, including cocktail and bidirectional hourglass flows. Quantitative evaluation shows improved computational efficiency compared to an SPH method and qualitatively plausible interface dynamics at a larger time step size.
Citations: 0
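
A toy illustration of density-conditioned aggregation, an assumption-level stand-in rather than the paper's learned operator: each particle's smoothing radius adapts to its local density, so the same neighbor-aggregation rule behaves differently in sparse and dense regions. The Gaussian kernel, the inverse-square-root radius rule, and the feature dimensions are all illustrative choices.

```python
import numpy as np

def local_density(pos, h):
    """Simple SPH-like density: Gaussian-kernel-weighted neighbor count with width h."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.exp(-(d / h) ** 2).sum(axis=1)

def density_conditioned_aggregate(pos, feat, h0=0.5):
    rho = local_density(pos, h0)
    h = h0 / np.sqrt(rho / rho.mean())                    # adaptive per-particle radius
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = np.exp(-(d / h[:, None]) ** 2)                    # density-conditioned kernel weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ feat                                       # aggregated neighbor features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(size=(100, 3))                      # particle positions
    feat = rng.normal(size=(100, 8))                      # per-particle features
    print(density_conditioned_aggregate(pos, feat).shape) # (100, 8)
```
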
Weisfeiler-Lehman Kernel Augmented Product Representation for Queries on Large-Scale BIM Scenes
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-26 | DOI: 10.1002/cav.70043
Huiqiang Hu, Changyan He, Xiaojun Liu, Jinyuan Jia, Ting Yu
Abstract: To achieve efficient querying of Building Information Modeling (BIM) products in large-scale virtual scenes, this study introduces a Weisfeiler-Lehman (WL) kernel augmented representation for BIM products based on Product Attributed Graphs (PAGs). Unlike conventional data-driven approaches that demand extensive labeling and preprocessing, our method directly processes raw BIM product data to extract stable semantic and geometric features. A PAG is first constructed to encapsulate product features. A WL kernel enhanced multi-channel node aggregation strategy is then employed to integrate BIM product attributes effectively. Leveraging the bijective relationship in graph isomorphism, an unsupervised convergence mechanism based on attribute value differences is established. Experiments demonstrate that our method converges within an average of 3 iterations, completes graph isomorphism testing in minimal time, and attains an average query accuracy of 95%. This approach outperforms 1-WL and 3-WL methods, especially in handling products with topologically isomorphic but oppositely attributed spaces.
Citations: 0
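
The classical 1-WL color refinement that the product representation builds on can be written compactly. The paper's multi-channel, attribute-aware aggregation on PAGs is richer than this plain form; the sketch is only meant to make the convergence-by-stable-partition idea concrete.

```python
def wl_refine(adjacency, labels, max_iters=10):
    """1-WL refinement. adjacency: dict node -> neighbors; labels: dict node -> hashable label."""
    labels = dict(labels)
    for it in range(1, max_iters + 1):
        # Signature = own label plus the sorted multiset of neighbor labels.
        sig = {v: (labels[v], tuple(sorted(labels[u] for u in adjacency[v]))) for v in adjacency}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_labels = {v: relabel[sig[v]] for v in adjacency}
        # Classes never merge (the old label is part of the signature), so an unchanged
        # number of distinct labels means the partition is stable.
        if len(set(new_labels.values())) == len(set(labels.values())):
            return new_labels, it
        labels = new_labels
    return labels, max_iters

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}    # a triangle with a pendant node
    final_labels, iters = wl_refine(adj, {v: 0 for v in adj})
    print(final_labels, "stable after", iters, "iterations")
```
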
Risk-Aware Pedestrian Behavior Using Reinforcement Learning in Mixed Traffic
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-25 | DOI: 10.1002/cav.70031
Cheng-En Cai, Sai-Keung Wong, Tzu-Yu Chen
Abstract: This paper introduces a reinforcement learning method to simulate agents crossing roads in unsignalized, mixed-traffic environments. These agents represent individual pedestrians or small groups. The method ensures that agents interact safely with nearby dynamic obstacles (bikes, motorcycles, or cars) by considering factors such as conflict zones and post-encroachment times. Risk assessments based on interaction times encourage agents to avoid hazardous behaviors. Additionally, risk-informed reward terms incentivize agents to perform safe actions, while collision penalties deter collisions. The method achieved collision-free crossings and demonstrated normal, conservative, and aggressive pedestrian behaviors in various scenarios. Finally, ablation tests revealed the impact of reward weights, reward terms, and key agent state components. The weights of the reward terms can be adjusted to achieve either conservative or aggressive crossing behaviors, balancing road-crossing efficiency and safety.
Citations: 0
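
How a risk-informed reward of this kind can be assembled is easy to illustrate. The sketch below is not the paper's exact reward: the terms (progress toward the far curb, a risk penalty that grows as the predicted post-encroachment time with the nearest vehicle shrinks, and a collision penalty) and all weights are assumptions chosen to show how reweighting shifts behavior between conservative and aggressive crossing.

```python
def crossing_reward(progress, pet, collided,
                    w_progress=1.0, w_risk=0.5, w_collision=10.0, pet_safe=3.0):
    """progress: metres gained this step; pet: predicted PET in seconds (None if no conflict)."""
    risk = 0.0 if pet is None else max(0.0, 1.0 - pet / pet_safe)   # 0 once PET >= pet_safe
    reward = w_progress * progress - w_risk * risk
    if collided:
        reward -= w_collision                                        # large penalty deters collisions
    return reward

if __name__ == "__main__":
    print(crossing_reward(progress=0.4, pet=1.2, collided=False))    # risky but forward step
    print(crossing_reward(progress=0.4, pet=None, collided=False))   # unobstructed crossing
    print(crossing_reward(progress=0.0, pet=0.3, collided=True))     # collision
```
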
Motion In-Betweening via Recursive Keyframe Prediction
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-25 | DOI: 10.1002/cav.70035
Rui Zeng, Ju Dai, Junxuan Bai, Junjun Pan
Abstract: Motion in-betweening is a flexible and efficient technique for generating 3D animations. In this paper, we propose a keyframe-driven method that effectively addresses the pose ambiguity issue and achieves robust in-betweening performance. We introduce a keyframe-driven synthesis framework: at each recursion, the key poses at both ends predict a new pose at the midpoint. This recursive breakdown reduces motion ambiguity by decomposing the in-betweening sequence into the integration of short clips. A hybrid positional encoding scales the hidden states to adapt to long- and short-term dependencies. Additionally, we employ a temporal refinement network to capture local motion relationships, thereby enhancing the consistency of the predicted pose sequence. Through comprehensive evaluations, including both quantitative and qualitative comparisons, the proposed model demonstrates its competitiveness in prediction accuracy and in-betweening flexibility.
Citations: 0
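
The recursive breakdown itself is simple to write down: predict the midpoint pose from the two end keyframes, then recurse into each half until every frame is filled. In the sketch below the learned keyframe predictor is replaced by a plain average of the endpoint poses, so this shows only the recursion structure, not the paper's network.

```python
import numpy as np

def predict_mid(pose_a, pose_b):
    return 0.5 * (pose_a + pose_b)                        # stand-in for the learned predictor

def recursive_inbetween(seq, lo, hi):
    """Fill seq[lo..hi], assuming seq[lo] and seq[hi] are known keyframes."""
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    seq[mid] = predict_mid(seq[lo], seq[hi])              # predict the midpoint pose
    recursive_inbetween(seq, lo, mid)                     # refine the first half
    recursive_inbetween(seq, mid, hi)                     # refine the second half

if __name__ == "__main__":
    n_frames, dof = 9, 3
    seq = np.zeros((n_frames, dof))
    seq[0], seq[-1] = np.zeros(dof), np.ones(dof)         # start and target keyframes
    recursive_inbetween(seq, 0, n_frames - 1)
    print(np.round(seq[:, 0], 3))                         # monotone fill from 0 to 1
```
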
GSFaceMorpher: High-Fidelity 3D Face Morphing via Gaussian Splatting
IF 0.9 | CAS Q4, Computer Science
Computer Animation and Virtual Worlds | Pub Date: 2025-05-23 | DOI: 10.1002/cav.70036
Xiwen Shi, Hao Zhao, Yi Jiang, Hao Xu, Ziyi Yang, Yiqian Wu, Qingbiao Wu, Xiaogang Jin
Abstract: High-fidelity 3D face morphing aims to achieve seamless transitions between realistic 3D facial representations of different identities. Although 3D Gaussian Splatting (3DGS) excels at high-quality rendering, its application to morphing is hindered by the lack of correspondence between Gaussian primitives and by variations in primitive counts. To address this, we propose GSFaceMorpher, a novel framework for high-fidelity 3D face morphing based on 3DGS. Our method constructs an auxiliary model that bridges the source and target face models by aligning the geometry through Radial Basis Function (RBF) warping and optimizing the appearance in image space. This auxiliary model enables smooth parameter interpolation, while a diffusion-based refinement step enhances critical facial details through attention replacement from the reference faces. Experiments demonstrate that our method produces visually coherent and high-fidelity morphing sequences, significantly outperforming NeRF-based baselines in terms of both quantitative metrics and user preferences. Our work establishes a new benchmark for high-fidelity 3D face morphing with applications in visual effects, animation, and immersive experiences.
Citations: 0
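
The geometric alignment step uses RBF warping, which can be sketched with SciPy's RBFInterpolator: fit a smooth map that carries source facial landmarks onto matched target landmarks, then apply it to every Gaussian center. This is an illustrative assumption-level sketch, not the GSFaceMorpher implementation (which also optimizes appearance in image space); the landmark correspondences and kernel choice here are placeholders. Requires scipy >= 1.7.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src_landmarks = rng.uniform(-1, 1, size=(20, 3))                  # landmarks on the source face
dst_landmarks = src_landmarks + 0.05 * rng.normal(size=(20, 3))   # matched target landmarks

# Fit a smooth R^3 -> R^3 warp that carries source landmarks onto target landmarks.
warp = RBFInterpolator(src_landmarks, dst_landmarks,
                       kernel="thin_plate_spline", smoothing=1e-6)

gaussian_centers = rng.uniform(-1, 1, size=(1000, 3))             # centers of the source 3DGS model
warped_centers = warp(gaussian_centers)                           # aligned geometry for the auxiliary model
print(warped_centers.shape)                                        # (1000, 3)
```
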