Computer Animation and Virtual Worlds: Latest Articles

SADNet: Generating immersive virtual reality avatars by real-time monocular pose estimation
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-29 DOI: 10.1002/cav.2233
Ling Jiang, Yuan Xiong, Qianqian Wang, Tong Chen, Wei Wu, Zhong Zhou
Abstract: Generating immersive virtual reality avatars is a challenging task in VR/AR applications, which map physical human body poses to avatars in virtual scenes for an immersive user experience. However, most existing work is time-consuming and limited by datasets, and therefore fails to satisfy the immersive, real-time requirements of VR systems. In this paper, we aim to generate 3D real-time virtual reality avatars from a monocular camera to solve these problems. Specifically, we first design a self-attention distillation network (SADNet) for effective human pose estimation, guided by a pre-trained teacher. Secondly, we propose a lightweight pose mapping method for human avatars that utilizes the camera model to map 2D poses to 3D avatar keypoints, generating real-time human avatars with pose consistency. Finally, we integrate our framework into a VR system, displaying the generated 3D pose-driven avatars on helmet-mounted display devices for an immersive user experience. We evaluate SADNet on two publicly available datasets. Experimental results show that SADNet achieves a state-of-the-art trade-off between speed and accuracy. In addition, we conducted a user experience study on the performance and immersion of virtual reality avatars. Results show that the pose-driven 3D human avatars generated by our method are smooth and attractive.
Citations: 0
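The abstract describes a lightweight pose mapping step that uses the camera model to lift 2D poses to 3D avatar keypoints. The paper's exact formulation is not reproduced here; a minimal pinhole-camera back-projection sketch (the intrinsics `fx`, `fy`, `cx`, `cy` and the per-keypoint depths are illustrative assumptions, not values from the paper) might look like:

```python
import numpy as np

def backproject_keypoints(kp_2d, depths, fx, fy, cx, cy):
    """Back-project 2D pixel keypoints to 3D camera-space points
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    kp_2d = np.asarray(kp_2d, dtype=float)
    depths = np.asarray(depths, dtype=float)
    x = (kp_2d[:, 0] - cx) * depths / fx
    y = (kp_2d[:, 1] - cy) * depths / fy
    return np.stack([x, y, depths], axis=1)

# Example: a 640x480 frame with the principal point at the image centre.
pts = backproject_keypoints([[320, 240], [420, 240]], [2.0, 2.0],
                            fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(pts)  # a keypoint at the principal point maps to (0, 0, Z)
```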
S-LASSIE: Structure and smoothness enhanced learning from sparse image ensemble for 3D articulated shape reconstruction
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-29 DOI: 10.1002/cav.2277
Jingze Feng, Chong He, Guorui Wang, Meili Wang
Abstract: In computer vision, the task of 3D reconstruction from monocular sparse images poses significant challenges, particularly in the field of animal modelling. The diverse morphology of animals, their varied postures, and the variable conditions of image acquisition significantly complicate the task of accurately reconstructing their 3D shape and pose from a monocular image. To address these complexities, we propose S-LASSIE, a novel technique for 3D reconstruction of quadrupeds from monocular sparse images. It requires only 10–30 images of similar breeds for training. To effectively mitigate the depth ambiguities inherent in monocular reconstruction, S-LASSIE employs a multi-angle projection loss function. In addition, our approach, which involves fusion and smoothing of bone structures, resolves issues related to disjointed topological structures and uneven connections at junctions, resulting in 3D models with comprehensive topologies and improved visual fidelity. Our extensive experiments on the Pascal-Part and LASSIE datasets demonstrate significant improvements in keypoint transfer, overall 2D IOU and visual quality, with an average keypoint transfer and overall 2D IOU of 59.6% and 86.3%, respectively, which are superior to existing techniques in the field.
Citations: 0
Face attribute translation with multiple feature perceptual reconstruction assisted by style translator
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-29 DOI: 10.1002/cav.2273
Shuqi Zhu, Jiuzhen Liang, Hao Liu
Abstract: Improving the accuracy and disentanglement of attribute translation while maintaining the consistency of face identity has been a hot topic in face attribute translation. Recent approaches employ attention mechanisms to enable attribute translation in facial images. However, because the extracted style code lacks accuracy, the attention mechanism alone is not precise enough for attribute translation. To tackle this, we introduce a style translator module, which partitions the style code into attribute-related and unrelated components, enhancing latent space disentanglement for more accurate attribute manipulation. Additionally, many current methods use per-pixel loss functions to preserve face identity. However, this can sacrifice crucial high-level features and textures in the target image. To address this limitation, we propose a multiple-perceptual reconstruction loss to better maintain image fidelity. Extensive qualitative and quantitative experiments in this article demonstrate significant improvements over state-of-the-art methods, validating the effectiveness of our approach.
Citations: 0
KDPM: Knowledge-driven dynamic perception model for evacuation scene simulation
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-29 DOI: 10.1002/cav.2279
Kecheng Tang, Jiawen Zhang, Yuji Shen, Chen Li, Gaoqi He
Abstract: Evacuation scene simulation has become an important approach for public safety decision-making. Although existing research has considered various factors, including social forces, panic emotions, and so forth, it does not adequately consider how complex environmental factors affect human psychology and behavior. The main idea of this paper is to model complex evacuation environmental factors from the perspective of knowledge and to explore pedestrians' emergency response mechanisms to this knowledge. Thus, a knowledge-driven dynamic perception model (KDPM) for evacuation scene simulation is proposed in this paper. The model combines three modules: knowledge dissemination, dynamic scene perception, and stress response. Both scenario knowledge and hazard source knowledge are extracted and expressed. An improved intelligent agent perception model is designed by adopting position determination. Moreover, a general adaptation syndrome (GAS) model is presented for the first time by introducing a modified stress system model. Experimental results show that the proposed model aligns more closely with real-world data sets.
Citations: 0
Crowd evacuation simulation based on hierarchical agent model and physics-based character control
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-27 DOI: 10.1002/cav.2263
Jianming Ye, Zhen Liu, Tingting Liu, Yanhui Wu, Yuanyi Wang
Abstract: Crowd evacuation has gained increasing attention in recent years. The agent-based method has shown a superior capability to simulate complex behaviors during crowd evacuation simulation. For agent modeling, most existing methods only consider the decision process but ignore detailed physical motion. In this article, we propose a hierarchical framework for crowd evacuation simulation, which combines an agent decision model with an agent motion model. In the decision model, we integrate emotional contagion and scene information to determine global path planning and local collision avoidance. In the motion model, we introduce a physics-based character control method and control agent motion using deep reinforcement learning. Based on the decision strategy, the decision model can use a signal to control the agent motion in the motion model. Compared with existing methods, our framework can simulate physical interactions between agents and the environment. The results of the crowd evacuation simulation demonstrate that our framework can simulate crowd evacuation with physical fidelity.
Citations: 0
Two-particle debris flow simulation based on SPH
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-27 DOI: 10.1002/cav.2261
Jiaxiu Zhang, Meng Yang, Xiaomin Li, Qun'ou Jiang, Heng Zhang, Weiliang Meng
Abstract: Debris flow is a highly destructive natural disaster, necessitating accurate simulation and prediction. Existing simulation methods tend to be overly simplified, neglecting three-dimensional complexity and multiphase fluid interactions, and they also lack comprehensive consideration of soil conditions. We propose a novel two-particle debris flow simulation method based on smoothed particle hydrodynamics (SPH) for enhanced accuracy. Our method employs a two-particle model that couples debris flow dynamics with SPH to simulate fluid-solid interaction effectively; it considers various soil factors, dividing the terrain into variable and fixed areas and incorporating soil impact factors for realistic simulation. By dynamically updating positions and reconstructing surfaces, and by employing GPU and hash lookup acceleration, we achieve accurate simulation with significantly improved efficiency. Experimental results validate the effectiveness of our method across different conditions, making it valuable for debris flow risk assessment in natural disaster management.
Citations: 0
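The method above builds on smoothed particle hydrodynamics. As general background rather than the paper's implementation, a minimal SPH density estimate with the standard poly6 kernel (particle positions, masses, and the smoothing length `h` below are placeholder inputs) can be sketched as:

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel in 3D: W(r, h) = 315/(64*pi*h^9) * (h^2 - r^2)^3 for r < h."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[mask]**2)**3
    return w

def sph_density(positions, masses, h):
    """Kernel-weighted density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    positions = np.asarray(positions, dtype=float)
    masses = np.asarray(masses, dtype=float)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (masses * poly6(dist, h)).sum(axis=1)

rho = sph_density([[0, 0, 0], [0.5, 0, 0]], [1.0, 1.0], h=1.0)
print(rho)  # equal densities for two identical, symmetric particles
```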
Multiagent trajectory prediction with global-local scene-enhanced social interaction graph network
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-27 DOI: 10.1002/cav.2237
Xuanqi Lin, Yong Zhang, Shun Wang, Xinglin Piao, Baocai Yin
Abstract: Trajectory prediction is essential for intelligent autonomous systems like autonomous driving, behavior analysis, and service robotics. Deep learning has emerged as the predominant technique due to its superior modeling capability for trajectory data. However, deep learning-based models face challenges in effectively utilizing scene information and accurately modeling agent interactions, largely due to the complexity and uncertainty of real-world scenarios. To mitigate these challenges, this study presents a novel multiagent trajectory prediction model, termed the global-local scene-enhanced social interaction graph network (GLSESIGN), which incorporates two pivotal strategies: global-local scene information utilization and a social adaptive attention graph network. The model hierarchically learns scene information relevant to multiple intelligent agents, thereby enhancing the understanding of complex scenes. Additionally, it adaptively captures social interactions, improving adaptability to diverse interaction patterns through sparse graph structures. The model not only improves the understanding of complex scenes but also accurately predicts future trajectories of multiple intelligent agents by flexibly modeling intricate interactions. Experimental validation on public datasets substantiates the efficacy of the proposed model. This research offers a novel model to address the complexity and uncertainty in multiagent trajectory prediction, providing more accurate predictive support in practical application scenarios.
Citations: 0
Highlight mask-guided adaptive residual network for single image highlight detection and removal
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-27 DOI: 10.1002/cav.2271
Shuaibin Wang, Li Li, Juan Wang, Tao Peng, Zhenwei Li
Abstract: Specular highlight detection and removal is a challenging task. Although various methods exist for removing specular highlights, they often fail to effectively preserve the color and texture details of objects after highlight removal due to the high brightness and nonuniform distribution characteristics of highlights. Furthermore, when processing scenes with complex highlight properties, existing methods frequently encounter performance bottlenecks, which restrict their applicability. Therefore, we introduce a highlight mask-guided adaptive residual network (HMGARN). HMGARN comprises three main components: detection-net, adaptive-removal network (AR-Net), and reconstruct-net. Specifically, detection-net can accurately predict a highlight mask from a single RGB image. The predicted highlight mask is then input into AR-Net, which adaptively guides the model to remove specular highlights and estimate a highlight-free image. Subsequently, reconstruct-net progressively refines this result, removes any residual specular highlights, and constructs the final high-quality highlight-free image. We evaluated our method on the public dataset (SHIQ) and confirmed its superiority through comparative experimental results.
Citations: 0
Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-24 DOI: 10.1002/cav.2256
Shibiao Xu, Miao Hua, Jiguang Zhang, Zhaohui Zhang, Xiaopeng Zhang
Abstract: Face reenactment technology is widely applied in various applications. However, the reconstruction effects of existing methods are often not realistic enough. Thus, this paper proposes a progressive face reenactment method. First, to make full use of the key information, we propose adaptive convolution and instance normalization to encode the key information into all learnable parameters in the network, including the weights of the convolution kernels and the means and variances in the normalization layer. Second, we present continuous transitive facial expression generation according to all the weights of the network generated by the key points, resulting in continuous change of the image generated by the network. Third, in contrast to classical convolution, we apply a combination of depth-wise and point-wise convolutions, which greatly reduces the number of weights and improves training efficiency. Finally, we extend the proposed face reenactment method to the face editing application. Comprehensive experiments demonstrate the effectiveness of the proposed method, which can generate a clearer and more realistic face of any person and is more generic and applicable than other methods.
Citations: 0
SocialVis: Dynamic social visualization in dense scenes via real-time multi-object tracking and proximity graph construction
IF 1.1, CAS Tier 4 (Computer Science)
Computer Animation and Virtual Worlds Pub Date: 2024-05-24 DOI: 10.1002/cav.2272
Bowen Li, Wei Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang
Abstract: To monitor and assess social dynamics and risks at large gatherings, we propose "SocialVis," a comprehensive monitoring system based on multi-object tracking and graph analysis techniques. SocialVis includes a camera detection system that operates in two modes: a real-time mode, which enables participants to track and identify close contacts instantly, and an offline mode that allows for more comprehensive post-event analysis. This dual functionality not only aids in preventing mass gatherings or overcrowding by enabling the issuance of alerts and recommendations to organizers, but also allows for the generation of proximity-based graphs that map participant interactions, thereby enhancing the understanding of social dynamics and identifying potential high-risk areas. It also provides tools for analyzing pedestrian flow statistics and visualizing paths, offering valuable insights into crowd density and interaction patterns. To enhance system performance, we designed the SocialDetect algorithm in conjunction with the BYTE tracking algorithm. This combination is specifically engineered to improve detection accuracy and minimize ID switches among tracked objects, leveraging the strengths of both algorithms. Experiments on both public and real-world datasets validate that SocialVis outperforms existing methods, showing a 1.2% improvement in detection accuracy and a 45.4% reduction in ID switches in dense pedestrian scenarios.
Citations: 0
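The proximity-based graphs mentioned in the abstract can be illustrated with a toy construction: given tracked 2D positions per ID, link any two people closer than a contact threshold. This is a generic sketch under assumed inputs, not the SocialVis implementation:

```python
import itertools

def proximity_graph(tracks, threshold):
    """Build an undirected close-contact graph: an edge links two track IDs
    whenever their positions lie within `threshold` (same units as positions)."""
    edges = set()
    for (id_a, pa), (id_b, pb) in itertools.combinations(tracks.items(), 2):
        dx, dy = pa[0] - pb[0], pa[1] - pb[1]
        if (dx * dx + dy * dy) ** 0.5 <= threshold:
            edges.add(frozenset((id_a, id_b)))
    return edges

# Hypothetical tracked positions (metres) for three pedestrians.
tracks = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (10.0, 10.0)}
print(proximity_graph(tracks, threshold=1.5))  # only IDs 1 and 2 are in contact
```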