Computer Animation and Virtual Worlds: Latest Publications

KDPM: Knowledge-driven dynamic perception model for evacuation scene simulation
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-29 · DOI: 10.1002/cav.2279
Kecheng Tang, Jiawen Zhang, Yuji Shen, Chen Li, Gaoqi He
Abstract: Evacuation scene simulation has become an important approach to public safety decision-making. Although existing research has considered various factors, including social forces and panic emotions, it largely overlooks how complex environmental factors affect human psychology and behavior. The main idea of this paper is to model complex evacuation environmental factors from the perspective of knowledge and to explore pedestrians' emergency response mechanisms to that knowledge. To this end, we propose a knowledge-driven dynamic perception model (KDPM) for evacuation scene simulation. The model combines three modules: knowledge dissemination, dynamic scene perception, and stress response. Both scenario knowledge and hazard-source knowledge are extracted and expressed. An improved intelligent-agent perception model is designed by adopting position determination. Moreover, a general adaptation syndrome (GAS) model is presented for the first time by introducing a modified stress system model. Experimental results show that the proposed model agrees more closely with real data sets.
Citations: 0
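The GAS module lends itself to a compact worked example. Below is a minimal sketch, assuming a simple first-order stress dynamic, of how a general-adaptation-syndrome-style stress response might be stepped per agent; every class name, parameter, and constant here is an illustrative assumption, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of a GAS-style stress update for an
# evacuating agent: stress rises with perceived hazard (alarm), saturates
# (resistance), and the agent's capacity to respond decays once reserves are
# spent (exhaustion). All constants are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GasStressAgent:
    stress: float = 0.0      # current stress level in [0, 1]
    reserve: float = 1.0     # adaptation reserve; depletes under sustained stress
    gain: float = 0.8        # how strongly perceived hazard raises stress
    recovery: float = 0.05   # passive stress decay per step

    def step(self, hazard_intensity: float, dt: float = 0.1) -> float:
        # Alarm/resistance: stress is driven toward the hazard level, scaled by
        # the remaining adaptation reserve (a depleted agent responds weakly).
        drive = self.gain * hazard_intensity * self.reserve
        self.stress += (drive - self.recovery * self.stress) * dt
        self.stress = min(max(self.stress, 0.0), 1.0)
        # Exhaustion: sustained high stress burns adaptation reserve.
        self.reserve = max(self.reserve - 0.1 * self.stress * dt, 0.0)
        return self.stress

agent = GasStressAgent()
for t in range(50):
    agent.step(hazard_intensity=0.9)   # agent stands near a hazard source
print(f"stress={agent.stress:.2f}, reserve={agent.reserve:.2f}")
```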
Crowd evacuation simulation based on hierarchical agent model and physics-based character control
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-27 · DOI: 10.1002/cav.2263
Jianming Ye, Zhen Liu, Tingting Liu, Yanhui Wu, Yuanyi Wang
Abstract: Crowd evacuation has gained increasing attention in recent years. Agent-based methods have shown a superior capability to simulate complex behaviors in crowd evacuation simulation. For agent modeling, however, most existing methods consider only the decision process and ignore detailed physical motion. In this article, we propose a hierarchical framework for crowd evacuation simulation that combines an agent decision model with an agent motion model. In the decision model, we integrate emotional contagion and scene information to determine global path planning and local collision avoidance. In the motion model, we introduce a physics-based character control method and control agent motion with deep reinforcement learning. Based on the decision strategy, the decision model uses a signal to control agent motion in the motion model. Compared with existing methods, our framework can simulate physical interactions between agents and the environment. The results of the crowd evacuation simulation demonstrate that our framework can simulate crowd evacuation with physical fidelity.
Citations: 0
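The decision/motion split described in the abstract can be illustrated with a small sketch. The code below stubs out the deep-reinforcement-learning motion controller; all names and the avoidance rule are hypothetical, not the authors' implementation.

```python
# A minimal sketch (assumed structure, not the paper's code) of the two-layer
# split: a decision layer produces a steering signal, and a separate motion
# layer turns that signal into character control. The DRL policy is stubbed.
import numpy as np

class DecisionLayer:
    """Global path planning + local collision avoidance, heavily simplified."""
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)

    def steer(self, pos, neighbors, panic=0.0):
        to_goal = self.goal - pos
        desired = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        # Local avoidance: push away from nearby agents; panic shrinking the
        # comfort radius is an assumption, not a rule from the paper.
        for n in neighbors:
            d = pos - n
            dist = np.linalg.norm(d)
            if dist < 1.0:
                desired += (1.0 - panic) * d / (dist**2 + 1e-9)
        return desired / (np.linalg.norm(desired) + 1e-9)

class MotionLayer:
    """Stand-in for the physics-based character controller (a DRL policy)."""
    def act(self, steering_signal):
        # A trained policy would map the signal to joint-level control; here
        # we simply pass it through as a root-velocity command.
        return steering_signal

decision, motion = DecisionLayer(goal=[10.0, 0.0]), MotionLayer()
pos = np.array([0.0, 0.0])
cmd = motion.act(decision.steer(pos, neighbors=[np.array([0.5, 0.2])]))
print("velocity command:", cmd)
```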
Two-particle debris flow simulation based on SPH
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-27 · DOI: 10.1002/cav.2261
Jiaxiu Zhang, Meng Yang, Xiaomin Li, Qun'ou Jiang, Heng Zhang, Weiliang Meng
Abstract: Debris flow is a highly destructive natural disaster that demands accurate simulation and prediction. Existing simulation methods tend to be overly simplified, neglecting three-dimensional complexity and multiphase fluid interactions, and they lack comprehensive consideration of soil conditions. We propose a novel two-particle debris flow simulation method based on smoothed particle hydrodynamics (SPH) for enhanced accuracy. Our method employs a two-particle model that couples debris flow dynamics with SPH to simulate fluid-solid interaction effectively; it accounts for various soil factors, divides the terrain into variable and fixed areas, and incorporates soil impact factors for realistic simulation. By dynamically updating particle positions, reconstructing surfaces, and employing GPU and hash-lookup acceleration, we achieve accurate simulation with significantly improved efficiency. Experimental results validate the effectiveness of our method across different conditions, making it valuable for debris flow risk assessment in natural disaster management.
Citations: 0
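As background to the SPH machinery the abstract relies on, here is a minimal sketch of a hash-grid neighbor lookup feeding a standard poly6 density estimate; the kernel choice and constants are common SPH defaults, not values from the paper.

```python
# A minimal sketch of two ingredients named in the abstract: SPH density
# estimation and hash-grid neighbor lookup. Poly6 kernel and constants are
# standard SPH defaults, assumed rather than taken from the paper.
import numpy as np
from collections import defaultdict

H = 0.1                                   # smoothing length
POLY6 = 315.0 / (64.0 * np.pi * H**9)     # 3D poly6 normalization
MASS = 0.02                               # per-particle mass

def build_hash_grid(positions, cell=H):
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // cell).astype(int))].append(i)
    return grid

def neighbors(grid, positions, i, cell=H):
    # Scan the particle's cell and the 26 surrounding cells.
    c = (positions[i] // cell).astype(int)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                yield from grid.get((c[0] + dx, c[1] + dy, c[2] + dz), [])

def density(positions, i, grid):
    rho = 0.0
    for j in neighbors(grid, positions, i):
        r2 = np.sum((positions[i] - positions[j]) ** 2)
        if r2 < H * H:
            rho += MASS * POLY6 * (H * H - r2) ** 3   # poly6 kernel
    return rho

pts = np.random.rand(500, 3) * 0.5
grid = build_hash_grid(pts)
print("density at particle 0:", density(pts, 0, grid))
```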
Multiagent trajectory prediction with global-local scene-enhanced social interaction graph network
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-27 · DOI: 10.1002/cav.2237
Xuanqi Lin, Yong Zhang, Shun Wang, Xinglin Piao, Baocai Yin
Abstract: Trajectory prediction is essential for intelligent autonomous systems such as autonomous driving, behavior analysis, and service robotics. Deep learning has emerged as the predominant technique owing to its superior capability to model trajectory data. However, deep learning-based models struggle to use scene information effectively and to model agent interactions accurately, largely because of the complexity and uncertainty of real-world scenarios. To mitigate these challenges, this study presents a novel multiagent trajectory prediction model, the global-local scene-enhanced social interaction graph network (GLSESIGN), which incorporates two pivotal strategies: global-local scene information utilization and a social adaptive attention graph network. The model hierarchically learns scene information relevant to multiple agents, enhancing its understanding of complex scenes, and adaptively captures social interactions, improving adaptability to diverse interaction patterns through sparse graph structures. It thereby not only improves the understanding of complex scenes but also accurately predicts the future trajectories of multiple agents by flexibly modeling intricate interactions. Experimental validation on public datasets substantiates the efficacy of the proposed model. This research offers a novel approach to the complexity and uncertainty of multiagent trajectory prediction, providing more accurate predictive support in practical applications.
Citations: 0
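The "social adaptive attention ... sparse graph structures" idea admits a compact sketch: score pairwise agent interactions with attention, keep only the top-k links per agent, and aggregate neighbor features. This is an assumed reading, not the GLSESIGN code; dimensions and the top-k rule are placeholders.

```python
# A minimal sketch (assumptions throughout) of sparse social attention:
# pairwise scores between agents, top-k sparsification, then aggregation.
import torch
import torch.nn as nn

class SparseSocialAttention(nn.Module):
    def __init__(self, dim=32, keep=4):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.keep = keep                      # neighbors kept per agent

    def forward(self, x):                     # x: (num_agents, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.t() / x.shape[-1] ** 0.5       # (N, N) pairwise scores
        # Sparsify: keep the top-k scores per row, mask the rest to -inf so
        # softmax zeroes them out (the sparse interaction graph).
        topk = torch.topk(scores, k=min(self.keep, x.shape[0]), dim=-1)
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, topk.indices, topk.values)
        attn = torch.softmax(mask, dim=-1)
        return attn @ v                       # socially aggregated features

agents = torch.randn(6, 32)                   # 6 agents, 32-d motion features
print(SparseSocialAttention()(agents).shape)  # torch.Size([6, 32])
```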
Highlight mask-guided adaptive residual network for single image highlight detection and removal
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-27 · DOI: 10.1002/cav.2271
Shuaibin Wang, Li Li, Juan Wang, Tao Peng, Zhenwei Li
Abstract: Specular highlight detection and removal is a challenging task. Although various methods exist for removing specular highlights, they often fail to preserve the color and texture details of objects after removal because highlights are bright and nonuniformly distributed. Furthermore, when processing scenes with complex highlight properties, existing methods frequently hit performance bottlenecks that restrict their applicability. We therefore introduce a highlight mask-guided adaptive residual network (HMGARN). HMGARN comprises three main components: a detection-net, an adaptive-removal network (AR-Net), and a reconstruct-net. Specifically, the detection-net accurately predicts a highlight mask from a single RGB image. The predicted mask is fed to AR-Net, which adaptively guides the model to remove specular highlights and estimate a highlight-free image. The reconstruct-net then progressively refines this result, removing any residual highlights to construct the final high-quality, highlight-free image. We evaluated our method on the public SHIQ dataset and confirmed its superiority through comparative experiments.
Citations: 0
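The three-stage wiring (detection-net, AR-Net, reconstruct-net) can be sketched as below, with tiny convolutional stubs standing in for the real sub-networks; only the mask-then-refine pattern comes from the abstract, and the mask-concatenation conditioning is an assumption.

```python
# A minimal sketch of the three-stage pipeline the abstract describes
# (detection -> mask-guided removal -> refinement). The tiny conv stacks are
# placeholders; the real HMGARN blocks are far more elaborate.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class HighlightPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.detect = nn.Sequential(conv_block(3, 16), nn.Conv2d(16, 1, 1))
        self.remove = nn.Sequential(conv_block(4, 16), nn.Conv2d(16, 3, 1))
        self.refine = nn.Sequential(conv_block(6, 16), nn.Conv2d(16, 3, 1))

    def forward(self, rgb):
        mask = torch.sigmoid(self.detect(rgb))          # highlight probability map
        # Mask guidance (assumed form): the predicted mask is concatenated as
        # an extra channel so the removal net knows where highlights sit.
        coarse = self.remove(torch.cat([rgb, mask], dim=1))
        # Progressive refinement conditioned on the coarse result.
        return self.refine(torch.cat([rgb, coarse], dim=1)), mask

img = torch.rand(1, 3, 64, 64)
out, mask = HighlightPipeline()(img)
print(out.shape, mask.shape)   # (1, 3, 64, 64) (1, 1, 64, 64)
```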
Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-24 · DOI: 10.1002/cav.2256
Shibiao Xu, Miao Hua, Jiguang Zhang, Zhaohui Zhang, Xiaopeng Zhang
Abstract: Face reenactment technology is widely applied, yet the reconstructions produced by existing methods are often not realistic enough. This paper therefore proposes a progressive face reenactment method. First, to make full use of key-point information, we propose adaptive convolution and instance normalization that encode the key points into all learnable parameters of the network, including the convolution kernel weights and the means and variances of the normalization layers. Second, we present continuous transitive facial expression generation: because all network weights are generated from the key points, the generated image changes continuously with them. Third, in contrast to classical convolution, we combine depth-wise and point-wise convolutions, which greatly reduces the number of weights and improves training efficiency. Finally, we extend the proposed face reenactment method to face editing. Comprehensive experiments demonstrate the effectiveness of the proposed method, which generates clearer and more realistic faces for any person and is more general and applicable than other methods.
Citations: 0
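The central mechanism, key points generating normalization parameters, resembles adaptive instance normalization. Below is a minimal sketch under that assumption: an MLP maps flattened landmarks to per-channel scale and shift applied after instance normalization. Layer sizes and the 68-landmark layout are illustrative, not the paper's.

```python
# A minimal sketch of key-point-conditioned instance normalization: facial
# landmarks are mapped to the affine parameters of an InstanceNorm layer, so
# landmark geometry steers the generator. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class KeypointAdaIN(nn.Module):
    def __init__(self, channels=64, num_keypoints=68):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # MLP turns flattened (x, y) key points into per-channel gamma/beta.
        self.to_affine = nn.Sequential(
            nn.Linear(num_keypoints * 2, 128), nn.ReLU(),
            nn.Linear(128, channels * 2),
        )

    def forward(self, feat, keypoints):            # feat: (B, C, H, W)
        gamma, beta = self.to_affine(keypoints.flatten(1)).chunk(2, dim=1)
        normed = self.norm(feat)
        return gamma[..., None, None] * normed + beta[..., None, None]

feat = torch.randn(2, 64, 32, 32)
kpts = torch.rand(2, 68, 2)                        # 68 facial landmarks
print(KeypointAdaIN()(feat, kpts).shape)           # torch.Size([2, 64, 32, 32])
```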
SocialVis: Dynamic social visualization in dense scenes via real-time multi-object tracking and proximity graph construction
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-24 · DOI: 10.1002/cav.2272
Bowen Li, Wei Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang
Abstract: To monitor and assess social dynamics and risks at large gatherings, we propose SocialVis, a comprehensive monitoring system based on multi-object tracking and graph analysis. SocialVis includes a camera detection system that operates in two modes: a real-time mode, which lets participants track and identify close contacts instantly, and an offline mode for more comprehensive post-event analysis. This dual functionality not only helps prevent mass gatherings and overcrowding by enabling alerts and recommendations to organizers, but also generates proximity-based graphs that map participant interactions, enhancing the understanding of social dynamics and identifying potential high-risk areas. The system also provides tools for analyzing pedestrian flow statistics and visualizing paths, offering valuable insight into crowd density and interaction patterns. To enhance performance, we designed the SocialDetect algorithm to work in conjunction with the BYTE tracking algorithm; the combination improves detection accuracy and minimizes ID switches among tracked objects by leveraging the strengths of both algorithms. Experiments on public and real-world datasets validate that SocialVis outperforms existing methods, showing a 1.2% improvement in detection accuracy and a 45.4% reduction in ID switches in dense pedestrian scenarios.
Citations: 0
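The proximity-graph construction step admits a short sketch: per frame, link any two tracked IDs closer than a distance threshold and accumulate contact durations. The 1.5 m threshold and the data layout are assumptions, not SocialVis parameters.

```python
# A minimal sketch of proximity-graph construction from tracker output:
# per-frame positions keyed by track ID become edges between close pairs,
# and contact durations accumulate across frames.
import itertools
import numpy as np
from collections import defaultdict

def proximity_edges(tracks, threshold=1.5):
    """tracks: dict {track_id: (x, y)} for one frame -> set of contact pairs."""
    edges = set()
    for (i, p), (j, q) in itertools.combinations(tracks.items(), 2):
        if np.hypot(p[0] - q[0], p[1] - q[1]) < threshold:
            edges.add((min(i, j), max(i, j)))
    return edges

contact_time = defaultdict(int)          # frames each pair spent in contact
frames = [
    {1: (0.0, 0.0), 2: (1.0, 0.2), 3: (8.0, 8.0)},
    {1: (0.2, 0.1), 2: (1.1, 0.3), 3: (2.0, 0.5)},
]
for frame in frames:
    for pair in proximity_edges(frame):
        contact_time[pair] += 1
print(dict(contact_time))                # e.g. {(1, 2): 2, (1, 3): 1, (2, 3): 1}
```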
DSANet: A lightweight hybrid network for human action recognition in virtual sports
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-24 · DOI: 10.1002/cav.2274
Zhiyong Xiao, Feng Yu, Li Liu, Tao Peng, Xinrong Hu, Minghua Jiang
Abstract: Human activity recognition (HAR) has significant potential in virtual sports applications. However, current HAR networks often prioritize accuracy at the expense of practical requirements, yielding networks with large parameter counts and high computational complexity, which makes real-time, efficient recognition difficult. This paper proposes DSANet, a hybrid lightweight network designed to address real-time performance and algorithmic complexity. The network uses a multi-scale depthwise separable convolution (Multi-scale DWCNN) module to extract spatial information and a multi-layer gated recurrent unit (Multi-layer GRU) module for temporal feature extraction. It also incorporates RCSFA, an improved channel-spatial attention module, to enhance feature extraction. By leveraging channel, spatial, and temporal information, the network achieves high accuracy with a low parameter count. Evaluations on the UCIHAR, WISDM, and PAMAP2 datasets show that, compared with state-of-the-art networks, DSANet not only reduces parameter counts but also reaches accuracies of 97.55%, 98.99%, and 98.67%, respectively. This research provides valuable insights for virtual sports and presents a novel network for real-time activity recognition on embedded devices.
Citations: 0
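The two named building blocks, depthwise separable convolution and a multi-layer GRU, compose naturally into a tiny HAR network; the sketch below uses placeholder channel counts and window sizes, not DSANet's actual configuration.

```python
# A minimal sketch of the two building blocks the abstract names: a depthwise
# separable 1-D convolution over a sensor window, followed by a multi-layer
# GRU and a classification head. Sizes are placeholders, not DSANet's.
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, cin, cout, k=5):
        super().__init__()
        self.depthwise = nn.Conv1d(cin, cin, k, padding=k // 2, groups=cin)
        self.pointwise = nn.Conv1d(cin, cout, 1)   # mixes channels cheaply

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

class TinyHARNet(nn.Module):
    def __init__(self, sensors=9, classes=6):
        super().__init__()
        self.conv = DepthwiseSeparableConv1d(sensors, 32)
        self.gru = nn.GRU(32, 64, num_layers=2, batch_first=True)
        self.head = nn.Linear(64, classes)

    def forward(self, x):                 # x: (batch, sensors, time)
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, channels)
        _, last = self.gru(h)             # last: (num_layers, batch, hidden)
        return self.head(last[-1])        # logits from the final GRU layer

window = torch.randn(8, 9, 128)            # 8 windows, 9 IMU channels, 128 steps
print(TinyHARNet()(window).shape)          # torch.Size([8, 6])
```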
FrseGAN: Free-style editable facial makeup transfer based on GAN combined with transformer
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-24 · DOI: 10.1002/cav.2235
Weifeng Xu, Pengjie Wang, Xiaosong Yang
Abstract: Makeup in real life varies widely and is personalized, which poses a key challenge for makeup transfer. Most previous techniques divide the face into distinct regions for color transfer, frequently neglecting details such as eyeshadow and facial contours. Given the success of Transformers across visual tasks, we believe this architecture holds great potential for handling differences in pose, expression, and occlusion. We therefore propose a novel pipeline that combines a well-designed convolutional neural network with a Transformer, leveraging the advantages of both for high-quality facial makeup transfer. This enables hierarchical extraction of local and global facial features, facilitating the encoding of facial attributes into pyramid feature maps. Furthermore, a low-frequency information fusion module addresses large pose and expression differences between the source and reference faces by extracting makeup features from the reference and adapting them to the source. Experiments demonstrate that our method produces makeup faces that are visually more detailed and realistic, yielding superior results.
Citations: 0
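To make the low-frequency fusion idea concrete, here is a deliberately naive sketch: treat a heavy blur as the low-frequency (color and shading, i.e., makeup) component, take it from the reference, and keep the source's high-frequency structure. FrseGAN learns this fusion; the fixed blur here is only an illustration.

```python
# A naive sketch of low-/high-frequency makeup fusion: the reference supplies
# the low-frequency color component, the source keeps its high-frequency
# structure. FrseGAN's learned module replaces this fixed blur.
import torch
import torch.nn.functional as F

def low_pass(img, k=15):
    # Average pooling with stride 1 acts as a crude low-pass filter.
    return F.avg_pool2d(img, kernel_size=k, stride=1, padding=k // 2)

def naive_makeup_fusion(source, reference):
    src_high = source - low_pass(source)        # identity/structure detail
    ref_low = low_pass(reference)               # color and shading ("makeup")
    return (src_high + ref_low).clamp(0, 1)

source = torch.rand(1, 3, 128, 128)
reference = torch.rand(1, 3, 128, 128)
print(naive_makeup_fusion(source, reference).shape)   # (1, 3, 128, 128)
```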
GAN-Based Multi-Decomposition Photo Cartoonization
IF 1.1 · Zone 4, Computer Science
Computer Animation and Virtual Worlds · Pub Date: 2024-05-23 · DOI: 10.1002/cav.2248
Wenqing Zhao, Jianlin Zhu, Jin Huang, Ping Li, Bin Sheng
Abstract:
Background: Cartoon images play a vital role in film production, scientific and educational animation, video games, and other fields, and are a key visual form of artistic creation. Because hand-crafted cartoon images demand a great deal of time and effort from professional artists, automatically transforming real-world photographs into cartoon images of different styles is desirable. Although cartoon styles vary from artist to artist, cartoon images generally share distinctive characteristics: they are highly simplified and abstract, with clear edges, smooth color shading, and relatively simple textures. Existing image cartoonization methods, however, tend to suffer from two main problems during style transfer: (1) the generated images lack obvious cartoon-style textures; and (2) the generated images are prone to structural confusion, color artifacts, and loss of the original content. Striking a good balance between style transfer and content preservation thus remains a major challenge in image cartoonization.
Methods: We propose a GAN-based multi-attention mechanism for image cartoonization to address these issues. The method combines the residual blocks used to extract deep features in the generator with an attention mechanism; adaptive feature correction by the attention module further strengthens the generative model's perception of cartoon images and improves the cartoon characteristics of the generated results. We also introduce an attention mechanism in the convolution blocks of the discriminator to further reduce the visual-quality degradation caused by the style transfer process. By introducing attention into both the generator and the discriminator of the generative adversarial network, our method produces images with clear cartoon-style features while effectively improving visual quality.
Results: Extensive quantitative, qualitative, and ablation experiments demonstrate the advantages of our method for image cartoonization and the contribution of each module.
Citations: 0
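The generator-side combination of residual blocks with attention can be sketched with a squeeze-and-excitation-style channel gate, one common choice of attention module; the paper's exact attention design may differ.

```python
# A minimal sketch of a residual block fitted with channel attention, the
# combination the Methods section describes for the generator. The SE-style
# gate used here is an assumed stand-in for the paper's attention module.
import torch
import torch.nn as nn

class AttentiveResBlock(nn.Module):
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: global pool -> bottleneck MLP -> per-channel gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.body(x)
        return x + h * self.attn(h)      # attention-gated residual

feat = torch.randn(1, 64, 56, 56)
print(AttentiveResBlock()(feat).shape)   # torch.Size([1, 64, 56, 56])
```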