Computer Animation and Virtual Worlds: Latest Articles

Body Part Segmentation of Anime Characters
IF 0.9 | CAS Quartile 4 | Computer Science
Computer Animation and Virtual Worlds Pub Date : 2024-12-17 DOI: 10.1002/cav.2295
Zhenhua Ou, Xueting Liu, Chengze Li, Zhenkun Wen, Ping Li, Zhijian Gao, Huisi Wu
Abstract: Semantic segmentation is an important approach to presenting the perceptual semantic understanding of an image and is of significant use in various applications. In particular, body part segmentation is designed to segment the body parts of human characters to assist different editing tasks, such as style editing, pose transfer, and animation production. Since segmentation requires pixel-level precision in semantic labeling, classic heuristics-based methods generally have unstable performance. With the deployment of deep learning, a great step has been taken in segmenting the body parts of human characters in natural photographs. However, existing models are trained purely on natural photographs and generally produce incorrect segmentation results when applied to anime character images, due to the large visual gap between training and testing data. In this article, we present a novel approach to body part segmentation of cartoon characters via a pose-based graph-cut formulation. We demonstrate the use of the acquired body part segmentation map in various image editing tasks, including conditional generation, style manipulation, pose transfer, and video-to-anime.
Citations: 0
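The pose-based graph-cut formulation above casts segmentation as minimizing an energy that combines per-pixel label costs (a data term, here standing in for a pose prior) with smoothness penalties between disagreeing neighbors. A minimal sketch of that energy on a four-pixel strip follows; the costs are invented for illustration, and a brute-force search replaces the real min-cut solver:

```python
from itertools import product

# Toy energy for labeling pixels as "background" (0) or "torso" (1).
# unary[i][l] is the cost of giving pixel i label l; in the paper this
# would come from a pose prior, here the numbers are made up.
unary = [(0.1, 0.9), (0.2, 0.8), (0.7, 0.3), (0.9, 0.1)]
pairwise = 0.5  # smoothness penalty when adjacent pixels disagree

def energy(labels):
    u = sum(unary[i][l] for i, l in enumerate(labels))
    p = sum(pairwise for a, b in zip(labels, labels[1:]) if a != b)
    return u + p

# Exhaustive search over all 2^4 labelings (a real system uses min-cut).
best = min(product((0, 1), repeat=4), key=energy)
print(best)  # the prior pulls the left pixels to background, right to torso
```

The smoothness term is what makes the result a coherent region rather than a per-pixel threshold; raising `pairwise` merges noisy labels into larger blocks.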
Fast and Incremental 3D Model Renewal for Urban Scenes With Appearance Changes
Computer Animation and Virtual Worlds Pub Date : 2024-12-11 DOI: 10.1002/cav.70004
Yuan Xiong, Zhong Zhou
Abstract: Urban 3D models with high-resolution details are the basis of various mixed reality and geographic information systems, so fast and accurate urban reconstruction from aerial photographs has attracted intense attention. Existing methods exploit multi-view geometry information from landscape patterns with similar illumination conditions and terrain appearance. In practice, urban models become obsolete over time due to human activities, yet mainstream reconstruction pipelines rebuild the whole scene even when most of it remains unchanged. This paper proposes a novel wrapping-based incremental modeling framework that reuses existing models and efficiently renews them with new meshes. The paper presents a pose optimization method with illumination-based augmentation and virtual bundle adjustment, along with a high-performance wrapping-based meshing method for fast reconstruction. Experimental results show that the proposed method achieves higher performance and quality than state-of-the-art methods.
Citations: 0
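The incremental-renewal idea above, reuse what is unchanged and rebuild only what changed, can be sketched as tile-wise change detection between an old and a new observation of the scene. The grid, values, tile size, and threshold below are all illustrative, not the paper's method:

```python
# Toy change detection: mark which tiles of a scene changed enough to re-mesh.
old = [[0] * 8 for _ in range(8)]
new = [[0] * 8 for _ in range(8)]
for r in range(4, 8):
    for c in range(4, 8):
        new[r][c] = 9  # a "new building" appears in the bottom-right quadrant

def changed_tiles(a, b, tile=4, thresh=1.0):
    """Return top-left corners of tiles whose mean absolute change > thresh."""
    out = []
    n = len(a)
    for r0 in range(0, n, tile):
        for c0 in range(0, n, tile):
            diff = sum(abs(a[r][c] - b[r][c])
                       for r in range(r0, r0 + tile)
                       for c in range(c0, c0 + tile)) / tile ** 2
            if diff > thresh:
                out.append((r0, c0))
    return out

print(changed_tiles(old, new))  # only the bottom-right tile needs rebuilding
```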
Diverse Motions and Responses in Crowd Simulation
Computer Animation and Virtual Worlds Pub Date : 2024-11-26 DOI: 10.1002/cav.70002
Yiwen Ma, Tingting Liu, Zhen Liu
Abstract: A challenge in crowd simulation is generating diverse pedestrian motions in virtual environments. There is now greater emphasis on the diversity and authenticity of pedestrian movements, whereas most traditional models focus primarily on collision avoidance and motion continuity. Recent studies have enhanced realism through data-driven approaches that exploit pedestrian movement patterns from real data for trajectory prediction, but they do not take the body-part motions of pedestrians into account. Differing from these approaches, we utilize learning-based character motion and physics animation to enhance the diversity of pedestrian motions in crowd simulation. The proposed method is realized by a novel framework that deeply integrates motion synthesis and physics animation with crowd simulation. The framework consists of three main components: a learning-based motion generator responsible for generating diverse character motions; a hybrid simulation that ensures the physical realism of pedestrian motions; and a velocity-based interface that assists in integrating navigation algorithms with the motion generator. Experiments verify the effectiveness of the proposed method in different aspects, and the visual results demonstrate the feasibility of our approach.
Citations: 0
A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters
Computer Animation and Virtual Worlds Pub Date : 2024-11-19 DOI: 10.1002/cav.70001
ChangAn Zhu, Chris Joslin
Abstract: 3D facial motion retargeting can capture and recreate the nuances of human facial motions and speed up the time-consuming 3D facial animation process. However, the retargeting pipeline is limited in reflecting the semantic information of a facial motion (i.e., its meaning and intensity), especially when applied to nonhuman characters. Retargeting quality relies heavily on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline that aims to provide fast and semantically accurate retargeting results for diverse characters. The framework comprises a target face parameterization module based on facial anatomy and a compatible source motion interpretation module. Quantitative and qualitative evaluations show that the proposed pipeline can naturally recreate the expressions performed by a motion-capture subject with equivalent meanings and intensities, and that this semantic accuracy extends to the faces of nonhuman characters without labor-intensive preparation.
Citations: 0
Enhancing Front-End Security: Protecting User Data and Privacy in Web Applications
Computer Animation and Virtual Worlds Pub Date : 2024-11-13 DOI: 10.1002/cav.70003
Oleksandr Tkachenko, Vadim Goncharov, Przemysław Jatkiewicz
Abstract: Research on this subject remains relevant in light of the rapid development of technology and the emergence of new cybersecurity threats, which require constant updating of knowledge and protection methods. The purpose of the study is to identify effective front-end security methods and technologies that help protect user data and privacy in web applications and sites. A methodology is developed that defines the steps and processes for effective front-end security and user data protection. The research identifies the primary security threats, including cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection, and evaluates existing front-end security measures such as Content Security Policy (CSP), HTTPS, and authentication and authorization mechanisms. The findings highlight the effectiveness of these measures in mitigating security risks, providing a clear assessment of their advantages and limitations. Key recommendations for developers include the integration of modern security protocols, regular updates, and comprehensive security training. This study offers practical insights for improving front-end security and user data protection in an evolving digital landscape.
Citations: 0
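Two of the concrete defenses the abstract evaluates, output escaping against XSS and a Content-Security-Policy header, fit in a few lines. The policy directives and header values below are illustrative choices, not recommendations taken from the paper:

```python
import html

# Escaping untrusted input before it reaches the DOM is the basic defense
# against stored and reflected XSS.
user_input = '<script>alert("xss")</script>'
safe = html.escape(user_input)  # also escapes quotes by default
print(safe)

# A restrictive CSP blocks inline scripts, a common XSS vector; HSTS forces
# HTTPS on return visits. Values here are an illustrative starting point.
headers = {
    "Content-Security-Policy":
        "default-src 'self'; script-src 'self'; object-src 'none'",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}
```

Note that escaping handles what the server emits, while CSP limits what the browser will execute; the two are complementary, not interchangeable.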
Virtual Roaming of Cultural Heritage Based on Image Processing
Computer Animation and Virtual Worlds Pub Date : 2024-11-10 DOI: 10.1002/cav.70000
Junzhe Chen, Xing She, Yuanxin Fan, Wenwen Shao
Abstract: With the digital protection and development of cultural heritage as a focus, an analysis of trends in cultural heritage digitization reveals the importance of digital technology in this field, as demonstrated by the application of virtual reality (VR) to the protection and development of the Lingjiatan site. The implementation of the Lingjiatan roaming system involves sequential steps: image acquisition, image stitching, and roaming system production. A user test was conducted to evaluate the usability and user experience of the system. The results show that the system operates normally, with smooth interactive functions that allow users to tour the Lingjiatan site virtually and learn about its culture in the virtual environment. This study further explores the system's potential for site preservation and development, and its role in the integration of cultural heritage and tourism.
Citations: 0
PainterAR: A Self-Painting AR Interface for Mobile Devices
Computer Animation and Virtual Worlds Pub Date : 2024-11-07 DOI: 10.1002/cav.2296
Yuan Ma, Yinghan Shi, Lizhi Zhao, Xuequan Lu, Been-Lirn Duh, Meili Wang
Abstract: Painting is a complex and creative process that involves various drawing skills to create artworks; training artificial intelligence models to imitate this process is referred to as neural painting. To enable ordinary people to engage in the painting process, we propose PainterAR, a novel interface that renders any painting stroke by stroke in an immersive and realistic augmented reality (AR) environment. PainterAR is composed of two components: the neural painting model and the AR interface. In the neural painting model, unlike previous models, we introduce the Kullback–Leibler divergence to replace the Wasserstein distance used in the baseline paint transformer model, which solves the important problem of handling strokes of different scales (big or small) during painting. We then design an interactive AR interface that allows users to upload an image and watch the creation process of the neural painting model on a virtual drawing board. Experiments demonstrate that the paintings generated by our improved neural painting model are more realistic and vivid than those of previous neural painting models, and a user study shows that users prefer to control the painting process interactively in our AR environment.
Citations: 0
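The Kullback–Leibler divergence that PainterAR substitutes for the Wasserstein distance has a simple discrete form; the stroke-size bins and probabilities below are invented purely to show the computation, and do not come from the paper's training setup:

```python
import math

def kl(p, q):
    # D_KL(P || Q) for discrete distributions; assumes q[i] > 0 wherever p[i] > 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy distributions over three stroke-size bins (small, medium, large):
# how often the target painting uses each size vs. what the model predicts.
target    = [0.7, 0.2, 0.1]
predicted = [0.5, 0.3, 0.2]
print(round(kl(target, predicted), 4))  # → 0.0851
```

KL is asymmetric and zero only when the two distributions match, which makes it usable as a training loss that penalizes the model for mispredicting the mix of stroke scales.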
Decoupled Edge Physics Algorithms for Collaborative XR Simulations
Computer Animation and Virtual Worlds Pub Date : 2024-11-03 DOI: 10.1002/cav.2294
George Kokiadis, Antonis Protopsaltis, Michalis Morfiadakis, Nick Lydatakis, George Papagiannakis
Abstract: This work proposes a novel approach to transforming any modern game engine pipeline for optimized performance and enhanced user experiences in extended reality (XR) environments. Decoupling the physics engine from the game engine pipeline and using a client-server N−1 architecture creates a scalable solution that efficiently serves multiple graphics clients on head-mounted displays (HMDs) with a single physics engine on edge-cloud infrastructure. This approach ensures better synchronization in multiplayer scenarios without introducing overhead in single-player experiences, and maintains session continuity despite changes in user participation. Relocating the physics engine to an edge or cloud node reduces strain on local hardware, dedicating more resources to high-quality rendering and unlocking the full potential of untethered HMDs. We present four algorithms that decouple the physics engine, increasing frame rates and Quality of Experience (QoE) in VR simulations and supporting advanced interactions, numerous physics objects, and multiuser sessions with over 100 concurrent users. Incorporating a Geometric Algebra interpolator reduces inter-calls between the dissected parts, maintaining QoE and easing network stress. Experimental validation with more than 100 concurrent users, 10,000 physics objects, and softbody simulations confirms the technical viability of the proposed architecture, showcasing transformative capabilities for more immersive and collaborative XR applications without compromising performance.
Citations: 0
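Between server updates, a client in such a decoupled architecture has to interpolate object states locally so rendering stays smooth despite network latency. The paper uses a Geometric Algebra interpolator; plain linear interpolation stands in for it below, and the snapshot contents are invented:

```python
def lerp_state(s0, s1, t):
    """Interpolate between two server physics snapshots, t in [0, 1].

    Each snapshot maps an object id to a position tuple. A real client would
    also interpolate orientations (e.g., via the paper's GA interpolator).
    """
    return {oid: tuple(a + (b - a) * t for a, b in zip(s0[oid], s1[oid]))
            for oid in s0}

snap0 = {"ball": (0.0, 0.0, 0.0)}   # snapshot received at time T
snap1 = {"ball": (1.0, 2.0, 0.0)}   # snapshot received at time T + dt
print(lerp_state(snap0, snap1, 0.5))  # {'ball': (0.5, 1.0, 0.0)}
```

This is why the interpolator "reduces inter-calls between the dissected parts": the client can render many frames per physics snapshot instead of requesting state every frame.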
VTSIM: Attention-Based Recurrent Neural Network for Intersection Vehicle Trajectory Simulation
Computer Animation and Virtual Worlds Pub Date : 2024-11-03 DOI: 10.1002/cav.2298
Jingyao Liu, Tianlu Mao, Zhaoqi Wang
Abstract: Simulating vehicle trajectories at intersections is one of the challenging tasks in traffic simulation. Existing methods are often ineffective due to the complexity and diversity of lane topologies at intersections and the numerous interactions affecting vehicle motion. To address this issue, we propose a deep learning-based vehicle trajectory simulation method. First, we employ a vectorized representation to uniformly extract features from traffic elements such as pedestrians, vehicles, and lanes; by fusing all factors that influence vehicle motion, this representation makes our method suitable for a variety of intersections. Second, we propose a deep learning model with an attention network that dynamically extracts features from the vehicles' surroundings. To handle vehicles continuously entering and exiting the simulation scene, we employ an asynchronous recurrent neural network to extract temporal features. Comparative evaluations against existing rule-based and deep learning-based methods demonstrate our model's superior simulation accuracy. Furthermore, experiments on public datasets show that our model can simulate vehicle trajectories at urban intersections with different topologies, including topologies not present in the training dataset.
Citations: 0
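The attention network described above weights surrounding traffic elements by their relevance to the vehicle being simulated. A bare scaled dot-product attention sketch makes the mechanism concrete; the two-dimensional query/key/value features are invented, not VTSIM's learned embeddings:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over neighbor features.

    Each neighbor's value vector is weighted by the softmax of its
    key's similarity to the ego vehicle's query vector.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]

# The ego vehicle attends to two neighbors; the first matches its query
# direction, so its value dominates the output.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0], [0.0]])
print(out)
```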
Training Climbing Roses by Constrained Graph Search
Computer Animation and Virtual Worlds Pub Date : 2024-10-15 DOI: 10.1002/cav.2297
Wataru Umezawa, Tomohiko Mukai
Abstract: Cultivated climbing roses are skillfully shaped by manually arranging their stems against support walls to enhance their aesthetic appeal. This study introduces a procedural technique designed to replicate the branching pattern of climbing roses by simulating the manual training process. The central idea of the proposed approach is to conceptualize tree modeling as a constrained path-finding problem, with the primary goal of optimizing the stem structure to achieve the most impressive floral display. The method operates iteratively, generating multiple stems while applying an objective function in each iteration to maximize coverage of the support wall. Our approach produces a diverse range of tree forms using only a few parameters, eliminating the requirement for specialized knowledge of cultivation or plant ecology.
Citations: 0
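The iterative scheme above, adding one stem at a time while an objective function rewards wall coverage, reduces in miniature to greedy selection among candidate paths. The 3x3 wall and hand-picked candidate stems below are toy inputs, not the paper's constrained graph search:

```python
# Candidate stem paths on a 3x3 support wall, each a list of covered cells.
candidates = [
    [(0, 0), (1, 0), (2, 0)],   # left column
    [(0, 0), (1, 1), (2, 2)],   # diagonal
    [(0, 2), (1, 2), (2, 2)],   # right column
]

def train_stems(paths, n_stems):
    """Greedily pick stems that maximize newly covered wall cells."""
    covered, chosen = set(), []
    for _ in range(n_stems):
        best = max(paths, key=lambda p: len(set(p) - covered))
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

chosen, covered = train_stems(candidates, 2)
print(len(covered))  # the two chosen stems cover 6 of the 9 cells
```

After the left column is placed, the diagonal would add only two new cells while the right column adds three, so the greedy objective picks the right column, mirroring how each iteration in the paper favors stems that fill uncovered wall area.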