{"title":"A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters","authors":"ChangAn Zhu, Chris Joslin","doi":"10.1002/cav.70001","DOIUrl":"https://doi.org/10.1002/cav.70001","url":null,"abstract":"<p>3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions and speeding up the time-consuming 3D facial animation process. However, the facial motion retargeting pipeline is limited in reflecting the facial motion's semantic information (i.e., meaning and intensity), especially when applied to nonhuman characters. The retargeting quality heavily relies on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline aiming to provide fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on face anatomy and a compatible source motion interpretation module. From the quantitative and qualitative evaluations, we found that the proposed retargeting pipeline can naturally recreate the expressions performed by a motion capture subject in equivalent meanings and intensities, such semantic accuracy extends to the faces of nonhuman characters without labor-demanding preparations.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.70001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142674057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Front-End Security: Protecting User Data and Privacy in Web Applications","authors":"Oleksandr Tkachenko, Vadim Goncharov, Przemysław Jatkiewicz","doi":"10.1002/cav.70003","DOIUrl":"https://doi.org/10.1002/cav.70003","url":null,"abstract":"<div>\u0000 \u0000 <p>Conducting research on this subject remains relevant in light of the rapid development of technology and the emergence of new threats in cybersecurity, requiring constant updating of knowledge and protection methods. The purpose of the study is to identify effective front-end security methods and technologies that help ensure the protection of user data and their privacy when using web applications or sites. A methodology that defines the steps and processes for effective front-end security and user data protection is developed. The research identifies the primary security threats, including cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injections, and evaluates existing front-end security methods such as Content Security Policy (CSP), HTTPS, authentication, and authorization mechanisms. The findings highlight the effectiveness of these measures in mitigating security risks, providing a clear assessment of their advantages and limitations. Key recommendations for developers include the integration of modern security protocols, regular updates, and comprehensive security training. This study offers practical insights to improve front-end security and enhance user data protection in an evolving digital landscape.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Roaming of Cultural Heritage Based on Image Processing","authors":"Junzhe Chen, Xing She, Yuanxin Fan, Wenwen Shao","doi":"10.1002/cav.70000","DOIUrl":"https://doi.org/10.1002/cav.70000","url":null,"abstract":"<div>\u0000 \u0000 <p>With the digital protection and development of cultural heritage as a focus, an analysis of the trends in cultural heritage digitization reveals the importance of digital technology in this field, as demonstrated by the application of virtual reality (VR) to the protection and development of the Lingjiatan site. The implementation of the Lingjiatan roaming system involves sequential steps, including image acquisition, image splicing, and roaming system production. A user test was conducted to evaluate the usability and user experience of the system. The results show that the system operates normally, with smooth interactive functions that allow users to tour the Lingjiatan site virtually. Users can learn about Lingjiatan's culture from this virtual environment. This study further explores the system's potential for site preservation and development, and its role in the integration of cultural heritage and tourism.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PainterAR: A Self-Painting AR Interface for Mobile Devices","authors":"Yuan Ma, Yinghan Shi, Lizhi Zhao, Xuequan Lu, Been-Lirn Duh, Meili Wang","doi":"10.1002/cav.2296","DOIUrl":"https://doi.org/10.1002/cav.2296","url":null,"abstract":"<div>\u0000 \u0000 <p>Painting is a complex and creative process that involves the use of various drawing skills to create artworks. The concept of training artificial intelligence models to imitate this process is referred to as neural painting. To enable ordinary people to engage in the process of painting, we propose PainterAR, a novel interface that renders any paintings stroke-by-stroke in an immersive and realistic augmented reality (AR) environment. PainterAR is composed of two components: the neural painting model and the AR interface. Regarding the neural painting model, unlike previous models, we introduce the Kullback–Leibler divergence to replace the original Wasserstein distance existed in the baseline paint transformer model, which solves an important problem of encountering different scales of strokes (big or small) during painting. We then design an interactive AR interface, which allows users to upload an image and display the creation process of the neural painting model on the virtual drawing board. Experiments demonstrate that the paintings generated by our improved neural painting model are more realistic and vivid than previous neural painting models. The user study demonstrates that users prefer to control the painting process interactively in our AR environment.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoupled Edge Physics Algorithms for Collaborative XR Simulations","authors":"George Kokiadis, Antonis Protopsaltis, Michalis Morfiadakis, Nick Lydatakis, George Papagiannakis","doi":"10.1002/cav.2294","DOIUrl":"https://doi.org/10.1002/cav.2294","url":null,"abstract":"<div>\u0000 \u0000 <p>This work proposes a novel approach to transform any modern game engine pipeline, for optimized performance and enhanced user experiences in extended reality (XR) environments decoupling the physics engine from the game engine pipeline and using a client-server <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>N</mi>\u0000 <mo>−</mo>\u0000 <mn>1</mn>\u0000 </mrow>\u0000 <annotation>$$ N-1 $$</annotation>\u0000 </semantics></math> architecture creates a scalable solution, efficiently serving multiple graphics clients on head-mounted displays (HMDs) with a single physics engine on edge-cloud infrastructure. This approach ensures better synchronization in multiplayer scenarios without introducing overhead in single-player experiences, maintaining session continuity despite changes in user participation. Relocating the Physics Engine to an edge or cloud node reduces strain on local hardware, dedicating more resources to high-quality rendering and unlocking the full potential of untethered HMDs. We present four algorithms that decouple the physics engine, increasing frame rates and Quality of Experience (QoE) in VR simulations, supporting advanced interactions, numerous physics objects, and multiuser sessions with over 100 concurrent users. Incorporating a Geometric Algebra interpolator reduces inter-calls between dissected parts, maintaining QoE and easing network stress. Experimental validation, with more than 100 concurrent users, 10,000 physics objects, and softbody simulations, confirms the technical viability of the proposed architecture, showcasing transformative capabilities for more immersive and collaborative XR applications without compromising performance.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VTSIM: Attention-Based Recurrent Neural Network for Intersection Vehicle Trajectory Simulation","authors":"Jingyao Liu, Tianlu Mao, Zhaoqi Wang","doi":"10.1002/cav.2298","DOIUrl":"https://doi.org/10.1002/cav.2298","url":null,"abstract":"<div>\u0000 \u0000 <p>Simulating vehicle trajectories at intersections is one of the challenging tasks in traffic simulation. Existing methods are often ineffective due to the complexity and diversity of lane topologies at intersections, as well as the numerous interactions affecting vehicle motion. To address this issue, we propose a deep learning based vehicle trajectory simulation method. First, we employ a vectorized representation to uniformly extract features from traffic elements such as pedestrians, vehicles, and lanes. By fusing all factors that influence vehicle motion, this representation makes our method suitable for a variety of intersections. Second, we propose a deep learning model, which has an attention network to dynamically extract features from the surrounding environment of the vehicles. To address the issue of vehicles continuously entering and exiting the simulation scene, we employ an asynchronous recurrent neural network for the extraction of temporal features. Comparative evaluations against existing rule-based and deep learning-based methods demonstrate our model's superior simulation accuracy. Furthermore, experimental validation on public datasets demonstrates that our model can simulate vehicle trajectories among the urban intersections with different topologies including those not present in the training dataset.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training Climbing Roses by Constrained Graph Search","authors":"Wataru Umezawa, Tomohiko Mukai","doi":"10.1002/cav.2297","DOIUrl":"https://doi.org/10.1002/cav.2297","url":null,"abstract":"<p>Cultivated climbing roses are skillfully shaped by arranging their stems manually against support walls to enhance their aesthetic appeal. This study introduces a procedural technique designed to replicate the branching pattern of climbing roses, simulating the manual training process. The central idea of the proposed approach is the conceptualization of tree modeling as a constrained path-finding problem. The primary goal is to optimize the stem structure to achieve the most impressive floral display. The proposed method operates iteratively, generating multiple stems while applying the objective function in each iteration for maximizing coverage on the support wall. Our approach offers a diverse range of tree forms employing only a few parameters, which eliminates the requirement for specialized knowledge in cultivation or plant ecology.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2297","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142439041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acanthus ornament generation using layout subdivisions with parameterized motifs","authors":"Yuka Komeda, Wataru Umezawa, Tomohiko Mukai","doi":"10.1002/cav.2292","DOIUrl":"https://doi.org/10.1002/cav.2292","url":null,"abstract":"<p>Acanthus ornaments frequently adorn Western architectural designs. The placement of these motifs varies according to the decorative object, with some having motifs arranged in a grid pattern. In this study, we propose a procedural modeling system aimed at generating acanthus ornaments on a planar surface. The proposed approach initially establishes the layout of boundary grids for acanthus motifs by subdividing the target surface into nonuniform grids. Subsequently, the medial axis line of the acanthus motif is generated to optimally conform to each boundary grid, employing a parameterized representation of deformable motifs. The three-dimensional motif shapes are ultimately constructed from the medial axis, with the shape parameters adjusted to improve aesthetic appeal. The proposed system generates various acanthus ornaments through rule-based layout subdivisions, offering users the option to select their preferred design while adjusting the motif shapes with a few manual parameters.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 5","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2292","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human action recognition in immersive virtual reality based on multi-scale spatio-temporal attention network","authors":"Zhiyong Xiao, Yukun Chen, Xinlei Zhou, Mingwei He, Li Liu, Feng Yu, Minghua Jiang","doi":"10.1002/cav.2293","DOIUrl":"https://doi.org/10.1002/cav.2293","url":null,"abstract":"<p>Wearable human action recognition (HAR) has practical applications in daily life. However, traditional HAR methods solely focus on identifying user movements, lacking interactivity and user engagement. This paper proposes a novel immersive HAR method called MovPosVR. Virtual reality (VR) technology is employed to create realistic scenes and enhance the user experience. To improve the accuracy of user action recognition in immersive HAR, a multi-scale spatio-temporal attention network (MSSTANet) is proposed. The network combines the convolutional residual squeeze and excitation (CRSE) module with the multi-branch convolution and long short-term memory (MCLSTM) module to extract spatio-temporal features and automatically select relevant features from action signals. Additionally, a multi-head attention with shared linear mechanism (MHASLM) module is designed to facilitate information interaction, further enhancing feature extraction and improving accuracy. The MSSTANet network achieves superior performance, with accuracy rates of 99.33% and 98.83% on the publicly available WISDM and PAMPA2 datasets, respectively, surpassing state-of-the-art networks. Our method showcases the potential to display user actions and position information in a virtual world, enriching user experiences and interactions across diverse application scenarios.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 5","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142313218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knitted fabric simulation: A survey","authors":"Xinrong Hu, Meng Wang, Junping Liu, Jinxing Liang, Kai Yang, Ruiqi Luo, Fei Fang, Tao Peng","doi":"10.1002/cav.2262","DOIUrl":"https://doi.org/10.1002/cav.2262","url":null,"abstract":"<p>Knitted fabric simulation seeks to create lifelike virtual representations of various knitted items like sweaters and socks using mathematical models and advanced simulation techniques. Significant advancements have been achieved in this area. Visual simulations of these fabrics now boast enhanced details, textures, and colors, closely resembling real-world counterparts. Additionally, physical simulations have improved the movement and deformation aspects of virtual knitted fabrics, enhancing their realism. However, challenges remain. One major concern is the computational demands of these simulations, which can hinder real-time interactions. Another issue is refining the accuracy of material models, especially when simulating diverse fibers and textures. This paper aims to review the current state of knitted fabric simulation and speculate on its future directions.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 4","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142013621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}