{"title":"A Control Simulation of Multiple Bubbles for Representing Desired Shapes","authors":"Naruo Nishio, Syuhei Sato, Kaisei Sakurai, Keiko Nakamoto","doi":"10.1002/cav.70037","DOIUrl":"https://doi.org/10.1002/cav.70037","url":null,"abstract":"<div>\u0000 \u0000 <p>This paper presents a control simulation that represents user-desired shapes using multiple connected soap bubbles. A previous method attempted to control a single soap bubble using external forces. However, due to the strong surface tension making spherical babbles, elongated shapes could not be achieved. To address this issue, this paper aims to develop a control simulation that achieves diverse soap bubble shapes by dividing the target shape into connected soap bubbles. In our approach, we first generate an initial soap bubble configuration composed of multiple bubbles to represent the target shape. Then, by applying external forces to each bubble, we simulate the bubbles to maintain their shape along the target form. We use an implicit-function-like representation for the connected soap bubbles and develop a new polygonizer that makes shapes including the internal faces of bubbles. By demonstrating examples with various target shapes such as objects and text, we show the effectiveness of our proposed control method.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144126048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Talk With Socrates: Relation Between Perceived Agent Personality and User Personality in LLM-Based Natural Language Dialogue Using Virtual Reality","authors":"Mehmet Efe Sak, Sinan Sonlu, Uğur Güdükbay","doi":"10.1002/cav.70033","DOIUrl":"https://doi.org/10.1002/cav.70033","url":null,"abstract":"<div>\u0000 \u0000 <p>Large Language Models (LLMs) offer almost immediate human-like quality responses to user queries. Conversational agent systems support natural language dialogues utilizing LLM backends in combination with Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) technologies, enabling life-like characters in virtual environments. This study investigates the relationship between user personality and perceived agent personality in LLM-based natural language dialogue. We adopt a Virtual Reality (VR) setting where the user can talk with the agent that assumes the role of Socrates, the famous philosopher. To this end, we utilize a three-dimensional (3D) avatar model resembling Socrates and use specific LLM prompts to get stylistic answers from OpenAI's Chat Completions Application Programming Interface (API). Our user study measures the agent's personality and the system's ease of use, quality, realism, and immersion concerning the user's self-reported personality. The results suggest that the user's conscientiousness, extraversion, and emotional stability have a moderate effect on certain personality factors and system qualities. User conscientiousness affects the perceived ease of use, quality, and realism, while user extraversion affects perceived agent conscientiousness, system realism, and immersion. 
Additionally, the user's emotional stability correlates with perceived extraversion and agreeableness.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144117974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MemorIA, an Architecture for Creating Interactive AI Historical Agents in Educational Contexts","authors":"Antoine Oger, Geoffrey Gorisse, Sylvain Fleury, Olivier Christmann","doi":"10.1002/cav.70032","DOIUrl":"https://doi.org/10.1002/cav.70032","url":null,"abstract":"<p>This article presents the architecture of MemorIA, an integrative system that combines existing AI technologies into a coherent educational framework for creating interactive historical agents, with the aim of fostering students' learning interest. MemorIA generates animated digital portraits of historical figures, synchronizing facial expressions with synthesized speech to enable natural conversations with students. The system leverages NVIDIA Audio2Face for real-time facial animation with first-order motion model for portrait manipulation, achieving fluid interaction through low-latency audio-visual streaming. To assess our architecture in a field situation, we conducted a pilot study in middle school history classes, where students and teachers engaged in direct conversation with a virtual Julius Caesar during Roman history lessons. Students asked questions about ancient Rome, receiving contextually appropriate responses. While qualitative feedback suggests a positive trend toward increased student participation, some weaknesses and ethical considerations emerged. 
Based on this assessment, we discuss implementation challenges, suggest architectural improvements, and explore potential applications across various disciplines.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.70032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144117973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy Sampling With Qualified Uniformity Properties for Implicitly Defined Curves and Surfaces","authors":"Mingxiao Hu, Linlin Ge, Xujie Li","doi":"10.1002/cav.70022","DOIUrl":"https://doi.org/10.1002/cav.70022","url":null,"abstract":"<div>\u0000 \u0000 <p>Sampled point clouds, particularly with prelabeled annotations and ground truth metrics, are frequently used in computer graphics and machine learning. In this work, we focus on a fuzzy sampling approach for such point clouds with qualified uniformity properties. After abstracting the uniformity requirements, a novel approach to sampling point clouds from implicitly defined curves/surfaces is proposed. The approach deliberately combines techniques including isodeviation dispatch, curvature compensation, and normalized distance blue noise. The experimental results show various sampled point clouds with uniform visual effects and statistical metrics. Moreover, the comparisons in terms of distance, density, and thickness uniformity with state-of-the-art methods exhibit the approach's advantages. Due to its low cost, ground truth, and annotation easiness features, the method will be smoothly applied in deep learning and computer animation.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Multi-Feature Fusion Shadow Puppet Motifs Generation Based on CSPMotifsGAN and Cultural Heritage Preservation","authors":"Hui Liang, Rui Wang","doi":"10.1002/cav.70047","DOIUrl":"https://doi.org/10.1002/cav.70047","url":null,"abstract":"<div>\u0000 \u0000 <p>As quintessential cultural symbols in traditional shadow puppetry, artistic motifs encapsulate profound historical narratives and serve as vital conduits for intangible cultural heritage preservation. However, this craft confronts existential threats from digital entertainment proliferation and practitioner attrition. To address these challenges, this study proposes CSPMotifsGAN, an enhanced CycleGAN framework for constructing a motif data set through three-stage processing: adaptive denoising, hierarchical classification, and multi-branch feature extraction (contour, texture, color). By integrating adversarial loss, cycle-consistency loss, and identity preservation loss, the model effectively resolves color distortion and textural degradation inherent in conventional CycleGAN. Experimental results demonstrate significant improvements: Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM), validated through both subjective evaluations and statistical analysis.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144108987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Fluoroscopy Guided Robotic Needle Insertion for Radio Frequency Ablation","authors":"Thuc-Long Ha, Juan Verde, Julien Bert, Hadrien Courtecuisse","doi":"10.1002/cav.70025","DOIUrl":"https://doi.org/10.1002/cav.70025","url":null,"abstract":"<p>This article presents a fluoroscopy image-based registration method along with a comprehensive protocol for robotic needle insertion in radiofrequency ablation (RFA) to treat liver cancer. The proposed method uses real-time fluoroscopic images acquired from a C-ARM system and integrates an inverse finite element (FE) simulation to compute robotic commands for accurate and adaptive needle steering. The registration procedure is fully automated and involves the injection of multiple radiopaque markers into the liver, enabling precise anatomical registration and targeted tumor localization. A key challenge addressed in this work is the integration of this image-based registration with the inverse biomechanical simulation used to guide the robot during insertion. We describe how registration constraints can be mapped onto the surface of the biomechanical model to ensure consistent alignment between image data and robotic actuation. 
Designed to be adaptable to varying levels of radiologist expertise and applicable across a wide range of tumor locations, this method provides a robust and versatile solution for improving the accuracy and safety of minimally invasive liver cancer treatments.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.70025","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144085081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Control Mapping Strategies on Task Performance, Motivation, and Engagement of Participants in a VR Upper Limb Training Task: A Randomized Controlled Trial","authors":"Binhao Huang, Jian Lv, Ligang Qiang","doi":"10.1002/cav.70019","DOIUrl":"https://doi.org/10.1002/cav.70019","url":null,"abstract":"<div>\u0000 \u0000 <p>This study explores the use of virtual reality (VR) in upper limb rehabilitation, comparing bionic and non-bionic control strategies. While VR shows potential for immersive, engaging rehabilitation, recent findings question the effectiveness of prioritizing user rehabilitation needs over bionic strategies. Few studies have examined the impact of these strategies on motor performance, motivation, and engagement. To address this, we designed two hand motion control systems—bionic and non-bionic—using a virtual block test. The results show that the bionic control strategy improves early training experiences and motor performance, but with practice, the non-bionic control group demonstrates greater adaptability, motor flexibility, and learning efficiency. This suggests that while bionic strategies may be beneficial in early stages, arbitrary control systems offer better long-term outcomes.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144074564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality in Engineering Education: An Application for Electronic Circuits Laboratory","authors":"Sanaa Iriqat, Fahri Vatansever","doi":"10.1002/cav.70018","DOIUrl":"https://doi.org/10.1002/cav.70018","url":null,"abstract":"<div>\u0000 \u0000 <p>Engineering education faces challenges in effectively conveying complex theoretical knowledge and practical skills. Traditional teaching tools like smart boards, education kits, web pages, and computer-based simulators often fall short in bridging the gap between theory and hands-on application. Augmented reality (AR) has emerged as a promising solution to fill this educational gap by providing immersive and interactive learning experiences. In this study, an AR-based application has been developed for operational amplifiers. Agile methodology and the ADDIE learning development model were used in the system development and instructional design of this application. This application, which requires only a smartphone and a breadboard, detects the circuit marker when users point their smartphone camera at it. After detection, the user can interact with the interface to choose whether the system shows input–output signals for different component values, the output voltage formula, a 3D model, or a brief lecture about the used amplifier. These virtual elements are overlaid onto the real-world breadboard based on the user's selection. 
This integration of AR technology provides an immersive, interactive learning experience, allowing students to visualize and interact with circuit elements in real time.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LFGarNet: Loose-Fitting Garment Animation With Multi-Attribute-Aware Graph Network","authors":"Peng Zhang, Bo Fei, Meng Wei, Jiamei Zhan, Kexin Wang, Youlong Lv","doi":"10.1002/cav.70017","DOIUrl":"https://doi.org/10.1002/cav.70017","url":null,"abstract":"<div>\u0000 \u0000 <p>Current AI animation generation methods excel in tight-fitting clothing scenarios but struggle with deformation distortion and the gradual loss of wrinkles over extended simulations in loose-fitting clothing. To address these issues, we propose a multi-attribute-aware Graph Network. This approach mitigates the gradual loss of wrinkles by dividing animation sequences into multiple stages based on motion categories, recognizing that identical body postures can cause different clothing deformations due to varying motion tendencies. In each stage, we first restore coarse, globally guided deformations based on the motion category, followed by enhancing detailed features. We observed that garments within the same sport category exhibit similar local wrinkles and that the degree of fit to the body varies significantly across different regions of the same garment. We introduce two specific clothing attributes: “looseness” and “deformity,” which relate to local wrinkles and have physical significance. A clothing attribute encoder perceives these attributes and constructs a clothing graph model to estimate detailed features. 
Our method effectively handles clothing deformations across various motion types, including extreme postures, with qualitative and quantitative analyses confirming its effectiveness.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143836153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral Gait Biometrics in VR: Is the Use of Synthetic Samples Able to Increase Person Identification Metrics?","authors":"Aleksander Sawicki","doi":"10.1002/cav.70016","DOIUrl":"https://doi.org/10.1002/cav.70016","url":null,"abstract":"<div>\u0000 \u0000 <p>In this paper, we present an approach to build a biometric system capable of identifying subjects based on gait. The experiments were carried out with a proprietary gait corpus collected from 100 subjects. In the data acquisition process, we used a commercially available perception neuron body suit equipped with motion sensors and dedicated to entertainment in the VR domain. Classification was performed using two variants of the CNN architecture and evaluated using cross-day validation. A novelty in the presented approach was the exploration of research areas related to the usage of synthetically generated samples. Experiments were conducted for two types of preprocessing—a low-pass filtering of the signals using a 3rd- or 1st-order Butterworth filter. For the first variant, the synthetic samples generated by the long short-term memory-mixture density network (LSTM-MDN) model allowed us to increase the F1-score from 0.928 to 0.966. Meanwhile, in the second case from 0.970 to 0.978 F1-score.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143809410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}