{"title":"Exploring the Use of Eye-Tracking and Virtual Reality to Optimize the Navigational Capabilities of Subway Platform Signage: An Experimental Study on Pedestrians' Perceived Satisfaction","authors":"Zehua Wen, Jiahao Wan, Wen Li, Xiaoyang Guo, Sifan Jia, Zihe Wang","doi":"10.1002/cav.70112","DOIUrl":"https://doi.org/10.1002/cav.70112","url":null,"abstract":"<div>\u0000 \u0000 <p>Signage is critical for subway platform navigation, but validation research on the navigational capabilities of signage has been limited. This study conducts two experiments grounded in user research. The first experiment, with 42 participants, uses eye-tracking and virtual reality technology to explore design factors that influence the navigational satisfaction of signage and to identify problematic signage. The second, with 30 participants, evaluates how optimization solutions improve the navigational satisfaction of such signage. The entropy-weighted TOPSIS method was used to verify the findings. Results showed significant linear relationships between signage satisfaction and eye-tracking metrics (AFD, FFD, FC; <i>p</i> < 0.05) and design elements (size, content, text size, text color, position, angle; <i>p</i> < 0.01). Multiple linear regression models had high goodness of fit (<i>R</i><sup>2</sup> = 0.948 and adjusted <i>R</i><sup>2</sup> = 0.944 in the first experiment; <i>R</i><sup>2</sup> = 0.899 and adjusted <i>R</i><sup>2</sup> = 0.888 in the second experiment). Optimization increased AFD, FC, and FFD by 3–5 times, while the optimized model retained high explanatory power (<i>R</i><sup>2</sup> = 0.888). The optimal solution selected by the entropy-weighted TOPSIS method is highly consistent with the experimental results, confirming that the optimized schemes enhanced navigational satisfaction. 
This study provides practical pathways and data support for quantitative evaluation and optimization of subway signage.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147714976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
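The entropy-weighted TOPSIS method cited in the record above is a standard multi-criteria ranking technique: criterion weights come from the Shannon entropy of the decision matrix, and alternatives are ranked by closeness to an ideal solution. As an illustrative sketch only (not the authors' implementation; the decision matrix and benefit flags are hypothetical), it can be written as:

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of each column."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # Entropy per criterion, normalized to [0, 1]; 0*log(0) treated as 0.
        E = -np.nansum(P * np.log(P), axis=0) / np.log(n)
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()                         # criteria with more spread weigh more

def topsis(X, w, benefit):
    """Closeness of each alternative (row) to the ideal solution; higher is better."""
    V = (X / np.linalg.norm(X, axis=0)) * w    # weighted, vector-normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

For a signage-evaluation use case, each row would be a candidate signage scheme and each column a metric such as AFD or FC; the scheme with the highest closeness score is the preferred optimization.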
{"title":"A Dynamic Voxel Grid Augmented Reality Interaction Method Based on Octree","authors":"Hongrui Wang, Juliang Xiao, Yifan Niu, Ziao Lin, Haitao Liu","doi":"10.1002/cav.70111","DOIUrl":"https://doi.org/10.1002/cav.70111","url":null,"abstract":"<div>\u0000 \u0000 <p>Augmented reality (AR) technology enables interaction between physical and virtual objects, enriching the ways virtual information can be presented. At present, most relevant research focuses on the projection and positioning of AR applications. However, the interaction precision of AR applications is generally low due to the limited computing capability of the devices involved. To address this, this paper proposes an octree-based dynamic voxel grid representation method. Virtual objects are constructed as voxel models, and during interaction the voxel grids of the virtual model are merged and then decomposed in real time. This method reduces the substantial memory overhead incurred as model accuracy increases, improves computational efficiency in high-precision interaction, and ensures real-time interactive performance. By integrating this method into the field of mechanical machining, we have constructed a hybrid robotic machining trajectory verification system. 
Experimental results demonstrate that this method can improve computational efficiency and realize high-precision interaction.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147715051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
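The merge/decompose scheme described in the record above follows the classic sparse-voxel-octree idea: uniform regions collapse into a single node and split into octants only when written to. A minimal toy sketch of that behavior (an assumption, not the paper's implementation, which targets real-time AR):

```python
class OctreeVoxel:
    """Sparse voxel node: splits on write, re-merges when all octants are uniform."""

    def __init__(self, size, value=0):
        self.size = size          # edge length in voxels (power of two)
        self.value = value        # uniform value while this node is a leaf
        self.children = None      # list of 8 octants once decomposed

    def set(self, x, y, z, value):
        if self.size == 1:
            self.value = value
            return
        if self.children is None:
            # Decompose: split the uniform block into 8 octants of the same value.
            self.children = [OctreeVoxel(self.size // 2, self.value)
                             for _ in range(8)]
        h = self.size // 2
        idx = (x >= h) * 4 + (y >= h) * 2 + (z >= h)
        self.children[idx].set(x % h, y % h, z % h, value)
        # Merge: collapse back to a leaf when every octant is a leaf with one value.
        if all(c.children is None for c in self.children):
            vals = {c.value for c in self.children}
            if len(vals) == 1:
                self.value = vals.pop()
                self.children = None

    def get(self, x, y, z):
        if self.children is None:
            return self.value
        h = self.size // 2
        idx = (x >= h) * 4 + (y >= h) * 2 + (z >= h)
        return self.children[idx].get(x % h, y % h, z % h)
```

The memory saving comes from large uniform regions being stored as single nodes; only the surface being interacted with stays decomposed to full resolution.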
{"title":"Perceptual Control of Difficulty for Visual-Search-Based Games in 3D Virtual Environments","authors":"Semihanur Aktay, M. Abdullah Bulbul","doi":"10.1002/cav.70108","DOIUrl":"https://doi.org/10.1002/cav.70108","url":null,"abstract":"<div>\u0000 \u0000 <p>The arrangement of objects in a 3D gaming environment is key to guiding players' visual attention. The human visual system is naturally drawn to salient areas, characterized by high contrast and importance, which directly affect performance in games like hidden-object and first-person shooters. This study investigates how user experience can be improved by varying game difficulty according to object saliency. We created two games with three distinct levels of difficulty in which players used virtual reality to locate targets in various environments. Game difficulty was modulated by the saliency of the target objects within their environments. In the second experiment, we applied the same structure but focused on how varying the materials and colors of objects influenced detection. Our findings show that controlling object saliency and visual characteristics can effectively manage game difficulty, offering new possibilities for adaptive and immersive gaming experiences.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147567213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cost-Effective Arduino-Based Interface for Simulating the Gait of Able-Bodied Individuals, Persons With Disabilities, and Zombies in Immersive Virtual Reality","authors":"Jong-Hyun Kim","doi":"10.1002/cav.70103","DOIUrl":"https://doi.org/10.1002/cav.70103","url":null,"abstract":"<div>\u0000 \u0000 <p>This study proposes a low-cost, Arduino-based walking interface designed to enhance user immersion in virtual reality (VR) content. The interface detects various walking motions such as walking, running, and limping, and reflects these movements in the virtual environment to deepen the user's sense of immersion. To achieve this, a sensor device based on Arduino was developed to analyze acceleration data from a gyroscope sensor. This data was then integrated with the Unity3D engine to synchronize character animations and first-person visual effects. Additionally, a multi-user environment was implemented using Bluetooth connectivity with mobile devices, allowing both head-mounted display (HMD) and non-HMD users to share the same virtual experience. Owing to its low cost and high compatibility, this interface can provide an immersive VR experience not only for general users but also for those with mobility impairments, suggesting its potential for applications in fields such as healthcare, education, and psychological rehabilitation.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147564997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Therapeutic Potential of VR-Based ASMR Animation: A Comparative Study on Relaxation and Sleep Aid","authors":"Jiahao Du, Lihua You, Jian Jun Zhang","doi":"10.1002/cav.70101","DOIUrl":"https://doi.org/10.1002/cav.70101","url":null,"abstract":"<p>Although numerous studies have explored relaxation and sleep aid through Autonomous Sensory Meridian Response (ASMR) videos or conventional Virtual Reality (VR) relaxation methods, the integration of VR 3D animation with ASMR, and its comparison to traditional VR relaxation methods, remains underexplored. To address this gap, we investigate a standardized process for creating a VR-based ASMR animation game and its impact on triggering the ASMR tingling sensation in VR environments. We also developed a VR 3D environment game featuring four natural environments, along with one ASMR video as a control group. A comprehensive experiment was conducted to compare the effectiveness of these three relaxation methods. Forty-seven participants aged 18–35 from Bournemouth University were recruited and divided into three experimental groups. Participants' emotional and physiological responses were monitored using both subjective questionnaires and physiological data collection, that is, heart rate (HR) and electrodermal activity (EDA). 
Our findings show that the VR-based ASMR animation game effectively triggers the ASMR tingling experience and offers superior relaxation, sleep assistance, and emotional regulation compared with watching ASMR videos and conventional VR relaxation methods, resulting in a significant reduction in anxiety and stress, as well as increased feelings of calmness and sleepiness.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.70101","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147562792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SqueezeNet-ImpLinknet Architecture for Crowd Anomaly Detection With Improved R-CNN-Based Segmentation","authors":"Jyoti Ambadas Kendule, Kailash J. Karande","doi":"10.1002/cav.70100","DOIUrl":"https://doi.org/10.1002/cav.70100","url":null,"abstract":"<div>\u0000 \u0000 <p>Crowd anomaly detection is critical for ensuring public safety in domains such as surveillance and security, and crowded environments in particular demand accurate and efficient detection. This research proposes an innovative approach to crowd anomaly detection using the SqueezeNet-ImpLinknet architecture. The input images are first preprocessed using a median filtering technique. Then, objects are segmented using an Improved Mask Region-based CNN, which incorporates batch normalization, ReLU activation, and an advanced Scaled Dot-Product attention mechanism to improve segmentation accuracy and computational efficiency. Subsequently, features are extracted, including the Improved SLBT feature (capturing shape and texture information), color features, and LGTrP features. Anomaly detection is then performed using a hybrid model that integrates the SqueezeNet and Improved LinkNet models. The Improved LinkNet model enhances feature representation by integrating an attention mechanism in the encoder and a novel ReLUSignmax activation function in the decoder, overcoming limitations of conventional architectures. The approach is evaluated on the widely used UCSD Anomaly Detection Dataset, achieving superior performance with accuracy ranging from 0.939 to 0.975 and a specificity of 0.987 with 90% training data. 
The proposed approach offers a robust solution for intelligent surveillance in crowded environments.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147563005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototype XR Elastodynamics System for Disaster Medical Response","authors":"Xu Wang, Daichi Aoki, Soichi Murakami, Takashi Shimoe, Taku Senoo, Hiroaki Date, Toshiaki Shichinohe, Takashige Abe, Satoshi Kanai, Atsushi Konno","doi":"10.1002/cav.70105","DOIUrl":"https://doi.org/10.1002/cav.70105","url":null,"abstract":"<div>\u0000 \u0000 <p>This paper presents a prototype XR system for disaster medical response, demonstrating the feasibility of real-time interactive elastodynamics simulations in emergency scenarios. The system delivers an end-to-end workflow utilizing XR technology, from on-site data acquisition to remote simulation. Specifically, we propose an image-guided mesh-processing pipeline that converts photographs of injured individuals into solver-ready tetrahedral meshes. We also develop a constraint-based elastodynamics solver capable of simulating deformable bodies and visualizing internal stresses. Additionally, the system integrates multiple advanced XR devices and addresses the coordinate-alignment problem between these devices and the simulator. We validate the system's performance in both AR/VR modes, under textured and stress-visualization configurations, and demonstrate its applicability for remote medical guidance. Beyond whole-body elastic simulations, we conduct preliminary organ-level experiments to inform future remote surgical applications. 
This prototype, validated using a two-room setup, provides a feasible solution for remote emergency medical response.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147562791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Preliminary Validation of an Immersive Virtual Reality Serious Game for Subway Fire Emergency Plan Training","authors":"Na Chen, Haoyu Wang, Yitong Hu, Ruifang Zhang, Zhipeng Zhang","doi":"10.1002/cav.70102","DOIUrl":"https://doi.org/10.1002/cav.70102","url":null,"abstract":"<div>\u0000 \u0000 <p>Emergency plan training is essential for subway staff to respond quickly to a fire emergency in a subway station. Compared with traditional training methods, immersive virtual reality serious games (IVR SGs) are attracting growing attention. However, few studies have applied this training method to fire emergency plan training for subway staff. Therefore, an IVR SG for fire emergency plan training of subway intern staff was designed and developed. First, the prototype development process for this IVR SG is described; then, the effectiveness of the developed IVR SG training is validated by comparison with traditional written-text training from the perspectives of knowledge acquisition and retention, self-efficacy improvement, and users' virtual reality (VR) experience. The results showed that the developed IVR SG was effective in terms of knowledge acquisition and self-efficacy improvement, and was more effective than traditional written-text training in both short-term and long-term training effectiveness. In addition, the VR experience that the developed IVR SG provided to users was acceptable. 
This study provides insights for the prototype development and effectiveness validation of IVR SGs for emergency plan training.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147315571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Virtual Instructor-Led System for Assessing and Guiding Middle School Physics Experiments","authors":"Fengming Wang, Zhigeng Pan, Fuchang Liu, Yu Lu","doi":"10.1002/cav.70090","DOIUrl":"https://doi.org/10.1002/cav.70090","url":null,"abstract":"<div>\u0000 \u0000 <p>Interactive computer technology is deeply integrated into traditional teaching methods. The traditional teaching of physics experiments in secondary schools suffers from teachers' inability to provide timely guidance to students, the difficulty of controlling experimental variables, and the lack of uniform evaluation criteria. To address these issues, we have developed an innovative system that improves secondary school physics education through computer vision-based interaction with virtual humans and sensors. The proposed system captures experimental data in real time so that student performance can be accurately monitored and assessed. Teachers can effortlessly configure experiments through simple coding, while the system leverages a multimodal large language model to offer contextual feedback and guidance. The system generates a virtual teacher that offers step-by-step guidance and real-time feedback. Usability tests indicate that the system significantly improves student engagement and comprehension of complex physics concepts, highlighting its potential to transform traditional science education. 
The advantage of real-time assessment and guidance in secondary school physics experiments is that it enables students to grasp abstract concepts in a more intuitive and comprehensible manner.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Learners' Attention Under Audiovisual Cues in Virtual Reality With a Deep Learning Model","authors":"Chen Kang, Kunyan Li","doi":"10.1002/cav.70099","DOIUrl":"https://doi.org/10.1002/cav.70099","url":null,"abstract":"<div>\u0000 \u0000 <p>Effective audiovisual cueing can significantly enhance learners' attention to educational resources in Virtual Reality (VR). However, predicting the impact of multimodal cueing on learners' attention in immersive teaching environments remains challenging. To address this, we propose a deep learning model named the Attention Prediction Model (APM). This model employs RevFCN to extract visual and auditory cue features and incorporates a tailored Upsample-Aggregation Fusion Module (UAFM) to integrate multimodal representations. Additionally, an SANet is introduced to effectively combine the advantages of spatial and channel attention. Trained on our constructed dataset, APM achieved an attention prediction accuracy of 81.6%. These findings offer both theoretical and practical implications for the application of multimodal cueing in VR-based instructional design.</p>\u0000 </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"37 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}