Computer Animation and Virtual Worlds: Latest Publications

Editorial issue 34.6
IF 0.9 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date : 2023-12-28 DOI: 10.1002/cav.2227
Nadia Magnenat Thalmann, Daniel Thalmann
This issue contains 12 regular papers.

In the first paper, Hong Li et al. present FAEC-GAN, an animation translation method based on edge enhancement and coordinate attention. They design a novel edge discrimination network to identify the edge features of images, so that the generated anime images present clear and coherent lines, and they introduce a coordinate attention module in the encoder to adapt the model to geometric changes during translation, producing more realistic animation images. In addition, the method combines a focal frequency loss with a pixel loss, attending to both the frequency-domain and pixel-level information of the generated image to improve its visual quality.

In the second paper, Rahul Jain et al. propose an algorithm that converts a depth video into a single dynamic image called a linked motion image (LMI). The LMI is fed to a classifier consisting of an ensemble of three modified pre-trained convolutional neural networks (CNNs). Experiments were conducted on two datasets: the multimodal large-scale EgoGesture dataset and the MSR Gesture 3D dataset. On EgoGesture the proposed method achieves an accuracy of 92.91%, which is better than state-of-the-art methods; on MSR Gesture 3D it reaches 100%, likewise outperforming the state of the art. The recognition accuracy and precision of each gesture are also reported.

In the third paper, Rustam Akhunov et al. propose a set of experiments to aid the evaluation of the main categories of fluid-boundary interactions that are important in computer animation: a resting (no-motion) fluid, tangential and normal motion of a fluid with respect to the boundary, and a fluid impacting a corner. They propose 10 experiments, comprising experimental setups and quantitative evaluations with optional visual inspection, arranged in four groups, each focusing on one of the main categories of fluid-boundary interaction. The authors use these experiments to evaluate three particle-based boundary handling methods, namely Pressure Mirroring (PM), Pressure Boundaries (PB), and Moving Least Squares Pressure Extrapolation (MLS), in combination with two incompressible SPH fluid simulation methods, IISPH and DFSPH.

In the fourth paper, Shenghuan Zhao et al. present three Extended Reality (XR) apps (AR, MR, and VR) for interactively visualizing façade fenestration geometries and indoor illuminance simulations. The XR technologies are then assessed by 120 students and young architects on two aspects: task performance, measured by two indicators (correct rate and time consumption), and engagement level, measured by usability and interest. The evaluation shows that, compared to AR and VR, MR is the best XR technology for this aim, and that VR outperforms AR on three indicators except …

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2227
Citations: 0
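The combined frequency-domain and pixel-space objective described for FAEC-GAN in the first paper can be illustrated with a minimal single-channel sketch. The focal weighting scheme, the `alpha` exponent, and the normalisation below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def focal_frequency_loss(gen, real, alpha=1.0):
    """Frequency-domain loss that up-weights hard-to-match frequencies.
    Simplified sketch: the weight of each frequency grows with its
    current spectrum error (the 'focal' idea)."""
    diff = np.abs(np.fft.fft2(gen) - np.fft.fft2(real))  # per-frequency error
    w = diff ** alpha                  # focal weighting: larger error, larger weight
    w = w / (w.max() + 1e-8)           # normalise weights to [0, 1]
    return float(np.mean(w * diff ** 2))

def pixel_loss(gen, real):
    """Plain L1 loss in pixel space."""
    return float(np.mean(np.abs(gen - real)))

def combined_loss(gen, real, lam=1.0):
    """Weighted sum of pixel-space and frequency-space terms."""
    return pixel_loss(gen, real) + lam * focal_frequency_loss(gen, real)
```

Two perfectly matching images yield zero loss; any mismatch contributes through both the pixel and the frequency term.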
Exploring the impact of non-verbal cues on user experience in immersive virtual reality
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-12-19 DOI: 10.1002/cav.2224
Elena Dzardanova, Vasiliki Nikolakopoulou, Vlasios Kasapakis, Spyros Vosinakis, Ioannis Xenakis, Damianos Gavalas
Face-to-face communication relies extensively on non-verbal cues (NVCs), which complement, and at times dominate, the communicative process: they convey emotions with intense salience and thus decisively affect interpersonal communication. The capture, transfer, and subsequent interpretation of NVCs become complicated in computer-mediated communication, particularly in shared virtual worlds, where there is growing interest both in the technological integration of NVCs and in their affective impact. This paper presents a between-groups experimental setup, facilitated in immersive virtual reality (IVR), that examines the effects of NVCs on user experience, with special emphasis on the degree of attention toward each NVC as an isolated controlled variable of a scripted performance by a virtual character (VC). The study evaluates NVC fidelity based on the capabilities of the motion-capture technologies used to address cue-integration development challenges, and examines the impact of NVCs on users' perceived realism of the VC, their empathy toward him, and the degree of social presence experienced. To meet these objectives, the affective impact of low-fidelity automated NVCs was compared with that of high-fidelity real-time captured NVCs. The findings suggest that although NVCs do affect user experience to an extent, their effects are notably more subtle than in previous studies.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2224
Citations: 0

Wav2Lip-HR: Synthesising clear high-resolution talking head in the wild
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-12-15 DOI: 10.1002/cav.2226
Chao Liang, Qinghua Wang, Yunlin Chen, Minjie Tang
Talking head generation aims to synthesize a photo-realistic speaking video with accurate lip motion. While this field has attracted increasing attention in recent audio-visual research, most existing methods do not improve lip synchronization and visual quality simultaneously. In this paper, we propose Wav2Lip-HR, a neural audio-driven high-resolution talking-head generation method. With our technique, all that is required to generate a clear high-resolution lip-synced talking video is an image or video of the target face and an audio clip of any speech. The primary benefit of our method is that it generates clear high-resolution videos with sufficient facial detail, rather than frames that are merely large but lack clarity. We first analyze the key factors that limit the clarity of generated videos and then put forward several important solutions, including data augmentation, model structure improvements, and a more effective loss function. Finally, we employ several efficient metrics to evaluate the clarity of the images generated by our approach, as well as several widely used metrics to evaluate lip-sync performance. Numerous experiments demonstrate that our method outperforms existing schemes in both visual quality and lip synchronization.
Citations: 0

Botanical-based simulation of color change in fruit ripening: Taking tomato as an example
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-11-23 DOI: 10.1002/cav.2225
Yixin Xu, Shiguang Liu
The color change of plant fruit during ripening is a typical time-varying phenomenon involving many factors; its complexity and biodiversity make it challenging to model. To address this, we take the tomato as an example and propose a botanical-based framework that accounts for variety, environment, phytohormones, and genes to simulate fruit color change during the ripening process. Specifically, we propose a first-order kinetic model integrating varietal, environmental, and phytohormonal factors to represent the variation of pigment concentrations in the pericarp, and introduce a logistic model to describe the change of pigment concentration in the epidermis. Based on the gene-expression pathway of tomato color in botany, we propose a genotype-to-phenotype simulation method to represent its biodiversity, together with an improved method for accurately converting pigment concentrations into color. Furthermore, we propose a gradient-descent-based method to help the user quickly set pigment-concentration parameters. Qualitative and quantitative experiments verified that the proposed framework can simulate a wide range of tomato colors, and the framework can be applied to other fruits.
Citations: 0
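The two pigment-concentration models named in the abstract above (first-order kinetics in the pericarp, logistic growth in the epidermis) can be sketched as follows. The specific rate laws and how varietal, environmental, and hormonal factors enter the rate constant `k` are illustrative assumptions, not the paper's calibrated model.

```python
import math

def pericarp_pigment(t, c0, c_max, k):
    """First-order kinetics toward an asymptotic concentration c_max:
    dc/dt = k * (c_max - c)  =>  c(t) = c_max - (c_max - c0) * exp(-k t).
    In the paper's spirit, k would fold in varietal, environmental,
    and phytohormonal factors (assumed here, not specified)."""
    return c_max - (c_max - c0) * math.exp(-k * t)

def epidermis_pigment(t, c_max, r, t_mid):
    """Logistic change of epidermis pigment concentration:
    c(t) = c_max / (1 + exp(-r * (t - t_mid)))."""
    return c_max / (1.0 + math.exp(-r * (t - t_mid)))
```

Both curves rise monotonically toward `c_max`, which is what lets a single parameter set describe a ripening trajectory from green to red.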
A 3D visualization-based augmented reality application for brain tumor segmentation
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-11-03 DOI: 10.1002/cav.2223
Mohamed Amine Guerroudji, Kahina Amara, Mohamed Lichouri, Nadia Zenati, Mostefa Masmoudi
Every year on June 8th, World Brain Tumor Day raises awareness of brain cancer, encompassing both benign and malignant growths. Research in this field plays a vital role in supporting medical professionals, and augmented reality (AR) technology has emerged as a valuable tool, enabling surgeons to visualize underlying structures while offering a cost- and time-efficient alternative. Our study focuses on the efficient segmentation of brain tumor classes in Magnetic Resonance Imaging (MRI) through a three-stage approach: preprocessing, segmentation, and 3D reconstruction with AR display. In the preprocessing stage, a Gaussian filter mitigates intensity heterogeneity. Segmentation and detection use active geometric contour models complemented by morphological operations. For 3D tumor reconstruction, the virtual model is integrated into a genuine scene using the 3D Slicer software. The methodology was validated on a real patient dataset of 496 MRI scans from the local Bab El Oued university hospital center. The results demonstrate accurate 3D brain tumor reconstruction, efficient tumor extraction, and augmented-reality visualization, with a segmentation accuracy of 98.61%, outperforming existing state-of-the-art methods and affirming the efficacy of the proposed strategy.
Citations: 0
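The Gaussian preprocessing step mentioned in the abstract above can be sketched as a separable blur over an MRI slice. The kernel radius and `sigma` are illustrative choices; the paper's exact filter parameters are not specified here.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalised 1-D Gaussian kernel (default radius: 3 sigma)."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian blur: convolve rows, then columns.
    Damps high-frequency intensity heterogeneity before segmentation."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

Separability keeps the cost linear in kernel size per axis instead of quadratic, which matters when filtering hundreds of MRI slices.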
The Integration and Application of Extended Reality (XR) Technologies within the General Practice Primary Medical Care Setting: A Systematic Review
CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-11-02 DOI: 10.3390/virtualworlds2040021
Donovan Jones, Roberto Galvez, Darrell Evans, Michael Hazelton, Rachel Rossiter, Pauletta Irwin, Peter S. Micalos, Patricia Logan, Lorraine Rose, Shanna Fealy
The COVID-19 pandemic instigated a paradigm shift in healthcare delivery, with rapid adoption of technology-enabled models of care, particularly in the general practice primary care setting. The emergence of the Metaverse and its associated technologies, specifically extended reality (XR), presents a promising opportunity for further industry transformation. The objective of this study was therefore to explore the current application and utilisation of XR technologies within the general practice primary care setting, establishing a baseline for tracking their evolution and integration. A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was conducted and registered with the international database of prospectively registered systematic reviews as PROSPERO-CRD42022339905. Eleven articles met the inclusion criteria and were quality-appraised and included for review. All databases searched, including search terms, are supplied to enhance the transparency and reproducibility of the findings. All study interventions used virtual reality technology exclusively; its applications fell under three domains: (1) childhood vaccinations, (2) mental health, and (3) health promotion. There is immense potential for the future application of XR technologies within the general practice primary care setting. As the technology evolves, healthcare practitioners, XR technology specialists, and researchers should collaborate to harness the full potential of implementing XR mediums.
Citations: 0

Enhancing Self-Learning in Higher Education with Virtual and Augmented Reality Role Games: Students' Perceptions
CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-10-30 DOI: 10.3390/virtualworlds2040020
Luis Valladares Ríos, Ricardo Acosta-Diaz, Pedro C. Santana-Mancilla
This study investigates how virtual and augmented reality role games affect self-learning in higher education settings. A qualitative research-action approach was used, involving the creation of augmented reality micro-stories to encourage creativity and critical thinking. Through role-playing, students collaborated and gained a deeper understanding of the course, improving their self-learning abilities. The findings indicate that incorporating virtual and augmented reality into higher education positively affects self-learning, promoting active student engagement and meaningful learning experiences. Students also perceive these immersive educational methods as bridging the gap between virtual and in-person learning environments, ultimately leading to enhanced educational results.
Citations: 0

Editorial Issue 34.5
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-10-18 DOI: 10.1002/cav.2222
Nadia Magnenat Thalmann, Daniel Thalmann
This issue contains 12 papers. Seven were selected from the CASA 2022 Aninex workshop by the program committee chaired by Professor Jian Chang from Bournemouth University; these papers were extensively revised and then reviewed by the CAVW editorial team. The remaining five are regular papers.

In the first paper, Wenshu Zhang et al. present Struct2Hair, a novel single-view hair modeling approach based on extracting a hair shape descriptor (HSD). The HSD is defined as the fundamental structure-aware feature, a combination of critical shapes in a hairstyle; a complete dataset of critical hair shapes is constructed from a known database of 3D hair models.

In the second paper, Yanrui Xu et al. propose a novel boundary-distance-based adaptive method for SPH fluid simulation. A signed-distance field constructed with respect to the coupling boundary determines particle resolution at different spatial positions: the resolution is maximal within a specific distance of the boundary and decreases smoothly as the distance increases, until a threshold is reached. Particle sizes are then adjusted toward the target resolution via splitting and merging. Additionally, a wake-flow preservation mechanism keeps particle resolution high for a period of time after a particle flows past the boundary object, preventing the loss of flow details.

In the third paper, Tingting Li et al. propose a point cloud synthesis method based on stochastic differential equations (SDEs). They view point cloud generation as smoothly transforming a known prior distribution toward the high-likelihood shape by point-level denoising, and introduce a conditional corrector sampler to improve the quality of the point clouds. They additionally show that their approach can be trained in an auto-encoding fashion and reconstructs point clouds faithfully, and that the model extends to the downstream application of point cloud completion. Experimental results demonstrate the effectiveness and efficiency of their method.

In the fourth paper, Shuqing Yu et al. present a multiscale framework with a visual-field analysis branch to improve gaze-estimation accuracy. The model is based on feature pyramids and predicts the visual field to aid gaze estimation. In particular, the authors analyze the effect of the multiscale component and the visual-field branch on the challenging MPIIGaze and EYEDIAP benchmark datasets; their proposed PerimetryNet significantly outperforms state-of-the-art methods, and both the multiscale mechanism and the visual-field branch can easily be applied to existing gaze-estimation architectures.

The fifth paper, by Junheng Fang et al., focuses on the emergence of position-based simulation approaches, which have quickly opened a group of new topics in the computer graphics community. These approaches are popular due to their advantages …

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cav.2222
Citations: 0
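The boundary-distance-driven resolution rule described for the second paper above (Yanrui Xu et al.) can be sketched as a smooth blend between a fine near-boundary particle radius and a coarse far-field radius. The smoothstep blend and the thresholds `d0`/`d1` are illustrative assumptions, not the paper's exact scheme.

```python
def particle_radius(dist_to_boundary, r_min, r_max, d0, d1):
    """Adaptive SPH particle radius from a signed-distance query:
    finest resolution (r_min) within d0 of the boundary, smoothly
    coarsening to r_max once the distance reaches d1."""
    if dist_to_boundary <= d0:
        return r_min
    if dist_to_boundary >= d1:
        return r_max
    t = (dist_to_boundary - d0) / (d1 - d0)
    t = t * t * (3.0 - 2.0 * t)        # smoothstep for a C1 transition
    return r_min + t * (r_max - r_min)
```

A simulator would evaluate this per particle each step and trigger splitting when a particle is larger than its target radius, or merging when neighbors are smaller.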
Metaverse as Tech for Good: Current Progress and Emerging Opportunities
CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-10-17 DOI: 10.3390/virtualworlds2040019
Muhammad Zahid Iqbal, Abraham G. Campbell
The Metaverse is an upcoming transformative technology that will shape our future society with immersive experiences. The recent surge of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world, and the Metaverse concept is the most recent attempt to encapsulate and define this potential new digital landscape. With the introduction of 5G's high speed and low latency, advances in hardware and software with the graphics power to display millions of polygons in 3D, and blockchain technology, the concept is no longer fiction. The transition from today's Internet to a spatially embodied Internet is, at its core, a transition from 2D to 3D interactions taking place in multiple virtual universes. In recent years, augmented and virtual reality have created possibilities in both private and professional spheres: new Virtual Reality (VR) headsets and Augmented Reality (AR) glasses can provide immersion in the physical sense, though the technology must offer realistic experiences for users to turn this concept into reality. This paper focuses on the potential use cases and benefits of the Metaverse as a tech for good: it outlines the areas where a positive impact could occur, highlights recent progress, and discusses issues around trust, ethics, and cognitive load.
Citations: 0

Supporting foot interaction of reaction time training system
IF 1.1 · CAS Q4 · Computer Science
Computer Animation and Virtual Worlds Pub Date: 2023-10-09 DOI: 10.1002/cav.2219
Chenyu Zang, Wei Gai, Haodong Li, Chenzhi Xing, Wenfei Wang, Dongli Li, Gaorong Lv, Chenglei Yang
Reaction time, the ability to detect, process, and respond to stimuli, is one of the fundamental factors in human-computer interaction and a key cognitive skill in both clinical and healthy populations. Good reaction time allows us to respond to stimuli and situations with agility and efficiency, so how to train and improve a person's reaction time has become an important research question. In this paper, we present a new training genre that combines user-centered, personalized generation of training objects with precise tracking of foot interaction. Virtual objects are created with respect to the user's features and historical training effectiveness, and are in motion; a foot-tracking algorithm based on a three-Gaussian model supports interaction by stepping on these moving virtual objects. We present the design and implementation of the system, along with user studies, whose findings show that reaction-time performance improves significantly after seven days of foot-interaction training.
Citations: 0