Computers & Graphics-UK: Latest Publications

Prompt2Color: A prompt-based framework for image-derived color generation and visualization optimization
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-08 DOI: 10.1016/j.cag.2025.104419
Jiayun Hu, Shiqi Jiang, Haiwen Huang, Shuqi Liu, Yun Wang, Changbo Wang, Chenhui Li
Abstract: Color is powerful in communicating information in visualizations. However, crafting palettes that improve readability and capture readers' attention often demands substantial effort, even for seasoned designers. Existing text-based palette generation yields limited and predictable combinations, and finding suitable reference images to extract colors from, absent a clear idea, is both tedious and frustrating. In this work, we present Prompt2Color, a novel framework for generating color palettes using prompts. To simplify the process of finding relevant images, we first adopt a concretization approach to visualize the prompts. Furthermore, we introduce an attention-based method for color extraction, which allows for mining the visual representations of the prompts. Finally, we utilize a knowledge base to refine the palette and generate the background color to meet aesthetic and design requirements. Evaluations, including quantitative metrics and user experiments, demonstrate the effectiveness of our method. (Computers & Graphics-UK, Volume 132, Article 104419)
Citations: 0
ProbTalk3D-X: Prosody enhanced non-deterministic emotion controllable speech-driven 3D facial animation synthesis
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-08 DOI: 10.1016/j.cag.2025.104358
Kazi Injamamul Haque, Sichun Wu, Zerrin Yumak
Abstract: Audio-driven 3D facial animation synthesis has been an active field of research with attention from both academia and industry. While there are promising results in this area, recent approaches largely focus on lip-sync and identity control, neglecting the role of emotions and emotion control in the generative process. This is mainly due to the lack of emotionally rich facial animation data and of algorithms that can synthesize speech animations with emotional expressions at the same time. In addition, the majority of models are deterministic: given the same audio input, they produce the same output motion. We argue that emotions and non-determinism are crucial for generating diverse and emotionally rich facial animations. In this paper, we present ProbTalk3D-X, which extends ProbTalk3D, a prior two-stage VQ-VAE-based non-deterministic model, by additionally incorporating prosody features for improved facial accuracy, using the emotionally rich facial animation dataset 3DMEAD. Further, we present a comprehensive comparison of non-deterministic emotion-controllable models (including new extended experimental models) leveraging VQ-VAE, VAE, and diffusion techniques. We provide an extensive comparative analysis of the experimental models against recent 3D facial animation synthesis approaches, evaluating the results objectively, qualitatively, and with a perceptual user study. We highlight several objective metrics that are more suitable for evaluating stochastic outputs and use both in-the-wild and ground-truth data for subjective evaluation. Our evaluation demonstrates that ProbTalk3D-X and the original ProbTalk3D achieve superior performance compared to state-of-the-art emotion-controlled, deterministic, and non-deterministic models. We recommend watching the supplementary video for visual quality judgment. The entire codebase, including the extended models, is publicly available. (Computers & Graphics-UK, Volume 132, Article 104358)
Citations: 0
Appearance as reliable evidence: Reconciling appearance and generative priors for monocular motion estimation
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-08 DOI: 10.1016/j.cag.2025.104404
Zipei Chen, Yumeng Li, Zhong Ren, Yao-Xiang Ding, Kun Zhou
Abstract: Monocular motion estimation in real scenes is challenging in the presence of noisy and possibly occluded detections. A recent method introduces a diffusion-based generative motion prior, which treats input detections as noisy partial evidence and generates motion through denoising. This improves robustness and motion quality, yet it proceeds regardless of whether the denoised motion is close to the visual observation, which often causes misalignment. In this work, we propose to reconcile model appearance and the motion prior, enabling appearance to play the crucial role of providing reliable, noise-free visual evidence for accurate visual alignment. Appearance is modeled by the radiance of both scene and human for joint differentiable rendering. To achieve this with monocular RGB input, without masks or depth, we propose a semantic-perturbed mode estimation method to faithfully estimate static scene radiance from dynamic input with complex occlusion relationships, and a polyline depth calibration method that leverages knowledge from a depth estimation model to recover the missing depth information. Meanwhile, to leverage knowledge from the motion prior and reconcile it with the appearance guidance during optimization, we also propose an occlusion-aware gradient merging strategy. Experimental results demonstrate that our method achieves better-aligned tracking results while maintaining competitive motion quality. Our code is released at https://github.com/Zipei-Chen/Appearance-as-Reliable-Evidence-implementation. (Computers & Graphics-UK, Volume 132, Article 104404)
Citations: 0
3D reconstruction and precision evaluation of industrial components via Gaussian Splatting
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-08 DOI: 10.1016/j.cag.2025.104422
Guodong Sun, Dingjie Liu, Zeyu Yang, Shaoran An, Yang Zhang
Abstract: Traditional 3D reconstruction methods for industrial components have significant limitations. Structured light and laser scanning require costly equipment and complex procedures, and remain sensitive to scan completeness and occlusions. These constraints restrict their application in settings with budget and expertise limitations. Deep learning approaches reduce hardware requirements but fail to accurately reconstruct complex industrial surfaces from real-world data: industrial components feature intricate geometries and surface irregularities that challenge current deep learning techniques, and these methods also demand substantial computational resources, limiting industrial deployment. This paper presents a 3D reconstruction and measurement system based on Gaussian Splatting. The method incorporates adaptive modifications to address the unique surface characteristics of industrial components, ensuring both accuracy and efficiency. To resolve scale and pose discrepancies between the reconstructed Gaussian model and the ground truth, a robust scaling and registration pipeline has been developed, enabling precise evaluation of reconstruction quality and measurement accuracy. Comprehensive experimental evaluations demonstrate that our approach achieves high-precision reconstruction, with an average Chamfer Distance of 2.24 and a mean F1 Score of 0.19, surpassing existing methods; the average scale error is 2.41%. The proposed system enables reliable dimensional measurements using only consumer-grade cameras, significantly reducing equipment costs and simplifying operation, thereby improving the accessibility of 3D reconstruction in industrial applications. A publicly available industrial component dataset has been constructed to serve as a benchmark for future research. The dataset and code are available at https://github.com/ldj0o/IndustrialComponentGS. (Computers & Graphics-UK, Volume 132, Article 104422)
Citations: 0
Memory-efficient filter-guided diffusion with domain transform filtering
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-08 DOI: 10.1016/j.cag.2025.104389
Gustavo Lopes Tamiosso, Caetano Müller, Lucas Spagnolo Bombana, Manuel M. Oliveira
Abstract: Diffusion models are powerful tools for image synthesis and editing, yet preserving structural content from a guidance image remains challenging. Filter-Guided Diffusion (FGD) tackles this by applying edge-preserving filtering at each denoising step. However, the original FGD relies on joint bilateral filtering, which incurs high VRAM and computational costs, limiting its scalability to high-resolution images. We propose Domain Transform Filter-Guided Diffusion (DT-FGD), a lightweight variant that replaces bilateral filtering with the efficient domain transform filter and introduces a normalization strategy for the guidance image's latent representation. DT-FGD achieves significantly lower VRAM usage and faster inference while improving structural consistency. Our method produces images that better align with the text prompt and vary smoothly under filter parameter changes, leading to more predictable outcomes. Experiments show that DT-FGD can reduce VRAM consumption by over 50%, accelerates inference, and scales to high resolutions on a single GPU, unlike prior approaches. We further present a variant that offers even greater memory savings at the cost of additional inference time. DT-FGD enables structure-preserving diffusion on resource-constrained hardware and opens new directions for high-resolution, controllable image synthesis. (Computers & Graphics-UK, Volume 132, Article 104389)
Citations: 0
SHREC 2025: Partial retrieval benchmark
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-06 DOI: 10.1016/j.cag.2025.104397
Bart Iver van Blokland, Isaac Aguirre, Ivan Sipiran, Benjamin Bustos, Silvia Biasotti, Giorgio Palmieri
Abstract: Partial retrieval is a long-standing problem in the 3D Object Retrieval community. Its main difficulties arise from defining 3D local descriptors in a way that makes them effective for partial retrieval and robust to common real-world issues with 3D data, such as occlusion, noise, or clutter. This SHREC track is based on the newly proposed ShapeBench benchmark to evaluate the matching performance of local descriptors. We propose an experiment consisting of three increasing levels of difficulty, where we combine different filters to simulate real-world issues related to the partial retrieval task. Our main findings show that classic 3D local descriptors like Spin Image are robust to several of the tested filters (and their combinations), while more recent learned local descriptors like GeDI can be competitive for some specific filters. Finally, no 3D local descriptor was able to successfully handle the hardest level of difficulty. (Computers & Graphics-UK, Volume 132, Article 104397)
Citations: 0
Transferable class statistics and multi-scale feature approximation for 3D object detection
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-06 DOI: 10.1016/j.cag.2025.104421
Hao Peng, Hong Sang, Yajing Ma, Ping Qiu, Chao Ji
Abstract: This paper investigates multi-scale feature approximation and transferable features for object detection from point clouds. Multi-scale features are critical for such detection, but learning them usually involves multiple neighborhood searches and scale-aware layers, which hinders lightweight models and burdens research constrained by limited computational resources. This paper approximates point-based multi-scale features from a single neighborhood via knowledge distillation. To compensate for the loss of constructive diversity in a single neighborhood, we design a transferable feature embedding mechanism; specifically, class-aware statistics are employed as transferable features, given their small computational cost. In addition, we introduce a central weighted intersection over union for localization to alleviate the misalignment caused by the center offset during optimization. The method thereby saves computational costs. Extensive experiments on public datasets demonstrate the effectiveness of the proposed method. The code will be released at https://github.com/blindopen/TSM-Det-Pointcloud-. (Computers & Graphics-UK, Volume 132, Article 104421)
Citations: 0
Direct slicing NURBS objects: A numerically dependable form
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-04 DOI: 10.1016/j.cag.2025.104418
Silvio de Barros Melo
Abstract: In additive manufacturing, a three-dimensional model is constructed by sequentially adding material layers. Digitally, complex objects are typically modeled using parametric representations, with Non-Uniform Rational B-Splines (NURBS) surfaces being among the most prominent. Slicing NURBS surfaces, a critical operation in the additive manufacturing workflow, has long been a challenge for 3D modelers due to the complexities inherent in their free-form geometries when intersecting with a plane. Traditionally, this challenge is addressed by converting NURBS representations into meshes composed of approximating triangles. While this approach simplifies the intersection process, it often comes at the expense of accuracy or requires significant computational resources to maintain precision. Although direct slicing methods exist, they encounter various limitations. In this work, we introduce an efficient, numerically robust, conversion-free method for sampling points at the intersection of cutting planes and NURBS objects, termed IsoSlicer. This approach supports NURBS surfaces of any degree while achieving arbitrary accuracy requirements with guaranteed numerical stability. (Computers & Graphics-UK, Volume 132, Article 104418)
Citations: 0
SHREC 2025: Retrieval of Optimal Objects for Multi-modal Enhanced Language and Spatial Assistance (ROOMELSA)
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-03 DOI: 10.1016/j.cag.2025.104400
Trong-Thuan Nguyen, Viet-Tham Huynh, Quang-Thuc Nguyen, Hoang-Phuc Nguyen, Long Le Bao, Thai Hoang Minh, Minh Nguyen Anh, Thang Nguyen Tien, Phat Nguyen Thuan, Huy Nguyen Phong, Bao Huynh Thai, Vinh-Tiep Nguyen, Duc-Vu Nguyen, Phu-Hoa Pham, Minh-Huy Le-Hoang, Nguyen-Khang Le, Minh-Chinh Nguyen, Minh-Quan Ho, Ngoc-Long Tran, Hien-Long Le-Hoang, Minh-Triet Tran
Abstract: Recent 3D retrieval systems are typically designed for simple, controlled scenarios, such as identifying an object from a cropped image or a brief description. Real-world scenarios are more complex, often requiring the recognition of an object in a cluttered scene based on a vague, free-form description. To this end, we present ROOMELSA, a new benchmark designed to evaluate a model's ability to interpret natural language, attend to a specific region within a panoramic room image, and accurately retrieve the corresponding 3D model from a large database. The ROOMELSA dataset includes over 1,600 apartment scenes, nearly 5,200 rooms, and more than 44,000 targeted queries. Empirically, while coarse object retrieval is largely solved, only one top-performing model consistently ranked the correct match first across nearly all test cases. A lightweight CLIP-based model also performed well, although it struggled with subtle variations in materials, part structures, and contextual cues, resulting in occasional errors. These findings highlight the importance of tightly integrating visual and language understanding. By bridging the gap between scene-level grounding and fine-grained 3D retrieval, ROOMELSA establishes a new benchmark for advancing robust, real-world 3D recognition systems. (Computers & Graphics-UK, Volume 132, Article 104400)
Citations: 0
Personalized eXtended Reality experiences to enhance the rehabilitation process of stroke survivors: A scoping review
IF 2.8 | CAS Tier 4 | Computer Science
Computers & Graphics-UK Pub Date: 2025-09-03 DOI: 10.1016/j.cag.2025.104411
Inês Figueiredo, Bernardo Marques, Sérgio Oliveira, Bianca Guerreiro, Samuel Silva, Paula Amorim, Tiago Araújo, Liliana Vale Costa, Carlos Ferreira, Paulo Dias, Beatriz Sousa Santos
Abstract: Stroke affects millions globally, resulting in physical impairments such as paralysis and speech challenges, and cognitive deficits such as memory loss. Rehabilitation plays a vital role in recovery, helping survivors regain lost functions and achieve greater independence, facilitating reintegration into daily life. Despite this, rehabilitation programs often rely on standardized approaches that fail to accommodate the unique needs and goals of individual stroke survivors. This lack of personalization can lead to frustration, loss of motivation, and reduced engagement, ultimately hindering recovery and slowing progress. One possible approach to help overcome these challenges is eXtended Reality (XR), which offers immersive, adaptable virtual environments. XR enables the creation of dynamic, customizable exercises tailored to the specific needs, abilities, and preferences of each individual. These experiences can be interactive and engaging, improving motivation and fostering active participation compared to traditional methods. XR also allows for real-time tracking and feedback, making the process both more effective and enjoyable. This work contributes to the field by presenting a scoping review of personalized XR experiences for stroke rehabilitation, based on an analysis of 39 publications in the SCOPUS database covering 2020 to 2024. The review provides insights into trends, advancements, and challenges, identifying opportunities for future development in this area. By consolidating knowledge in this field, we aim to help guide the development of personalized XR solutions, ultimately improving rehabilitation outcomes and quality of life for stroke survivors and their caregivers. (Computers & Graphics-UK, Volume 133, Article 104411)
Citations: 0