Designing and evaluating an immersive VR experience of a historic sailing ship in museum contexts
Spyros Vosinakis, Panayiotis Koutsabasis, George Anastassakis, Andreas Papasalouros, Kostas Damianidis
Computers & Graphics, vol. 133, Article 104439. DOI: 10.1016/j.cag.2025.104439. Published 2025-10-09.

Abstract: Museums and exhibitions can benefit from immersive technologies by embodying visitors in rich interactive environments, where they can experience digitally reconstructed scenes and stories of the past. Nevertheless, public-space Virtual Reality (VR) interactions need to be short in duration, carefully designed to communicate the intended message, and optimized for the user experience, especially for first-time users. This paper contributes to ongoing research on user experience in VR for cultural heritage by presenting the design and user evaluation of an installation that immerses users on board a historic sailing ship and has been part of a museum exhibition. We present the process of reconstructing the ship and developing the application, with emphasis on design choices about the user experience (scene presentation, content delivery, navigation and interaction modes, assistance, etc.). We have performed a thorough user experience evaluation and present its results, together with our reflections on design issues regarding public VR installations for museums.
{"title":"Including reflections in real-time voxel-based global illumination","authors":"Alejandro Cosin-Ayerbe, Gustavo Patow","doi":"10.1016/j.cag.2025.104449","DOIUrl":"10.1016/j.cag.2025.104449","url":null,"abstract":"<div><div>Despite advances in rendering techniques, achieving high-quality real-time global illumination remains a significant challenge in Computer Graphics. While offline methods produce photorealistic lighting effects by accurately simulating light transport, real-time approaches struggle with the computational complexity of global illumination, particularly when handling dynamic scenes and moving light sources. Existing solutions often rely on precomputed data structures or approximate techniques, which either lack flexibility or introduce artifacts that degrade visual fidelity. In this work, we build upon previous research on a voxel-based real-time global illumination method to efficiently incorporate reflections and interreflections for both static and dynamic objects. Our approach leverages a voxelized scene representation, combined with a strategy for ray tracing camera-visible reflections, to ensure accurate materials while maintaining high performance. Key contributions include: (i) a high-quality material system capable of diffuse, glossy, and specular interreflections for both static and dynamic scene objects (ii) a highly-performant screen-space material model with a low memory consumption; and (iii) an open-source full implementation for further research and development. Our method outperforms state-of-the-art academic and industrial techniques, achieving higher quality and better temporal stability without requiring excessive computational resources. By enabling real-time global illumination with reflections, our work lays the foundation for more advanced rendering systems, ultimately moving closer to the visual fidelity of offline rendering while maintaining interactivity.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104449"},"PeriodicalIF":2.8,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145269532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PersonalityLens: Visualizing in-depth analysis for LLM-driven personality insights
Xiaoyi Wang, Jialong Ye, Guangtao Zhang, Honglei Guo
Computers & Graphics, vol. 133, Article 104452. DOI: 10.1016/j.cag.2025.104452. Published 2025-10-04.

Abstract: Large Language Models (LLMs) have demonstrated strong potential for text-based personality assessment and are increasingly adopted by domain experts as assistive tools. Rather than focusing on prediction accuracy, users now prioritize insight-driven analysis, using LLMs to explore large volumes of written and spoken language through simple verbal prompts. However, a gap remains between LLM-detected personality traits and users' ability to contextualize these outputs within established psychological theories and mechanisms. Existing tools often lack support for multi-level insights and fail to capture the dynamic evolution of traits and facets over time, limiting deeper analysis. To address this, we propose PersonalityLens, a visual analysis tool designed to enhance insight discovery in personality analysis. Our design is informed by a comprehensive requirements analysis with domain experts and supports: (1) in-depth exploration of detected traits and their corresponding utterances, supporting insights at varying levels of granularity, (2) exploration of how personality traits and facets dynamically evolve in finer contexts over time, and (3) alignment of traits and facets with psychological theories. We present two complementary case studies, one based on fictional TV dialogue and the other on therapeutic interactions, demonstrating PersonalityLens's adaptability to diverse analytic goals and contexts. A qualitative think-aloud user study shows that PersonalityLens supports context-aware interpretation and insight discovery. Building on these findings, we outline design implications to inspire future research and enhance psychotherapy tools with integrated personality analysis for mental health support.
CaRoLS: Condition-adaptive multi-level road layout synthesis
Tian Feng, Long Li, Weitao Li, Bo Li, Junao Shen
Computers & Graphics, vol. 133, Article 104451. DOI: 10.1016/j.cag.2025.104451. Published 2025-10-03.

Abstract: Synthesizing road layouts, which define the spatial structure of cities, is critical for many urban applications. Conventional deep learning methods, however, struggle to handle both unconditional and conditional inputs, and rarely capture the multi-level complexity of real road networks. We propose CaRoLS, a unified two-stage method for condition-adaptive multi-level road layout synthesis. Specifically, the Multi-level Layout Reconstruction stage uses a pre-trained variational autoencoder to encode a real-world road layout into a latent representation and then reconstructs the image. The Condition-adaptive Representation Generation stage employs a diffusion model to generate a latent representation from Gaussian noise, or from noise combined with an optional conditioning image containing natural and socio-economic information. This design balances computational efficiency with the ability to model continuous data. To further enhance output quality, we introduce a Condition-aware Decoder Block module that integrates global context and local details, replacing the standard U-Net decoder blocks in the diffusion model. Experiments on an Australian metropolitan dataset show that CaRoLS outperforms representative general and specialized synthesis methods. Compared to the current state-of-the-art methods, improvements reach up to 36.47% and 4.05% in image and topological metrics for the unconditional mode, and 56.25% and 3.18% in the conditional mode. These results demonstrate that CaRoLS generates multi-level road layouts with strong structural fidelity and high connectivity, and provides a unified pipeline for both unconditional and conditional synthesis.
{"title":"Foreword to special section: Highlights from EuroVA 2024","authors":"Hans-Jörg Schulz, Marco Angelini","doi":"10.1016/j.cag.2025.104450","DOIUrl":"10.1016/j.cag.2025.104450","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104450"},"PeriodicalIF":2.8,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145269535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the attribution of gender and the perception of emotions in virtual humans
Victor Flávio de Andrade Araujo, Angelo Brandelli Costa, Soraia Raupp Musse
Computers & Graphics, vol. 133, Article 104446. DOI: 10.1016/j.cag.2025.104446. Published 2025-09-29.

Abstract: Virtual Humans (VHs) are becoming increasingly realistic, raising questions about how users perceive their gender and emotions. In this study, we investigate how textually assigned gender and visual facial features influence both gender attribution and emotion recognition in VHs. Two experiments were conducted. In the first, participants evaluated a nonbinary VH animated with expressions performed by both male and female actors. In the second, participants assessed binary male and female VHs animated by either real actors or data-driven facial styles. Results show that users often rely on textual gender cues and facial features to assign gender to VHs. Emotion recognition was more accurate when expressions were performed by actresses or derived from facial styles, particularly in nonbinary models. Notably, participants more consistently attributed gender according to textual cues when the VH was visually androgynous, suggesting that the absence of strong gendered facial markers increases reliance on textual information. These findings offer insights for designing more inclusive and perceptually coherent virtual agents.
CeRF: Convolutional neural radiance derivative fields for new view synthesis
Wenjie Liu, Ling You, Xiaoyan Yang, Dingbo Lu, Yang Li, Changbo Wang
Computers & Graphics, vol. 133, Article 104447. DOI: 10.1016/j.cag.2025.104447. Published 2025-09-27.

Abstract: Recently, Neural Radiance Fields (NeRF) have seen a surge in popularity, driven by their ability to generate high-fidelity images from novel views. However, unexpected "floating ghost" artifacts usually emerge under limited training views and intricate optical phenomena. This issue stems from the inherent ambiguities in radiance fields, rooted in the fundamental volume rendering equation and the unrestricted learning paradigms of multi-layer perceptrons. In this paper, we introduce Convolutional Neural Radiance Fields (CeRF), a novel approach that models the derivatives of radiance along rays and resolves these ambiguities through a fully neural rendering pipeline. To this end, a single-surface selection mechanism involving both a modified softmax function and an ideal point is proposed to implement our radiance derivative fields. Furthermore, a structured neural network architecture with 1D convolutional operations is employed to further boost performance by extracting latent ray representations. Extensive experiments demonstrate the promising results of our proposed model compared with existing state-of-the-art approaches.
Advancing agricultural remote sensing: A comprehensive review of deep supervised and Self-Supervised Learning for crop monitoring
Mateus Pinto da Silva, Sabrina P.L.P. Correa, Mariana A.R. Schaefer, Julio C.S. Reis, Ian M. Nunes, Jefersson A. dos Santos, Hugo N. Oliveira
Computers & Graphics, vol. 133, Article 104434. DOI: 10.1016/j.cag.2025.104434. Published 2025-09-26.

Abstract: Deep Learning applied to Remote Sensing has become a powerful tool to increase agricultural productivity, mitigate the effects of climate change, and monitor deforestation. However, there is a lack of standardization and appropriate taxonomic classification of the literature available in the context of informatics. Taking advantage of the categories already available in the literature, this paper provides an overview of the relevant literature organized into five main applications: Parcel Segmentation, Crop Mapping, Crop Yielding, Land Use and Land Cover, and Change Detection. We review notable trends, including the transition from traditional to deep learning, convolutional models, recurrent and attention-based models, and generative strategies. We also map the use of Self-Supervised Learning through contrastive, non-contrastive, data-masking, and hybrid semi-supervised pretraining for the aforementioned applications, with an experimental benchmark for Post-Harvest Crop Mapping models, and present our solution, SITS-Siam, which achieves top performance on two of the three datasets tested. In addition, we provide a comprehensive overview of publicly available datasets for these applications, as well as unlabeled datasets for Remote Sensing in general. We hope this survey can serve as a guide for future work in this context. The benchmark code and pre-trained weights are available at https://github.com/mateuspinto/rs-agriculture-survey-extended.
Diffusion model-based size variable virtual try-on technology and evaluation method
Shufang Zhang, Hang Qian, Minxue Ni, Yaxuan Li, Wenxin Ding, Jun Liu
Computers & Graphics, vol. 133, Article 104448. DOI: 10.1016/j.cag.2025.104448. Published 2025-09-25.

Abstract: With the rapid development of electronic commerce, virtual try-on technology has become an essential tool to satisfy consumers' personalised clothing preferences. Diffusion-based virtual try-on systems aim to naturally align garments with target individuals, generating realistic and detailed try-on images. However, existing methods overlook the importance of garment size variations in meeting personalised consumer needs. To address this, we propose a novel virtual try-on method named SV-VTON, which introduces garment sizing concepts into virtual try-on tasks. SV-VTON first generates refined masks for multiple garment sizes, then integrates these masks with garment images at varying proportions, enabling virtual try-on simulations across different sizes. In addition, we develop a specialised size evaluation module to quantitatively assess the accuracy of size variations. This module calculates differences between generated size increments and international sizing standards, providing objective measurements of size accuracy. To further validate SV-VTON's generalisation capability across different models, we conduct experiments on multiple SOTA Diffusion models. The results demonstrate that SV-VTON consistently achieves precise multi-size virtual try-on across various SOTA models, validating the effectiveness and rationality of the proposed method and significantly fulfilling users' personalised multi-size virtual try-on requirements.
{"title":"The vividness of mental imagery in virtual reality: A study on multisensory experiences in virtual tourism","authors":"Mariana Magalhães , Miguel Melo , António Coelho , Maximino Bessa","doi":"10.1016/j.cag.2025.104443","DOIUrl":"10.1016/j.cag.2025.104443","url":null,"abstract":"<div><div>This paper aims to evaluate how different combinations of multisensory stimuli affect the vividness of users’ mental imagery in the context of virtual tourism. To this end, a between-subjects experimental study was conducted with 94 participants, who were allocated to either a positive or a negative immersive virtual environment. The positive environment contained only pleasant multisensory stimuli, whereas the negative contained only unpleasant stimuli. For each of the virtual experiences, a multisensory treasure hunt was developed, where each object found corresponded to a planned combination of stimuli (positive or negative, accordingly). The results showed that positive stimuli involving a higher number of sensory modalities resulted in higher reported vividness. In contrast, when the same multisensory modalities were delivered with negative stimuli, vividness levels decreased — an effect we attribute to potential cognitive overload. Nevertheless, some reduced negative combinations (audiovisual with smell and audiovisual with haptics) remained effective, indicating that olfactory and haptic cues play an important role in shaping users’ vividness of mental imagery, even in negative contexts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104443"},"PeriodicalIF":2.8,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145222445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}