Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-09 | DOI: 10.1016/j.cag.2026.104538
NEGS-Avatar: Normal Embedded Gaussians for 2D avatar from monocular video
Zedan Zheng, Yudi Tan, Zhuo Su, Fan Zhou, Baoquan Zhao

Abstract: Creating realistic human avatars from monocular RGB videos is a long-standing and challenging problem. Existing implicit NeRF-based methods typically lack explicit geometric information in their feature representations. Although 3D Gaussian Splatting (3DGS) has recently emerged as an explicit point-cloud-based alternative, geometric details such as normals are still missing from this unstructured representation. In this paper, we present NEGS-Avatar, a novel approach to modeling animatable 2D human avatars from monocular videos using 3DGS. Our method incorporates normal information into the 3D Gaussians as a learnable property, constructing directed 3DGS to improve body appearance modeling. The normals, along with other properties such as positions, rotations, and scales, are predicted from the given body pose to model pose-dependent non-rigid deformation. The Gaussians are then transformed into the actor's posed space using linear blend skinning to realize pose animation. In addition, we develop a locality-aware adaptive density control strategy that uses the normal variance in local areas to drive effective Gaussian densification. Finally, we propose separating the specular and diffuse components for color prediction, forming a more accurate, interpretable, and controllable appearance prediction model. Experimental results demonstrate that NEGS-Avatar achieves state-of-the-art performance both qualitatively and quantitatively, especially on the details of clothing surfaces. The code is available at https://github.com/Zheng-ZD/NEGS-Avatar.git.

Volume 135, Article 104538.

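The linear blend skinning step mentioned in the abstract is a standard technique: each canonical-space point is transformed by a weighted blend of bone transforms. The sketch below is a generic illustration of that idea, not the authors' implementation; the function name and the simple homogeneous-matrix setup are assumptions.

```python
import numpy as np

def lbs_transform(points, weights, bone_transforms):
    """Transform canonical-space points into posed space via linear blend skinning.

    points:          (N, 3) canonical positions (e.g., Gaussian centers)
    weights:         (N, B) per-point skinning weights; each row sums to 1
    bone_transforms: (B, 4, 4) homogeneous bone transformation matrices
    """
    # Blend the bone transforms per point: (N, 4, 4)
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)
    # Apply each blended transform to the point in homogeneous coordinates
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    posed = np.einsum('nij,nj->ni', blended, homo)
    return posed[:, :3]
```

With identity bone transforms the points are unchanged; a translation bone with weight 1 simply shifts its points, which makes the blending easy to sanity-check.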
Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.cag.2026.104533
Foreword to the Special Section on Smart Tools and Applications in Graphics (STAG 2024)
Andrea Giachetti, Umberto Castellani, Ariel Caputo, Valeria Garro, Nicola Capece

Abstract: This Special Section contains extended and revised versions of selected papers presented at the 11th Conference on Smart Tools and Applications in Graphics (STAG 2024), held in Verona, Italy, on November 14–15, 2024. Three papers were selected by appointed members of the Program Committee; their extended versions were subsequently submitted and further reviewed by experts. The resulting collection comprises contributions spanning a broad range of topics, including navigation in mixed reality, reinforcement learning for intelligent agents in 3D environments, and interactive image relighting using neural networks.

Volume 135, Article 104533.

Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-05 | DOI: 10.1016/j.cag.2026.104539
Golden anniversary of Computers & Graphics: A bibliometric overview
Muhammad Saqlain, José M. Merigó, Poom Kumam, Joaquim Jorge

Abstract: Computers & Graphics celebrates its golden anniversary in 2025. Motivated by this milestone, this study presents a comprehensive bibliometric analysis of the journal, identifying key research trends, frequently cited authors, institutions, countries, and major citation patterns. The work retrieves data from the Web of Science (WoS) Core Collection and Scopus databases and uses bibliometric tools such as VOSviewer and the bibliometrix software. We analyse keyword evolution, co-citation networks, and bibliographic coupling of the documents published in Computers & Graphics. The distribution of topics indicates increased attention to artificial intelligence-based methods, including deep learning, point cloud processing, and virtual reality, alongside established rendering and simulation techniques. Additionally, the bibliometric analysis of productive authors, institutions, and countries indicates increased publication and citation activity associated with institutions in Asian countries, especially China. Beyond these broader trends, the study also highlights Computers & Graphics' recent initiatives that emphasize transparency and reproducibility, such as the Graphics Replicability Stamp and the special sections that bridge academic conferences and high-quality journal publications. This study serves as a reference for researchers seeking to understand the historical trajectory, emerging trends, and evolving editorial priorities in computer graphics research.

Volume 135, Article 104539.

Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-11 | DOI: 10.1016/j.cag.2026.104540
Foreword to the Special Section on Shape Modeling International 2025 (SMI 2025)
Hongwei Lin, Michela Mortara, Zichun Zhong

Volume 135, Article 104540.

Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-14 | DOI: 10.1016/j.cag.2026.104547
Sketch-guided stylized landscape cinemagraph synthesis
Hao Jin, Hengyuan Chang, Xiaoxuan Xie, Zhengyang Wang, Xusheng Du, Shaojun Hu, Haoran Xie

Abstract: Designing stylized cinemagraphs is challenging due to the difficulty of customizing complex and expressive flow elements. To achieve intuitive and detailed control over the generated cinemagraphs, sketches offer a feasible way to convey personalized design requirements beyond text inputs. In this paper, we propose Sketch2Cinemagraph, a sketch-guided framework that enables the conditional generation of stylized cinemagraphs from freehand sketches. Sketch2Cinemagraph adopts text prompts for initial landscape generation and provides sketch controls for both spatial and motion cues. A latent diffusion model first generates the target stylized landscape images along with realistic versions. A pre-trained object detection model then obtains masks for the flow regions. We propose a latent motion diffusion model to estimate the motion field in the fluid regions of the generated landscape images. The input motion sketches serve as conditions, together with the prompt, to control the generated motion fields in the masked fluid regions. To synthesize cinemagraph frames, the pixels within the fluid regions are warped to target locations at each timestep using a U-Net-based frame generator. The results verify that Sketch2Cinemagraph can generate aesthetically appealing stylized cinemagraphs with continuous temporal flow from sketch inputs. We showcase the advantages of Sketch2Cinemagraph through qualitative and quantitative comparisons against state-of-the-art approaches.

Volume 135, Article 104547.

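The frame-synthesis step described in the abstract warps fluid-region pixels along a motion field at each timestep. As a rough illustration of that warping idea only (not the paper's U-Net-based frame generator), a minimal backward warp with nearest-neighbor sampling might look like this; `warp_frame` and its constant-flow assumption are hypothetical:

```python
import numpy as np

def warp_frame(image, flow, t):
    """Backward-warp an image along a constant per-pixel motion field.

    image: (H, W, C) array
    flow:  (H, W, 2) displacement per timestep, as (dx, dy) in pixels
    t:     timestep multiplier
    Uses nearest-neighbor sampling with clamped borders.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples from where the flow says its content came from
    src_x = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

Advancing `t` frame by frame produces the looping motion of a cinemagraph; a real implementation would blend bidirectional warps to hide the border stretching that clamping causes.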
Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-01-22 | DOI: 10.1016/j.cag.2026.104536
Enhanced Force-Scheme: A fast and accurate global dimensionality reduction method
Jaume Ros, Alessio Arleo, Fernando Paulovich

Abstract: Global nonlinear Dimensionality Reduction (DR) methods excel at capturing complex features of datasets while preserving their overall high-dimensional structure when projecting them into a lower-dimensional space. Force-Scheme (FS) is one such method, used in a variety of domains. However, its use is still hindered by distortions and high computational cost. In this paper, we introduce Enhanced Force-Scheme (EFS), a revisited approach to solving the optimization problem posed by FS. We build on the core ideas of the original FS algorithm and introduce a more advanced framework grounded in gradient-based optimization, which yields higher-quality layouts. Additionally, we elaborate on multiple strategies to accelerate the computation of projections using EFS, thereby facilitating its use on large datasets. Finally, we compare it with FS and other popular DR techniques and show that, among the methods tested, EFS best captures global structure while still performing well on local metrics.

Volume 135, Article 104536.

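The gradient-based optimization framing mentioned in the abstract can be illustrated with a generic stress-minimization descent, the classic objective behind force-directed projection methods. This is a simplified stand-in under assumed hyperparameters, not the EFS algorithm itself:

```python
import numpy as np

def stress_descent(D, dim=2, lr=0.01, iters=300, seed=0):
    """Embed points so pairwise distances match D, via gradient descent on stress.

    D: (N, N) symmetric matrix of target high-dimensional distances.
    Returns an (N, dim) layout minimizing sum_{i<j} (||y_i - y_j|| - D_ij)^2.
    """
    rng = np.random.default_rng(seed)
    Y = rng.normal(size=(D.shape[0], dim))
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]   # (N, N, dim) pairwise differences
        dist = np.linalg.norm(diff, axis=-1)   # (N, N) current layout distances
        np.fill_diagonal(dist, 1.0)            # avoid divide-by-zero; diff diagonal is 0
        # Coefficient is positive when a pair is farther apart than its target,
        # so the update below pulls such pairs together (and pushes close pairs apart)
        coef = (dist - D) / dist
        grad = 2.0 * np.sum(coef[..., None] * diff, axis=1)
        Y -= lr * grad
    return Y
```

Running full-gradient descent like this costs O(N^2) per iteration; the acceleration strategies the paper mentions are exactly what makes this family of objectives practical on large datasets.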
Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-10 | DOI: 10.1016/j.cag.2026.104545
Systematic validation of LLM-generated structured data — A design space and remaining challenges
Madhav Sachdeva, Christopher Narayanan, Marvin Wiedenkeller, Jana Sedlakova, Jürgen Bernard

Abstract: Large language models (LLMs) are increasingly being used in academia and practice to generate structured data, supporting crucial data enrichment tasks such as imputing missing values, labeling data items, and generating synthetic datasets. However, these benefits rely on validation of LLM-generated data to address known issues of LLMs, including hallucinations, inconsistencies, logical contradictions, and biases. Despite the importance of validation and the significant growth of validation approaches in both diversity and count, the space these approaches open up remains unstructured. Based on a systematic literature review, we present a design space for approaches to the validation of LLM-generated structured data. The design space structures these approaches along two primary dimensions, Data Source and Granularity, and extends them with three complementary dimensions: Visualization techniques, Interaction techniques, and Workflow phases. Together, these dimensions form the descriptive, evaluative, and generative power of the design space. We demonstrate the design space's utility by applying it to the analysis of three representative LLM-based validation approaches for structured data. Moreover, we reflect on the development process of Val-LLM, an interactive visual tool for multi-granularity validation that leverages the design space as a guideline. The results show that the design space enables researchers and practitioners to systematically characterize validation methods and guide the design of interactive systems for validation. We conclude by discussing limitations, remaining challenges, and opportunities to extend the design space and advance future validation research and practice.

Volume 135, Article 104545.

Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-13 | DOI: 10.1016/j.cag.2026.104546
Foreword to special section on 15th Eurographics workshop on visual computing for biology and medicine
Alessio Arleo, Jan Byška, Monique Meuschke

Volume 135, Article 104546.

Computers & Graphics-Uk | Pub Date: 2026-04-01 | Epub Date: 2026-02-18 | DOI: 10.1016/j.cag.2026.104544
Evaluating LLMs' abilities to create charts, a systematic approach
Maria Ribalta-Albado, Pere-Pau Vázquez

Abstract: The use of generative models, especially those based on pretrained transformers, has become common practice in code development. Tools such as GitHub Copilot, Cursor, and conversational chatbots have proven useful for accelerating application development. Unfortunately, generative models cannot determine what is correct or wrong, and their outputs may contain errors. Nor does their stochastic nature guarantee a single solution for the same problem. Furthermore, the output depends largely on the prompt issued by the user. To assess the capabilities of LLMs, several benchmarks have been proposed, but they often rely on ground truth data that may not be available. As a result, the extent to which modern LLMs can create charts needs further investigation. This work contributes to the understanding of generative models' ability to create charts in three ways: (a) creating a dataset of prompts, data sources, and chart types to analyze; (b) designing a set of systematic experiments that cover a wide range of commonly used charts and variations of the visual variables; and (c) empirically analyzing the performance of a large set of LLMs of different sizes, including Claude, CodeLlama, Gemini, Gemma, GPT4o, Llama 3.1, and Mixtral. Our results indicate that even the most advanced LLMs have room for improvement.

Volume 135, Article 104544.
