Computers & Graphics-UK: Latest Articles

NEGS-Avatar: Normal Embedded Gaussians for 2D avatar from monocular video
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-09 · DOI: 10.1016/j.cag.2026.104538
Zedan Zheng, Yudi Tan, Zhuo Su, Fan Zhou, Baoquan Zhao
Abstract: Creating realistic human avatars from monocular RGB videos is a long-standing and challenging problem. Existing implicit NeRF-based methods typically lack explicit geometric information in their feature representations. Although 3D Gaussian Splatting (3DGS) has recently emerged as an explicit point-cloud-based alternative, geometric details such as normal information are still missing from this unstructured representation. In this paper, we present NEGS-Avatar, a novel approach to modeling animatable 2D human avatars from monocular videos using 3DGS. Our method incorporates normal information into the 3D Gaussians as a learnable property, constructing directed 3DGS to improve body-appearance modeling. The normal information, along with other properties such as positions, rotations, and scales, is predicted from the given body pose to model pose-dependent non-rigid deformation. The Gaussians are then transformed into the actor's posed space using linear blend skinning to realize pose animation. In addition, we develop a locality-aware adaptive density-control strategy that uses the normal variance in local areas to drive effective Gaussian densification. Finally, we propose separating the specular and diffuse components for color prediction, yielding a more accurate, interpretable, and controllable appearance-prediction model. Experimental results demonstrate that NEGS-Avatar achieves state-of-the-art performance both qualitatively and quantitatively, especially on the details of the clothing surface. The code is available at https://github.com/Zheng-ZD/NEGS-Avatar.git. (Vol. 135, Article 104538.)
Citations: 0
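The posing step the abstract describes — carrying canonical-space Gaussians into the actor's posed space with linear blend skinning — can be sketched generically as follows. This is a minimal, dependency-light illustration of LBS, not the authors' code; the function name and array shapes are assumptions.

```python
import numpy as np

def lbs_transform(points, bone_transforms, weights):
    """Pose canonical-space 3D points (e.g. Gaussian centers) with
    linear blend skinning.

    points:          (N, 3) canonical positions
    bone_transforms: (B, 4, 4) rigid transform of each bone for the target pose
    weights:         (N, B) skinning weights, each row summing to 1
    """
    # Blend the bone matrices per point: (N, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)
    # Apply the blended transform in homogeneous coordinates
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    posed = np.einsum("nij,nj->ni", blended, homo)
    return posed[:, :3]
```

In a full pipeline the same blended transforms would also rotate each Gaussian's orientation (and its embedded normal), not just its center.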
Foreword to the Special Section on Smart Tools and Applications in Graphics (STAG 2024)
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-01-23 · DOI: 10.1016/j.cag.2026.104533
Andrea Giachetti, Umberto Castellani, Ariel Caputo, Valeria Garro, Nicola Capece
Abstract: This Special Section contains extended and revised versions of selected papers presented at the 11th Conference on Smart Tools and Applications in Graphics (STAG 2024), held in Verona, Italy, on November 14–15, 2024. Three papers were selected by appointed members of the Program Committee; their extended versions were subsequently submitted and further reviewed by experts. The resulting collection comprises contributions spanning a broad range of topics, including navigation in mixed reality, reinforcement learning for intelligent agents in 3D environments, and interactive image relighting using neural networks. (Vol. 135, Article 104533.)
Citations: 0
Golden anniversary of Computers & Graphics: A bibliometric overview
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-05 · DOI: 10.1016/j.cag.2026.104539
Muhammad Saqlain, José M. Merigó, Poom Kumam, Joaquim Jorge
Abstract: Computers & Graphics celebrates its golden anniversary in 2025. Motivated by this special event, this study presents a comprehensive bibliometric analysis of the journal, identifying key research trends, frequently cited authors, institutions, countries, and major citation patterns. The work retrieves data from the Web of Science (WoS) Core Collection and Scopus databases and utilises bibliometric tools such as VOSviewer and the bibliometrix software. We analyse keyword evolution, co-citation networks, and bibliographic coupling of the documents published in Computers & Graphics. The distribution of topics indicates increased attention to artificial-intelligence-based methods, including deep learning, point-cloud processing, and virtual reality, alongside established rendering and simulation techniques. Additionally, the bibliometric analysis of productive authors, institutions, and countries indicates increased publication and citation activity associated with institutions in Asian countries, especially China. Beyond these broader trends, the study also highlights Computers & Graphics' recent initiatives emphasising transparency and reproducibility, such as the Graphics Replicability Stamp and the special sections that bridge academic conferences and high-quality journal publications. This study serves as a reference for researchers seeking to understand the historical trajectory, emerging trends, and evolving editorial priorities in computer graphics research. (Vol. 135, Article 104539.)
Citations: 0
Foreword to the Special Section on Shape Modeling International 2025 (SMI 2025)
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-11 · DOI: 10.1016/j.cag.2026.104540
Hongwei Lin, Michela Mortara, Zichun Zhong
(No abstract. Vol. 135, Article 104540.)
Citations: 0
Sketch-guided stylized landscape cinemagraph synthesis
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-14 · DOI: 10.1016/j.cag.2026.104547
Hao Jin, Hengyuan Chang, Xiaoxuan Xie, Zhengyang Wang, Xusheng Du, Shaojun Hu, Haoran Xie
Abstract: Designing stylized cinemagraphs is challenging because complex, expressive flow elements are difficult to customize. To achieve intuitive and detailed control over the generated cinemagraphs, sketches offer a feasible way to convey personalized design requirements beyond text inputs. In this paper, we propose Sketch2Cinemagraph, a sketch-guided framework that enables conditional generation of stylized cinemagraphs from freehand sketches. Sketch2Cinemagraph adopts text prompts for initial landscape generation and provides sketch controls for both spatial and motion cues. A latent diffusion model first generates the target stylized landscape images along with realistic counterparts. A pre-trained object-detection model then obtains masks for the flow regions. We propose a latent motion diffusion model to estimate the motion field in the fluid regions of the generated landscape images; the input motion sketches, together with the prompt, condition the generated motion fields within the masked fluid regions. To synthesize cinemagraph frames, pixels within the fluid regions are warped to their target locations at each timestep using a U-Net-based frame generator. The results verify that Sketch2Cinemagraph can generate aesthetically appealing stylized cinemagraphs with continuous temporal flow from sketch inputs. We showcase its advantages through qualitative and quantitative comparisons against state-of-the-art approaches. (Vol. 135, Article 104547.)
Citations: 0
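The frame-synthesis idea in the abstract — displacing fluid-region pixels along an estimated motion field at each timestep — can be illustrated with a minimal Eulerian warp. The paper uses a learned U-Net frame generator; this dependency-free nearest-neighbor sketch (names and shapes assumed) only shows the underlying displacement sampling.

```python
import numpy as np

def warp_frame(image, flow, t):
    """Backward-warp an image by a constant per-pixel motion field.

    image: (H, W, C) float array
    flow:  (H, W, 2) per-frame displacement (dx, dy) for each pixel
    t:     integer timestep; total displacement is t * flow
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples the source location displaced by -t * flow,
    # clipped to the image bounds (a real implementation would interpolate).
    src_x = np.clip(np.round(xs - t * flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - t * flow[..., 1]), 0, h - 1).astype(int)
    return image[src_y, src_x]
```

Restricting the warp to the detected fluid mask and blending with the static background would give the looping cinemagraph effect the paper targets.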
Editorial Note for Issue 135 of Computers & Graphics
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-03-06 · DOI: 10.1016/j.cag.2026.104567
Joaquim Jorge (Editor-in-Chief)
(No abstract. Vol. 135, Article 104567.)
Citations: 0
Enhanced Force-Scheme: A fast and accurate global dimensionality reduction method
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-01-22 · DOI: 10.1016/j.cag.2026.104536
Jaume Ros, Alessio Arleo, Fernando Paulovich
Abstract: Global nonlinear dimensionality reduction (DR) methods excel at capturing complex features of datasets while preserving their overall high-dimensional structure when projecting them into a lower-dimensional space. Force-Scheme (FS) is one such method, used in a variety of domains, but its adoption is still hindered by distortions and high computational cost. In this paper, we introduce Enhanced Force-Scheme (EFS), a revisited approach to solving the optimization problem posed by FS. We build on the core ideas of the original FS algorithm and introduce a more advanced framework grounded in gradient-based optimization, which yields higher-quality layouts. Additionally, we elaborate on multiple strategies to accelerate the computation of projections with EFS, facilitating its use on large datasets. Finally, we compare it with FS and other popular DR techniques and show that, among the methods tested, EFS best captures global structure while still performing well on local metrics. (Vol. 135, Article 104536.)
Citations: 0
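The gradient-based optimization EFS builds on can be illustrated with a plain full-batch gradient step on the classic pairwise-distance stress that force-directed projection methods minimize. This is a generic sketch of that objective, not the EFS algorithm itself; function names and the learning rate are assumptions.

```python
import numpy as np

def stress(Y, D):
    """Metric stress: sum over pairs i<j of (||y_i - y_j|| - D_ij)^2."""
    dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    return np.sum(np.triu(dist - D, 1) ** 2)

def stress_gradient_step(Y, D, lr=0.001, eps=1e-9):
    """One full-batch gradient-descent step on the stress objective.

    Y: (N, d) current low-dimensional layout
    D: (N, N) target high-dimensional distance matrix
    """
    diff = Y[:, None, :] - Y[None, :, :]           # (N, N, d) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + eps     # (N, N) pairwise distances
    # d(stress)/d(y_i) = sum_j 2 (dist_ij - D_ij) * (y_i - y_j) / dist_ij
    coef = 2.0 * (dist - D) / dist
    np.fill_diagonal(coef, 0.0)
    grad = (coef[..., None] * diff).sum(axis=1)    # (N, d)
    return Y - lr * grad
```

Force-Scheme instead sweeps points one at a time with heuristic step sizes; replacing that with a proper gradient framework is the kind of change EFS describes, though its exact formulation and accelerations are in the paper.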
Systematic validation of LLM-generated structured data — A design space and remaining challenges
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-10 · DOI: 10.1016/j.cag.2026.104545
Madhav Sachdeva, Christopher Narayanan, Marvin Wiedenkeller, Jana Sedlakova, Jürgen Bernard
Abstract: Large language models (LLMs) are increasingly used in academia and practice to generate structured data, supporting crucial data-enrichment tasks such as imputing missing values, labeling data items, and generating synthetic datasets. These benefits, however, depend on validating the LLM-generated data to address known issues of LLMs, including hallucinations, inconsistencies, logical contradictions, and biases. Despite its importance, and despite significant growth in both the diversity and the number of validation approaches, the space these approaches open up remains unstructured. Based on a systematic literature review, we present a design space for approaches to the validation of LLM-generated structured data. The design space organizes these approaches along two primary dimensions, Data Source and Granularity, and extends them with three complementary dimensions: Visualization techniques, Interaction techniques, and Workflow phases. Together, these dimensions form the descriptive, evaluative, and generative power of the design space. We apply the design space to demonstrate its utility through the analysis of three representative LLM-based validation approaches for structured data. Moreover, we reflect on the development of Val-LLM, an interactive visual tool for multi-granularity validation that used the design space as a guideline in a novel approach. The results show that the design space enables researchers and practitioners to systematically characterize validation methods and guide the design of interactive systems for validation. We conclude by discussing limitations, remaining challenges, and opportunities to extend the design space and advance future validation research and practice. (Vol. 135, Article 104545.)
Citations: 0
Foreword to the special section on the 15th Eurographics Workshop on Visual Computing for Biology and Medicine
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-13 · DOI: 10.1016/j.cag.2026.104546
Alessio Arleo, Jan Byška, Monique Meuschke
(No abstract. Vol. 135, Article 104546.)
Citations: 0
Evaluating LLMs’ abilities to create charts, a systematic approach
IF 2.8 · CAS Tier 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2026-04-01 · Epub Date: 2026-02-18 · DOI: 10.1016/j.cag.2026.104544
Maria Ribalta-Albado, Pere-Pau Vázquez
Abstract: The use of generative models, especially those based on pretrained transformers, has become common practice in code development. Tools such as GitHub Copilot and Cursor, as well as the direct use of conversational chatbots, have proven useful for accelerating application development. Unfortunately, generative models cannot determine what is correct or wrong, and their outputs may contain errors; their stochastic nature does not guarantee a single solution to the same problem, either, and the output depends largely on the prompt issued by the user. To assess the capabilities of LLMs, several benchmarks have been proposed, but they often rely on ground-truth data that may not be available. As a result, the extent to which modern LLMs can create charts needs further investigation. This work contributes to the understanding of generative models’ ability to create charts in three ways: (a) creating a dataset of prompts, data sources, and chart types to analyze; (b) designing a set of systematic experiments that cover a wide range of commonly used charts and variations of the visual variables; and (c) empirically analyzing the performance of a large set of LLMs of different sizes, including Claude, CodeLlama, Gemini, Gemma, GPT4o, Llama 3.1, and Mixtral. Our results indicate that even the most advanced LLMs have room for improvement. (Vol. 135, Article 104544.)
Citations: 0
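The systematic experiment design in (a)–(b) amounts to enumerating a reproducible grid of prompts over chart types, data sources, and visual-variable variations, so every model is queried identically. A minimal sketch of one way to build such a grid; all file names, variations, and prompt wording here are hypothetical, not the paper's dataset.

```python
from itertools import product

CHART_TYPES = ["bar chart", "line chart", "scatter plot", "heatmap"]
DATA_SOURCES = ["cars.csv", "weather.csv"]               # hypothetical files
VARIATIONS = ["default colors", "a log-scaled y axis", "bars sorted by value"]

def build_prompts():
    """Enumerate one prompt per (chart, data, variation) combination."""
    return [
        f"Write Python code that loads {data} and draws a {chart} with {variation}."
        for chart, data, variation in product(CHART_TYPES, DATA_SOURCES, VARIATIONS)
    ]
```

Each generated script would then be executed and its output chart scored, which is where the paper's empirical analysis (c) comes in.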