Computers & Graphics-Uk: Latest Publications

Reconstruction-based distillation for anomaly detection
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-19. DOI: 10.1016/j.cag.2025.104328
Mengyang Zhao, Qiang Guo
{"title":"Reconstruction-based distillation for anomaly detection","authors":"Mengyang Zhao,&nbsp;Qiang Guo","doi":"10.1016/j.cag.2025.104328","DOIUrl":"10.1016/j.cag.2025.104328","url":null,"abstract":"<div><div>Anomaly detection plays an important role in industrial production. Recent advances have established knowledge distillation as a prominent anomaly detection method, leveraging the paradigm where a student network learns feature representations from a pre-trained teacher network. In practice, this traditional feature imitation strategy leads to overgeneralization, which degrades detection performance. To mitigate this limitation, this paper proposes a reconstruction-based distillation network that replaces direct feature imitation with feature reconstruction. This method improves the student network’s understanding of the semantic information of features. In addition, to improve the accuracy of the student network in predicting anomalous regions, we introduce a prediction consistency loss to ensure that the predictions of the student network are consistent in the training phase with the inference phase. Extensive experiments on the MVTec AD and VisA datasets validate the effectiveness and generalization capability of our method. On the MVTec AD benchmark, our method achieves 99.61% image-level AUROC for anomaly detection and 98.23% pixel-level AUROC for anomaly localization.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104328"},"PeriodicalIF":2.8,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
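A minimal PyTorch sketch of the two ingredients the abstract names: a reconstruction loss in which the student rebuilds the teacher's features rather than imitating them directly, and a prediction consistency term between training-time and inference-time outputs. All names, shapes, and the cosine-distance choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def rd_losses(teacher_feats, student_recons, pred_train, pred_infer):
    """Sketch of the two losses suggested by the abstract.

    teacher_feats, student_recons: lists of (B, C, H, W) feature maps,
        the frozen teacher's features and the student's reconstructions.
    pred_train, pred_infer: (B, 1, H, W) anomaly maps from the student's
        training-time and inference-time prediction paths.
    """
    # Reconstruction loss: per-location cosine distance between teacher
    # features and the student's reconstruction of them.
    recon = sum(
        (1 - F.cosine_similarity(t.flatten(2), s.flatten(2), dim=1)).mean()
        for t, s in zip(teacher_feats, student_recons)
    )
    # Prediction consistency: tie the training-phase prediction to the
    # (detached) inference-phase prediction.
    consistency = F.mse_loss(pred_train, pred_infer.detach())
    return recon + consistency
```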
ℂ3-palette: Co-saliency based colorization for comparing categorical visualizations
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-18. DOI: 10.1016/j.cag.2025.104379
Kecheng Lu, Xubin Chai, Yi Hou, Yunhai Wang
{"title":"ℂ3-palette: Co-saliency based colorization for comparing categorical visualizations","authors":"Kecheng Lu ,&nbsp;Xubin Chai ,&nbsp;Yi Hou ,&nbsp;Yunhai Wang","doi":"10.1016/j.cag.2025.104379","DOIUrl":"10.1016/j.cag.2025.104379","url":null,"abstract":"<div><div>Visual comparison within juxtaposed views is an essential part of interactive data analysis. In this paper, we propose a co-saliency model to characterize the most co-salient features among juxtaposed labeled data visualizations while maintaining class discrimination in the individual visualizations. Based on this model, we present a comparison-driven color design framework, enabling the automatic generation of colors that maximizes co-saliency among juxtaposed visualizations for better identifying items with the largest magnitude change between two data sets. We conducted two online controlled experiments to compare our colorizations of bar charts and scatterplots with results produced by existing single view-based color design methods. We further present an interactive system and conduct a case study to demonstrate the usefulness of our method for comparing juxtaposed line charts. The results show that our approach is able to generate high quality color palettes in support of visual comparisons of juxtaposed categorical visualizations.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104379"},"PeriodicalIF":2.8,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
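As a rough illustration of comparison-driven palette scoring, the sketch below rates a candidate palette by combining within-view class discrimination (minimum pairwise color distance) with a stand-in "co-saliency" term that rewards giving the most distinctive colors to the classes that changed most between the two juxtaposed data sets. Both terms and the weighting are assumptions; the paper's co-saliency model is perceptual and considerably richer.

```python
import numpy as np
from itertools import combinations

def min_pairwise_distance(palette_lab):
    # Smallest CIELAB distance between any two class colors; a proxy
    # for class discrimination within a single view.
    return min(np.linalg.norm(a - b) for a, b in combinations(palette_lab, 2))

def palette_score(palette_lab, change_magnitude, w=0.5):
    """Score one candidate palette for a pair of juxtaposed views.

    palette_lab:      (k, 3) array of class colors in CIELAB.
    change_magnitude: (k,) per-class magnitude of change between the
                      two data sets, normalized to [0, 1].
    """
    k = len(palette_lab)
    dist = np.linalg.norm(palette_lab[:, None] - palette_lab[None, :], axis=-1)
    distinctiveness = dist.sum(axis=1) / (k - 1)   # mean distance to others
    # Stand-in co-saliency: most-changed classes get the most
    # distinctive colors.
    cosal = float(np.dot(change_magnitude, distinctiveness) / k)
    return w * cosal + (1 - w) * min_pairwise_distance(palette_lab)
```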
Multi-view isosurface similarity analysis for transfer function design in direct volume rendering
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-18. DOI: 10.1016/j.cag.2025.104343
Pei Li, Cheng Chen, Fang Wang, Xiaorong Zhang, Yaobin Wang
{"title":"Multi-view isosurface similarity analysis for transfer function design in direct volume rendering","authors":"Pei Li ,&nbsp;Cheng Chen ,&nbsp;Fang Wang ,&nbsp;Xiaorong Zhang ,&nbsp;Yaobin Wang","doi":"10.1016/j.cag.2025.104343","DOIUrl":"10.1016/j.cag.2025.104343","url":null,"abstract":"<div><div>Transfer function (TF) design is crucial in direct volume rendering, yet it faces challenges due to the lack of semantic information about volumetric data. In this paper, we propose an approach to extract the semantic information by clustering isovalues based on a novel isosurface similarity measure and an optimized clustering strategy. The measure is derived from the visual appearance of multi-view rendered images rather than spatial properties. It is designed to more closely model human visual perception mechanisms and supports efficient computation via GPU acceleration. The clustering strategy incorporates both isosurface similarity and isovalue distance to classify volumetric structures and guide semi-automatic TF design. Our proposed approach facilitates the identification of representative isosurfaces and enables users to interactively refine the TF. We demonstrate the effectiveness and generality of our approach across diverse datasets, including medical imaging, industrial CT scans, flow simulations, and combustion data.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104343"},"PeriodicalIF":2.8,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
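The clustering step can be pictured as follows: given images of each isosurface rendered from several views, build a pairwise distance that mixes image dissimilarity with isovalue distance, then cluster. The mean per-view RMSE below is a crude stand-in for the paper's perceptual, GPU-accelerated similarity measure; alpha and the linkage choice are likewise assumptions.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_isovalues(images, isovalues, alpha=0.7, n_clusters=4):
    """Cluster isovalues by multi-view rendered appearance.

    images:    (m, V, H, W) float array, m isovalues rendered from V views.
    isovalues: (m,) array of the corresponding scalar isovalues.
    """
    m = len(isovalues)
    flat = images.reshape(m, images.shape[1], -1)
    d_img = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            # average per-view root-mean-square image difference
            d = np.sqrt(((flat[i] - flat[j]) ** 2).mean(axis=1)).mean()
            d_img[i, j] = d_img[j, i] = d
    d_img /= d_img.max() + 1e-12
    d_val = np.abs(isovalues[:, None] - isovalues[None, :])
    d_val /= d_val.max() + 1e-12
    # Combined distance: appearance dissimilarity plus isovalue distance.
    dist = alpha * d_img + (1 - alpha) * d_val
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```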
LiDAR-3DGS: LiDAR reinforcement for multimodal initialization of 3D Gaussian Splats
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-16. DOI: 10.1016/j.cag.2025.104293
Hansol Lim, Hanbeom Chang, Jongseong Brad Choi, Chul Min Yeum
{"title":"LiDAR-3DGS: LiDAR reinforcement for multimodal initialization of 3D Gaussian Splats","authors":"Hansol Lim ,&nbsp;Hanbeom Chang ,&nbsp;Jongseong Brad Choi ,&nbsp;Chul Min Yeum","doi":"10.1016/j.cag.2025.104293","DOIUrl":"10.1016/j.cag.2025.104293","url":null,"abstract":"<div><div>In this paper, we introduce LiDAR-3DGS, a novel approach for integrating LiDAR data into 3D Gaussian Splatting to enhance scene reconstructions. Rather than relying solely on image-based features, we integrate LiDAR-based features as initialization. To achieve this, we present a novel sampling technique – ChromaFilter – which prioritizes LiDAR points based on color diversity. It effectively samples important features while sparsifying redundant points. Experimental results on both a custom dataset and the ETH3D dataset show consistent improvements in PSNR and SSIM. A ChromaFilter sampling density of <em>n</em> = 10 yields a notable 7.064% gain on PSNR and 0.564% gain on SSIM on the custom dataset, while ETH3D reconstructions exhibit an average PSNR increase of 4.915% and SSIM gain of 0.5951%. Our method provides practical solution for incorporating LiDAR data into 3DGS. Because many operational industrial robots are already equipped with both LiDAR and cameras, our method can be easily adopted to industrial robots to reconstruct more accurate 3DGS models for engineering and remote inspections.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104293"},"PeriodicalIF":2.8,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
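One plausible reading of ChromaFilter ("prioritizes LiDAR points based on color diversity ... while sparsifying redundant points") is a voxel-wise down-sampler that keeps up to n points per cell, chosen greedily to be maximally spread in color space. The voxelization, the greedy farthest-point rule, and all parameters are assumptions for illustration; the paper's exact criterion may differ.

```python
import numpy as np

def chroma_filter(points, colors, voxel_size=0.1, n=10):
    """Down-sample a colored LiDAR cloud, keeping up to n points per
    voxel chosen greedily for color diversity.

    points: (N, 3) xyz coordinates; colors: (N, 3) RGB in [0, 1].
    Returns indices of the kept points.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    order = np.lexsort(keys.T)                      # group equal voxels
    keys_sorted = keys[order]
    boundaries = np.any(np.diff(keys_sorted, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.nonzero(boundaries)[0] + 1, [len(order)]))
    out = []
    for a, b in zip(starts[:-1], starts[1:]):
        idx = order[a:b]
        if len(idx) <= n:
            out.extend(idx)
            continue
        # Greedy farthest-point selection in RGB space: each pick
        # maximizes its distance to the colors already chosen.
        chosen = [idx[0]]
        for _ in range(n - 1):
            c = colors[idx]
            d = np.min(np.linalg.norm(c[:, None] - colors[chosen][None, :],
                                      axis=-1), axis=1)
            chosen.append(idx[int(np.argmax(d))])
        out.extend(chosen)
    return np.array(sorted(set(int(i) for i in out)))
```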
FanNet: A mesh convolution operator for learning dense maps
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-16. DOI: 10.1016/j.cag.2025.104320
Güneş Sucu, Sinan Kalkan, Yusuf Sahillioğlu
{"title":"FanNet: A mesh convolution operator for learning dense maps","authors":"Güneş Sucu,&nbsp;Sinan Kalkan,&nbsp;Yusuf Sahillioğlu","doi":"10.1016/j.cag.2025.104320","DOIUrl":"10.1016/j.cag.2025.104320","url":null,"abstract":"<div><div>In this paper, we introduce a fast, simple and novel mesh convolution operator for learning dense shape correspondences. Instead of calculating weights between nodes, we explicitly aggregate node features by serializing neighboring vertices in a fan-shaped order. Thereafter, we use a fully connected layer to encode vertex features combined with the local neighborhood information. Finally, we feed the resulting features into the multi-resolution functional maps module to acquire the final maps. We demonstrate that our method works well in both supervised and unsupervised settings, and can be applied to isometric shapes with arbitrary triangulation and resolution. We evaluate the proposed method on two widely-used benchmark datasets, FAUST and SCAPE. Our results show that FanNet runs significantly faster and provides on-par or better performance than the related state-of-the-art shape correspondence methods.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104320"},"PeriodicalIF":2.8,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
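The core operator can be sketched in a few lines: serialize each vertex's neighbors in a fixed fan order, concatenate their features with the center vertex's, and apply a shared fully connected layer. The fixed neighbor count k, padding by repetition, and the ReLU are assumptions; see the paper for the actual ordering and the multi-resolution functional-maps stage.

```python
import torch
import torch.nn as nn

class FanConv(nn.Module):
    """Sketch of a fan-style mesh convolution layer."""

    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.k = k  # fixed number of serialized neighbors per vertex
        self.fc = nn.Linear((k + 1) * in_dim, out_dim)

    def forward(self, x, neighbors):
        # x:         (V, in_dim) vertex features
        # neighbors: (V, k) long tensor of neighbor indices, serialized
        #            in fan order around each vertex (padded by repetition)
        ring = x[neighbors]                              # (V, k, in_dim)
        feat = torch.cat([x.unsqueeze(1), ring], dim=1)  # center + ring
        return torch.relu(self.fc(feat.flatten(1)))
```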
Neural implicit curve: A robust curve modeling approach on surface meshes
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-14. DOI: 10.1016/j.cag.2025.104351
Jintong Wang, Qi Zhang, Yun Zhang, Yao Jin, Huaxiong Zhang, Lili He
{"title":"Neural implicit curve: A robust curve modeling approach on surface meshes","authors":"Jintong Wang ,&nbsp;Qi Zhang ,&nbsp;Yun Zhang ,&nbsp;Yao Jin ,&nbsp;Huaxiong Zhang ,&nbsp;Lili He","doi":"10.1016/j.cag.2025.104351","DOIUrl":"10.1016/j.cag.2025.104351","url":null,"abstract":"<div><div>Traditional implicit curve modeling methods on surface meshes, such as variational approaches, are often plagued by numerical instability and heavy reliance on mesh quality, severely limiting their reliability in practical applications. To address these challenges, we propose Neural Implicit Curve Modeling on Meshes (NICMM), a novel framework that integrates Neural Implicit Method with geometric constraints for robust curve design. NICMM leverages physics-driven loss functions to encode positional, smoothness, and other customized constraints, alleviates numerical instabilities and inaccuracies arising from low-quality meshes, such as convergence failures. The framework incorporates specialized modules (e.g., Efficient Channel Attention and Light GLU) to enhance feature extraction and computational efficiency and introduces a two-stage training strategy combining pre-training with rapid convergence optimization. Extensive experiments on the SHREC16 dataset demonstrate that NICMM has proven its mettle by outperforming traditional variational approaches in robustness. In the face of highly degraded meshes replete with elongated and near-degenerate elements, NICMM not only excels in generating high-fidelity curves but also maintains computational efficiency comparable to existing variational method, thereby showcasing its remarkable balance between accuracy and performance. Furthermore, NICMM also supports feature-aware curve design, enabling alignment with user-specified regions and obstacle avoidance through a unified guidance mechanism. This work establishes a new paradigm for manifold curve modeling, with significant potential in CAD/CAM systems, virtual surgery, and other domains that require precise and adaptive geometric design.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104351"},"PeriodicalIF":2.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
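To make the "physics-driven loss" idea concrete, here is a minimal version for a scalar field f on mesh vertices whose zero level set is the designed curve: a positional term pinning prescribed values at user-selected vertices, plus a Dirichlet smoothness term f^T L f. The weights, the Laplacian choice, and the restriction to these two terms are assumptions; NICMM's formulation includes further customized constraints.

```python
import torch

def implicit_curve_loss(f_vals, L, constraint_idx, constraint_vals,
                        w_pos=1.0, w_smooth=0.1):
    """Minimal sketch of a constraint-encoding loss on a mesh.

    f_vals:          (V,) field values predicted by the network.
    L:               (V, V) sparse cotangent (or graph) Laplacian.
    constraint_idx:  indices of vertices with prescribed values
                     (e.g. 0 at user-picked on-curve points).
    constraint_vals: (len(constraint_idx),) target values.
    """
    # Positional term: match prescribed values at constrained vertices.
    pos = ((f_vals[constraint_idx] - constraint_vals) ** 2).mean()
    # Smoothness term: Dirichlet energy f^T L f over the mesh.
    Lf = torch.sparse.mm(L, f_vals.unsqueeze(1)).squeeze(1)
    smooth = torch.dot(f_vals, Lf)
    return w_pos * pos + w_smooth * smooth
```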
A generic query-modify framework for volumetric mesh processing
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-14. DOI: 10.1016/j.cag.2025.104371
Guillaume Damiand, Vincent Nivoliers, Romain Pascual
{"title":"A generic query-modify framework for volumetric mesh processing","authors":"Guillaume Damiand ,&nbsp;Vincent Nivoliers ,&nbsp;Romain Pascual","doi":"10.1016/j.cag.2025.104371","DOIUrl":"10.1016/j.cag.2025.104371","url":null,"abstract":"<div><div>We introduce a query-modify framework for automating volumetric mesh processing. Our method enables flexible and efficient modifications of geometric structures composed of multiple volumes with minimal user-implemented code. Modifications are provided as rules consisting of a query mesh and a target mesh representing structural information to be extracted and replaced. The rules enable both localized queries to be matched with a portion of an input mesh and targeted modifications on the matched portion of the input mesh. Our approach generalizes standard mesh manipulations and adapts to various applications, including geometric modeling, remeshing, and topology-aware transformations. We showcase our framework on several use cases, including the first complete implementation of a tetrahedral recombination method based on 171 cases, exhaustively classifying all possible recombinations. Our framework allows for arbitrarily connected collections of volumes as queries, enabling automated and application-driven mesh modifications.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104371"},"PeriodicalIF":2.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144860267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
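As a toy, surface-mesh analogue of the query-modify idea (the framework itself operates on volumetric meshes), the sketch below expresses an edge flip as a query (two triangles sharing an edge) plus a modification (replace them with the flipped pair). The representation and the rule are illustrative only, not the paper's combinatorial-map machinery.

```python
from collections import defaultdict

def find_shared_edges(triangles):
    """Query: pairs of triangles sharing an edge.
    triangles: list of 3-tuples of vertex ids."""
    edge_map = defaultdict(list)
    for t_id, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_map[tuple(sorted(e))].append(t_id)
    return {e: ts for e, ts in edge_map.items() if len(ts) == 2}

def flip_edge(triangles, edge, t_ids):
    """Modify: replace the matched pair with its edge-flipped version."""
    a, b = edge
    t0, t1 = triangles[t_ids[0]], triangles[t_ids[1]]
    c = next(v for v in t0 if v not in edge)  # apex opposite the edge
    d = next(v for v in t1 if v not in edge)
    triangles[t_ids[0]] = (c, d, a)
    triangles[t_ids[1]] = (d, c, b)

# Usage: tris = [(0, 1, 2), (0, 1, 3)]
#        flip_edge(tris, (0, 1), find_shared_edges(tris)[(0, 1)])
#        -> tris == [(2, 3, 0), (3, 2, 1)]
```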
Training-free style transfer via content-style image inversion
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-14. DOI: 10.1016/j.cag.2025.104352
Songlin Lei, Qiuxia Yang, Ke Yang, Zhengpeng Zhao, Yuanyuan Pu
{"title":"Training-free style transfer via content-style image inversion","authors":"Songlin Lei ,&nbsp;Qiuxia Yang ,&nbsp;Ke Yang ,&nbsp;Zhengpeng Zhao ,&nbsp;Yuanyuan Pu","doi":"10.1016/j.cag.2025.104352","DOIUrl":"10.1016/j.cag.2025.104352","url":null,"abstract":"<div><div>Image style transfer aims to adapt a content image to a target style while preserving its structural information. Despite the strong generative capabilities of diffusion models, their application to style transfer faces two key challenges: (1) entangled content-style interplay during denoising, leading to suboptimal stylization, and (2) reliance on computationally expensive optimization (e.g., model fine-tuning or text inversion). To address these issues, we propose a training-free tri-path framework. The content and style paths separately leverage image inversion to extract content and style features, which are shared with the stylization path. Specifically, the content path preserves structure via residual connections and noised *h*-features, while the style path injects style through AdaIN-modulated self-attention features to avoid artifacts. Our method eliminates optimization overhead and ensures harmonious stylization by decoupling content-style control. Experiments demonstrate that our approach effectively retains content fidelity and style accuracy while minimizing artifacts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104352"},"PeriodicalIF":2.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144861237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
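The AdaIN modulation mentioned in the abstract is a published, well-defined operation (Huang & Belongie, ICCV 2017): renormalize content features channel-wise to the style features' statistics. The sketch shows only that step; how the paper injects the result into the diffusion model's self-attention layers is not reproduced here.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization.

    content_feat, style_feat: (B, C, H, W) tensors. Shifts and scales
    the content features so their per-channel mean/std match the
    style features' statistics.
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean
```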
SDLKF: Signed Distance Linear Kernel Function for surface reconstruction
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-14. DOI: 10.1016/j.cag.2025.104361
Hao-Xiang Chen, Xiao-Lei Li, Tai-Jiang Mu, Qun-Ce Xu, Shi-Min Hu
{"title":"SDLKF: Signed Distance Linear Kernel Function for surface reconstruction","authors":"Hao-Xiang Chen,&nbsp;Xiao-Lei Li,&nbsp;Tai-Jiang Mu,&nbsp;Qun-Ce Xu,&nbsp;Shi-Min Hu","doi":"10.1016/j.cag.2025.104361","DOIUrl":"10.1016/j.cag.2025.104361","url":null,"abstract":"<div><div>In this paper, we introduce a novel explicit representation for surface reconstruction from multi-view images, named Signed Distance Linear Kernel Function (SDLFK), which simultaneously allows fast rendering and accurate surface reconstruction. The key insight is to use linear kernels to fit the Signed Distance Function (SDF) which has an analytic solution for volume rendering instead of numeric approximation. Specifically, the linear kernel function is defined within ellipsoids and calculated as the signed distance to the principal plane. For each ellipsoid intersected by rays, the expected depth and transmittance can be calculated through volume rendering with a closed-form solution. This procedure allows seamless switching between soft and hard surfaces, where the former facilitates optimization and the latter ensures precise reconstruction. Our evaluations demonstrate that our method improves the detailed geometry compared to state-of-the-art methods while maintaining fast and high-fidelity rendering.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104361"},"PeriodicalIF":2.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144880354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
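Under assumptions, the per-primitive kernel can be pictured as: transform a query point into the ellipsoid's frame, keep it only if it lies inside the ellipsoid, and read off its signed distance to the principal plane. The axis convention and the NaN-outside treatment are illustrative choices; the paper's closed-form volume-rendering integral along rays is not shown.

```python
import numpy as np

def linear_kernel_sdf(x, center, R, radii):
    """Evaluate one signed-distance linear kernel (illustrative sketch).

    x:      (N, 3) query points.
    center: (3,) ellipsoid center.
    R:      (3, 3) rotation whose rows are the ellipsoid's local axes;
            the first row is assumed to be the principal-plane normal.
    radii:  (3,) ellipsoid semi-axes.
    Returns the signed distance to the principal plane for points
    inside the ellipsoid, NaN outside (no contribution).
    """
    local = (x - center) @ R.T                    # into the local frame
    inside = ((local / radii) ** 2).sum(axis=1) <= 1.0
    s = local[:, 0]                               # distance along normal
    return np.where(inside, s, np.nan)
```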
InkSpirit: An expert knowledge-driven approach for enhancing the visual logic of traditional Chinese painting text-to-image generation
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk. Pub Date: 2025-08-13. DOI: 10.1016/j.cag.2025.104330
Di Zhang, Chen Yi, Xinyu Gao, Xiangsheng Zeng, Runqiao Xia, Yongbo Jiang, Yingchaojie Feng, Wei Zhang, Wei Chen
{"title":"InkSpirit: An expert knowledge-driven approach for enhancing the visual logic of traditional Chinese painting text-to-image generation","authors":"Di Zhang ,&nbsp;Chen Yi ,&nbsp;Xinyu Gao ,&nbsp;Xiangsheng Zeng ,&nbsp;Runqiao Xia ,&nbsp;Yongbo Jiang ,&nbsp;Yingchaojie Feng ,&nbsp;Wei Zhang ,&nbsp;Wei Chen","doi":"10.1016/j.cag.2025.104330","DOIUrl":"10.1016/j.cag.2025.104330","url":null,"abstract":"<div><div>Traditional Chinese painting (TCP) presents unique challenges for text-to-image models, including composition logic deficiency, lack of inscription semantics, and style deviations. This study proposes the “InkSpirit” framework, employing an expert knowledge-driven approach to address these issues by: (1) constructing a TCP dataset with composition-based Blank Space Principles, (2) building an Artistic conception-Inscription corpus, and (3) designing a generation framework based on ComfyUI workflow for precise control over TCP elements. Experiments demonstrate superior performance in image quality metrics, with validation through expert and user evaluations, advancing the integration of traditional art with AI technology.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104330"},"PeriodicalIF":2.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144908412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0