{"title":"Multi-level feature fusion parallel branching networks for point cloud learning","authors":"Biao Yan , Zhiyong Tao , Sen Lin , Heng Li","doi":"10.1016/j.cag.2025.104221","DOIUrl":"10.1016/j.cag.2025.104221","url":null,"abstract":"<div><div>As a 3D data representation format, point cloud aims to preserve the original geometric information in 3D space. Researchers have developed convolutional networks based on graph structures to overcome the sparse nature of point cloud. However, due to traditional graph convolutional networks’ shallow layers, obtaining the point cloud’s deep semantic information is complicated. This paper proposes a parallel branching network for multi-level point cloud feature fusion. The shallow feature branch constructs the local graph structure of the point cloud by the k-Nearest Neighbor (kNN) algorithm and then uses Multi-Layer Perceptrons (MLPs) to learn the local features of the point cloud. In the deep feature branch, we design a Sampling-Grouping (SG) module to down-sample the point cloud in multiple stages normalize the point cloud to improve the network performance, and then perform feature learning based on the residual network. The proposed network has been tested on benchmark datasets, including ModelNet40, ScanObjectNN, and ShapeNet Part. Our method outperforms most classical algorithms methods in the extensive classification and segmentation datasets in quantitative and qualitative evaluation metrics.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104221"},"PeriodicalIF":2.5,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143816298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit relevance inference for assembly CAD model retrieval based on design correlation representation","authors":"Yixuan Li , Baoning Ji , Jie Zhang , Jiazhen Pang , Weibo Li","doi":"10.1016/j.cag.2025.104220","DOIUrl":"10.1016/j.cag.2025.104220","url":null,"abstract":"<div><div>Assembly retrieval is a crucial technology for leveraging the extensive design knowledge embedded in CAD product instances. Current methods predominantly employ pairwise similarity measurements, which treat each product model as an isolated entity and overlook the intricate design correlations that reveal high-level design development relationships. To enhance the comprehension of product design correlations within retrieval systems, this paper introduces a novel method for implicit relevance inference in assembly retrieval based on design correlation. We define a part co-occurring relationship to capture the design correlations among assemblies by clustering parts based on shape similarity. At a higher level, all assemblies in the database are constructed as a multiple correlation network based on hypergraph, where the hyperedges represent the part co-occurring relationships. For a given query assembly, the implicit relevance between the query and other assemblies can be calculated by network structure inference. The problem is solved by using a random walk algorithm on the assembly hypergraph network. Comprehensive experiments have shown the effectiveness of the proposed assembly retrieval approach. The proposed method can be seen as an extension of existing pairwise similarity retrieval by further considering assembly relevance, which shows it has versatility and can enhance the effectiveness of existing pairwise similarity retrieval methods.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104220"},"PeriodicalIF":2.5,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143807721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust at every step: Embedding trust quality gates into the visual data exploration loop for machine learning-based clinical decision support systems","authors":"Dario Antweiler, Georg Fuchs","doi":"10.1016/j.cag.2025.104212","DOIUrl":"10.1016/j.cag.2025.104212","url":null,"abstract":"<div><div>Recent advancements in machine learning (ML) support novel applications in healthcare, most significantly clinical decision support systems (CDSS). The lack of trust hinders acceptance and is one of the main reasons for the limited number of successful implementations in clinical practice. Visual analytics enables the development of trustworthy ML models by providing versatile interactions and visualizations for both data scientists and healthcare professionals (HCPs). However, specific support for HCPs to build trust towards ML models through visual analytics remains underexplored. We propose an extended visual data exploration methodology to enhance trust in ML-based healthcare applications. Based on a literature review on trustworthiness of CDSS, we analyze emerging themes and their implications. By introducing trust quality gates mapped onto the Visual Data Exploration Loop, we provide structured checkpoints for multidisciplinary teams to assess and build trust. We demonstrate the applicability of this methodology in three real-world use cases – policy development, plausibility testing, and model optimization – highlighting its potential to advance trustworthy ML in the healthcare domain.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104212"},"PeriodicalIF":2.5,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Global Sparse Texture Filtering for edge preservation and structural extraction","authors":"Jianwu Long , Shuang Chen , Kaixin Zhang , Yuanqin Liu , Qi Luo , Yuten Chen","doi":"10.1016/j.cag.2025.104213","DOIUrl":"10.1016/j.cag.2025.104213","url":null,"abstract":"<div><div>Extracting meaningful structures from complex texture images remains a significant challenge. Texture image smoothing seeks to retain essential structures while eliminating textures, noise and irrelevant details. However, existing smoothing algorithms often degrade small or weak structural edges when reducing dominant textures. To address this limitation, we propose a novel Global Sparse Texture Filtering (GSTF) algorithm for image smoothing. Our method introduces a texture suppression function that compresses large-scale textures while preserving smaller structures, a window variation mapping is formulated. Combined with window total variation, and leads to the derivation of a novel regularization term. Furthermore, we apply a sparse <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span> norm <span><math><mrow><mo>(</mo><mn>0</mn><mo><</mo><mi>p</mi><mo>≤</mo><mn>1</mn><mo>)</mo></mrow></math></span> to constrain the penalty term, enabling the effective smoothing of multi-scale textures while preserving finer edges. Extensive experiments show that the proposed method is both highly effective and superior to existing techniques.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104213"},"PeriodicalIF":2.5,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAGS-GNN: Graph Neural Network for self-collision and anisotropy in dynamic garment simulation","authors":"Kexuan Ban, Yongjian Huai, Xiaoying Nie, Qingkuo Meng, Haifeng Xu","doi":"10.1016/j.cag.2025.104216","DOIUrl":"10.1016/j.cag.2025.104216","url":null,"abstract":"<div><div>Garment is an essential component of digital humans, and the accurate representation of dynamic simulation details and wrinkle characteristics is crucial for enhancing the realism of virtual scenes. However, this task remains significantly challenging in complex simulation scenarios. Therefore, we propose a novel garment simulation method based on Graph Neural Networks (GNNs), referred to as SAGS-GNN, which effectively simulates self-collision and cloth anisotropy. To tackle the self-collision problem, we present the repulsive loss term and the maximum depth loss term. These terms effectively simulate the interactions between the vertices of the cloth mesh by jointly constraining their positions, thereby facilitating the self-collision handling of garments. Furthermore, our approach utilizes the Neo-Hookean StVK method to achieve anisotropy in cloth, further reflecting the different wrinkle details of multiple materials during motion. In summary, our SAGS method effectively mitigates the issue of interpenetration among garments, facilitates the realization of anisotropic properties in a variety of fabric materials, and significantly enhances the visual realism of virtual apparel. We evaluate our method on various garment types and materials, demonstrating competitive qualitative and quantitative results.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104216"},"PeriodicalIF":2.5,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Geometric–Photometric Joint Alignment for facial mesh registration","authors":"Xizhi Wang , Yaxiong Wang , Mengjian Li","doi":"10.1016/j.cag.2025.104214","DOIUrl":"10.1016/j.cag.2025.104214","url":null,"abstract":"<div><div>This paper presents a <strong>G</strong>eometric-<strong>P</strong>hotometric <strong>J</strong>oint <strong>A</strong>lignment (GPJA) method, which aligns discrete human expressions at pixel-level accuracy by combining geometric and photometric information. Common practices for registering human heads typically involve aligning landmarks with facial template meshes using geometry processing approaches, but often overlook dense pixel-level photometric consistency. This oversight leads to inconsistent texture parametrization across different expressions, hindering the creation of topologically consistent head meshes widely used in movies and games. GPJA overcomes this limitation by leveraging differentiable rendering to align vertices with target expressions, achieving joint alignment in both geometry and photometric appearances automatically, without requiring semantic annotation or pre-aligned meshes for training. It features a holistic rendering alignment mechanism and a multiscale regularized optimization for robust convergence on large deformation. The method utilizes derivatives at vertex positions for supervision and employs a gradient-based algorithm which guarantees smoothness and avoids topological artifacts during the geometry evolution. Experimental results demonstrate faithful alignment under various expressions, surpassing the conventional non-rigid ICP-based methods and the state-of-the-art deep learning based method. In practical, our method generates meshes of the same subject across diverse expressions, all with the same texture parametrization. This consistency benefits face animation, re-parametrization, and other batch operations for face modeling and applications with enhanced efficiency.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104214"},"PeriodicalIF":2.5,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit-based collision-aware clothed human reconstruction from a single image","authors":"Yuxin Liu , Guiqing Li , Yongwei Nie , Feiran Yu , Ping Li , Tonglai Liu , Zhaobo Zhang","doi":"10.1016/j.cag.2025.104201","DOIUrl":"10.1016/j.cag.2025.104201","url":null,"abstract":"<div><div>This paper tackles the problem of simultaneously reconstructing SMPL human mesh and garments from a single image. SMPL and its extensions have become the de facto standard of skinning. However, SMPL does not have a mechanism to detect penetrations. To efficiently perceive self-/cross-penetrations, we propose DiSMPL, abbreviated as the Deep Implicit SMPL. As an implicit counterpart of SMPL, DiSMPL is able to produce accurate signed distance field (SDF) from the parameters of SMPL. With DiSMPL as an evaluation for collision detection, we subsequently propose our reconstruction method. It employs SMPL and TailorNet as parametric models for body and garment respectively. It formulates an energy to optimize the shape and pose parameters of the human model and the style parameters of the garments. Our energy involves two categories: we first have the shape and pose constraints, which penalize the difference of the 2D joint positions and the clothed human region between the person in image and the projection of the 3D parametric models; we then introduce a constraint between the human body and garments, which utilizes DiSMPL to easily penalize each vertex of garments located inside the human body to reduce the penetration between them. Experiments show that, with the collision constraint imposed by DiSMPL, we can effectively reduce penetration between the reconstructed human body and garments, achieving a better reconstruction effect than previous approaches.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104201"},"PeriodicalIF":2.5,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143758958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing emotional response detection in virtual reality with quantum support vector machine learning","authors":"Allison Bayro, Heejin Jeong","doi":"10.1016/j.cag.2025.104196","DOIUrl":"10.1016/j.cag.2025.104196","url":null,"abstract":"<div><div>Accurate and efficient emotion classification is essential for virtual reality (VR) affective computing, as it allows systems to tailor gameplay, difficulty, and feedback to improve user engagement and support therapeutic outcomes. However, the high-dimensional and multimodal nature of the physiological signals used in these emotion classification models often poses significant challenges for traditional machine learning methods. This study explores the use of quantum support vector machines (QSVM) to improve efficiency and accuracy in classifying emotions within a VR Pong game featuring three conditions (slow-paced, fast-paced, and lag-induced). Physiological signals, including electrocardiogram (ECG), galvanic skin response (GSR), and electromyogram (EMG), were analyzed along with self-reported emotions from the Self-Assessment Manikin (SAM). Traditional SVMs and QSVMs were compared for their ability to classify arousal and valence from the collected physiological signals and self-reported emotions. A QSVM model using circular entanglement achieved 0.693 precision and a 0.923 F1 score for arousal with five features, surpassing the SVM’s 0.648 precision and 0.44 F1 (using nine features). For valence, QSVM achieved 0.637 accuracy and a 0.95 F1 score with five features, exceeding the SVM 0.603 accuracy and 0.31 F1 (with eight features). Our findings demonstrate that QSVMs efficiently handle high-dimensional physiological data while improving classification performance with fewer features. Although physical movement can affect physiological signals, our results indicate that QSVMs remain promising for improving emotion classification in VR and may enable more effective real-time adaptation in immersive environments.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104196"},"PeriodicalIF":2.5,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143697789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GloNeRF: Boosting NeRF capabilities and multi-view consistency in low-light environments","authors":"Zongqiang Liu , Qingyao Meng , Daying Lu , Gongzheng Li , Decai Guo , Delong Liu , Chuanxin Liu , Zongxu Yang","doi":"10.1016/j.cag.2025.104209","DOIUrl":"10.1016/j.cag.2025.104209","url":null,"abstract":"<div><div>Neural Radiance Field (NeRF) significantly enhances the photorealism and detail richness of images by precisely rendering complex scenes using deep learning models, offering revolutionary improvements in novel view synthesis and three-dimensional scene modeling. However, when processing images captured in low-light conditions, the performance of NeRF can be significantly compromised, resulting in the loss of details and a decline in image quality. Although simply applying 2D low-light enhancement methods can improve image quality, this approach may lead to inconsistencies across multi-views, thereby introducing floating artifacts in the reconstructed neural radiance field. To address this issue, we propose a new framework. Initially, we enhance a series of low-light images using 2D low-light enhancement techniques. Subsequently, after volumetric rendering, we apply a bilateral grid approximation in the process of low-light image enhancement. Finally, we assign a bilateral grid to each training view to accommodate changes induced by low-light enhancement. During the Inference phase, we remove the bilateral grid, directly rendering novel views to ensure consistency across multi-views. Extensive experiments were conducted on three types of low-light datasets, and the results demonstrated satisfactory performance in both qualitative and quantitative evaluations.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104209"},"PeriodicalIF":2.5,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143687153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast simulation of soft-body deformation using connected rigid objects","authors":"Moonjun Chung , Taesoo Kwon , Yejin Kim","doi":"10.1016/j.cag.2025.104202","DOIUrl":"10.1016/j.cag.2025.104202","url":null,"abstract":"<div><div>In the field of computer graphics, physics-based simulations have been actively researched for decades to represent visually realistic motions of soft-body objects. To deform stiff objects, methods such as the finite element method (FEM) and position-based dynamics (PBD) have been traditionally used for physical simulations. However, there are many situations in which it is difficult to perform interactive simulations at high speeds on relatively low-performance devices. In this paper, we propose an approach by which to undertake rapid simulations of soft-body deformation of a 3D mesh model. Assuming an input object with high damping coefficients, we approximated the simulation process using connected rigid objects. To do this, we extracted a skeletal structure from an input mesh and generated collision meshes by clustering and decomposing the skeletal voxels into convex groups. Distributing each contact force to the rigid objects properly, our approach shows that various types of object models can deform convincingly as if they were soft-body objects. Our approach is fully automated for general users and able to simulate both rigid and soft-body objects in the current animation pipeline.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"128 ","pages":"Article 104202"},"PeriodicalIF":2.5,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143687152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}