KA-GCN: Kernel-Attentive Graph Convolutional Network for 3D face analysis
Francesco Agnelli, Giuseppe Facchi, Giuliano Grossi, Raffaella Lanzarotti
Array, vol. 26, Article 100392, 2025. DOI: 10.1016/j.array.2025.100392
Abstract
Graph Structure Learning (GSL) methods address the limitations of real-world graphs by refining their structure and representation. This allows Graph Neural Networks (GNNs) to be applied to broader unstructured domains such as 3D face analysis. GSL can be viewed as the dynamic learning of connection weights within a message-passing layer of a GNN, and in particular of a Graph Convolutional Network (GCN). A significant challenge for GSL methods arises in scenarios with limited availability of large datasets, a common issue in 3D face analysis, particularly in medical applications. This constraint limits the applicability of data-intensive GNN models such as Graph Transformers, which, despite their effectiveness, require large amounts of training data. To address this limitation, we propose the Kernel-Attentive Graph Convolutional Network (KA-GCN). Our key finding is that integrating kernel-based and attention-based mechanisms to dynamically refine distances and learn the adjacency matrix within a GSL framework enhances the model's adaptability, making it particularly effective for 3D face analysis tasks and delivering strong performance in data-scarce scenarios. Comprehensive experiments on the Facescape, Headspace, and Florence datasets, evaluating age, sexual dimorphism, and emotion, demonstrate that our approach outperforms state-of-the-art models in both effectiveness and robustness, achieving an average accuracy improvement of 2%. The project page is available on GitHub.
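To make the core idea concrete, the following is a minimal PyTorch sketch of a kernel-attentive graph convolution layer in the spirit described above: pairwise feature distances are re-weighted by a learnable kernel, combined with attention scores to form a learned adjacency matrix, and features are then propagated GCN-style. This is not the authors' implementation; the class, parameter, and variable names (KernelAttentiveConv, sigma, attn, proj) are hypothetical.

```python
# Hedged sketch of a kernel-attentive graph convolution layer.
# Assumptions: dense adjacency, Gaussian kernel over Euclidean distances,
# simple pairwise-concatenation attention; names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelAttentiveConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.sigma = nn.Parameter(torch.tensor(1.0))  # learnable kernel bandwidth
        self.attn = nn.Linear(2 * in_dim, 1)          # pairwise attention scorer
        self.proj = nn.Linear(in_dim, out_dim)        # GCN feature transform

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features, e.g. 3D landmark coordinates/descriptors.
        n = x.size(0)

        # Kernel term: Gaussian of pairwise Euclidean distances (dynamic distance refinement).
        dist = torch.cdist(x, x)                                  # (N, N)
        kernel = torch.exp(-dist.pow(2) / (2 * self.sigma.pow(2) + 1e-8))

        # Attention term: score every ordered node pair from concatenated features.
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        attn = self.attn(pairs).squeeze(-1)                       # (N, N) logits

        # Learned adjacency: attention weights modulated by the kernel, row-normalized.
        adj = torch.softmax(attn, dim=-1) * kernel
        adj = adj / (adj.sum(dim=-1, keepdim=True) + 1e-8)

        # Standard GCN-style propagation over the learned adjacency.
        return F.relu(adj @ self.proj(x))


if __name__ == "__main__":
    # Toy usage: 68 facial landmarks with 3D coordinates as node features.
    layer = KernelAttentiveConv(in_dim=3, out_dim=16)
    landmarks = torch.randn(68, 3)
    print(layer(landmarks).shape)  # torch.Size([68, 16])
```

Because the adjacency is produced from the data at every forward pass rather than fixed in advance, such a layer can adapt the graph topology to each 3D face scan, which is the GSL behaviour the abstract refers to; the trade-off is the O(N^2) cost of the dense pairwise computation.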