KA-GCN: Kernel-Attentive Graph Convolutional Network for 3D face analysis

Journal: Array (Impact Factor 2.3, Q2, Computer Science, Theory & Methods)
Publication date: 2025-04-07
DOI: 10.1016/j.array.2025.100392
Authors: Francesco Agnelli, Giuseppe Facchi, Giuliano Grossi, Raffaella Lanzarotti
{"title":"KA-GCN: Kernel-Attentive Graph Convolutional Network for 3D face analysis","authors":"Francesco Agnelli,&nbsp;Giuseppe Facchi,&nbsp;Giuliano Grossi,&nbsp;Raffaella Lanzarotti","doi":"10.1016/j.array.2025.100392","DOIUrl":null,"url":null,"abstract":"<div><div>Graph Structure Learning (GSL) methods address the limitations of real-world graphs by refining their structure and representation. This allows Graph Neural Networks (GNNs) to be applied to broader unstructured domains such as 3D face analysis. GSL can be considered as the dynamic learning of connection weights within a layer of message passing in a GNN, and particularly in a Graph Convolutional Network (GCN). A significant challenge for GSL methods arises in scenarios with limited availability of large datasets, a common issue in 3D face analysis, particularly in medical applications. This constraint limits the applicability of data-intensive GNN models, such as Graph Transformers, which, despite their effectiveness, require large amounts of training data. To address this limitation, we propose the Kernel-Attentive Graph Convolutional Network (KA-GCN). Our key finding is that integrating kernel-based and attention-based mechanisms to dynamically refine distances and learn the adjacency matrix within a Graph Structure Learning (GSL) framework enhances the model’s adaptability, making it particularly effective for 3D face analysis tasks and delivering strong performance in data-scarce scenarios. Comprehensive experiments on the Facescape, Headspace, and Florence datasets, evaluating age, sexual dimorphism, and emotion, demonstrate that our approach outperforms state-of-the-art models in both effectiveness and robustness, achieving an average accuracy improvement of 2%. The project page is available on GitHub <span><span><sup>1</sup></span></span>.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"26 ","pages":"Article 100392"},"PeriodicalIF":2.3000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005625000190","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Graph Structure Learning (GSL) methods address the limitations of real-world graphs by refining their structure and representation. This allows Graph Neural Networks (GNNs) to be applied to broader unstructured domains such as 3D face analysis. GSL can be viewed as the dynamic learning of connection weights within a message-passing layer of a GNN, and in particular of a Graph Convolutional Network (GCN). A significant challenge for GSL methods arises when large datasets are scarce, a common issue in 3D face analysis and especially in medical applications. This constraint limits the applicability of data-intensive GNN models, such as Graph Transformers, which, despite their effectiveness, require large amounts of training data. To address this limitation, we propose the Kernel-Attentive Graph Convolutional Network (KA-GCN). Our key finding is that integrating kernel-based and attention-based mechanisms to dynamically refine distances and learn the adjacency matrix within a GSL framework enhances the model's adaptability, making it particularly effective for 3D face analysis tasks and delivering strong performance in data-scarce scenarios. Comprehensive experiments on the Facescape, Headspace, and Florence datasets, evaluating age, sexual dimorphism, and emotion, demonstrate that our approach outperforms state-of-the-art models in both effectiveness and robustness, achieving an average accuracy improvement of 2%. The project page is available on GitHub.
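
The abstract gives only a high-level description of the mechanism, but its central idea, pairwise distances refined by a kernel and reweighted by attention to produce a learned adjacency matrix that drives GCN message passing, can be illustrated with a short sketch. The code below is not the authors' implementation: the RBF kernel with a learnable bandwidth, the dot-product attention, and every module, parameter, and dimension name are assumptions chosen only to make the idea concrete.

```python
# Minimal sketch (not the authors' code) of kernel-attentive adjacency learning:
# pairwise feature distances are refined by a learnable RBF kernel, modulated by
# attention, and the resulting adjacency matrix feeds a plain GCN layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelAttentiveAdjacency(nn.Module):
    """Builds a dense adjacency matrix from node features via an RBF kernel
    with a learnable bandwidth, reweighted by dot-product attention."""

    def __init__(self, in_dim: int, attn_dim: int = 32):
        super().__init__()
        self.log_gamma = nn.Parameter(torch.zeros(1))        # learnable kernel bandwidth
        self.query = nn.Linear(in_dim, attn_dim, bias=False)
        self.key = nn.Linear(in_dim, attn_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features (e.g., 3D vertex coordinates plus descriptors)
        dist2 = torch.cdist(x, x).pow(2)                      # squared pairwise distances
        kernel = torch.exp(-self.log_gamma.exp() * dist2)     # RBF kernel weights
        attn = self.query(x) @ self.key(x).t() / self.query.out_features ** 0.5
        attn = F.softmax(attn, dim=-1)                        # row-normalized attention
        return kernel * attn                                  # kernel-attentive adjacency


class GCNLayer(nn.Module):
    """Standard GCN propagation with self-loops and symmetric normalization."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        adj = adj + torch.eye(adj.size(0), device=adj.device)     # add self-loops
        deg_inv_sqrt = adj.sum(-1).clamp(min=1e-6).rsqrt()
        norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        return F.relu(norm_adj @ self.lin(x))


# Toy usage: 100 mesh vertices with 6-dimensional features.
x = torch.randn(100, 6)
adj = KernelAttentiveAdjacency(in_dim=6)(x)
h = GCNLayer(6, 16)(x, adj)
print(h.shape)  # torch.Size([100, 16])
```

In this sketch the adjacency is rebuilt from the input features at every forward pass, which is the dynamic, data-dependent aspect of GSL the abstract refers to: the learnable bandwidth refines the raw Euclidean distances, while the attention term lets the model reweight individual edges before GCN message passing.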

Source journal: Array (Computer Science, General Computer Science)
CiteScore: 4.40
Self-citation rate: 0.00%
Articles published: 93
Review time: 45 days