{"title":"Ensuring Reliable Learning in Graph Convolutional Networks: Convergence Analysis and Training Methodology","authors":"Xinge Zhao;Chien Chern Cheah","doi":"10.1109/TAI.2025.3550458","DOIUrl":null,"url":null,"abstract":"Recent advancements in learning from graph-structured data have highlighted the importance of graph convolutional networks (GCNs). Despite some research efforts on the theoretical aspects of GCNs, a gap remains in understanding their training process, especially concerning convergence analysis. This study introduces a two-stage training methodology for GCNs, incorporating both pretraining and fine-tuning phases. A two-layer GCN model is used for the convergence analysis and case studies. The convergence analysis that employs a Lyapunov-like approach is performed on the proposed learning algorithm, providing conditions to ensure the convergence of the model learning. Additionally, an automated learning rate scheduler is proposed based on the convergence conditions to prevent divergence and eliminate the need for manual tuning of the initial learning rate. The efficacy of the proposed method is demonstrated through case studies on the node classification problem. The results reveal that the proposed method outperforms gradient descent-based optimizers by achieving consistent training accuracies within a variation of 0.1% across various initial learning rates, without requiring manual tuning.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 9","pages":"2510-2525"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10929031/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recent advances in learning from graph-structured data have highlighted the importance of graph convolutional networks (GCNs). Despite some research effort on the theoretical aspects of GCNs, a gap remains in the understanding of their training process, particularly regarding convergence analysis. This study introduces a two-stage training methodology for GCNs that incorporates both pretraining and fine-tuning phases. A two-layer GCN model is used for the convergence analysis and case studies. A convergence analysis employing a Lyapunov-like approach is performed on the proposed learning algorithm, yielding conditions that ensure convergence of the model learning. Additionally, an automated learning rate scheduler based on these convergence conditions is proposed to prevent divergence and eliminate the need for manual tuning of the initial learning rate. The efficacy of the proposed method is demonstrated through case studies on the node classification problem. The results show that the proposed method outperforms gradient descent-based optimizers, achieving consistent training accuracies within a 0.1% variation across various initial learning rates without manual tuning.
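
To make the setup concrete, below is a minimal sketch of the pipeline the abstract describes: a two-layer GCN for node classification, trained in two stages (pretraining one layer, then fine-tuning both), with an automated learning-rate rule in place of hand tuning. The abstract does not state the paper's Lyapunov-derived convergence conditions, so the rule used here (halve the step size whenever the loss rises, a common divergence indicator) is a hypothetical stand-in, not the authors' scheduler; the graph, features, and labels are synthetic placeholders.

```python
# Hypothetical sketch of the two-stage GCN training loop; not the paper's method.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic graph: N nodes, D input features, Hdim hidden units, C classes.
N, D, Hdim, C = 32, 16, 8, 4
X = torch.randn(N, D)
y = torch.randint(0, C, (N,))
A = (torch.rand(N, N) < 0.1).float()
A = ((A + A.T) > 0).float()          # symmetrize the adjacency matrix
A.fill_diagonal_(1.0)                # add self-loops
d_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^{-1/2}(A+I)D^{-1/2}

W1 = (0.1 * torch.randn(D, Hdim)).requires_grad_()
W2 = (0.1 * torch.randn(Hdim, C)).requires_grad_()

def gcn(X):
    # Two-layer GCN: logits = A_hat * ReLU(A_hat * X * W1) * W2
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

def train(params, steps, lr):
    prev = float("inf")
    for _ in range(steps):
        loss = F.cross_entropy(gcn(X), y)
        if loss.item() > prev:       # divergence indicator (assumed rule, not
            lr *= 0.5                # the paper's Lyapunov-based condition)
        prev = loss.item()
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * g          # plain gradient step with the scheduled lr
    return prev

# Stage 1: pretrain the hidden layer only; Stage 2: fine-tune both layers.
# The deliberately large initial lr shows the scheduler shrinking the step
# automatically instead of requiring manual tuning.
print("pretrain loss: ", train([W1], steps=100, lr=1.0))
print("fine-tune loss:", train([W1, W2], steps=100, lr=1.0))
```

Separating pretraining from fine-tuning mirrors the two-stage methodology in the abstract; in the paper the step-size condition is derived from the convergence analysis rather than from the loss-increase heuristic used here.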