{"title":"Sequential attention layer-wise fusion network for multi-view classification","authors":"Qing Teng, Xibei Yang, Qiguo Sun, Pingxin Wang, Xun Wang, Taihua Xu","doi":"10.1007/s13042-024-02260-x","DOIUrl":null,"url":null,"abstract":"<p>Graph convolutional network has shown excellent performance in multi-view classification. Currently, to output a fused node embedding representation in multi-view scenarios, existing researches tend to ensure the consistency of embedded node information among multiple views. However, they pay much attention to the immediate neighbors information rather than multi-order node information which can capture complex relationships and structures to enhance feature propagation. Furthermore, the embedded node information in each convolutional layer has not been fully utilized because the consistency is frequently achieved by the final convolutional layer. To tackle these limitations, we develop a new end-to-end multi-view learning architecture: sequential attention Layer-wise Fusion Network for multi-view classification (SLFNet). Motivated by the fact that for each view, multi-order node information is hidden in the multiple layer-wise node embedding representations, a set of sequential attentions can then be calculated over those multiple layers, which provides a novel fusion strategy from the perspectives of multi-order. The contributions of our architecture are: (1) capturing multi-order node information instead of using the immediate neighbors, thereby obtaining more accurate node embedding representations; (2) designing a sequential attention module that allows adaptive learning of node embedding representation for each layer, thereby attentively fusing these layer-wise node embedding representations. Our experiments, focusing on semi-supervised node classification tasks, highlight the superiorities of SLFNet compared to state-of-the-art approaches. Reports on deeper layer convolutional results further confirm its effectiveness in addressing over-smoothing problem.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":"13 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Machine Learning and Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s13042-024-02260-x","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Graph convolutional networks have shown excellent performance in multi-view classification. To output a fused node embedding representation in multi-view scenarios, existing studies tend to enforce the consistency of embedded node information across multiple views. However, they focus on immediate-neighbor information rather than multi-order node information, which can capture complex relationships and structures to enhance feature propagation. Furthermore, the embedded node information in each convolutional layer is not fully utilized, because consistency is usually enforced only at the final convolutional layer. To tackle these limitations, we develop a new end-to-end multi-view learning architecture: the Sequential attention Layer-wise Fusion Network for multi-view classification (SLFNet). Motivated by the fact that, for each view, multi-order node information is hidden in the multiple layer-wise node embedding representations, a set of sequential attentions can be calculated over those layers, which provides a novel fusion strategy from the multi-order perspective. The contributions of our architecture are: (1) capturing multi-order node information instead of relying only on immediate neighbors, thereby obtaining more accurate node embedding representations; (2) designing a sequential attention module that adaptively learns a node embedding representation for each layer, thereby attentively fusing these layer-wise node embedding representations. Our experiments on semi-supervised node classification tasks highlight the superiority of SLFNet over state-of-the-art approaches. Results with deeper convolutional layers further confirm its effectiveness in addressing the over-smoothing problem.
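To make the layer-wise fusion idea concrete, the sketch below shows one possible way to keep every GCN layer's node embeddings and fuse them with learned attention weights instead of using only the final layer. This is a minimal single-view illustration, not the authors' SLFNet implementation: the simple per-layer scoring vector, the class `LayerwiseAttentionFusion`, and the dense normalized adjacency `adj_norm` are all assumptions made for the example, and the paper's sequential attention module may differ.

```python
# Minimal sketch (not the authors' released code): attention-weighted fusion of
# layer-wise GCN embeddings for a single view, using a dense normalized adjacency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, h):
        return F.relu(adj_norm @ self.linear(h))

class LayerwiseAttentionFusion(nn.Module):
    """Stack of GCN layers whose per-layer outputs (multi-order node
    information) are fused by learned attention weights rather than
    keeping only the final layer's embedding."""
    def __init__(self, in_dim, hid_dim, num_classes, num_layers=3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            GCNLayer(dims[i], dims[i + 1]) for i in range(num_layers)
        )
        # Hypothetical scoring head: one scalar score per layer embedding.
        self.score = nn.Linear(hid_dim, 1, bias=False)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, adj_norm, x):
        h, per_layer = x, []
        for layer in self.layers:
            h = layer(adj_norm, h)
            per_layer.append(h)                       # keep every layer's embedding
        stacked = torch.stack(per_layer, dim=1)       # (N, L, hid_dim)
        attn = torch.softmax(self.score(stacked), 1)  # (N, L, 1) weights over layers
        fused = (attn * stacked).sum(dim=1)           # attention-weighted fusion
        return self.classifier(fused)

# Toy usage: 5 nodes, 8 input features, 3 classes.
n = 5
adj = (torch.eye(n) + (torch.rand(n, n) > 0.5).float()).clamp(max=1.0)  # self-loops + random edges
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]          # D^-1/2 A D^-1/2
model = LayerwiseAttentionFusion(in_dim=8, hid_dim=16, num_classes=3)
logits = model(adj_norm, torch.randn(n, 8))
print(logits.shape)  # torch.Size([5, 3])
```

In a multi-view setting, one such stack would be run per view and the per-view fused embeddings combined (for example, with a further attention or consistency objective) before classification; that step is omitted here for brevity.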
About the journal:
Cybernetics is concerned with describing complex interactions and interrelationships between systems, which are omnipresent in our daily life. Machine Learning discovers fundamental functional relationships between variables and ensembles of variables in systems. The merging of the disciplines of Machine Learning and Cybernetics is aimed at discovering various forms of interaction between systems through diverse mechanisms of learning from data.
The International Journal of Machine Learning and Cybernetics (IJMLC) focuses on the key research problems emerging at the junction of machine learning and cybernetics and serves as a broad forum for rapid dissemination of the latest advancements in the area. The emphasis of IJMLC is on the hybrid development of machine learning and cybernetics schemes inspired by different contributing disciplines such as engineering, mathematics, cognitive sciences, and applications. New ideas, design alternatives, implementations and case studies pertaining to all the aspects of machine learning and cybernetics fall within the scope of the IJMLC.
Key research areas to be covered by the journal include:
Machine Learning for modeling interactions between systems
Pattern Recognition technology to support discovery of system-environment interaction
Control of system-environment interactions
Biochemical interaction in biological and biologically-inspired systems
Learning for improvement of communication schemes between systems