{"title":"Multi-view deep subspace clustering via level-by-level guided multi-level features learning","authors":"Kaiqiang Xu, Kewei Tang, Zhixun Su","doi":"10.1007/s10489-024-05807-1","DOIUrl":null,"url":null,"abstract":"<div><p>Multi-view subspace clustering has attracted extensive attention due to its ability to efficiently handle data from diverse sources. In recent years, plentiful multi-view subspace clustering methods have emerged and achieved satisfactory clustering performance. However, these methods rarely consider simultaneously handling data with a nonlinear structure and exploiting the structural and multi-level information inherent in the data. To remedy these shortcomings, we propose the novel multi-view deep subspace clustering via level-by-level guided multi-level features learning (MDSC-LGMFL). Specifically, an autoencoder is used for each view to extract the view-specific multi-level features, and multiple self-representation layers are introduced into the autoencoder to learn the subspace representations corresponding to the multi-level features. These self-representation layers not only provide multiple information flow paths through the autoencoder but also enforce multiple encoder layers to produce the multi-level features that satisfy the linear subspace assumption. With the novel level-by-level guidance strategy, the last-level feature is guaranteed to encode the structural information from the view and the previous-level features. Naturally, the subspace representation of the last-level feature can more reliably reflect the data affinity relationship and thus can be viewed as the new, better representation of the view. Furthermore, to guarantee the structural consistency among different views, instead of simply learning the common subspace structure by enforcing it to be close to different view-specific new, better representations, we conduct self-representation on these new, better representations to learn the common subspace structure, which can be applied to the spectral clustering algorithm to achieve the final clustering results. Numerous experiments on six widely used benchmark datasets show the superiority of the proposed method.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"54 21","pages":"11083 - 11102"},"PeriodicalIF":3.4000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-05807-1","RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
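The core self-expression step the abstract relies on (learning a coefficient matrix C so that each sample is represented by the other samples, then turning |C| into an affinity for spectral clustering) can be sketched as follows. This is a hedged, single-view illustration, not the authors' MDSC-LGMFL implementation: the ridge-regularized closed form, the `lam` parameter, and the helper names are assumptions made for the sketch, and the autoencoder-based multi-level features are omitted.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_representation(Z, lam=0.1):
    # Closed-form ridge-regularized self-expression (an assumption of
    # this sketch, not the paper's exact objective):
    #   min_C ||Z - C Z||_F^2 + lam * ||C||_F^2
    # which gives C = G (G + lam I)^{-1} with Gram matrix G = Z Z^T.
    G = Z @ Z.T
    return G @ np.linalg.inv(G + lam * np.eye(G.shape[0]))

def subspace_cluster(Z, n_clusters, lam=0.1):
    # Symmetrize |C| into an affinity matrix and feed it to spectral
    # clustering, as in standard subspace-clustering pipelines.
    C = self_representation(Z, lam)
    W = 0.5 * (np.abs(C) + np.abs(C).T)
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed", random_state=0)
    return sc.fit_predict(W)

# Toy data: 20 points on each of two 1-D linear subspaces (lines) in R^3.
rng = np.random.default_rng(0)
Z = np.vstack([np.outer(rng.uniform(1, 2, 20), [1.0, 0.0, 0.0]),
               np.outer(rng.uniform(1, 2, 20), [0.0, 1.0, 0.0])])
labels = subspace_cluster(Z, n_clusters=2)
```

Because the two toy subspaces are orthogonal, the Gram matrix (and hence C and W) is exactly block-diagonal, so the affinity graph has two connected components and spectral clustering recovers the two subspaces. In the paper's method, the same self-expression idea is applied on top of autoencoder features and across views, rather than on raw data as here.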
Journal description:
Focusing on research in artificial intelligence and neural networks, this journal addresses real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and instead require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.