{"title":"Quality assessment of large scale dimensionality reduction methods","authors":"Ntombikayise Banda, A. Engelbrecht","doi":"10.1109/ISCMI.2017.8279588","DOIUrl":null,"url":null,"abstract":"The application of spectral dimension reduction algorithms has been limited to small-to-medium datasets due to the high computational costs associated with solving the generalized eigenvector decomposition problem. This study uses the Nystrom method to approximate the large similarity matrices used in the algorithms, thus making it possible to extend their application to large scale datasets. The paper focuses on the quality of the embeddings produced and studies the interactions between the number of samples used in the approximations, the number of feature dimensions to retain, and the various performance measures. The results provide insights to the variables that are essential for producing reliable low-dimensional feature sets.","PeriodicalId":119111,"journal":{"name":"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI.2017.8279588","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
The application of spectral dimensionality reduction algorithms has been limited to small-to-medium datasets due to the high computational cost of solving the generalized eigendecomposition problem. This study uses the Nyström method to approximate the large similarity matrices used in these algorithms, thus making it possible to extend their application to large-scale datasets. The paper focuses on the quality of the embeddings produced and studies the interactions between the number of samples used in the approximations, the number of feature dimensions retained, and the various performance measures. The results provide insight into the variables that are essential for producing reliable low-dimensional feature sets.
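To make the idea concrete, below is a minimal sketch of a Nyström approximation of a large similarity matrix of the kind the abstract describes: only an n × m slice of the full n × n matrix is computed, the small m × m landmark block is eigendecomposed, and its eigenvectors are extended to all n points. The kernel choice (RBF), the uniform landmark sampling, and all function and parameter names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF similarities between rows of X and rows of Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_embedding(X, m, k, gamma=1.0, seed=0):
    """Approximate the top-k eigenvectors/eigenvalues of the n x n
    similarity matrix using only m sampled landmark columns."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)   # landmark sample (assumed uniform)
    C = rbf_kernel(X, X[idx], gamma)             # n x m block of the similarity matrix
    W = C[idx]                                   # m x m landmark-vs-landmark block
    # Eigendecompose the small landmark block instead of the full matrix.
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(vals)[::-1][:k]           # keep the k largest eigenpairs
    vals, vecs = vals[order], vecs[:, order]
    # Extend landmark eigenvectors to all n points and rescale (Nystrom extension).
    U = np.sqrt(m / n) * C @ vecs / vals         # n x k approximate eigenvectors
    lam = (n / m) * vals                         # approximate eigenvalues of the full matrix
    return U, lam

# Usage: the full 5000 x 5000 matrix is never formed; only a 5000 x 300 slice is.
X = np.random.default_rng(1).normal(size=(5000, 20))
U, lam = nystrom_embedding(X, m=300, k=10, gamma=0.1)
print(U.shape, lam[:3])
```

The two knobs in this sketch, the number of landmark samples m and the number of retained dimensions k, correspond to the variables whose interaction with embedding quality the paper investigates.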