Greedy deep transform learning
Jyoti Maggu, A. Majumdar
2017 IEEE International Conference on Image Processing (ICIP), 2017-09-18
DOI: 10.1109/ICIP.2017.8296596
Citations: 10
Abstract
We introduce deep transform learning, a new tool for deep learning. A deeper representation is learnt by stacking one transform after another, and the learning proceeds greedily: the first layer learns its transform and features from the input training samples, and each subsequent layer uses the activated features of the previous layer as its training input. Experiments compare against other deep representation learning tools: deep dictionary learning, the stacked denoising autoencoder, the deep belief network and PCANet (a variant of the convolutional neural network). Results show that the proposed technique outperforms all of these, at least on the benchmark datasets compared on (MNIST, CIFAR-10 and SVHN).
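The greedy stacking described in the abstract can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: it assumes a square transform per layer, hard-thresholded features, a ReLU activation between layers, and the standard transform-learning closed-form update for T (from the transform-learning literature); the function names, regularisation weight `lam`, and threshold `thr` are all illustrative choices.

```python
import numpy as np

def learn_transform(X, lam=0.1, thr=0.05, n_iter=15, seed=0):
    """One layer of transform learning (illustrative sketch).

    Alternately solves, for a square transform T,
        min_{T,Z} ||T X - Z||_F^2 + lam * (||T||_F^2 - log|det T|),
    with the features Z hard-thresholded for sparsity. The T update
    uses the well-known closed form: with X X^T + lam*I = L L^T and
    SVD  L^{-1} X Z^T = U diag(s) V^T,
        T = 0.5 * V diag(s + sqrt(s^2 + 2*lam)) U^T L^{-1}.
    """
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    T = rng.standard_normal((d, d)) / np.sqrt(d)
    # L^{-1} is fixed across iterations since X does not change
    Linv = np.linalg.inv(np.linalg.cholesky(X @ X.T + lam * np.eye(d)))
    for _ in range(n_iter):
        Z = T @ X
        Z[np.abs(Z) < thr] = 0.0  # feature update: hard thresholding
        U, s, Vt = np.linalg.svd(Linv @ X @ Z.T)
        T = 0.5 * Vt.T @ np.diag(s + np.sqrt(s**2 + 2 * lam)) @ U.T @ Linv
    return T, Z

def greedy_deep_transform(X, n_layers=3, **kw):
    """Greedy deepening: each layer is trained on the activated
    features of the previous layer (ReLU assumed here)."""
    transforms, feats = [], X
    for _ in range(n_layers):
        T, Z = learn_transform(feats, **kw)
        transforms.append(T)
        feats = np.maximum(Z, 0.0)  # activation feeds the next layer
    return transforms, feats
```

Because each layer is solved to completion before the next one is started, no gradients flow between layers; the depth of the representation comes purely from re-running the same single-layer solver on the previous layer's activated output.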