Fusion of transfer learning features and its application in image classification
Akilan Thangarajah, Q. M. J. Wu, Yimin Yang, A. Safaei
2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), April 2017
DOI: 10.1109/CCECE.2017.7946733
Citations: 26
Abstract
Feature fusion methods have been demonstrated to be effective for many computer vision applications. These methods generally combine multiple hand-crafted features. Recently, however, features extracted through transfer learning have proved more robust than hand-crafted features in a wide range of applications, such as object classification and recognition. Transfer learning is a widely adopted strategy in deep convolutional neural networks (DCNNs) because of its multifaceted benefits, which motivates us to explore the effect of fusing transfer learning features from different DCNN architectures. In this work, we extract image features by exploiting three different pre-trained DCNNs through transfer learning. We then transform the features into a generalized subspace using a recently introduced autoencoder network and fuse them to form an intra-class-invariant feature vector, which is used to train a multi-class Support Vector Machine (SVM). Experimental results on various datasets, including object and action images, show that fusing multiple transfer learning features improves classification accuracy compared with fusing multiple hand-crafted features or using the transfer learning features of any single network.
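The pipeline the abstract describes (per-network feature extraction, normalization and fusion, then classification) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: fixed random projections stand in for the three pre-trained DCNN feature extractors, the autoencoder subspace transform is omitted, and a nearest-class-mean classifier stands in for the multi-class SVM. All names, dimensions, and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for three pre-trained DCNN feature extractors (hypothetical):
# each is a fixed random projection from "image" space to a feature space.
D_IN = 64                    # flattened toy-image dimension (assumed)
FEAT_DIMS = [32, 48, 24]     # per-network feature dimensions (assumed)
extractors = [rng.normal(size=(D_IN, d)) for d in FEAT_DIMS]

def extract_and_fuse(x):
    """Extract features with each 'network', L2-normalize, and concatenate.
    Normalizing each block keeps one feature set from dominating the fusion."""
    feats = []
    for W in extractors:
        f = x @ W
        f = f / (np.linalg.norm(f) + 1e-12)
        feats.append(f)
    return np.concatenate(feats)

# Toy two-class data: class means separated in input space.
n_per_class = 20
X0 = rng.normal(loc=0.0, scale=0.5, size=(n_per_class, D_IN))
X1 = rng.normal(loc=2.0, scale=0.5, size=(n_per_class, D_IN))
F0 = np.array([extract_and_fuse(x) for x in X0])
F1 = np.array([extract_and_fuse(x) for x in X1])

# Nearest-class-mean classifier as a lightweight stand-in for the SVM stage.
mu0, mu1 = F0.mean(axis=0), F1.mean(axis=0)

def predict(x):
    f = extract_and_fuse(x)
    return 0 if np.linalg.norm(f - mu0) <= np.linalg.norm(f - mu1) else 1

fused_dim = sum(FEAT_DIMS)   # dimension of the fused feature vector
preds = [predict(x) for x in X0] + [predict(x) for x in X1]
truth = [0] * n_per_class + [1] * n_per_class
accuracy = float(np.mean(np.array(preds) == np.array(truth)))
```

In the paper the concatenated features are additionally passed through an autoencoder to obtain a generalized, intra-class-invariant representation before SVM training; the sketch above only shows the extract-normalize-fuse-classify skeleton.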