A Modified Echo State Network for Time Independent Image Classification
S. Gardner, M. Haider, L. Moradi, V. Vantsevich
2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 255-258
Published: 2021-08-09
DOI: 10.1109/MWSCAS47672.2021.9531776
Citations: 1
Abstract
Image classification is typically performed with highly trained feed-forward machine learning algorithms such as deep neural networks and support vector machines. An image can instead be treated as a time-series input by applying it to the network multiple times, opening the way for recurrent neural networks to perform tasks like image classification, semantic segmentation, and auto-encoding. With this approach, ultra-fast training, network optimization, and short-term memory effects allow dynamic, low-volume datasets to be learned quickly without heavy image pre-processing or feature extraction; the main limitation is that input images require labeled outputs for training, as is true of most standard approaches. In this work, the MNIST handwritten digit dataset is used as a benchmark to evaluate a modified Echo State Network for static image classification. The image array is passed through a noise filter multiple times as the Echo State Network converges to a classification. This highly dynamic approach adapts readily to sequential image (video) tasks such as object tracking and is effective with small datasets. Classification rates reach 95.3% with a sample size of 10,000 handwritten digits and a training time of approximately 5 minutes. Progression of this research enables discrete image and time-series classification under a single algorithm, with low computing power and memory requirements.
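The abstract describes repeatedly presenting a noise-filtered static image to an Echo State Network and training only a linear readout, which is what makes training fast. The paper's actual architecture and hyperparameters are not given here, so the following is only a minimal sketch of that general scheme: the reservoir sizes, noise level, number of presentation steps, and the synthetic stand-in data are all assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64-pixel "images",
# a 200-unit reservoir, 10 digit classes.
n_in, n_res, n_classes = 64, 200, 10
n_steps = 5  # times each static image is re-presented with noise

# Fixed random input and reservoir weights; the reservoir's spectral
# radius is scaled below 1, a common sufficient condition for the
# echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_state(x, noise=0.01):
    """Present the noise-filtered image n_steps times; return final state."""
    h = np.zeros(n_res)
    for _ in range(n_steps):
        u = x + rng.normal(0.0, noise, n_in)  # additive noise on each pass
        h = np.tanh(W_in @ u + W @ h)
    return h

# Synthetic stand-in for labeled digit data (the paper uses MNIST).
X = rng.uniform(0.0, 1.0, (500, n_in))
y = rng.integers(0, n_classes, 500)
H = np.stack([reservoir_state(x) for x in X])
Y = np.eye(n_classes)[y]  # one-hot targets

# Train only the linear readout via ridge regression -- the reservoir
# weights stay fixed, which is the source of the fast training time.
lam = 1e-3
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ Y)

pred = np.argmax(H @ W_out, axis=1)
print("training accuracy:", np.mean(pred == y))
```

A real run would substitute flattened MNIST digits for `X` and their labels for `y`; the reported 95.3% figure comes from the paper's own (unspecified) configuration, not from this sketch.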