Forward Noise Adjustment Scheme for Data Augmentation
Francisco J. Moreno-Barea, Fiammetta Strazzera, J. M. Jerez, D. Urda, L. Franco
2018 IEEE Symposium Series on Computational Intelligence (SSCI), November 2018
DOI: 10.1109/SSCI.2018.8628917
Citations: 101
Abstract
Data augmentation has proven particularly effective for image classification tasks, where a significant boost in prediction accuracy can be obtained when the technique is combined with Deep Learning architectures. Unfortunately, the situation is quite different for non-image data, where the positive effect of augmenting the training set is much smaller. In this work, we propose a method that creates new samples by adjusting the level of noise applied to individual input variables, which are first ranked by their relevance. Results on nine benchmark data sets are analyzed, comparing supervised training of Deep Learning architectures on the augmented data against the original data.