Title: Multi-codebook Fuzzy Neural Network Using Incremental Learning for Multimodal Data Classification
Authors: M. A. Ma'sum, W. Jatmiko
Published in: 2019 4th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), July 2019
DOI: 10.1109/ACIRS.2019.8935971
Citations: 1
Abstract
One of the challenges in classification is classifying multimodal data. This paper proposes a multi-codebook fuzzy neural network using incremental learning for multimodal data classification. There are two variations of the proposed method: one uses a static threshold, and the other uses a dynamic threshold. Based on the experimental results, the multi-codebook FNGLVQ using dynamic incremental learning shows the highest improvement over the original FNGLVQ: a 15.65% margin on the synthetic datasets, a 5.02% margin on the benchmark datasets, and an 11.30% margin on average across all datasets.
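The core idea described in the abstract, growing extra codebook vectors per class when an incoming sample lies too far from the existing ones, can be sketched as follows. This is a hypothetical, simplified illustration using plain Euclidean-distance LVQ with a static threshold; it is not the paper's FNGLVQ (which relies on fuzzy triangular membership functions), and the class name, threshold value, and learning rate are assumptions made for illustration only.

```python
import numpy as np

class MultiCodebookLVQ:
    """Simplified multi-codebook LVQ sketch with incremental codebook growth.

    NOTE: hypothetical stand-in for the paper's multi-codebook FNGLVQ.
    A new codebook vector is added for a class when the incoming sample
    is farther than `threshold` from all existing vectors of that class.
    """

    def __init__(self, threshold=1.0, lr=0.05):
        self.threshold = threshold   # static threshold for spawning a new codebook
        self.lr = lr                 # learning rate for prototype updates
        self.protos = {}             # class label -> list of prototype vectors

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        vecs = self.protos.setdefault(y, [])
        if not vecs:
            vecs.append(x.copy())                # first codebook for this class
            return
        dists = [np.linalg.norm(x - v) for v in vecs]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            vecs.append(x.copy())                # incremental learning: new codebook
        else:
            vecs[i] += self.lr * (x - vecs[i])   # pull nearest prototype toward x

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        best, label = float("inf"), None
        for y, vecs in self.protos.items():
            for v in vecs:
                d = np.linalg.norm(x - v)
                if d < best:
                    best, label = d, y
        return label

# A class whose samples form two distant clusters (a multimodal class)
# ends up with two codebook vectors instead of one averaged prototype.
clf = MultiCodebookLVQ(threshold=2.0)
clf.partial_fit([0, 0], 0)
clf.partial_fit([5, 5], 0)   # far from [0, 0] -> second codebook for class 0
clf.partial_fit([10, 0], 1)
```

The static-threshold variant corresponds to keeping `threshold` fixed; the paper's dynamic variant would instead adapt this value during training.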