{"title":"数据流中快速分类的增量学习算法","authors":"S. Fong, Zhicong Luo, B. W. Yap","doi":"10.1109/ISCBI.2013.45","DOIUrl":null,"url":null,"abstract":"Classification is one of the most commonly used data mining methods which can make a prediction by modeling from the known data. However, in traditional classification, we need to acquire the whole dataset and then build a training model which may take a lot of time and resource consumption. Another drawback of the traditional classification is that it cannot process the dataset timely and efficiently, especially for real-time data stream or big data. In this paper, we evaluate a lightweight method based on incremental learning algorithms for fast classification. We use this method to do outlier detection via several popular incremental learning algorithms, like Decision Table, Naïve Bayes, J48, VFI, KStar, etc.","PeriodicalId":311471,"journal":{"name":"2013 International Symposium on Computational and Business Intelligence","volume":"36 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Incremental Learning Algorithms for Fast Classification in Data Stream\",\"authors\":\"S. Fong, Zhicong Luo, B. W. Yap\",\"doi\":\"10.1109/ISCBI.2013.45\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Classification is one of the most commonly used data mining methods which can make a prediction by modeling from the known data. However, in traditional classification, we need to acquire the whole dataset and then build a training model which may take a lot of time and resource consumption. Another drawback of the traditional classification is that it cannot process the dataset timely and efficiently, especially for real-time data stream or big data. In this paper, we evaluate a lightweight method based on incremental learning algorithms for fast classification. We use this method to do outlier detection via several popular incremental learning algorithms, like Decision Table, Naïve Bayes, J48, VFI, KStar, etc.\",\"PeriodicalId\":311471,\"journal\":{\"name\":\"2013 International Symposium on Computational and Business Intelligence\",\"volume\":\"36 6\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 International Symposium on Computational and Business Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCBI.2013.45\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Symposium on Computational and Business Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCBI.2013.45","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Incremental Learning Algorithms for Fast Classification in Data Stream
Classification is one of the most commonly used data mining methods: it builds a model from known data and uses it to predict the labels of new instances. In traditional (batch) classification, however, the entire dataset must be acquired before a training model can be built, which consumes considerable time and resources. Another drawback of traditional classification is that it cannot process data in a timely and efficient manner, which is especially problematic for real-time data streams and big data. In this paper, we evaluate a lightweight method based on incremental learning algorithms for fast classification. We apply this method to outlier detection using several popular incremental learning algorithms, such as Decision Table, Naïve Bayes, J48, VFI, and KStar.
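To make the incremental setting concrete, the sketch below shows one common way to train an updateable classifier on a stream of instances, using Weka's NaiveBayesUpdateable together with ArffLoader in incremental mode. This is an illustrative example only, not the authors' experimental code; the file name stream.arff and the assumption that the class attribute is the last column are placeholders.

```java
import java.io.File;

import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ArffLoader;

public class IncrementalStreamClassifier {
    public static void main(String[] args) throws Exception {
        // Read only the ARFF header; instances are pulled one at a time below.
        ArffLoader loader = new ArffLoader();
        loader.setFile(new File("stream.arff"));          // placeholder path
        Instances header = loader.getStructure();
        header.setClassIndex(header.numAttributes() - 1); // assume class is the last attribute

        // Initialise the updateable classifier from the header alone (no data yet).
        NaiveBayesUpdateable model = new NaiveBayesUpdateable();
        model.buildClassifier(header);

        // Test-then-train loop: predict each arriving instance, then update the model with it.
        int seen = 0, correct = 0;
        Instance current;
        while ((current = loader.getNextInstance(header)) != null) {
            double predicted = model.classifyInstance(current);
            if (predicted == current.classValue()) {
                correct++;
            }
            model.updateClassifier(current);
            seen++;
        }
        System.out.printf("Prequential accuracy: %.3f over %d instances%n",
                seen == 0 ? 0.0 : (double) correct / seen, seen);
    }
}
```

Because only one instance is held in memory at a time and the model is refined as data arrives, memory usage stays bounded and predictions are available immediately, which is what makes this style of learning suitable for data streams.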