2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL): Latest Publications

Genetic algorithm-based neural error correcting output classifier
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015745
Mahdi Amina, F. Masulli, S. Rovetta
Abstract: The present study develops a probabilistic framework for the ECOC technique by replacing the predesigned ECOC matrix with sufficiently large random codes. The mathematical grounding of random codes through probability formulations is part of the study's novelty: random variants of ECOC have appeared in earlier literature, but often without sufficient theoretical proof of the efficiency of a random coding matrix. A genetic algorithm-based neural encoder with redefined operators is designed and trained, and a heuristic trimming of ECOC codewords is deployed to obtain more satisfactory results. The efficacy of the proposed approach is validated on a wide set of datasets from the UCI Machine Learning Repository and compared against two conventional methods.
Cited by: 2
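As a point of reference for the random-coding idea in this abstract, the sketch below builds a random binary ECOC coding matrix and decodes by nearest codeword in Hamming distance. It is generic illustrative Python (numpy only), not the authors' GA-trained neural encoder; the function names, code length, and rejection criteria are assumptions.

```python
import numpy as np

def random_ecoc_matrix(n_classes, code_length, rng=None):
    """Draw a random binary coding matrix with distinct rows and non-trivial columns."""
    rng = np.random.default_rng(rng)
    while True:
        M = rng.integers(0, 2, size=(n_classes, code_length))
        rows_distinct = len({row.tobytes() for row in M}) == n_classes
        cols_split = all(0 < col.sum() < n_classes for col in M.T)
        if rows_distinct and cols_split:
            return M

def ecoc_decode(bit_predictions, coding_matrix):
    """Assign each sample to the class whose codeword is nearest in Hamming distance."""
    dists = (bit_predictions[:, None, :] != coding_matrix[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

# Usage: train one binary classifier per column of M (targets relabelled by that column),
# stack their 0/1 outputs per sample into bit_predictions, then decode.
M = random_ecoc_matrix(n_classes=5, code_length=15, rng=0)
noisy = (M[[0, 3]] ^ (np.random.default_rng(1).random((2, 15)) < 0.1)).astype(int)
print(ecoc_decode(noisy, M))   # nearest-codeword decoding of two noise-corrupted codewords
```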
TS fuzzy model identification by a novel objective function based fuzzy clustering algorithm
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015742
T. Dam, A. K. Deb
Abstract: A Fuzzy C-Regression Model (FCRM) distance metric is used within the Competitive Agglomeration (CA) algorithm to obtain an optimal number of rules, i.e., to construct optimal fuzzy subspaces over the whole input-output space. To construct the fuzzy partition matrix in the data space, a new objective function is proposed that accounts for both the geometrical shape of the input data distribution and the linear functional relationship between the input and output variables. The premise and consequent parameters of the Takagi-Sugeno (TS) fuzzy model are obtained from the proposed objective function, with the linear coefficients of the consequent part determined in the Weighted Recursive Least Squares (WRLS) framework. The effectiveness of the proposed algorithm is validated on a nonlinear benchmark model.
Cited by: 6
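To make the consequent-estimation step concrete, here is a minimal sketch of fitting the linear consequents of a TS fuzzy model by weighted least squares (the batch equivalent of the WRLS update mentioned in the abstract), given membership degrees from some fuzzy clustering stage. The memberships, data, and rule count below are placeholders, not the paper's FCRM/CA procedure.

```python
import numpy as np

def ts_consequents(X, y, memberships, ridge=1e-8):
    """One (a_i, b_i) parameter vector per rule, weighted by that rule's memberships."""
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])             # augment with a bias column
    params = []
    for u in memberships.T:                          # memberships: (n_samples, n_rules)
        W = np.diag(u)
        A = Xa.T @ W @ Xa + ridge * np.eye(d + 1)    # small ridge keeps A invertible
        params.append(np.linalg.solve(A, Xa.T @ W @ y))
    return np.array(params)                          # shape (n_rules, d + 1)

def ts_predict(X, memberships, params):
    """Blend the rule outputs with normalized memberships (standard TS inference)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    rule_out = Xa @ params.T                         # (n_samples, n_rules)
    w = memberships / memberships.sum(axis=1, keepdims=True)
    return (w * rule_out).sum(axis=1)

# toy demo: two overlapping linear regimes, soft memberships from a sigmoid split
X = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.where(X[:, 0] < 0, 1.5 * X[:, 0] + 1, -0.5 * X[:, 0] + 1)
y = y + 0.05 * np.random.default_rng(0).normal(size=200)
U = np.column_stack([1 / (1 + np.exp(5 * X[:, 0])), 1 / (1 + np.exp(-5 * X[:, 0]))])
print(ts_consequents(X, y, U).round(2))   # approximately [[1.5, 1.0], [-0.5, 1.0]]
```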
Hyper-heuristic approach for solving nurse rostering problem
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015743
K. Anwar, M. Awadallah, A. Khader, M. Al-Betar
Abstract: A hyper-heuristic (HH) is a higher-level heuristic that chooses from a set of lower-level heuristics applicable to the problem at hand. In this paper, a Harmony Search-based Hyper-heuristic (HSHH) approach is applied to the nurse rostering problem (NRP), a complex scheduling problem of assigning a given set of shifts to a given set of nurses. The proposed method is tested on the First International Nurse Rostering Competition 2010 (INRC2010) dataset, where the HSHH approach achieves results comparable with competing methods from the literature.
Cited by: 11
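To illustrate the high-level/low-level split that the abstract refers to, the toy sketch below runs a deliberately small selection hyper-heuristic: the high level repeatedly picks a low-level move and accepts non-worsening candidates. It uses plain random selection, not the harmony search strategy of HSHH, and the roster encoding and cost function are made-up toys rather than the INRC2010 formulation.

```python
import random

def hyper_heuristic(initial, cost, low_level_heuristics, iterations=10_000, seed=0):
    rng = random.Random(seed)
    current, best = initial, initial
    for _ in range(iterations):
        h = rng.choice(low_level_heuristics)     # high level: pick a low-level move
        candidate = h(current, rng)              # low level: perturb the solution
        if cost(candidate) <= cost(current):     # simple acceptance criterion
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

# toy problem: 5 nurses, 7 daily shifts; penalize uncovered days and overworked nurses
def cost(roster):
    coverage = [sum(1 for n in roster if day in n) for day in range(7)]
    return sum(c == 0 for c in coverage) * 10 + sum(max(0, len(n) - 5) for n in roster)

def swap_shift(roster, rng):
    r = [set(n) for n in roster]
    a, b = rng.sample(range(len(r)), 2)
    if r[a]:
        day = rng.choice(sorted(r[a]))
        r[a].discard(day)
        r[b].add(day)
    return r

initial = [set(range(7)) if i == 0 else set() for i in range(5)]
print(cost(hyper_heuristic(initial, cost, [swap_shift])))
```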
Empirical mode decomposition based adaboost-backpropagation neural network method for wind speed forecasting
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015741
Ye Ren, Xueheng Qiu, P. N. Suganthan
Abstract: Wind speed forecasting is a popular research direction in renewable energy and computational intelligence, where ensemble and hybrid forecasting models are widely used. This paper proposes a novel ensemble forecasting model that combines empirical mode decomposition (EMD), adaptive boosting (AdaBoost), and a backpropagation neural network (BPNN). The proposed model is compared with six benchmark models: the persistence model, AdaBoost with regression trees, BPNN, AdaBoost-BPNN, EMD-BPNN, and EMD-AdaBoost with regression trees. Several statistical tests show that the proposed EMD-AdaBoost-BPNN model significantly outperforms the other models, and its forecasting error exhibits significant randomness.
Cited by: 14
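The decompose-forecast-aggregate pattern described in the abstract can be sketched roughly as follows, assuming the PyEMD package for EMD and a recent scikit-learn, with AdaBoostRegressor over MLPRegressor standing in for the AdaBoost-BPNN component. The synthetic series, lag construction, and model sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from PyEMD import EMD
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor

def make_lags(series, n_lags=6):
    """Turn a 1-D series into (lag-vector, next-value) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
wind = 8 + 2 * np.sin(np.arange(600) / 20) + rng.normal(0, 0.5, 600)  # toy "wind speed"

imfs = EMD()(wind)                       # intrinsic mode functions (plus trend/residue)
split = 500
forecast = np.zeros(len(wind) - split)
for imf in imfs:                         # one AdaBoost-over-neural-net model per component
    X, y = make_lags(imf)
    model = AdaBoostRegressor(
        estimator=MLPRegressor(hidden_layer_sizes=(10,), max_iter=500),
        n_estimators=10, random_state=0)
    model.fit(X[:split - 6], y[:split - 6])
    forecast += model.predict(X[split - 6:])     # component forecasts are summed back up

print(np.sqrt(np.mean((forecast - wind[split:]) ** 2)))   # RMSE of the aggregated forecast
```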
Ensemble based classification using small training sets : A novel approach
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015738
C. V. K. Veni, Timmappareddy Sobha Rani
Abstract: Classification is a supervised learning technique that typically uses two-thirds of an annotated data set for training and the remainder for testing. In this paper we develop a framework that uses less than one-third of the data set for training, tests on the remaining two-thirds, and still gives results comparable to other classifiers. To achieve good classification accuracy with small training sets, we focus on three issues: first, the training third (about 30% of the data) should represent the entire data set; second, classification accuracy should be increased even with these small training sets; and third, deviations in the small training sets, such as noise or outliers, must be handled. The first issue is addressed by three proposed methods: dividing the instances into 10 bins based on their distance from the centroid, based on their distance from a reference point 3/2(min+max), and by a distribution-specific binning. In all three methods, training sets are formed by stratified sampling, which ensures that the chosen samples cover the entire distribution. The second issue is dealt with using ensemble-based weighted majority voting for classification. The third issue is tackled by applying four filters to the training sets: removing outliers using the Inter Quartile Range option (available in Weka) and removing misclassified instances using Naive Bayes, IB3, and IB5 as filters. Experiments are conducted on seven binary and multi-class data sets, taking only 6% to 18% of the total data for training, with the three proposed methods run both with and without the noise and outlier filters. We compare our results with the popular ada-boost and bagging ensemble techniques and the ENN, CNN, and RNN instance selection methods. Empirical analysis shows that the three proposed methods yield classification results comparable to those reported in the literature for small training sets.
Cited by: 4
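A rough sketch of the first training-set construction idea from this abstract: bin instances by their distance from the centroid, stratified-sample a small fraction from each bin, and combine several classifiers trained on such small sets by weighted majority voting. The bin count, sampling fraction, base learner, and accuracy-based weighting below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier

def small_training_indices(X, fraction=0.15, n_bins=10, rng=None):
    """Bin instances by distance from the centroid, then stratified-sample each bin."""
    rng = np.random.default_rng(rng)
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    edges = np.quantile(dist, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(dist, edges)
    picked = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if len(idx):
            k = max(1, int(round(fraction * len(idx))))
            picked.extend(rng.choice(idx, size=k, replace=False))
    return np.array(picked)

X, y = load_breast_cancer(return_X_y=True)
members, weights, used = [], [], set()
for seed in range(5):                                   # several small, differently sampled sets
    train = small_training_indices(X, rng=seed)
    used.update(train.tolist())
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
    members.append(clf)
    weights.append(clf.score(X[train], y[train]))       # simple accuracy-based weight

test = np.setdiff1d(np.arange(len(X)), np.fromiter(used, dtype=int))
votes = sum(w * (m.predict(X[test]) == 1) for m, w in zip(members, weights))
pred = (votes > sum(weights) / 2).astype(int)           # weighted majority vote
print("hold-out accuracy:", round(float((pred == y[test]).mean()), 3))
```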
Building predictive models in two stages with meta-learning templates optimized by genetic programming
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015740
P. Kordík, J. Cerný
Abstract: Model selection is one of the most difficult stages in predictive modeling. Selecting the model with the highest generalization performance involves benchmarking a huge number of candidate models or algorithms, and a final model is often selected without considering potentially high-quality candidates simply because there are too many possibilities. Improper benchmarking methodology also leads to biased estimates of generalization performance. Automating the model selection stage is possible, but the computational cost is enormous, especially when ensembles of models and optimization of input features must also be considered. In this paper we show how to automate the model selection process in a way that allows searching for complex hierarchies of ensemble models while maintaining computational tractability. We introduce two-stage learning with meta-learning templates optimized by evolutionary programming with anytime properties, so that data-tailored algorithms and models can be delivered and maintained in a reasonable time without human interaction. Co-evolution of inputs together with optimization of templates solves the algorithm selection problem efficiently for a variety of datasets.
Cited by: 5
The entity-to-algorithm allocation problem: extending the analysis
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015744
J. Grobler, A. Engelbrecht, G. Kendall, V. Yadavalli
Abstract: This paper extends the investigation into the algorithm selection problem in hyper-heuristics, otherwise referred to as the entity-to-algorithm allocation problem, introduced by Grobler et al. Two recently developed population-based portfolio algorithms (the evolutionary algorithm based on self-adaptive learning population search techniques, EEA-SLPS, and the Multi-EA algorithm) are compared to two meta-hyper-heuristic algorithms. The algorithms are evaluated under similar conditions, with the same set of constituent algorithms, on a diverse set of floating-point benchmark problems. One of the meta-hyper-heuristics is shown to outperform the other algorithms, with EEA-SLPS coming in a close second.
Cited by: 4
Fast image segmentation based on boosted random forests, integral images, and features on demand
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015737
U. Knauer, U. Seiffert
Abstract: The paper addresses the tradeoff between speed and quality of image segmentation typically found in real-time or high-throughput image analysis tasks. We propose a novel approach to high-quality image segmentation based on a rich, high-dimensional feature space and strong classifiers. To enable fast feature extraction in color images, multiple integral images are used. A decision-tree-based approach built on two-stage random forest classifiers is used to solve several binary as well as multiclass segmentation problems. It is an intrinsic property of the tree-based approach that any decision depends on only a small subset of the input features; hence, analysis of the tree structures enables sequential, on-demand feature extraction. Runtime measurements on several real-world datasets show that the approach enables fast, high-quality segmentation. Moreover, it fits easily into parallel computation frameworks, because the calculation of integral images and the computation of individual decisions can be done separately, and the number of base classifiers can be adapted to meet a given time constraint.
Cited by: 3
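The integral-image trick the abstract relies on is standard and easy to illustrate: after one cumulative-sum pass per channel, the sum over any rectangle costs four lookups, so box-type features for a pixel classifier can be computed on demand. This is generic reference code, not the authors' segmentation pipeline; the window size and channel choice are arbitrary.

```python
import numpy as np

def integral_image(channel):
    """Zero-padded cumulative sum, so rectangle sums need no boundary checks."""
    ii = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = channel.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of channel[r0:r1, c0:c1] from four lookups in the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(0)
img = rng.random((120, 160, 3))                       # toy RGB image
iis = [integral_image(img[:, :, c]) for c in range(3)]

# e.g. a mean-intensity feature of a 15x15 window around pixel (60, 80), green channel
r, c, h = 60, 80, 7
feature = box_sum(iis[1], r - h, c - h, r + h + 1, c + h + 1) / (2 * h + 1) ** 2
print(feature, np.isclose(feature, img[r - h:r + h + 1, c - h:c + h + 1, 1].mean()))
```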
Experiments on simultaneous combination rule training and ensemble pruning algorithm
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-12-01 | DOI: 10.1109/CIEL.2014.7015736
B. Krawczyk, Michal Wozniak
Abstract: Much research on classifier design tries to exploit the strength of ensemble learning. Such hybrid approaches look for a valuable combination of individual classifiers' outputs that should at least outperform each of the available individuals, which is why classifier ensembles have recently been the focus of intense research. Ensemble design faces two main problems: on the one hand, we look for a valuable, highly diverse pool of individual classifiers that are mutually complementary; on the other hand, we seek an optimal combination of the individuals' outputs. Usually these tasks are considered independently, i.e., some approaches focus on ensemble pruning for a given combination rule, while other works address finding an optimal combination rule for a fixed classifier pool. In this work we propose to treat ensemble pruning and combination rule training as a single optimization task. We employ a canonical genetic algorithm to find the best ensemble line-up and, at the same time, the best setting of the combination rule parameters. The proposed concept, called CRUMP (simultaneous Combination RUle training and enseMble Pruning), is evaluated in a wide range of computer experiments, which confirm that this is a very promising direction, able to outperform traditional approaches that focus on either ensemble pruning or the combination rule alone.
Cited by: 5
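A toy sketch of the joint-optimization idea behind CRUMP: one chromosome encodes both a pruning mask over the classifier pool and the weights of a weighted-vote combiner, and a genetic algorithm optimizes them together against validation accuracy. The pool, operators, and fitness below are simplified illustrations, not the paper's canonical GA configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_informative=8, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.4, random_state=0)

rng = np.random.default_rng(0)
pool = []
for _ in range(15):                                    # bagged pool of weak trees
    idx = rng.choice(len(X_tr), len(X_tr), replace=True)
    pool.append(DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx]))
val_preds = np.array([clf.predict(X_val) for clf in pool])    # (n_members, n_val)

def fitness(chrom):
    """First 15 genes: pruning mask; last 15 genes: weights of the weighted vote."""
    mask, weights = chrom[:15] > 0.5, chrom[15:]
    if not mask.any():
        return 0.0
    w = weights[mask]
    score = (w[:, None] * (val_preds[mask] == 1)).sum(axis=0) / (w.sum() + 1e-12)
    return float(((score > 0.5).astype(int) == y_val).mean())

pop = rng.random((40, 30))
for _ in range(60):                                    # tiny GA: tournament, blend, mutate
    fit = np.array([fitness(c) for c in pop])
    elite = pop[fit.argmax()].copy()
    winners = [max(rng.choice(40, 3), key=lambda i: fit[i]) for _ in range(40)]
    children = 0.5 * (pop[winners] + pop[winners][rng.permutation(40)])   # blend crossover
    children += rng.normal(0, 0.1, children.shape)                        # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)
    pop[0] = elite                                     # elitism
best = max(pop, key=fitness)
print("kept", int((best[:15] > 0.5).sum()), "of 15 members,",
      "validation accuracy", round(fitness(best), 3))
```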
Ensemble deep learning for regression and time series forecasting
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL) | Pub Date: 2014-01-20 | DOI: 10.1109/CIEL.2014.7015739
Xueheng Qiu, Le Zhang, Ye Ren, P. N. Suganthan, G. Amaratunga
Abstract: In this paper, for the first time, an ensemble of deep belief networks (DBNs) is proposed for regression and time series forecasting. A further novel contribution is to aggregate the outputs of the various DBNs with a support vector regression (SVR) model. We show the advantage of the proposed method over benchmark methods on three electricity load demand datasets, one artificial time series dataset, and three regression datasets.
Cited by: 310
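The aggregation idea here is essentially stacking: base regressors are combined by fitting an SVR on their predictions. The generic sketch below uses MLPRegressor models as stand-ins for the paper's DBNs (scikit-learn does not provide DBNs) and out-of-fold predictions as the meta-features; the data, hidden-layer sizes, and SVR settings are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 600)   # toy regression task
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=i)
         for i, h in enumerate((20, 40, 80))]                         # diverse base models

# out-of-fold predictions become the meta-features, avoiding leakage into the combiner
meta_tr = np.column_stack([cross_val_predict(b, X_tr, y_tr, cv=5) for b in bases])
combiner = SVR(C=10.0).fit(meta_tr, y_tr)

for b in bases:                                                       # refit on all training data
    b.fit(X_tr, y_tr)
meta_te = np.column_stack([b.predict(X_te) for b in bases])

print("stacked RMSE:", mean_squared_error(y_te, combiner.predict(meta_te)) ** 0.5)
print("best single RMSE:",
      min(mean_squared_error(y_te, b.predict(X_te)) ** 0.5 for b in bases))
```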