{"title":"An improved fireworks algorithm with landscape information for balancing exploration and exploitation","authors":"Junfeng Chen, Qiwen Yang, J. Ni, Yingjuan Xie, Shi Cheng","doi":"10.1109/CEC.2015.7257035","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257035","url":null,"abstract":"The fireworks algorithm is a recently proposed and still developing swarm intelligence algorithm whose performance is determined by the tradeoff between exploration and exploitation. Finding a satisfactory balance between exploration and exploitation is an interesting and challenging task. In this paper, the landscape of the optimization problem is first analyzed, and a new spark explosion strategy is then designed to represent and mine landscape information. Moreover, exploration and exploitation coexist in the improved fireworks algorithm, which automatically adjusts its search strategy according to the landscape structure. Finally, numerical experiments are performed for algorithm investigation, performance analysis, and comparison. The simulation results indicate that the proposed algorithm performs well on all test functions and achieves the global minimum for most of them.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
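The explosion-amplitude idea in the abstract above can be illustrated with a minimal sketch (this is not the authors' algorithm: the sphere objective, the amplitude rule, and all parameter values are illustrative assumptions):

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def explode(firework, amplitude, n_sparks, bounds):
    """Generate sparks uniformly within +/- amplitude of the firework."""
    lo, hi = bounds
    return [[min(hi, max(lo, x + random.uniform(-amplitude, amplitude)))
             for x in firework] for _ in range(n_sparks)]

def fireworks_search(dim=2, n_fireworks=5, n_sparks=4, iters=100,
                     bounds=(-5.0, 5.0), seed=1):
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_fireworks)]
    for _ in range(iters):
        fits = [sphere(p) for p in pop]
        best, worst = min(fits), max(fits)
        candidates = list(pop)
        for p, f in zip(pop, fits):
            # Crude stand-in for landscape information: good fireworks explode
            # with a small amplitude (exploitation), bad ones with a large
            # amplitude (exploration).
            amp = 0.01 + (f - best) / (worst - best + 1e-12)
            candidates += explode(p, amp, n_sparks, bounds)
        candidates.sort(key=sphere)   # elitist selection of the next fireworks
        pop = candidates[:n_fireworks]
    return min(pop, key=sphere)

best = fireworks_search()
```

With elitist selection the best solution never worsens, so on the sphere function the search settles near the origin.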
{"title":"Saturation in PSO neural network training: Good or evil?","authors":"Anna Sergeevna Bosman, A. Engelbrecht","doi":"10.1109/CEC.2015.7256883","DOIUrl":"https://doi.org/10.1109/CEC.2015.7256883","url":null,"abstract":"Particle swarm optimisation has been successfully applied as a neural network training algorithm before, often outperforming traditional gradient-based approaches. However, recent studies have shown that particle swarm optimisation does not scale very well, and performs poorly on high-dimensional neural network architectures. This paper hypothesises that hidden layer saturation is a significant factor contributing to the poor training performance of the particle swarms, hindering good performance on neural networks regardless of the architecture size. A selection of classification problems is used to test this hypothesis. It is discovered that although a certain degree of saturation is necessary for successful training, higher degrees of saturation ultimately lead to poor generalisation. Possible factors leading to saturation are suggested, and means of alleviating saturation in particle swarms through weight initialisation range, maximum velocity, and search space boundaries are analysed. This paper is intended as a preface to a more in-depth study of the problem of saturation in particle swarm optimisation as a neural network training algorithm.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122166941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A configurable generalized artificial bee colony algorithm with local search strategies","authors":"D. Aydın, T. Stützle","doi":"10.1109/CEC.2015.7257008","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257008","url":null,"abstract":"In this paper, we apply a generalized artificial bee colony (ABC-X) algorithm to the learning-based real-parameter optimization competition at the 2015 Congress on Evolutionary Computation. The main idea underlying the ABC-X algorithm is to provide a flexible, freely configurable framework for artificial bee colony (ABC) algorithms. From this framework, one can not only instantiate known ABC algorithms but also configure new, previously unseen ABC algorithms that may perform even better than known ones. One key advantage of a configurable algorithm framework is that it can be adapted to many different specific problems without necessarily requiring an algorithm re-design. This is relevant when instances of a problem need to be solved repeatedly, a situation that arises in many practical settings, e.g. in power control and other application areas: a sequence of specific instances of a more general continuous optimization problem arises routinely, and these instances have to be solved again and again (possibly over an infinite horizon); in this case, the instances in the sequence share similarities because they arise from the same source. This is also the situation targeted by the learning-based real-parameter optimization competition, and one which we have also described in our own earlier research.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122223621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature subset selection using dynamic mixed strategy","authors":"Hongbin Dong, Xuyang Teng, Yang Zhou, Jun He","doi":"10.1109/CEC.2015.7256955","DOIUrl":"https://doi.org/10.1109/CEC.2015.7256955","url":null,"abstract":"Feature selection is an important part of machine learning and data mining which may enhance the speed and performance of learning and mining algorithms. Given certain criteria to evaluate features, the problem of feature selection can be regarded as an optimization problem. Therefore, evolutionary algorithms can be used to solve this kind of optimization problem. In this paper, we present a novel feature subset selection approach based on the framework of genetic algorithms. Two new mutation operators are constructed using the standard deviation of candidate features and the cardinality of candidate feature subsets. Then, a filter feature subset selection approach using a dynamic mixed strategy is proposed, which combines the new mutation operators with the single-point mutation operator. The new approach not only dynamically adjusts the probability distribution over these three mutation operators, but also maintains the combined effect of a feature subset as a whole in the fitness evaluation. The proposed approach is able to quickly escape from locally optimal feature subsets and to obtain smaller subsets than evolutionary algorithms using a single mutation operator. Experiments have been conducted on six standard UCI datasets, and the proposed algorithm is compared with other classical algorithms. The comparison outcomes confirm the effectiveness of our approach.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"24 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120890352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
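The dynamic mixed mutation strategy described above can be sketched as a toy search that adaptively reweights several mutation operators (the fitness function, the three operators, and the reward rule here are illustrative assumptions, not the paper's method):

```python
import random

# Toy filter score: reward selecting "relevant" features (indices 0-4) and
# penalize subset size, mimicking an accuracy-vs-cardinality tradeoff.
RELEVANT = set(range(5))
N_FEATURES = 20

def fitness(mask):
    hits = sum(1 for i in RELEVANT if mask[i])
    return hits - 0.1 * sum(mask)

def flip_one(mask):              # single-point mutation
    m = list(mask)
    i = random.randrange(len(m))
    m[i] ^= 1
    return m

def grow(mask):                  # add one feature to the subset
    m = list(mask)
    zeros = [i for i, b in enumerate(m) if not b]
    if zeros:
        m[random.choice(zeros)] = 1
    return m

def shrink(mask):                # drop one feature from the subset
    m = list(mask)
    ones = [i for i, b in enumerate(m) if b]
    if ones:
        m[random.choice(ones)] = 0
    return m

OPS = [flip_one, grow, shrink]

def mixed_strategy_search(iters=300, seed=3):
    random.seed(seed)
    probs = [1 / 3] * 3          # dynamic mixture over the three operators
    mask = [random.randint(0, 1) for _ in range(N_FEATURES)]
    best = fitness(mask)
    for _ in range(iters):
        k = random.choices(range(3), weights=probs)[0]
        child = OPS[k](mask)
        f = fitness(child)
        if f >= best:
            mask, best = child, f
            probs[k] += 0.05     # reward the operator that just succeeded
            s = sum(probs)
            probs = [max(0.1, p / s) for p in probs]
    return mask, best

mask, best = mixed_strategy_search()
```

The floor of 0.1 on each operator's weight keeps every operator in play, so the mixture adapts without collapsing onto a single mutation.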
{"title":"Robust stock value prediction using support vector machines with particle swarm optimization","authors":"Trevor M. Sands, D. Tayal, M. E. Morris, S. Monteiro","doi":"10.1109/CEC.2015.7257306","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257306","url":null,"abstract":"Attempting to understand and characterize trends in the stock market has been the goal of numerous market analysts, but these patterns are often difficult to detect until after they have been firmly established. Recently, attempts have been made by both large companies and individual investors to utilize intelligent analysis and trading algorithms to identify potential trends before they occur in the market environment, effectively predicting future stock values and outlooks. In this paper, three different classification algorithms will be compared for the purposes of maximizing capital while minimizing risk to the investor. The main contribution of this work is a demonstrated improvement over other prediction methods using machine learning; the results show that tuning support vector machine parameters with particle swarm optimization leads to highly accurate (approximately 95%) and robust stock forecasting for historical datasets.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121652604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of heuristic rule generation from multiple patterns in multiobjective fuzzy genetics-based machine learning","authors":"Y. Nojima, Kazuhiro Watanabe, H. Ishibuchi","doi":"10.1109/CEC.2015.7257262","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257262","url":null,"abstract":"Fuzzy genetics-based machine learning (FGBML) has frequently been used for fuzzy classifier design. It is one of the promising evolutionary machine learning (EML) techniques from the viewpoint of data mining. This is because FGBML can generate accurate classifiers with linguistically interpretable fuzzy if-then rules. Of course, a classifier with tens of thousands of if-then rules is not linguistically understandable. Thus, the complexity minimization of fuzzy classifiers should be considered together with the accuracy maximization. In previous studies, we proposed hybrid FGBML and its multiobjective formulation (MoFGBML) to handle both the accuracy maximization and the complexity minimization simultaneously. MoFGBML can obtain a number of non-dominated classifiers with different tradeoffs between accuracy and complexity. In this paper, we focus on heuristic rule generation in MoFGBML to improve the search performance. In the original heuristic rule generation, each if-then rule is generated from a randomly-selected training pattern in a heuristic manner. This operation is performed at population initialization and during evolution. To generate more generalized rules according to the training data, we propose new heuristic rule generation where each rule is generated from multiple training patterns. Through computational experiments using some benchmark data sets, we discuss the effects of the proposed operation on the search performance of our MoFGBML.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123759190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolving spiking neural network for robot locomotion generation","authors":"Noriko Takase, János Botzheim, N. Kubota","doi":"10.1109/CEC.2015.7256939","DOIUrl":"https://doi.org/10.1109/CEC.2015.7256939","url":null,"abstract":"In this paper, we propose a locomotion generation method for a mobile robot. Legged robots can walk in various complex terrains, such as stairs, as well as in flat environments. However, specifying in advance behaviours that adapt to various environments is very difficult. Based on computational intelligence, the robot can mimic the movement of organisms. In this study, we apply a spiking neural network, which can take into account the transition of temporal information between neurons. More specifically, motion patterns are generated by a spiking neural network trained with Hebbian learning and an evolution strategy, using data from a physics engine that measures the distance walked by the robot; the resulting motion patterns are then applied to a real robot. Simulations were conducted to confirm the proposed technique.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123880638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy neural tree in evolutionary computation for architectural design cognition","authors":"Ö. Ciftcioglu, M. Bittermann","doi":"10.1109/CEC.2015.7257171","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257171","url":null,"abstract":"A novel fuzzy neural tree (FNT) is presented. Each tree node uses a Gaussian as its fuzzy membership function, so that the approach is uniquely aligned with both the probabilistic and possibilistic interpretations of fuzzy membership. It provides a type of logical operation by fuzzy logic (FL) in a neural structure in the form of rule chaining, yielding a novel concept of weighted fuzzy logical AND and OR operations. The tree can be supplied both with expert knowledge and with data sets for model formation. The FNT is described in detail, pointing out its various potential uses in applications demanding complex modeling and multi-objective optimization. One such demand concerns cognitive computing for design cognition. This is exemplified, and its effectiveness demonstrated, by computer experiments in the realm of architectural design.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116548688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new non-redundant objective set generation algorithm in many-objective optimization problems","authors":"Xiaofang Guo, Yuping Wang, Xiaoli Wang, Jingxuan Wei","doi":"10.1109/CEC.2015.7257243","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257243","url":null,"abstract":"Among many-objective optimization problems, there exists a class of problems with redundant objectives. For such problems, it is possible to design effective algorithms by removing the redundant objectives and keeping the non-redundant ones, so that the original problem becomes one with far fewer objectives. In this paper, a new non-redundant objective set generation algorithm is proposed. First, a decomposition-based multi-objective evolutionary algorithm is adopted to generate a small number of representative non-dominated solutions widely distributed on the Pareto front. Then, conflicting objective pairs are identified from these non-dominated solutions, and the non-redundant objective set is determined by these pairs. Finally, experiments are conducted on a set of benchmark test problems, and the results indicate the effectiveness and efficiency of the proposed algorithm.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116676292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
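The idea of detecting redundant objectives from a sample of non-dominated solutions can be sketched with a simple correlation filter (an illustrative stand-in, not the paper's identification procedure; the threshold and the toy front below are assumptions):

```python
def correlation(a, b):
    """Pearson correlation of two equal-length value lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def non_redundant_objectives(F, threshold=0.95):
    """F: objective vectors of sampled non-dominated solutions.
    Objectives strongly positively correlated with an already-kept
    objective are treated as redundant; conflicting (negatively
    correlated) objectives are kept."""
    m = len(F[0])
    cols = [[f[j] for f in F] for j in range(m)]
    keep = []
    for j in range(m):
        if all(correlation(cols[j], cols[k]) < threshold for k in keep):
            keep.append(j)
    return keep

# Toy front: f2 conflicts with f1, while f3 is (almost) a copy of f1.
F = [[x, 1.0 - x, 2.0 * x + 0.01 * (i % 2)]
     for i, x in enumerate(0.1 * k for k in range(11))]
kept = non_redundant_objectives(F)
```

On this toy front the filter keeps the two conflicting objectives and drops the near-duplicate third one.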
{"title":"A self-adaptive dynamic particle swarm optimizer","authors":"Jing J. Liang, L. Guo, R. Liu, B. Qu","doi":"10.1109/CEC.2015.7257290","DOIUrl":"https://doi.org/10.1109/CEC.2015.7257290","url":null,"abstract":"A self-adaptive dynamic multi-swarm particle swarm optimizer (sDMS-PSO) is proposed. In PSO, three parameters must normally be set experimentally or empirically, whereas sDMS-PSO embeds a self-adaptive parameter strategy. One or more parameters are assigned to different swarms adaptively. Within a single swarm, over a specified number of iterations, the parameters that achieve the largest number of updates of the local best solutions are recorded. The information about these competitive parameters is then shared among all swarms by generating new parameters from the recorded values. Multiple swarms test parameters in different groups in parallel during the evolutionary process, which accelerates learning; moreover, sharing the information about the best parameters leads to faster convergence. A quasi-Newton local search method is included to enhance the exploitation ability. sDMS-PSO is tested on the benchmark function set provided for CEC2015, and the experimental results are reported in the paper.","PeriodicalId":403666,"journal":{"name":"2015 IEEE Congress on Evolutionary Computation (CEC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116719687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
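The parameter-sharing idea of sDMS-PSO can be sketched as multiple sub-swarms, each trying its own inertia weight and periodically adopting perturbed copies of the most successful one (a simplified illustration; the acceleration coefficients, sharing period, and test objective are assumptions, and the quasi-Newton local search is omitted):

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_multiswarm(dim=2, n_swarms=3, swarm_size=5, iters=200, seed=7):
    random.seed(seed)
    lo, hi = -5.0, 5.0
    ws = [random.uniform(0.4, 0.9) for _ in range(n_swarms)]   # per-swarm inertia
    pos = [[[random.uniform(lo, hi) for _ in range(dim)]
            for _ in range(swarm_size)] for _ in range(n_swarms)]
    vel = [[[0.0] * dim for _ in range(swarm_size)] for _ in range(n_swarms)]
    pbest = [[p[:] for p in sw] for sw in pos]
    renewals = [0] * n_swarms     # local-best updates achieved per swarm
    for t in range(iters):
        for s in range(n_swarms):
            lbest = min(pbest[s], key=sphere)
            for i in range(swarm_size):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[s][i][d] = (ws[s] * vel[s][i][d]
                                    + 1.5 * r1 * (pbest[s][i][d] - pos[s][i][d])
                                    + 1.5 * r2 * (lbest[d] - pos[s][i][d]))
                    pos[s][i][d] = min(hi, max(lo, pos[s][i][d] + vel[s][i][d]))
                if sphere(pos[s][i]) < sphere(pbest[s][i]):
                    pbest[s][i] = pos[s][i][:]
                    renewals[s] += 1
        if (t + 1) % 20 == 0:     # share the most successful inertia weight
            winner = max(range(n_swarms), key=lambda s: renewals[s])
            ws = [min(0.9, max(0.4, ws[winner] + random.gauss(0, 0.05)))
                  for _ in range(n_swarms)]
            renewals = [0] * n_swarms
    return min((p for sw in pbest for p in sw), key=sphere)

best = pso_multiswarm()
```

Counting local-best renewals per swarm, then resampling all inertia weights around the winner's value, mirrors the record-and-share loop described in the abstract.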