{"title":"Process Safety Enhancements for Data-Driven Evolving Fuzzy Models","authors":"E. Lughofer","doi":"10.1109/ISEFS.2006.251173","DOIUrl":null,"url":null,"abstract":"In this paper several improvements towards a safer processing of incremental learning techniques for fuzzy models are demonstrated. The first group of improvements include stability issues for making the evolving scheme more robust against faults, steady state situations and extrapolation occurrence. In the case of steady states or constant system behaviors a concept of overcoming the so-called 'unlearning' effect is proposed by which the forgetting of previously learned relationships can be prevented. A discussion on the convergence of the incremental learning scheme to the optimum in the least squares sense is included as well. The concepts regarding fault omittance are demonstrated, as usually faults in the training data lead to problems in learning underlying dependencies. An improvement of extrapolation behavior in the case of fuzzy models when using fuzzy sets with infinite support is also highlighted. The second group of improvements deals with interpretability and quality aspects of the models obtained during the evolving process. An online strategy for obtaining better interpretable models is presented. This strategy is feasible for online monitoring tasks, as it can be applied after each incremental learning step, that is without using prior data. Interpretability is important, whenever the model itself or the model decisions should be linguistically understandable. The quality aspects include an online calculation of local error bars for Takagi-Sugeno fuzzy models, which can be seen as a kind of confidence intervals. In this sense, the error bars can be exploited in order to give feedback to the operator, regarding fuzzy model reliability and prediction quality. Evaluation results based on experimental results are included, showing clearly the impact on the improvement of robustness of the learning procedure","PeriodicalId":269492,"journal":{"name":"2006 International Symposium on Evolving Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2006-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 International Symposium on Evolving Fuzzy Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISEFS.2006.251173","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
In this paper, several improvements towards safer processing of incremental learning techniques for fuzzy models are demonstrated. The first group of improvements addresses stability issues, making the evolving scheme more robust against faults, steady-state situations, and the occurrence of extrapolation. For steady states or constant system behavior, a concept for overcoming the so-called 'unlearning' effect is proposed, by which the forgetting of previously learned relationships can be prevented. A discussion on the convergence of the incremental learning scheme to the optimum in the least-squares sense is also included. Concepts for omitting faults are demonstrated, since faults in the training data usually hamper the learning of the underlying dependencies. An improvement of the extrapolation behavior of fuzzy models using fuzzy sets with infinite support is also highlighted. The second group of improvements deals with interpretability and quality aspects of the models obtained during the evolving process. An online strategy for obtaining more interpretable models is presented. This strategy is feasible for online monitoring tasks, as it can be applied after each incremental learning step, that is, without using prior data. Interpretability is important whenever the model itself or its decisions should be linguistically understandable. The quality aspects include an online calculation of local error bars for Takagi-Sugeno fuzzy models, which can be seen as a kind of confidence interval. In this sense, the error bars can be exploited to give the operator feedback on fuzzy model reliability and prediction quality. Evaluation results based on experimental data are included, clearly showing the improvement in robustness of the learning procedure.
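The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of local error bars for an incrementally trained Takagi-Sugeno model. It assumes Gaussian antecedents, per-rule locally weighted recursive least squares for the consequents, and an error bar built from each rule's estimated noise variance and parameter covariance; the names TSRule, TSModel, and all parameters are illustrative and not taken from the paper.

```python
# Sketch only: incremental consequent learning for a Takagi-Sugeno model with
# per-rule weighted recursive least squares and a rough local error bar.
import numpy as np

class TSRule:
    def __init__(self, center, width, dim):
        self.center = np.asarray(center, dtype=float)   # Gaussian center
        self.width = float(width)                       # Gaussian width
        self.w = np.zeros(dim + 1)                      # consequent [w0, w1..wd]
        self.P = np.eye(dim + 1) * 1000.0               # parameter covariance estimate
        self.sse = 0.0                                  # weighted squared errors
        self.n_eff = 1e-6                               # effective sample count

    def membership(self, x):
        d2 = np.sum((x - self.center) ** 2)
        return np.exp(-0.5 * d2 / self.width ** 2)

class TSModel:
    def __init__(self, rules):
        self.rules = rules

    def _reg(self, x):
        return np.concatenate(([1.0], x))               # regressor [1, x]

    def update(self, x, y):
        """One incremental step of locally weighted RLS for each rule."""
        x = np.asarray(x, dtype=float)
        psis = np.array([r.membership(x) for r in self.rules])
        psis = psis / (psis.sum() + 1e-12)              # normalized firing degrees
        reg = self._reg(x)
        for rule, psi in zip(self.rules, psis):
            Pr = rule.P @ reg
            gain = psi * Pr / (1.0 + psi * reg @ Pr)    # weighted RLS gain
            err = y - rule.w @ reg
            rule.w += gain * err
            rule.P -= np.outer(gain, Pr)
            rule.sse += psi * err ** 2
            rule.n_eff += psi

    def predict_with_error_bar(self, x, z=2.0):
        """Prediction plus a rough local error bar (approx. z-sigma band)."""
        x = np.asarray(x, dtype=float)
        psis = np.array([r.membership(x) for r in self.rules])
        psis = psis / (psis.sum() + 1e-12)
        reg = self._reg(x)
        y_hat, band = 0.0, 0.0
        for rule, psi in zip(self.rules, psis):
            y_hat += psi * (rule.w @ reg)
            sigma2 = rule.sse / max(rule.n_eff, 1.0)    # local noise variance
            band += psi * z * np.sqrt(sigma2 * (1.0 + reg @ rule.P @ reg))
        return y_hat, band

# Illustrative usage on a synthetic stream: feed samples one by one, then query
# a prediction together with its local error bar.
rng = np.random.default_rng(0)
model = TSModel([TSRule(center=[c], width=0.5, dim=1) for c in (-1.0, 0.0, 1.0)])
for _ in range(500):
    x = rng.uniform(-1.5, 1.5, size=1)
    y = np.sin(2 * x[0]) + 0.05 * rng.normal()
    model.update(x, y)
y_hat, bar = model.predict_with_error_bar(np.array([0.3]))
```

A band that grows where few samples have fallen (large covariance term) and stays narrow in well-covered regions is the behavior one would exploit to report prediction reliability back to an operator, as described in the abstract.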