Process Safety Enhancements for Data-Driven Evolving Fuzzy Models

E. Lughofer
DOI: 10.1109/ISEFS.2006.251173
Published in: 2006 International Symposium on Evolving Fuzzy Systems
Publication date: 2006-11-30
Citations: 12

Abstract

In this paper, several improvements towards safer processing in incremental learning techniques for fuzzy models are demonstrated. The first group of improvements addresses stability issues, making the evolving scheme more robust against faults, steady-state situations, and the occurrence of extrapolation. For steady states or constant system behaviours, a concept for overcoming the so-called 'unlearning' effect is proposed, by which the forgetting of previously learned relationships can be prevented. A discussion of the convergence of the incremental learning scheme to the optimum in the least-squares sense is included as well. Concepts for omitting faults are demonstrated, as faults in the training data usually lead to problems in learning the underlying dependencies. An improvement of extrapolation behaviour for fuzzy models using fuzzy sets with infinite support is also highlighted. The second group of improvements deals with interpretability and quality aspects of the models obtained during the evolving process. An online strategy for obtaining better interpretable models is presented. This strategy is feasible for online monitoring tasks, as it can be applied after each incremental learning step, that is, without using prior data. Interpretability is important whenever the model itself or the model's decisions should be linguistically understandable. The quality aspects include an online calculation of local error bars for Takagi-Sugeno fuzzy models, which can be seen as a kind of confidence interval. In this sense, the error bars can be exploited to give the operator feedback on fuzzy model reliability and prediction quality. Evaluation results based on experiments are included, clearly showing the improvement in robustness of the learning procedure.
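The kind of model the abstract discusses — a Takagi-Sugeno fuzzy system whose linear consequents are adapted incrementally and converge to the least-squares optimum — can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's actual implementation: rule centres are fixed rather than evolved, Gaussian membership functions (fuzzy sets with infinite support) are assumed, and the consequents are updated with standard locally weighted recursive least squares (RLS).

```python
import numpy as np

class TakagiSugenoRLS:
    """Minimal Takagi-Sugeno sketch: fixed rule centres, Gaussian
    memberships (infinite support), and locally weighted RLS for the
    affine consequent parameters of each rule."""

    def __init__(self, centres, width, dim, lam=1.0):
        self.centres = np.asarray(centres, dtype=float)  # fixed rule centres
        self.width = width                               # Gaussian width
        n_rules = len(self.centres)
        # affine consequent parameters [w0, w1..wd] per rule
        self.theta = np.zeros((n_rules, dim + 1))
        # inverse-covariance matrices for the local RLS updates
        self.P = np.array([np.eye(dim + 1) * 1e3 for _ in range(n_rules)])
        self.lam = lam                                   # forgetting factor

    def _firing_levels(self, x):
        d2 = np.sum((self.centres - x) ** 2, axis=1)
        mu = np.exp(-d2 / (2.0 * self.width ** 2))       # never exactly zero
        return mu / mu.sum()                             # normalised levels

    def predict(self, x):
        psi = self._firing_levels(x)
        xe = np.concatenate(([1.0], x))                  # affine regressor
        return float(psi @ (self.theta @ xe))

    def update(self, x, y):
        """One locally weighted RLS step per rule; for fixed antecedents
        this converges to the weighted least-squares optimum."""
        psi = self._firing_levels(x)
        xe = np.concatenate(([1.0], x))
        for i, w in enumerate(psi):
            P = self.P[i]
            denom = self.lam / max(w, 1e-12) + xe @ P @ xe
            k = (P @ xe) / denom                         # gain vector
            self.theta[i] += k * (y - self.theta[i] @ xe)
            self.P[i] = (P - np.outer(k, xe @ P)) / self.lam
```

Because the memberships never vanish, the model still produces a (gracefully degrading) output under extrapolation; the steady-state 'unlearning' problem the abstract mentions would show up here if `update` were called repeatedly with a constant `x`, which is why the paper argues for guarding the learning scheme in such phases.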