{"title":"模糊规则插值和强化学习","authors":"D. Vincze","doi":"10.1109/SAMI.2017.7880298","DOIUrl":null,"url":null,"abstract":"Reinforcement Learning (RL) methods became popular decades ago and still maintain to be one of the mainstream topics in computational intelligence. Countless different RL methods and variants can be found in the literature, each one having its own advantages and disadvantages in a specific application domain. Representation of the revealed knowledge can be realized in several ways depending on the exact RL method, including e.g. simple discrete Q-tables, fuzzy rule-bases, artificial neural networks. Introducing interpolation within the knowledge-base allows the omission of less important, redundant information, while still keeping the system functional. A Fuzzy Rule Interpolation-based (FRI) RL method called FRIQ-learning is a method which possesses this feature. By omitting the unimportant, dependent fuzzy rules — emphasizing the cardinal entries of the knowledge representation — FRIQ-learning is also suitable for knowledge extraction. In this paper the fundamental concepts of FRIQ-learning and associated extensions of the method along with benchmarks will be discussed.","PeriodicalId":105599,"journal":{"name":"2017 IEEE 15th International Symposium on Applied Machine Intelligence and Informatics (SAMI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Fuzzy rule interpolation and reinforcement learning\",\"authors\":\"D. Vincze\",\"doi\":\"10.1109/SAMI.2017.7880298\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement Learning (RL) methods became popular decades ago and still maintain to be one of the mainstream topics in computational intelligence. Countless different RL methods and variants can be found in the literature, each one having its own advantages and disadvantages in a specific application domain. Representation of the revealed knowledge can be realized in several ways depending on the exact RL method, including e.g. simple discrete Q-tables, fuzzy rule-bases, artificial neural networks. Introducing interpolation within the knowledge-base allows the omission of less important, redundant information, while still keeping the system functional. A Fuzzy Rule Interpolation-based (FRI) RL method called FRIQ-learning is a method which possesses this feature. By omitting the unimportant, dependent fuzzy rules — emphasizing the cardinal entries of the knowledge representation — FRIQ-learning is also suitable for knowledge extraction. 
In this paper the fundamental concepts of FRIQ-learning and associated extensions of the method along with benchmarks will be discussed.\",\"PeriodicalId\":105599,\"journal\":{\"name\":\"2017 IEEE 15th International Symposium on Applied Machine Intelligence and Informatics (SAMI)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE 15th International Symposium on Applied Machine Intelligence and Informatics (SAMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SAMI.2017.7880298\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 15th International Symposium on Applied Machine Intelligence and Informatics (SAMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAMI.2017.7880298","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fuzzy rule interpolation and reinforcement learning
Reinforcement Learning (RL) methods became popular decades ago and remain one of the mainstream topics in computational intelligence. Numerous RL methods and variants can be found in the literature, each with its own advantages and disadvantages in a specific application domain. Depending on the exact RL method, the revealed knowledge can be represented in several ways, e.g. as simple discrete Q-tables, fuzzy rule bases, or artificial neural networks. Introducing interpolation within the knowledge base allows less important, redundant information to be omitted while still keeping the system functional. FRIQ-learning, an RL method based on Fuzzy Rule Interpolation (FRI), possesses this feature. By omitting the unimportant, dependent fuzzy rules, and thereby emphasizing the cardinal entries of the knowledge representation, FRIQ-learning is also suitable for knowledge extraction. This paper discusses the fundamental concepts of FRIQ-learning and associated extensions of the method, along with benchmarks.
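To make the interpolation idea concrete, the following is a minimal Python sketch of a Q-function stored as a sparse rule base and evaluated by interpolation rather than table lookup. It is not the paper's implementation: the rule set, `interpolated_q`, and `q_update` are hypothetical names, and the Shepard-style inverse-distance weighting here is a stand-in for the dedicated FRI technique FRIQ-learning actually builds on.

```python
import numpy as np

# Hypothetical sparse rule base: each entry maps a (state, action) antecedent
# point to a consequent Q-value. Only a few "cardinal" rules are stored;
# Q-values between them are interpolated instead of read from a dense table.
rules = [
    ((0.0, 0.0), 0.0),
    ((0.0, 1.0), 0.5),
    ((1.0, 0.0), 0.2),
    ((1.0, 1.0), 1.0),
]

def interpolated_q(state, action, p=2.0, eps=1e-9):
    """Shepard-style inverse-distance interpolation over the sparse rule base."""
    x = np.array([state, action])
    weights, values = [], []
    for antecedent, q in rules:
        d = np.linalg.norm(x - np.array(antecedent))
        if d < eps:              # exact rule hit: use its consequent directly
            return q
        weights.append(1.0 / d ** p)
        values.append(q)
    w = np.array(weights)
    return float(np.dot(w, values) / w.sum())

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: move nearby rule consequents toward the TD target
    (a simplified stand-in for the update used in FRIQ-learning)."""
    best_next = max(interpolated_q(next_state, a) for a in actions)
    delta = alpha * (reward + gamma * best_next - interpolated_q(state, action))
    # Spread the correction over the rules in proportion to their weights.
    x = np.array([state, action])
    d = np.array([np.linalg.norm(x - np.array(a)) for a, _ in rules]) + 1e-9
    w = 1.0 / d ** 2
    w /= w.sum()
    for i, wi in enumerate(w):
        antecedent, q = rules[i]
        rules[i] = (antecedent, q + wi * delta)

# Query a (state, action) point with no stored rule, then run one update.
print(interpolated_q(0.5, 0.5))
q_update(state=0.5, action=1.0, reward=1.0, next_state=0.6, actions=[0.0, 1.0])
print(interpolated_q(0.5, 1.0))
```

In this sketch, omitting an "unimportant, dependent" rule would mean removing an entry from `rules` and checking that the interpolated Q-values over the visited region change only within some tolerance, which captures the spirit, though not the exact mechanism, of the rule-base reduction described in the abstract.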