{"title":"ACE-RL-Checkers:通过在玩家代理中通过强化学习获得的知识来改进自动案例抽取","authors":"H. C. Neto, Rita Maria Silva Julia","doi":"10.1109/CIG.2015.7317926","DOIUrl":null,"url":null,"abstract":"This work proposes a new approach that combines Automatic Case Elicitation with Reinforcement Learning applied to Checkers player agents. This type of combination brings forth the following modifications in relation to those agents that use each of these techniques in isolation: improve the random exploration performed by the Automatic Case Elicitation-based agents and introduce adaptability to the Reinforcement Learning-based agents. In line with the above, the authors present herein the ACE-RL-Checkers player agent, a hybrid system that combines the best abilities from the automatic Checkers players CHEBR and LS-VisionDraughts. CHEBR is an Automatic Case Elicitation-based agent with a learning approach that performs random exploration in the search space. These random explorations allow the agent to present an extremely adaptive and non-deterministic behavior. On the other hand, the high frequency at which decisions are made randomly (mainly in those phases in which the content of the case library is still so scarce) compromises the agent in terms of maintaining a good performance. LS-VisionDraughts is a Multi-Layer Perceptron Neural Network player trained through Reinforcement Learning. Besides having been proven efficient in making decisions, such an agent presents an inconvenience in that it is completely predictable, as the same move is always executed when presented with the same board of play. By combining the best abilities from these players, ACE-RL-Checkers uses knowledge provided from LS-VisionDraughts in order to direct random exploration of the automatic case elicitation technique to more promising regions in the search space. Therewith, the ACE-RL-Checkers gains in terms of performance as well as acquires adaptability in its decision-making - choosing moves based on the current game dynamics. Experiments carried out in tournaments involving these agents confirm the performance superiority of ACE-RL-Checkers when pitted against its adversaries.","PeriodicalId":244862,"journal":{"name":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"ACE-RL-Checkers: Improving automatic case elicitation through knowledge obtained by reinforcement learning in player agents\",\"authors\":\"H. C. Neto, Rita Maria Silva Julia\",\"doi\":\"10.1109/CIG.2015.7317926\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work proposes a new approach that combines Automatic Case Elicitation with Reinforcement Learning applied to Checkers player agents. This type of combination brings forth the following modifications in relation to those agents that use each of these techniques in isolation: improve the random exploration performed by the Automatic Case Elicitation-based agents and introduce adaptability to the Reinforcement Learning-based agents. In line with the above, the authors present herein the ACE-RL-Checkers player agent, a hybrid system that combines the best abilities from the automatic Checkers players CHEBR and LS-VisionDraughts. CHEBR is an Automatic Case Elicitation-based agent with a learning approach that performs random exploration in the search space. 
These random explorations allow the agent to present an extremely adaptive and non-deterministic behavior. On the other hand, the high frequency at which decisions are made randomly (mainly in those phases in which the content of the case library is still so scarce) compromises the agent in terms of maintaining a good performance. LS-VisionDraughts is a Multi-Layer Perceptron Neural Network player trained through Reinforcement Learning. Besides having been proven efficient in making decisions, such an agent presents an inconvenience in that it is completely predictable, as the same move is always executed when presented with the same board of play. By combining the best abilities from these players, ACE-RL-Checkers uses knowledge provided from LS-VisionDraughts in order to direct random exploration of the automatic case elicitation technique to more promising regions in the search space. Therewith, the ACE-RL-Checkers gains in terms of performance as well as acquires adaptability in its decision-making - choosing moves based on the current game dynamics. Experiments carried out in tournaments involving these agents confirm the performance superiority of ACE-RL-Checkers when pitted against its adversaries.\",\"PeriodicalId\":244862,\"journal\":{\"name\":\"2015 IEEE Conference on Computational Intelligence and Games (CIG)\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-11-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE Conference on Computational Intelligence and Games (CIG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIG.2015.7317926\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Conference on Computational Intelligence and Games (CIG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2015.7317926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
ACE-RL-Checkers: Improving automatic case elicitation through knowledge obtained by reinforcement learning in player agents
This work proposes a new approach that combines Automatic Case Elicitation (ACE) with Reinforcement Learning (RL) in Checkers player agents. Compared with agents that use either technique in isolation, the combination brings two improvements: it guides the random exploration performed by ACE-based agents, and it introduces adaptability into RL-based agents. Accordingly, the authors present ACE-RL-Checkers, a hybrid player agent that combines the strengths of the automatic Checkers players CHEBR and LS-VisionDraughts. CHEBR is an ACE-based agent whose learning approach explores the search space at random. This random exploration makes the agent highly adaptive and non-deterministic; on the other hand, the high frequency of random decisions (mainly in phases where the case library is still sparse) undermines its playing strength. LS-VisionDraughts is a multi-layer perceptron neural network player trained through reinforcement learning. Although proven efficient at decision-making, it has the drawback of being completely predictable: given the same board position, it always executes the same move. ACE-RL-Checkers combines the strengths of both players, using the knowledge embodied in LS-VisionDraughts to direct the random exploration of the automatic case elicitation technique toward more promising regions of the search space. As a result, ACE-RL-Checkers gains in playing performance while remaining adaptive in its decision-making, choosing moves according to the current game dynamics. Tournament experiments involving these agents confirm the superior performance of ACE-RL-Checkers against its adversaries.
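To make the combination concrete, the sketch below gives one plausible reading of the hybrid move-selection policy in Python. It is a minimal illustration reconstructed from the abstract, not the authors' implementation: the names (select_move, case_library, value_net, explore_prob) and the board/move encodings are assumptions. Stored cases are reused when available; otherwise the RL-trained evaluator, rather than a uniform draw, decides which move to try, with a small residual random component preserving non-determinism.

```python
import random
from typing import Callable, Dict, List, Tuple

# Illustrative encodings (assumptions, not the paper's representation):
Board = Tuple[int, ...]   # flattened board state
Move = Tuple[int, int]    # (from_square, to_square)


def select_move(
    board: Board,
    legal_moves: List[Move],
    case_library: Dict[Board, Move],
    value_net: Callable[[Board, Move], float],
    explore_prob: float = 0.2,
) -> Move:
    """Hybrid ACE + RL move selection (sketch).

    1. If the case library already holds a move for this board, reuse it
       (the automatic-case-elicitation part).
    2. Otherwise, instead of choosing uniformly at random as plain ACE
       would, rank the legal moves with the RL-trained evaluator,
       keeping a small random component for adaptability.
    """
    stored = case_library.get(board)
    if stored is not None and stored in legal_moves:
        return stored

    if random.random() < explore_prob:
        # Residual random exploration keeps behavior non-deterministic.
        move = random.choice(legal_moves)
    else:
        # Knowledge from the RL-trained evaluator directs exploration
        # toward more promising regions of the search space.
        move = max(legal_moves, key=lambda m: value_net(board, m))

    # Elicit a new case: remember this decision for future games.
    case_library[board] = move
    return move


# Toy usage with a dummy evaluator (purely illustrative):
if __name__ == "__main__":
    dummy_net = lambda board, move: random.random()
    library: Dict[Board, Move] = {}
    print(select_move((0,) * 32, [(9, 13), (10, 14)], library, dummy_net))
```

Under this reading, case reuse comes first, evaluator-guided selection fills the gaps left by a sparse case library, and the explore_prob term keeps the agent from collapsing into the fully predictable behavior the abstract criticizes in LS-VisionDraughts.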