{"title":"围棋评估函数的可选多任务训练","authors":"Yusaku Mandai, Tomoyuki Kaneko","doi":"10.1109/TAAI.2018.00037","DOIUrl":null,"url":null,"abstract":"For the game of Go, Chess, and Shogi (Japanese Chess), deep neural networks (DNNs) have contributed to building accurate evaluation functions, and many studies have attempted to create the so-called value network, which predicts the reward of a given state. A recent study of the value network for the game of Go has shown that a two-headed neural network with two different objectives can be trained effectively and performs better than a single-headed network. One of the two heads is called a value head and the other head, the policy head, predicts the next move at a given state. This multitask training makes the network more robust and improves the generalization performance. In this paper, we show that a simple discriminator network is an alternative target of multitask learning. Compared to the existing deep neural network, our proposed network can be designed more easily because of its simple output. Our experimental results showed that our discriminative target also makes the learning stable and the evaluation function trained by our method is comparable to the training of existing studies in terms of predicting the next move and playing strength.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Alternative Multitask Training for Evaluation Functions in Game of Go\",\"authors\":\"Yusaku Mandai, Tomoyuki Kaneko\",\"doi\":\"10.1109/TAAI.2018.00037\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For the game of Go, Chess, and Shogi (Japanese Chess), deep neural networks (DNNs) have contributed to building accurate evaluation functions, and many studies have attempted to create the so-called value network, which predicts the reward of a given state. A recent study of the value network for the game of Go has shown that a two-headed neural network with two different objectives can be trained effectively and performs better than a single-headed network. One of the two heads is called a value head and the other head, the policy head, predicts the next move at a given state. This multitask training makes the network more robust and improves the generalization performance. In this paper, we show that a simple discriminator network is an alternative target of multitask learning. Compared to the existing deep neural network, our proposed network can be designed more easily because of its simple output. 
Our experimental results showed that our discriminative target also makes the learning stable and the evaluation function trained by our method is comparable to the training of existing studies in terms of predicting the next move and playing strength.\",\"PeriodicalId\":211734,\"journal\":{\"name\":\"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TAAI.2018.00037\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TAAI.2018.00037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Alternative Multitask Training for Evaluation Functions in Game of Go
For the games of Go, Chess, and Shogi (Japanese chess), deep neural networks (DNNs) have contributed to building accurate evaluation functions, and many studies have attempted to create the so-called value network, which predicts the expected outcome of a given state. A recent study of value networks for the game of Go showed that a two-headed neural network with two different training objectives can be trained effectively and performs better than a single-headed network. One of the two heads, the value head, predicts the outcome of the game; the other, the policy head, predicts the next move in a given state. This multitask training makes the network more robust and improves its generalization performance. In this paper, we show that a simple discriminator network is an alternative target for multitask learning. Compared to existing deep neural networks, our proposed network is easier to design because its output is simpler. Our experimental results show that the discriminative target also stabilizes learning, and that the evaluation function trained with our method is comparable to those of existing studies in terms of next-move prediction and playing strength.
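To make the two-headed architecture concrete, below is a minimal PyTorch sketch (not the authors' code) of a shared-trunk network with a value head and a swappable auxiliary head: either a policy head, as in the prior work the abstract cites, or a single-logit discriminator head, as proposed here. The layer sizes, the 17-plane board encoding, and the tanh value output are illustrative assumptions; the abstract does not specify the discriminator's exact classification task, only that its output is simpler than a policy distribution.

import torch
import torch.nn as nn

BOARD = 19  # assumed 19x19 Go board

class TwoHeadedNet(nn.Module):
    def __init__(self, in_planes=17, channels=64, aux="policy"):
        super().__init__()
        # Shared trunk: both training targets regularize these features,
        # which is the source of the multitask benefit described above.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Value head: predicts the expected game outcome, here squashed
        # to [-1, 1] with tanh (an illustrative choice, not from the paper).
        self.value_head = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * BOARD * BOARD, 256),
            nn.ReLU(), nn.Linear(256, 1), nn.Tanh(),
        )
        if aux == "policy":
            # Policy head: one logit per board point (next-move prediction).
            self.aux_head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(channels * BOARD * BOARD, BOARD * BOARD),
            )
        else:
            # Discriminator head: a single binary logit; the "simple output"
            # that the abstract argues makes the network easier to design.
            self.aux_head = nn.Sequential(
                nn.Flatten(), nn.Linear(channels * BOARD * BOARD, 1),
            )

    def forward(self, x):
        h = self.trunk(x)
        return self.value_head(h), self.aux_head(h)

# Example: a batch of two positions, switching between auxiliary targets.
x = torch.zeros(2, 17, BOARD, BOARD)
value, policy_logits = TwoHeadedNet(aux="policy")(x)          # (2, 1), (2, 361)
value, disc_logit = TwoHeadedNet(aux="discriminator")(x)      # (2, 1), (2, 1)

Training would presumably minimize a weighted combination of the two heads' losses, for example a mean-squared error on the value output plus a cross-entropy on the auxiliary output; the abstract does not state how the two terms are weighted.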