Beibei Jin, Jianing Yang, Xiangsheng Huang, D. Khan
{"title":"深度可变形Q-Network:深度Q-Network的扩展","authors":"Beibei Jin, Jianing Yang, Xiangsheng Huang, D. Khan","doi":"10.1145/3106426.3109426","DOIUrl":null,"url":null,"abstract":"The performance of Deep Reinforcement Learning (DRL) algorithms is usually constrained by instability and variability. In this work, we present an extension of Deep Q-Network (DQN) called Deep Deformable Q-Network which is based on deformable convolution mechanisms. The new algorithm can readily be built on existing models and can be easily trained end-to-end by standard back-propagation. Extensive experiments on the Atari games validate the feasibility and effectiveness of the proposed Deep Deformable Q-Network.","PeriodicalId":20685,"journal":{"name":"Proceedings of the 7th International Conference on Web Intelligence, Mining and Semantics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Deep deformable Q-Network: an extension of deep Q-Network\",\"authors\":\"Beibei Jin, Jianing Yang, Xiangsheng Huang, D. Khan\",\"doi\":\"10.1145/3106426.3109426\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The performance of Deep Reinforcement Learning (DRL) algorithms is usually constrained by instability and variability. In this work, we present an extension of Deep Q-Network (DQN) called Deep Deformable Q-Network which is based on deformable convolution mechanisms. The new algorithm can readily be built on existing models and can be easily trained end-to-end by standard back-propagation. Extensive experiments on the Atari games validate the feasibility and effectiveness of the proposed Deep Deformable Q-Network.\",\"PeriodicalId\":20685,\"journal\":{\"name\":\"Proceedings of the 7th International Conference on Web Intelligence, Mining and Semantics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 7th International Conference on Web Intelligence, Mining and Semantics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3106426.3109426\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th International Conference on Web Intelligence, Mining and Semantics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3106426.3109426","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep deformable Q-Network: an extension of deep Q-Network
The performance of Deep Reinforcement Learning (DRL) algorithms is usually constrained by instability and variability. In this work, we present an extension of the Deep Q-Network (DQN), called Deep Deformable Q-Network, which is based on deformable convolution mechanisms. The new algorithm can readily be built on existing models and trained end-to-end with standard back-propagation. Extensive experiments on Atari games validate the feasibility and effectiveness of the proposed Deep Deformable Q-Network.
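The abstract describes replacing part of the DQN convolutional feature extractor with deformable convolutions while keeping end-to-end training by back-propagation. Below is a minimal sketch of what such a network could look like, assuming PyTorch and torchvision's `DeformConv2d`; the layer sizes follow the classic Atari DQN architecture, and the placement of the deformable layer is an illustrative assumption rather than the paper's exact design.

```python
# Minimal sketch of a deformable Q-network (assumptions: PyTorch +
# torchvision.ops.DeformConv2d; layer sizes follow the classic Atari DQN;
# the deformable layer replaces the third conv layer for illustration only).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableQNetwork(nn.Module):
    def __init__(self, num_actions: int, in_channels: int = 4):
        super().__init__()
        # Standard DQN trunk: 4 stacked 84x84 frames -> conv features.
        self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
        # Offset predictor: 2 (x, y) offsets per tap of a 3x3 kernel -> 18 channels.
        self.offset = nn.Conv2d(64, 2 * 3 * 3, kernel_size=3, stride=1)
        # Deformable convolution in place of the usual third DQN conv layer.
        self.deform_conv = DeformConv2d(64, 64, kernel_size=3, stride=1)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),
            nn.ReLU(),
            nn.Linear(512, num_actions),  # one Q-value per action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        offsets = self.offset(x)                    # learned sampling offsets
        x = torch.relu(self.deform_conv(x, offsets))
        return self.head(x)                         # Q(s, a) for every action


# Usage: Q-values for one stack of four 84x84 Atari frames.
q_net = DeformableQNetwork(num_actions=6)
obs = torch.zeros(1, 4, 84, 84)
q_values = q_net(obs)  # shape: (1, 6)
```

Because the offsets are produced by an ordinary convolution, the whole network remains differentiable and can be trained with the standard DQN loss and back-propagation, which matches the abstract's claim that the extension drops into existing models without a new training procedure.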