Optimization of Deep Reinforcement Learning with Hybrid Multi-Task Learning

Nelson Vithayathil Varghese, Q. Mahmoud
DOI: 10.1109/SysCon48628.2021.9447080
Published in: 2021 IEEE International Systems Conference (SysCon)
Publication date: 2021-04-15
Citations: 3

Abstract

As an outcome of recent technological advances in the artificial intelligence (AI) domain, deep learning (DL) has established itself as a prominent representation learning method for all forms of machine learning (ML), including reinforcement learning (RL). This has led to the evolution of deep reinforcement learning (DRL), which combines deep learning's high representational learning capability with existing reinforcement learning methods. This new direction has played a pivotal role in the performance optimization of intelligent RL systems designed with model-free methodologies. However, the performance gains achieved with this methodology have largely been restricted to intelligent systems whose reinforcement learning algorithms learn a single task at a time. At the same time, single-task learning has been observed to be considerably less data-efficient, especially when such intelligent systems are required to operate under highly complex, data-rich conditions. The prime reason for this is the restricted applicability of existing methods to a wide range of scenarios and the associated tasks in those operating environments. One possible approach to mitigating this issue is multi-task learning. The objective of this paper is to present a parallel multi-task learning (PMTL) approach for optimizing deep reinforcement learning agents operating within two different but semantically similar environments with related tasks. The proposed framework is built with multiple individual actor-critic models functioning within each environment and transferring knowledge among themselves through a global network to optimize performance.
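The parallel actor-critic architecture described in the abstract can be illustrated with a minimal sketch: one worker per environment pushes locally computed gradients to a shared global network and pulls the merged parameters back, so that updates from one environment transfer to the other. All class and method names below are hypothetical illustrations of this A3C-style knowledge-sharing pattern, not code from the paper.

```python
class GlobalNetwork:
    """Shared parameters through which workers exchange knowledge."""
    def __init__(self, n_params, lr=0.01):
        self.params = [0.0] * n_params
        self.lr = lr

    def apply_gradients(self, grads):
        # Asynchronous-style update: fold one worker's gradients into
        # the shared parameter vector.
        self.params = [p - self.lr * g for p, g in zip(self.params, grads)]


class Worker:
    """One actor-critic learner bound to a single environment."""
    def __init__(self, global_net):
        self.global_net = global_net
        self.params = list(global_net.params)

    def sync(self):
        # Pull the latest shared parameters before the next rollout.
        self.params = list(self.global_net.params)

    def train_step(self, grads):
        # Push gradients from this environment's rollout, then resync,
        # so updates from the other environment are transferred here too.
        self.global_net.apply_gradients(grads)
        self.sync()


global_net = GlobalNetwork(n_params=4)
env_a, env_b = Worker(global_net), Worker(global_net)  # two related environments

env_a.train_step([1.0, 0.0, 0.0, 0.0])  # stand-in gradients from environment A
env_b.train_step([0.0, 1.0, 0.0, 0.0])  # stand-in gradients from environment B
env_a.sync()

# After syncing, both workers carry parameter updates from both environments.
print(env_a.params)
```

In a real DRL setting the stand-in gradient lists would come from actor-critic losses over rollouts, and the workers would run in parallel processes; the sketch only shows the push/pull knowledge-transfer mechanism through the global network.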