CooperativeQ: Energy-efficient channel access based on cooperative reinforcement learning

M. Emre, Gürkan Gür, S. Bayhan, Fatih Alagöz
DOI: 10.1109/ICCW.2015.7247603
Published in: 2015 IEEE International Conference on Communication Workshop (ICCW)
Publication date: 2015-06-08
Citations: 8

Abstract

Cognitive Radio (CR), with its capability of discovering unused spectrum, promises higher spectrum efficiency - a pressing requirement for 5G networks. However, CR owes this capability to power-hungry tasks, most notably spectrum sensing. Given that advances in battery capacity proceed at a slower pace than advances in device capabilities and traffic growth, it is paramount to develop energy-efficient CR protocols. To this end, we focus on spectrum sensing and access from an energy-efficiency perspective. Our proposal, CooperativeQ, lets each CR decide on its actions with an energy-efficiency objective, based on its buffer occupancy, buffer capacity, and its observations of the primary channel states. Unlike traditional reinforcement learning, CooperativeQ lets CRs periodically share their local knowledge with others. With this information, a CR chooses which action to take in the current time slot: (i) idling, (ii) sensing, and (iii) if the channel is decided to be idle, transmitting at one of several power levels. We evaluate the performance of our proposal under various PU channel types, idling penalty coefficients, and information sharing periods. Our results show that CooperativeQ outperforms a greedy throughput-maximizing approach and a random channel selection, owing to its adaptation and learning capability as well as its cooperative mode of operation.
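The mechanism the abstract describes - each CR learning over states (buffer occupancy, observed channel state) and actions (idle, sense, transmit at a power level), with periodic sharing of local knowledge - can be sketched as tabular Q-learning plus a Q-table averaging step. This is a minimal illustrative sketch based only on the abstract: the action set, reward model, ε-greedy policy, and averaging as the sharing rule are all assumptions, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Illustrative action set assumed from the abstract: idle, sense, or
# transmit at one of two power levels (names are hypothetical).
ACTIONS = ["idle", "sense", "tx_low", "tx_high"]

class CRAgent:
    """One cognitive radio running tabular Q-learning."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, action) -> Q-value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy selection over the assumed action set.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update; the reward would encode
        # the energy-efficiency objective (e.g. throughput per joule,
        # minus an idling penalty), which the paper defines in detail.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td

def share_knowledge(agents):
    """Periodic cooperative step (assumed form): agents average their
    Q-values so each benefits from the others' local observations."""
    keys = set().union(*(a.q.keys() for a in agents))
    for k in keys:
        avg = sum(a.q[k] for a in agents) / len(agents)
        for a in agents:
            a.q[k] = avg
```

In use, each agent would call `act`/`update` every time slot against a simulated PU channel, and `share_knowledge` would run once per information-sharing period - the interval the paper varies in its evaluation.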