{"title":"CooperativeQ:基于合作强化学习的节能通道访问","authors":"M. Emre, Gürkan Gür, S. Bayhan, Fatih Alagöz","doi":"10.1109/ICCW.2015.7247603","DOIUrl":null,"url":null,"abstract":"Cognitive Radio (CR) with the capability of discovering the unused spectrum promises higher spectrum efficiency - a pressing requirement for 5G networks. However, CR owes this capability to power-hungry tasks, most particularly to spectrum sensing. Given that advances in battery capacity has a slower pace compared to advances in device capabilities and traffic growth, it is paramount to develop energy-efficient CR protocols. To this end, we focus on spectrum sensing and access from an energy efficiency perspective. Our proposal CooperativeQ lets each CR decide with an energy efficiency objective on its actions based on its buffer occupancy, buffer capacity, and its observations about the primary channel states. Different than traditional reinforcement learning, CooperativeQ facilitates CRs to share their local knowledge with others periodically. With this information, CR chooses which action to take for the current time slot: (i) idling, (ii) sensing, and (iii) if channel is decided to be idle adapting transmission power to one of the power levels. We evaluate the performance of our proposal under various PU channel types, idling penalty coefficient, and information sharing period. 
Our results show that CooperativeQ outperforms greedy throughput-maximizing approach or a random channel selection owing to its adaptation and learning capability as well as cooperative mode of operation.","PeriodicalId":6464,"journal":{"name":"2015 IEEE International Conference on Communication Workshop (ICCW)","volume":"131 1","pages":"2799-2805"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"CooperativeQ: Energy-efficient channel access based on cooperative reinforcement learning\",\"authors\":\"M. Emre, Gürkan Gür, S. Bayhan, Fatih Alagöz\",\"doi\":\"10.1109/ICCW.2015.7247603\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cognitive Radio (CR) with the capability of discovering the unused spectrum promises higher spectrum efficiency - a pressing requirement for 5G networks. However, CR owes this capability to power-hungry tasks, most particularly to spectrum sensing. Given that advances in battery capacity has a slower pace compared to advances in device capabilities and traffic growth, it is paramount to develop energy-efficient CR protocols. To this end, we focus on spectrum sensing and access from an energy efficiency perspective. Our proposal CooperativeQ lets each CR decide with an energy efficiency objective on its actions based on its buffer occupancy, buffer capacity, and its observations about the primary channel states. Different than traditional reinforcement learning, CooperativeQ facilitates CRs to share their local knowledge with others periodically. With this information, CR chooses which action to take for the current time slot: (i) idling, (ii) sensing, and (iii) if channel is decided to be idle adapting transmission power to one of the power levels. We evaluate the performance of our proposal under various PU channel types, idling penalty coefficient, and information sharing period. 
Our results show that CooperativeQ outperforms greedy throughput-maximizing approach or a random channel selection owing to its adaptation and learning capability as well as cooperative mode of operation.\",\"PeriodicalId\":6464,\"journal\":{\"name\":\"2015 IEEE International Conference on Communication Workshop (ICCW)\",\"volume\":\"131 1\",\"pages\":\"2799-2805\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE International Conference on Communication Workshop (ICCW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCW.2015.7247603\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Conference on Communication Workshop (ICCW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCW.2015.7247603","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cognitive Radio (CR), with its capability of discovering unused spectrum, promises higher spectrum efficiency - a pressing requirement for 5G networks. However, CR owes this capability to power-hungry tasks, most notably spectrum sensing. Given that battery capacity advances at a slower pace than device capabilities and traffic growth, it is paramount to develop energy-efficient CR protocols. To this end, we focus on spectrum sensing and access from an energy-efficiency perspective. Our proposal, CooperativeQ, lets each CR decide on its actions with an energy-efficiency objective, based on its buffer occupancy, buffer capacity, and its observations of the primary channel states. Unlike traditional reinforcement learning, CooperativeQ enables CRs to share their local knowledge with others periodically. With this information, each CR chooses which action to take in the current time slot: (i) idling, (ii) sensing, or (iii) transmitting, if the channel is deemed idle, at one of the available power levels. We evaluate the performance of our proposal under various PU channel types, idling penalty coefficients, and information sharing periods. Our results show that CooperativeQ outperforms a greedy throughput-maximizing approach and random channel selection, owing to its adaptation and learning capability as well as its cooperative mode of operation.
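The scheme the abstract outlines - per-CR Q-learning over buffer and channel observations, with periodic knowledge sharing - can be sketched as follows. This is an illustrative toy in Python, not the paper's actual formulation: the state space, reward terms (throughput per unit transmit energy, sensing cost, idling penalty), channel model, and the choice to average Q-tables as the sharing step are all assumptions made for the sketch.

```python
import random

ACTIONS = ["idle", "sense", "tx_low", "tx_high"]  # (i), (ii), (iii) at two power levels

class CRAgent:
    """One cognitive radio running Q-learning over (buffer level, channel state)."""
    def __init__(self, buffer_capacity=5, alpha=0.2, gamma=0.9, eps=0.1):
        self.capacity = buffer_capacity
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        # Q[(buffer_level, channel_busy)][action] -- state includes buffer occupancy
        # and the latest primary-channel observation, as in the abstract.
        self.q = {(b, c): {a: 0.0 for a in ACTIONS}
                  for b in range(buffer_capacity + 1) for c in (0, 1)}
        self.state = (0, 0)

    def choose(self):
        """Epsilon-greedy action selection."""
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        qs = self.q[self.state]
        return max(qs, key=qs.get)

    def step(self, p_busy=0.3, idle_penalty=0.1):
        s = self.state
        a = self.choose()
        busy = random.random() < p_busy                      # hypothetical PU activity
        buf = min(s[0] + (random.random() < 0.5), self.capacity)  # packet arrivals
        # Illustrative energy-efficiency reward: throughput per unit transmit power,
        # a small energy cost for sensing, and a penalty for idling with a full buffer.
        if a == "idle":
            reward = -idle_penalty * buf
        elif a == "sense":
            reward = -0.05
        else:  # transmit: succeeds only when the channel is idle and data is queued
            power = 0.5 if a == "tx_low" else 1.0
            success = (not busy) and buf > 0
            reward = (1.0 / power if success else -0.2) - 0.1 * power
            if success:
                buf -= 1
        s2 = (buf, int(busy))
        # Standard Q-learning update.
        best_next = max(self.q[s2].values())
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])
        self.state = s2
        return reward

def share_knowledge(agents):
    """Periodic cooperation step: replace each Q-value with the agents' average."""
    for s in agents[0].q:
        for a in ACTIONS:
            avg = sum(ag.q[s][a] for ag in agents) / len(agents)
            for ag in agents:
                ag.q[s][a] = avg

random.seed(0)
agents = [CRAgent() for _ in range(3)]
for t in range(1, 2001):
    for ag in agents:
        ag.step()
    if t % 100 == 0:          # the information sharing period studied in the paper
        share_knowledge(agents)
```

Averaging Q-tables is only one plausible way to realize "sharing local knowledge"; the point of the sketch is that each CR learns from its own buffer/channel experience between sharing instants, then the cooperation step propagates what any one CR has learned to the others.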