A dynamic power control strategy based on dueling deep Q network with prioritized experience replay
YE Zifeng, WANG Yonghua, WAN Pin, YANG Hesong, HUANG Peihao
(School of Automation, Guangdong University of Technology, Guangzhou 510006, China)
Abstract:
To address the dynamic power control problem for multiple users in cognitive radio networks, a power control method based on a dueling deep Q network (Dueling DQN) with prioritized experience replay (PER) is proposed. Without knowing the primary users' control policies or transmission powers, the secondary users access the primary users' channels in an underlay manner to carry out their transmission tasks. The received signal strength information collected by the micro base station is fed into the dueling DQN as the environment state; after training and learning, the dynamic power control policy of the secondary users is obtained. Experimental results show that, through deep reinforcement learning, the secondary users can find the optimal power control policy and can also quickly adjust their behavior and control policy when environmental parameters change, improving spectrum utilization and network energy efficiency.
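The abstract names two components: proportional prioritized experience replay (transitions with larger TD error are replayed more often) and the dueling aggregation Q(s,a) = V(s) + A(s,a) − mean_a' A(s,a'). The following is a minimal, stdlib-only Python sketch of both ideas; the class and function names, hyperparameters (alpha, beta), and the proportional priority scheme are illustrative assumptions, not the authors' implementation.

```python
import random

class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized experience replay:
    transitions with larger |TD error| are sampled more often."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priority shapes sampling
        self.data = []
        self.priorities = []
        self.pos = 0                # next slot to overwrite when full

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + 1e-6) ** self.alpha  # small epsilon keeps p > 0
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        # Importance-sampling weights correct the non-uniform sampling bias,
        # normalized so the largest weight is 1.
        n = len(self.data)
        max_w = max((n * pr) ** (-beta) for pr in probs)
        weights = [((n * probs[i]) ** (-beta)) / max_w for i in idxs]
        return idxs, [self.data[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors):
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

In a full agent, `value` and `advantages` would come from the two output streams of the dueling network, and each sampled transition's new TD error would be fed back through `update_priorities` after the gradient step.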
Key words:  cognitive wireless network  spectrum sharing  power control  deep reinforcement learning
DOI:
Funding: Special Fund of the Central Government for the Reform and Development of Local Universities (400170044, 400180004); Open Project of the Guangdong Provincial Key Laboratory of Cyber-Physical Systems and the National-Local Joint Engineering Research Center of Cyber-Physical System Integration Technology for Intelligent Manufacturing (008); Quality Engineering Project of Guangdong University of Technology