
Research On Mobile Edge Computing Offloading Strategy And Physical Layer Security Based On Deep Reinforcement Learning

Posted on: 2021-03-19
Degree: Master
Type: Thesis
Country: China
Candidate: C Li
Full Text: PDF
GTID: 2438330611454091
Subject: Computer technology

Abstract/Summary:
The tremendous progress of smart technology has greatly enriched the applications of mobile devices, which generate massive connections and data traffic. However, the flexible and independent characteristics of mobile devices severely restrict their computing capability, leaving wireless networks far short of the capacity needed to handle these massive traffic services. To address this problem, researchers have proposed mobile edge computing (MEC). As an extension of cloud computing to network edge nodes, MEC has become a new paradigm for enhancing the computing and storage capabilities of mobile users. To further improve the quality of service (QoS) of mobile users in MEC networks and raise the efficiency of resource utilization, complex deployment and allocation problems of computing and communication resources need to be settled. Deep reinforcement learning (DRL) is among the most promising artificial intelligence approaches for optimizing system energy consumption and latency in such complex resource deployment problems.

In this paper, we first consider multiple users within the coverage of a MEC network, where each user can offload subtasks to multiple edge servers. We show that computing the optimal offloading proportion for the users in this network is a dynamic decision-making problem, and we formulate it as a Markov decision process (MDP). We then define the state space and action space, and introduce a linear combination of system latency and energy consumption to measure system performance. Furthermore, we design a novel offloading strategy based on the deep Q-network (DQN), in which users dynamically fine-tune the offloading proportion so that the MEC system remains at the minimum cost value. Finally, we implement the algorithm in simulation, and numerical results confirm that the proposed offloading strategy keeps the system cost at a low and stable value. The performance of the algorithm is also evaluated as the numbers of users and edge servers change, and the results show that the proposed algorithm performs satisfactorily irrespective of the number of users and edge servers.

To protect the physical layer security of the MEC network during the results-feedback stage, we consider a non-orthogonal multiple access (NOMA) system model containing an intelligent attacker that can switch among multiple attack modes. To enhance transmission security, we propose an algorithm based on reinforcement learning (RL) that adaptively controls the power allocation of the edge servers in the NOMA network. Simulation results demonstrate that the RL-based policy can effectively suppress the attacker's motivation to attack and enhance the transmission security of the downlink NOMA network.
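To make the first contribution more concrete, the following is a minimal sketch of a DQN-style controller that fine-tunes an offloading proportion to minimize a cost defined as a linear combination of latency and energy, as described above. The state features, action set, cost weight, network sizes, and the use of PyTorch are illustrative assumptions for demonstration, not the thesis's actual settings.

```python
# Illustrative DQN sketch: fine-tuning an offloading proportion to minimize
# a weighted latency/energy cost. All environment details are assumptions.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_USERS, N_SERVERS = 3, 2                 # assumed small scenario
ACTIONS = [-0.1, 0.0, +0.1]               # per-step adjustment of the offloading proportion
STATE_DIM = N_USERS * (N_SERVERS + 1)     # assumed: proportions per server + local load
LAMBDA, GAMMA, EPS, BATCH = 0.5, 0.95, 0.1, 64

def system_cost(latency, energy, lam=LAMBDA):
    """Linear combination of system latency and energy consumption."""
    return lam * latency + (1.0 - lam) * energy

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, x):
        return self.net(x)

qnet = QNet(STATE_DIM, len(ACTIONS))
target = QNet(STATE_DIM, len(ACTIONS))
target.load_state_dict(qnet.state_dict())
optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
# Replay buffer of (state, action, reward, next_state) transitions; the reward
# would be the negative system cost observed from the MEC environment.
buffer = deque(maxlen=10_000)

def select_action(state):
    """Epsilon-greedy choice of an offloading-proportion adjustment."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        q = qnet(torch.tensor(state, dtype=torch.float32))
    return int(q.argmax())

def train_step():
    """One gradient step on the temporal-difference loss."""
    if len(buffer) < BATCH:
        return
    s, a, r, s2 = map(np.array, zip(*random.sample(buffer, BATCH)))
    s  = torch.tensor(s,  dtype=torch.float32)
    a  = torch.tensor(a,  dtype=torch.int64).unsqueeze(1)
    r  = torch.tensor(r,  dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q_sa = qnet(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        q_next = target(s2).max(1).values     # target network, synced periodically
    loss = nn.functional.mse_loss(q_sa, r + GAMMA * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

For the second contribution, a tabular RL formulation of the power-allocation game against an intelligent attacker could look like the sketch below. The discrete power levels, the attack-mode labels, and the secrecy-based reward shaping are assumptions for illustration only.

```python
# Illustrative Q-learning sketch: adaptive power allocation for the downlink
# NOMA feedback phase against an attacker with multiple (assumed) attack modes.
import random
import numpy as np

POWER_LEVELS = np.linspace(0.2, 1.0, 5)         # assumed discrete transmit-power set
ATTACK_MODES = ["silent", "eavesdrop", "jam"]   # assumed attacker action set
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table indexed by (last observed attack mode, power-level index).
Q = np.zeros((len(ATTACK_MODES), len(POWER_LEVELS)))

def choose_power(attack_mode):
    """Epsilon-greedy power-level choice given the last observed attack mode."""
    s = ATTACK_MODES.index(attack_mode)
    if random.random() < EPS:
        return random.randrange(len(POWER_LEVELS))
    return int(Q[s].argmax())

def update(attack_mode, power_idx, reward, next_mode):
    """Standard Q-learning update; the reward is assumed to reflect the achieved
    secrecy rate minus a transmit-power cost."""
    s, s2 = ATTACK_MODES.index(attack_mode), ATTACK_MODES.index(next_mode)
    td_target = reward + GAMMA * Q[s2].max()
    Q[s, power_idx] += ALPHA * (td_target - Q[s, power_idx])
```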
Keywords/Search Tags: MEC, Dynamic optimization problem, Non-binary offloading, DQN, Physical layer security