
Research On Switch Migration In Multi-Controller SDN

Posted on: 2020-03-11
Degree: Master
Type: Thesis
Country: China
Candidate: J L Wang
Full Text: PDF
GTID: 2428330590996789
Subject: Software engineering
Abstract/Summary:
Software-Defined Networking (SDN) decouples the control plane from the data plane and enables centralized management of the network. As network scale grows, multi-controller SDN architectures have become increasingly popular. However, a multi-controller architecture can suffer from load imbalance and wasted control-plane resources, which seriously limits the scalability of the control plane. Switch migration is an effective remedy, but deciding how to migrate is an NP-hard problem. In this thesis, we propose two switch migration schemes whose goals are load balancing and maximizing the resource utilization of the control plane.

The first is a dynamic switch migration method based on non-cooperative games, which casts the switch migration problem as a dynamic game among controllers. We design a load monitoring mechanism to detect load imbalance between controllers; detected imbalance triggers switch migration. We define the payoff function so as to avoid migration conflicts between controllers, design the migration action based on the non-cooperative game, and maximize the overall resource utilization of the control plane. Finally, we prove that a Nash equilibrium always exists and that the result of each migration is Pareto optimal. Extensive simulation results show that the scheme not only ensures load balancing but also maximizes the overall resource utilization of the control plane.

Because the first scheme does not scale to large SDN deployments, we also propose a switch migration mechanism based on deep Q-learning. Deep Q-learning combines the perception ability of deep learning with the decision-making ability of Q-learning, and it has achieved notable results in fields such as autonomous driving and natural language processing. We apply it to switch migration: the SDN network state is represented as a two-dimensional array and fed to the Q-network; a convolutional layer extracts network features, followed by a fully connected layer and an output layer that predicts the migration action. After each migration action, the agent receives an immediate reward, and the neural network is trained with an experience replay mechanism. We implement the algorithm with Keras and compare it with the classical Q-learning algorithm. The results show that our scheme outperforms the baselines in both resource utilization and load balancing.
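To make the game-based scheme concrete, the following is a minimal Python sketch of one migration round: each controller plays a best response, migrating a switch to the target controller that most improves its payoff, repeating until no controller can improve further (an approximate Nash equilibrium). The payoff here is simply the reduction in utilization variance; the thesis's actual payoff function, monitoring thresholds, and conflict-avoidance terms are not reproduced, so all names and formulas below are illustrative assumptions.

```python
# Hypothetical data model:
#   load[c]     - current load of controller c
#   capacity[c] - capacity of controller c
#   switches[c] - dict mapping each switch managed by c to its load

def utilization_variance(load, capacity):
    """Imbalance measure: variance of controller utilization ratios."""
    ratios = [load[c] / capacity[c] for c in load]
    mean = sum(ratios) / len(ratios)
    return sum((r - mean) ** 2 for r in ratios) / len(ratios)

def best_response(c, load, capacity, switches):
    """Best single migration (switch, target) for controller c, or None.
    Payoff = reduction in utilization variance (an assumed stand-in for
    the thesis's payoff function)."""
    best, best_gain = None, 0.0
    before = utilization_variance(load, capacity)
    for s, s_load in switches[c].items():
        for t in load:
            if t == c or load[t] + s_load > capacity[t]:
                continue  # skip self and targets that would overload
            load[c] -= s_load; load[t] += s_load   # try the move
            gain = before - utilization_variance(load, capacity)
            load[c] += s_load; load[t] -= s_load   # undo it
            if gain > best_gain:
                best, best_gain = (s, t), gain
    return best

def migrate_until_equilibrium(load, capacity, switches):
    """Controllers take turns playing best responses until nobody can
    improve its payoff, i.e. an (approximate) Nash equilibrium."""
    improved = True
    while improved:
        improved = False
        for c in list(load):
            move = best_response(c, load, capacity, switches)
            if move:
                s, t = move
                s_load = switches[c].pop(s)
                switches[t][s] = s_load
                load[c] -= s_load; load[t] += s_load
                improved = True
```

Because each accepted move strictly reduces the variance and the set of switch-to-controller assignments is finite, this loop terminates; in this simplified setting the stopping point is a configuration where no controller has a unilaterally improving migration.

Similarly, the following is a minimal Keras sketch of the deep Q-learning component as described above: a two-dimensional controller-by-switch state passed through a convolutional layer, a fully connected layer, and an output layer of Q-values over migration actions, trained with experience replay. Layer sizes, hyperparameters, and the action encoding are illustrative assumptions, not values taken from the thesis.

```python
import random
from collections import deque

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed problem sizes and hyperparameters (illustrative only).
NUM_CONTROLLERS, NUM_SWITCHES = 4, 16
NUM_ACTIONS = NUM_CONTROLLERS * NUM_SWITCHES  # (target controller, switch) pairs
GAMMA, BATCH = 0.95, 32

def build_q_network():
    """Conv layer extracts features from the 2D state; a fully connected
    layer and an output layer predict one Q-value per migration action."""
    return keras.Sequential([
        layers.Conv2D(16, (2, 2), activation="relu",
                      input_shape=(NUM_CONTROLLERS, NUM_SWITCHES, 1)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_ACTIONS, activation="linear"),
    ])

q_net = build_q_network()
q_net.compile(optimizer="adam", loss="mse")

# Experience replay buffer; a transition is stored as
# (state, action_index, reward, next_state).
replay = deque(maxlen=10_000)

def train_step():
    """Sample past transitions and fit the Q-network on bootstrapped
    targets (the experience replay mechanism)."""
    if len(replay) < BATCH:
        return
    batch = random.sample(list(replay), BATCH)
    states = np.array([s for s, _, _, _ in batch])
    next_states = np.array([ns for _, _, _, ns in batch])
    targets = q_net.predict(states, verbose=0)
    next_q = q_net.predict(next_states, verbose=0)
    for i, (_, action, reward, _) in enumerate(batch):
        targets[i][action] = reward + GAMMA * next_q[i].max()
    q_net.fit(states, targets, verbose=0)
```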
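Training sampled from the replay buffer rather than from consecutive transitions is what breaks the correlation between successive network states; after each migration the new transition is appended to `replay` and `train_step()` is called.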
Keywords/Search Tags: Switch Migration, Non-cooperative Game, Deep Q-learning