With the advancement of 5G communication technology, the development of the Internet of Vehicles (IoV) faces new challenges and opportunities. In the IoV, the volume of information grows and service categories become increasingly diversified. Faced with massive and heterogeneous demand for network resources, how service providers manage those resources has become a key and difficult issue in IoV development. Network slicing, a key 5G technology, provides a reliable solution by dividing a physical network into multiple dedicated logical networks that meet the needs of different service use cases. In addition, deep reinforcement learning (DRL) has been widely applied in the IoV owing to its fast decision-making and strong learning capability. Reconfigurable intelligent surfaces (RISs) have significantly improved the performance of wireless communication networks by exploiting large numbers of low-cost reflecting components. Together, these technologies offer new ways to realize intelligent resource management for network slicing. The main contents of this paper are as follows:

For network slicing admission control, this paper designs a resource management system framework that reduces the coupling between admission control and resource allocation and provides guarantees for achieving the optimization objectives. In addition, a network slicing admission control algorithm based on improved DRL is proposed. The algorithm is model-free; it distinguishes edge nodes from core nodes and accounts for the quality of service (QoS) of different 5G use cases to improve the profit and network resource utilization of network slicing providers. Simulation results show that, compared with conventional reinforcement-learning-based algorithms, the proposed algorithm improves profit, acceptance rate, and network resource utilization by 3.2%, 2%, and 5%, respectively.

For network slicing resource allocation, this paper proposes an allocation algorithm based on the double deep Q-network (DDQN) and RIS, which prevents system performance from degrading too quickly as vehicle speed increases. First, the resource allocation problem is formulated as a Markov decision process. Second, following the DDQN approach, two Q-networks are trained and decoupled, one for action selection and one for computing the target Q-value, while the RIS phase-shift matrix is optimized with a block coordinate descent (BCD) based method. Finally, the solution is obtained by learning in the environment. Simulation results show that the total vehicle-to-infrastructure (V2I) capacity obtained by the proposed algorithm decreases by only 16.28% when the vehicle speed increases from 60 km/h to 120 km/h, whereas the capacity obtained by a random RIS configuration decreases by 61.31%, nearly 3.7 times as much. Moreover, deploying RIS mitigates the degradation of communication performance caused by large distances between the base station and vehicles.

For heterogeneous network slicing resource allocation, this paper formulates a joint optimization problem with different objectives to enable efficient multiplexing of different service use cases in the same frequency band. For enhanced mobile broadband (eMBB) use cases, an alternating-iteration-based eMBB allocation algorithm is proposed. For ultra-reliable low-latency communication (URLLC) use cases, a heuristic URLLC allocation algorithm based on pre-configured RIS is proposed to balance maximizing the URLLC reception rate against minimizing eMBB loss. Four different RIS configuration schemes are evaluated, and simulations demonstrate the effectiveness of the selected configuration. The simulation results also show that the configuration achieves about 99.99% URLLC packet reception rate using only 80 RISs, a 4% improvement over the algorithm without RIS deployment.
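The DDQN decoupling summarized above, where one network selects the next action and the other evaluates it when forming the learning target, can be sketched in a few lines. This is a generic illustration of the standard double-Q target, not the paper's implementation; the function name and array shapes are assumptions for the example:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Compute the decoupled Double-DQN target y = r + gamma * Q_target(s', a*),
    where a* = argmax_a Q_online(s', a).

    reward        -- immediate reward r_t
    next_q_online -- Q-values of the *online* network at s_{t+1} (1-D array)
    next_q_target -- Q-values of the *target* network at s_{t+1} (1-D array)
    """
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))         # action chosen by online net
    return reward + gamma * next_q_target[a_star]  # value evaluated by target net
```

Separating selection from evaluation in this way counters the overestimation bias of vanilla Q-learning, which is what makes the learned allocation policy more stable as channel conditions change.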
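The abstract states that the RIS phase-shift matrix is optimized with a BCD-based approach. A minimal single-user sketch of such a coordinate-wise update, assuming a scalar direct channel `h_d` and per-element cascaded channels `h_cascade` (both hypothetical names; the paper's actual problem is multi-dimensional), fixes all phases but one and updates that phase in closed form to align its reflected path with the sum of the remaining paths:

```python
import numpy as np

def bcd_phase_shifts(h_d, h_cascade, iters=5):
    """Coordinate-wise (BCD-style) optimization of RIS phase shifts theta
    to maximize the combined channel gain |h_d + sum_n h_cascade[n] * e^{j theta_n}|.

    Each coordinate update is closed-form: rotate element n so its reflected
    path adds constructively with everything else.
    """
    N = len(h_cascade)
    theta = np.zeros(N)  # start from an all-zero phase configuration
    for _ in range(iters):
        for n in range(N):
            total = h_d + np.sum(h_cascade * np.exp(1j * theta))
            rest = total - h_cascade[n] * np.exp(1j * theta[n])
            # Optimal phase for element n given all other phases fixed:
            theta[n] = np.angle(rest) - np.angle(h_cascade[n])
    return theta
```

Because each per-coordinate update has a closed-form optimum and never decreases the objective, the sweep converges quickly; in this toy setting it reaches the fully aligned gain |h_d| + Σ|h_cascade[n]| in one or two passes.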