Deep Reinforcement Learning Based Electric Vehicle Charging Control and Residential Energy Management

Posted on: 2023-05-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L F Yan
Full Text: PDF
GTID: 1522307043467914
Subject: Electrical engineering

Abstract/Summary:

With growing concern over climate change, electric vehicles (EVs) have gained popularity in recent years. Despite their huge application potential, the large-scale integration of EVs into the grid will significantly increase the complexity of the environment and affect the stable operation of the power system, owing to the high power and stochastic characteristics of EV loads. To promote the development of vehicle-grid integration, the optimization of EV charging and residential energy management is necessary. Compared with traditional model-driven methods, the data-driven deep reinforcement learning (DRL) algorithm obtains control solutions by interacting with the environment directly, without relying on system models, and is highly adaptable to uncertainties. This dissertation focuses on EV charging control and residential energy management: first, the impact of EV loads on the grid is quantitatively analyzed; then, DRL-based EV charging control and residential energy management strategies are studied in depth. The specific research contents are as follows.

(1) To analyze the impact of EVs on residential load in the distribution grid, a high-resolution EV continuous driving trajectory generation model is first designed based on a Markov chain. Then, a residential load curve generation model is constructed that accounts for charging availability and charging preference. Numerical studies quantitatively analyze the impact of EV integration from the perspectives of both a residential household and the distribution transformer. The results show that the peak load increases significantly and that the transformer operates in overload for longer periods. The influence of battery parameters and driving distances on EV load is also examined. The results show that EVs have sufficient dispatchable time and capacity and are therefore feasible to control.

(2) For the charging control of individual EVs, a DRL-based control strategy is proposed to reduce the charging cost while alleviating users' aggregate anxiety. Various factors, including the driver's experience, charging preference, and charging locations, are considered to describe the dynamic behavior of individual EVs. The concept of aggregate anxiety is introduced to characterize the driver's anxiety about driving range and uncertain events, and a mathematical model is provided to describe the driver's experience and aggregate anxiety quantitatively. To obtain fine-grained control, a novel continuous soft actor-critic (SAC) control framework is adopted to design the DRL-based approach for optimal EV charging control. Compared with standard DRL methods, the proposed approach, which comprises a supervised learning (SL) stage and a reinforcement learning (RL) stage, achieves superior control performance.

(3) For the coordinated charging control of EV clusters, a cooperative charging control strategy based on multi-agent deep reinforcement learning (MADRL) is proposed to satisfy the energy demand of all users and reduce the charging cost while avoiding overload of the distribution transformer. Each agent contains a collective-policy model and an independent learner: the collective-policy model captures other agents' behaviors, while the independent learner learns the optimal charging strategy by interacting with the environment. Agents are trained with only local observations and approximations, so the proposed method is fully decentralized and scales to problems with many agents. Numerical studies demonstrate the effectiveness and scalability of the proposed approach.

(4) For the energy management of residential clusters with multiple appliances, an energy management strategy based on the MADRL algorithm is proposed to achieve real-time control of various types of electrical appliances in residential households. Gaussian and Bernoulli distributions are adopted in the actor network to generate continuous and discrete decisions, respectively. In addition, a reward-reshaping mechanism is introduced to address the reward-lag problem caused by time-shiftable loads and to improve training stability. Simulation results show that the proposed algorithm effectively realizes online coordinated energy management of residential clusters and ensures fair sharing of transformer capacity.

(5) For real-time energy sharing and management in the community market, a hierarchical deep reinforcement learning (HDRL) scheme with a two-stage learning process is proposed. In the outer stage, a DRL-based pricing approach determines real-time internal electricity prices based on the participants' historical net power and the external energy supplier's electricity prices. In the inner stage, a MADRL-based approach learns the real-time appliance scheduling policy in a decentralized way, based on local observations and the given internal electricity price. The proposed algorithm adapts to the heterogeneity of households in the community and scales to large problems. Simulation results show that the internal trading price and the household scheduling decisions are made simultaneously.

Keywords/Search Tags: Electric vehicles, Smart homes, Community market, Charging control, Energy management, Deep reinforcement learning, Multi-agent deep reinforcement learning, Hierarchical deep reinforcement learning
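The Markov-chain trajectory generator of contribution (1) can be sketched as follows. The state set, transition probabilities, and 15-minute resolution here are illustrative assumptions for exposition, not the dissertation's actual parameters.

```python
import random

# Hypothetical vehicle states; the dissertation's actual state space is not reproduced here.
STATES = ["home", "driving", "work"]

# Illustrative one-step transition probabilities P[s][s'] at 15-minute resolution.
P = {
    "home":    {"home": 0.90, "driving": 0.10, "work": 0.00},
    "driving": {"home": 0.30, "driving": 0.40, "work": 0.30},
    "work":    {"home": 0.00, "driving": 0.15, "work": 0.85},
}

def simulate_day(start="home", steps=96, seed=0):
    """Generate one day of 15-minute vehicle states by sampling the chain."""
    rng = random.Random(seed)
    state, trajectory = start, [start]
    for _ in range(steps - 1):
        probs = P[state]
        state = rng.choices(list(probs), weights=list(probs.values()))[0]
        trajectory.append(state)
    return trajectory

traj = simulate_day()
# A charging-availability profile follows directly: the EV can charge only while parked at home.
available = [s == "home" for s in traj]
```

Stacking many such sampled days yields the household load curves whose aggregate impact on the distribution transformer the numerical studies evaluate.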
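The per-step reward in the individual-EV charging problem of contribution (2) trades charging cost against the driver's anxiety. A minimal sketch, assuming a quadratic penalty on the state-of-charge shortfall (the exact anxiety model in the dissertation is not reproduced here):

```python
def charging_reward(price, power, soc, soc_target, dt=0.25, anxiety_coef=1.0):
    """Illustrative per-step reward: negative charging cost minus an anxiety
    penalty that grows as the battery falls short of the target state of charge.
    price: electricity price ($/kWh); power: charging power (kW); dt: step (h).
    The quadratic penalty form and anxiety_coef are assumptions."""
    cost = price * power * dt
    anxiety = anxiety_coef * max(soc_target - soc, 0.0) ** 2
    return -cost - anxiety
```

Under this shape, an agent charging at 4 kW with a 0.3 SOC shortfall at $0.2/kWh receives `charging_reward(0.2, 4.0, 0.5, 0.8) == -0.29`, and the anxiety term vanishes once the target SOC is reached, so the agent can defer charging to cheap hours without being penalized.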
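The hybrid actor of contribution (4) outputs a Gaussian head for continuous decisions (e.g., a power setpoint) and a Bernoulli head for discrete ones (e.g., switching a time-shiftable appliance on or off). The sampling step can be sketched as below; the head outputs are placeholder scalars standing in for the actor network's outputs, not the dissertation's architecture.

```python
import numpy as np

def sample_hybrid_action(mean, log_std, switch_logit, rng):
    """Sample a hybrid action: a continuous setpoint from the Gaussian head
    and a discrete on/off decision from the Bernoulli head."""
    power = rng.normal(mean, np.exp(log_std))       # continuous decision (kW)
    p_on = 1.0 / (1.0 + np.exp(-switch_logit))      # sigmoid -> Bernoulli probability
    on = bool(rng.random() < p_on)                  # discrete decision
    return power, on

rng = np.random.default_rng(0)
power, on = sample_hybrid_action(mean=2.0, log_std=-1.0, switch_logit=0.5, rng=rng)
```

In a full implementation both heads would share a state encoder, and the joint log-probability (Gaussian plus Bernoulli terms) would feed the policy-gradient update.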