Charging station pricing and scheduling

Contributors: Zheng Li and Qi Luo.

This page is mainly based on the work of Zhao [1] and Wang [2].

Problem Statement

With the increasing market penetration of Electric Vehicles (EVs) and the rapid development of charging technologies, public charging stations are considered to be an essential recharging source for EV customers. Compared with home charging, public charging stations can offer relatively lower charging prices for EV customers, as power can be purchased at a lower rate from the wholesale power market. Hence, many large charging stations are being designed and deployed to offer multiple charging options for EV customers.

High penetration of EVs is expected to change the power load profile significantly in distribution networks, creating potential threats to the power grid. Establishing a conveniently available public charging infrastructure is essential to accommodating more clean energy, reducing carbon emissions, and alleviating peak charging loads. In the past decade, various EV charging control and scheduling schemes have been proposed to improve grid reliability, reduce charging operation costs, offer ancillary services, and promote the integration of renewable generation in commercial microgrids.

Other than charging scheduling control, there have been increasing research efforts on designing proper demand response (DR) mechanisms to improve overall system efficiency, in which EVs adjust their charging demands according to the charging price announced by the charging stations or utilities. For instance, dynamic pricing DR mechanisms have been considered for EV charging stations and for distributed EVs. Overall, it is widely accepted that an effective pricing and scheduling policy benefits both EV users and the grid system.

Connection to RL method

Most studies consider the operation of an EV charging station over a time horizon divided into T time slots. EVs arrive at the charging station at random times.

Figure 1. EV charging station interaction system [2] .

At the beginning of each time slot t, the charging station determines the charging price and charging schedule based on its observation of past and current events, including the charging demands and departure times of the EVs that have already arrived, and the electricity price. The decision, in turn, affects the residual charging demand left for future time slots, so the optimal decision is naturally the solution of a Markov Decision Process (MDP). The system state can be described by the residual charging demand and remaining parking time of each EV. Based on the observed state, the charging station decides the charging price and the charging rate (charging schedule) of each EV. The reward function is closely related to the objective of the charging station, e.g., the profit of the charging station, the benefit of EV customers, or the social welfare.
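
To make the formulation concrete, the following is a minimal Python sketch of one decision slot of the MDP described above. The field names, the profit-only reward, and the simple bookkeeping of residual demand and parking time are illustrative assumptions for this page rather than the exact models of [1] or [2].

from dataclasses import dataclass
from typing import Dict

@dataclass
class EVState:
    residual_demand: float   # kWh still to be delivered before departure
    remaining_slots: int     # parking time left, in time slots

@dataclass
class StationState:
    evs: Dict[int, EVState]          # EVs currently parked, keyed by an id
    electricity_price: float         # wholesale electricity price observed this slot

@dataclass
class StationAction:
    charging_price: float            # retail price announced to customers ($/kWh)
    charging_rates: Dict[int, float] # scheduled charging rate for each EV (kW)

def step_reward(state: StationState, action: StationAction, slot_hours: float = 1.0) -> float:
    """One-slot profit: revenue from energy sold minus wholesale energy cost.
    Other objectives (customer benefit, social welfare) would add further terms."""
    energy = {i: min(rate * slot_hours, state.evs[i].residual_demand)
              for i, rate in action.charging_rates.items() if i in state.evs}
    revenue = sum(e * action.charging_price for e in energy.values())
    cost = sum(e * state.electricity_price for e in energy.values())
    return revenue - cost

def transition(state: StationState, action: StationAction,
               arrivals: Dict[int, EVState], slot_hours: float = 1.0) -> StationState:
    """Residual demands shrink by the delivered energy, parking times count down,
    departed EVs leave, and randomly arriving EVs join the next state."""
    next_evs: Dict[int, EVState] = {}
    for i, ev in state.evs.items():
        delivered = min(action.charging_rates.get(i, 0.0) * slot_hours, ev.residual_demand)
        if ev.remaining_slots - 1 > 0:
            next_evs[i] = EVState(ev.residual_demand - delivered, ev.remaining_slots - 1)
    next_evs.update(arrivals)
    # The next slot's electricity price would come from a new observation; it is
    # kept unchanged here only to keep the sketch short.
    return StationState(evs=next_evs, electricity_price=state.electricity_price)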

Main results

Q-learning approach

Some studies utilize the Q-learning approach to solve the charging station pricing and scheduling problem.

Table 1
Paper | Methods
Lu et al. (2018) [3] | A real-time price-based demand response algorithm was developed based on Q-learning.
Su et al. (2020) [4] | An online reservation system with a real-time charging pricing strategy was proposed to motivate EV customers to use the designated charging station for services.

However, the computational efficiency of such tabular RL algorithms sharply decreases in a high-dimensional charging environment, since a lookup table is required to store a value for every state–action pair, soon rendering the problem intractable.
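
For reference, the core of such a tabular method is the one-step Q-learning update over a lookup table of state–action values, sketched below in Python. The discretized price levels, the epsilon-greedy exploration, and the learning-rate settings are illustrative assumptions; the cited papers define their own state and action encodings.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
PRICE_LEVELS = [0.10, 0.20, 0.30, 0.40]   # candidate retail prices ($/kWh)

# Q-table: one entry per (state, action) pair. This is the lookup table whose
# size grows multiplicatively with every added state feature.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over the discrete price levels."""
    if random.random() < EPSILON:
        return random.randrange(len(PRICE_LEVELS))
    return max(range(len(PRICE_LEVELS)), key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update of the table entry."""
    best_next = max(Q[(next_state, a)] for a in range(len(PRICE_LEVELS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Every additional state feature (per-EV residual demand, departure times, time of day, wholesale price level) multiplies the number of table entries, which is exactly the dimensionality issue noted above.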

Deep reinforcement learning approach

Some studies use the deep reinforcement learning (DRL) approach with a nonlinear approximator to tackle the dimensionality challenges.

Table 2
Paper | Methods
Qiu et al. (2020) [5] | A deep Q-network (DQN) with a prioritized experience replay memory was utilized to optimize the pricing decisions from the charging service provider’s perspective. The state and action spaces were defined as the wholesale market price and the retail price, respectively, while the reward was the overall profit.
Abdalrahman et al. (2020) [6] | A dynamic pricing mechanism was investigated for a multiservice EV charging station, where a predetermined service quality was assumed to be maintained at all times.
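
As a rough illustration of how DRL replaces the lookup table with a function approximator, below is a compact DQN sketch in PyTorch with a uniform experience replay buffer; Qiu et al. [5] additionally use prioritized replay. The state dimension, the number of discrete price levels, and the network sizes are placeholder assumptions.

import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 10     # e.g. a small feature vector and 10 discrete price levels
GAMMA, BATCH = 0.95, 32

# Online network and a periodically synchronized target network, both small MLPs.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

replay = deque(maxlen=10_000)    # stores (state, action, reward, next_state) tuples

def train_step():
    """Sample a minibatch from the replay buffer and take one gradient step on the TD error."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)    # Q(s, a) of the taken actions
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(dim=1).values   # bootstrap from the target network
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The replay buffer and the separate target network stabilize training, and the network generalizes across states that a Q-table would have to enumerate one by one.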

A comparison of the Q-learning and DRL methods is given below.

Table 3
Methods | Pros | Cons
Q-learning | Performs well on simple problems; supports real-time decision making. | Suffers from the curse of dimensionality; becomes time-consuming in large state spaces.
DRL | Supports real-time decision making; performs well on high-dimensional problems; good execution speed. | Selecting appropriate hyperparameters can be tricky.

References

  1. Z. Zhao and C. K. M. Lee, “Dynamic Pricing for EV Charging Stations: A Deep Reinforcement Learning Approach,” IEEE Trans. Transp. Electrif., vol. 8, no. 2, pp. 2456–2468, 2022.
  2. S. Wang, S. Bi, and Y. A. Zhang, “Reinforcement Learning for Real-Time Pricing and Scheduling Control in EV Charging Stations,” IEEE Trans. Ind. Informatics, vol. 17, no. 2, pp. 849–859, 2021.
  3. R. Lu, S. H. Hong, and X. Zhang, “A dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach,” Appl. Energy, vol. 220, pp. 220–230, Jun. 2018.
  4. Z. Su, T. Lin, Q. Xu, N. Chen, S. Yu, and S. Guo, “An online pricing strategy of EV charging and data caching in highway service stations,” in Proc. 16th Int. Conf. Mobility, Sens. Netw. (MSN), Dec. 2020, pp. 81–85.
  5. D. Qiu, Y. Ye, D. Papadaskalopoulos, and G. Strbac, “A deep reinforcement learning method for pricing electric vehicles with discrete charging levels,” IEEE Trans. Ind. Appl., vol. 56, no. 5, pp. 5901–5912, Sep. 2020.
  6. A. Abdalrahman and W. Zhuang, “Dynamic pricing for differentiated PEV charging services using deep reinforcement learning,” IEEE Trans. Intell. Transp. Syst., early access, Oct. 1, 2020.