Electric vehicle eco-driving


Contributors: Zheng Li and Qi Luo.

This page is mainly based on the work of Lee et al. [1] and Li et al. [2].

Problem Statement[edit]

In recent decades, as environmental concerns have grown, research on eco-friendly vehicles such as hybrid electric vehicles (HEVs), electric vehicles (EVs), and fuel cell electric vehicles (FCEVs) has been actively conducted, and the market share of these next-generation vehicles has grown rapidly. These vehicles differ from traditional vehicles based on internal combustion engines: the engine is assisted or replaced by alternative power sources such as electric batteries and fuel cells, improving vehicle efficiency. These powertrain structures offer high potential for improving fuel economy, by capturing regenerative energy when braking and/or by controlling the energy flow within the powertrain for efficient use of power resources.

Another way to reduce a vehicle's fuel consumption is to improve the driver's operating behavior, a strategy called "eco-driving". Beyond improving the powertrain structure or its control, a vehicle's fuel consumption varies with how it is driven in diverse situations, depending on the driver's behavior and the driving environment, such as the slope of the road. Eco-driving is thus an effective way to improve fuel economy by optimizing driving behavior under different conditions. However, implementing eco-driving is difficult, because it requires each driver to learn how to drive efficiently, and/or an interface between the vehicle and driver for promoting eco-driving behaviors, such as a haptic system or a smartphone. With the recent development of autonomous vehicle technologies, the use of eco-driving technologies may increase as the driver's interventions in vehicle driving decrease; such autonomous vehicles could therefore be more eco-friendly than human-driven ones. For example, a vehicle can be driven under a form of cruise control, or with full speed-profile planning for fully autonomous driving.

Connection to RL method[edit]

Frequently used optimal control-based approaches, such as Dynamic Programming (DP), Pontryagin's Minimum Principle (PMP), and Model Predictive Control (MPC), require detailed modeling of the dynamic system and driving environment; accordingly, the controller's performance depends heavily on the accuracy of the model. The driving environment of a vehicle is especially diverse, complicated, and difficult to predict, so these control methods are not easy to apply in a vehicle controller. As an alternative, Reinforcement Learning (RL) has been actively studied for electric vehicle eco-driving control.

Car following scenario[edit]

In Lee et al. (2022) [1], the control policy for eco-driving vehicles was derived using an RL algorithm based on Q-learning. The state consists of the vehicle speed and the elapsed time, indexed by the traveled distance, and the action is a commanded speed value.
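The snippet below is a minimal sketch of such a tabular Q-learning formulation. The state-grid resolution, the toy transition/reward model in `step`, and all hyperparameters are illustrative assumptions, not values from the paper; Lee et al. instead use an approximated powertrain and longitudinal model built from driving data.

```python
import numpy as np

# Hypothetical discretization of the state and action spaces.
SPEEDS = np.arange(0.0, 30.0, 1.0)   # ego speed grid [m/s]
DIST_STEPS = 200                     # traveled-distance index
N_S, N_A = len(SPEEDS) * DIST_STEPS, len(SPEEDS)

Q = np.zeros((N_S, N_A))
alpha, gamma, eps = 0.1, 0.99, 0.1   # illustrative hyperparameters

def state_index(speed_idx, dist_idx):
    """Flatten (speed, distance) into a single table index."""
    return dist_idx * len(SPEEDS) + speed_idx

def step(speed_idx, dist_idx, action_idx):
    """Toy stand-in for the paper's approximated environment model:
    the action commands the next speed, and the reward penalizes a
    crude energy cost plus harsh speed changes."""
    next_speed_idx = action_idx
    energy = 0.05 * SPEEDS[next_speed_idx] ** 2
    jerk = abs(SPEEDS[next_speed_idx] - SPEEDS[speed_idx])
    reward = -(energy + 0.5 * jerk)
    return next_speed_idx, min(dist_idx + 1, DIST_STEPS - 1), reward

for episode in range(500):
    speed_idx, dist_idx = 0, 0
    for _ in range(DIST_STEPS - 1):
        s = state_index(speed_idx, dist_idx)
        # epsilon-greedy action selection over commanded speeds
        a = np.random.randint(N_A) if np.random.rand() < eps else int(np.argmax(Q[s]))
        next_speed_idx, next_dist_idx, r = step(speed_idx, dist_idx, a)
        s_next = state_index(next_speed_idx, next_dist_idx)
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        speed_idx, dist_idx = next_speed_idx, next_dist_idx
```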

Signalized intersections[edit]

Li et al. (2022) [2] propose a deep reinforcement learning (DRL)-based eco-driving strategy for automated HEVs in a connected traffic environment with signalized intersections. They model the environment as a Markov decision process (MDP). The observation state vector consists of:

  * the ego vehicle reference speed, travel distance, and actual speed;
  * the preceding vehicle speed and acceleration, and the distance headway;
  * the road slope and battery state of charge;
  * the distance to the intersection stop line, the remaining red light time, the remaining green light time, and the green light velocity boundary.

The vehicle acceleration is selected as the action variable; assembling this observation vector is sketched below.
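The following sketch assembles the 12-component observation described above. The field names, units, and the dictionary-based interface are assumptions for illustration; the paper's exact scaling and normalization are not reproduced here.

```python
import numpy as np

def build_observation(ego, preceding, road, signal):
    """Stack the 12 observed quantities into a single state vector."""
    return np.array([
        ego["ref_speed"],           # ego vehicle reference speed [m/s]
        ego["travel_dist"],         # ego vehicle travel distance [m]
        ego["speed"],               # ego vehicle actual speed [m/s]
        preceding["speed"],         # preceding vehicle speed [m/s]
        preceding["accel"],         # preceding vehicle acceleration [m/s^2]
        preceding["headway"],       # distance headway [m]
        road["slope"],              # road slope [rad]
        ego["soc"],                 # battery state of charge [0-1]
        signal["dist_to_stop"],     # distance to intersection stop line [m]
        signal["red_remaining"],    # remaining red light time [s]
        signal["green_remaining"],  # remaining green light time [s]
        signal["green_v_bound"],    # green light velocity boundary [m/s]
    ], dtype=np.float32)
```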

Main results[edit]

Car following scenario[edit]

The control concept proposed by Lee et al. (2022) [1] is shown in Figure 1. In the outer loop, the greedy action is chosen as the control input, whereas in the inner loop, the Q-function value is updated for all admissible actions using an approximated environment model constructed from batch driving data. This RL structure implies that the powertrain dynamics and longitudinal dynamics are modeled approximately, and that these models are used for learning through virtual interactions with the agent. Information about the stochastic transitions of state variables, such as the driving cycle and the preceding vehicle's speed behavior, which are difficult to generalize, is delivered from the environment to the approximated environment model through experience replay. The stored real experience is reused multiple times for off-policy learning, which is more sample-efficient; moreover, the learning process can converge quickly, without the time-consuming process of gaining real-world experience. This two-loop structure is sketched after Figure 1.

Figure 1. Architecture of the reinforcement learning algorithm for the eco-driving strategy [1].
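A minimal sketch of the two-loop, model-based structure described above is given below, assuming a tabular Q-function, a learned one-step model with the signature `model(s, a) -> (s_next, r)`, and a replay buffer of logged transitions. All class and method names are illustrative, not the authors' code.

```python
import random
import numpy as np

class ModelBasedQLearner:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.99):
        self.Q = np.zeros((n_states, n_actions))
        self.replay = []                # stored real transitions (s, a, r, s')
        self.alpha, self.gamma = alpha, gamma

    def act(self, s):
        # Outer loop: the greedy action is applied to the real environment.
        return int(np.argmax(self.Q[s]))

    def observe(self, s, a, r, s_next):
        # Real experience is stored for repeated off-policy reuse.
        self.replay.append((s, a, r, s_next))

    def planning_sweep(self, model, n_updates=50):
        # Inner loop: replayed states seed virtual updates of the Q-function
        # for all admissible actions via the approximated environment model.
        batch = random.sample(self.replay, min(n_updates, len(self.replay)))
        for s, a, r, s_next in batch:
            for a_virtual in range(self.Q.shape[1]):
                s_v, r_v = model(s, a_virtual)   # one-step virtual rollout
                target = r_v + self.gamma * self.Q[s_v].max()
                self.Q[s, a_virtual] += self.alpha * (target - self.Q[s, a_virtual])
```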

Signalized intersections[edit]

In Li et al. (2022) [2], a DRL algorithm is implemented to solve the multi-objective eco-driving problem. The proposed DRL-based eco-driving strategy is illustrated in Figure 2. A twin-delayed deep deterministic policy gradient (TD3) agent is designed to handle the eco-driving control problem by learning an economical acceleration/deceleration policy, while an adaptive equivalent consumption minimization strategy (A-ECMS) controls the power split between the engine and motors at the hybrid powertrain level. A sketch of this bi-level control flow follows Figure 2.

Figure 2. The proposed eco-driving system architecture [2].
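The skeleton below illustrates the bi-level flow described above: the TD3 policy sets the longitudinal acceleration at the upper level, and A-ECMS resolves the engine/motor power split at the lower level at every step. The `td3_policy` and `a_ecms_split` callables and the environment interface are placeholder assumptions, not the authors' implementation.

```python
def drive_episode(env, td3_policy, a_ecms_split, horizon=1000):
    """Run one episode of the hypothetical bi-level eco-driving loop."""
    obs = env.reset()                        # 12-component observation vector
    for _ in range(horizon):
        accel = td3_policy(obs)              # upper level: acceleration command
        p_demand = env.power_demand(accel)   # wheel power implied by the command
        # lower level: split the demanded power between engine and motors
        p_engine, p_motor = a_ecms_split(p_demand, obs)
        obs, reward, done, info = env.step(accel, p_engine, p_motor)
        if done:
            break
```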


References[edit]

  1. H. Lee, K. Kim, N. Kim, and S. W. Cha, “Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning,” Appl. Energy, vol. 313, p. 118460, 2022.
  2. J. Li, X. Wu, M. Xu, and Y. Liu, “Deep reinforcement learning and reward shaping based eco-driving control for automated HEVs among signalized intersections,” Energy, vol. 251, p. 123924, 2022.