Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning

Date

2024-07-23

Authors

Jamil, Umar
Malmir, Mostafa
Chen, Alan
Filipovska, Monika
Xie, Mimi
Ding, Caiwen
Jin, Yu-Fang

Publisher

SAGE Publications

Abstract

Eco-driving has garnered considerable research attention owing to its potential socio-economic impact, including enhanced public health and mitigated climate change effects through the reduction of greenhouse gas emissions. With more autonomous vehicles (AVs) expected on the road, developing an eco-driving strategy for hybrid traffic networks encompassing AVs and human-driven vehicles (HDVs), coordinated with traffic light signals, is a challenging task. The challenge stems partly from the insufficient infrastructure for collecting, transmitting, and sharing real-time traffic data among vehicles, facilities, and traffic control centers, and from the subsequent decision-making of the agents involved in traffic control. Additionally, the intricate nature of the existing traffic network, with its diverse array of vehicles and facilities, hinders the development of a mathematical model that accurately characterizes the network. In this study, we used the Simulation of Urban MObility (SUMO) simulator to tackle the first challenge through computational analysis. To address the second challenge, we employed a model-free reinforcement learning (RL) algorithm, proximal policy optimization (PPO), to decide the actions of AVs and traffic light signals in the traffic network. A novel eco-driving strategy was proposed by introducing different percentages of AVs into the traffic flow and coordinating them with traffic light signals through RL to control the overall speed of the vehicles, thereby improving fuel efficiency. Average rewards at different AV penetration rates (5%, 10%, and 20% of total vehicles) were compared with the situation without any AVs in the traffic flow (0% penetration rate). The 10% penetration rate yielded the fastest convergence of the average reward, leading to a significant reduction in fuel consumption and in the total delay of all vehicles.
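The model-free algorithm the abstract names, proximal policy optimization, constrains each policy update with a clipped surrogate objective. The sketch below is a minimal, illustrative implementation of that objective only; the toy log-probabilities and advantages are assumptions for demonstration and are not drawn from the paper's SUMO experiments.

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to
    [1 - eps, 1 + eps] keeps each update close to the behavior
    policy that collected the trajectories.
    """
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Maximize the elementwise minimum (a pessimistic lower bound).
    return np.minimum(unclipped, clipped).mean()

# Toy numbers (hypothetical): a large ratio with a positive
# advantage is clipped at 1 + eps; a small ratio with a negative
# advantage is clipped at 1 - eps.
old = np.log(np.array([0.5, 0.4]))
new = np.log(np.array([0.9, 0.1]))
adv = np.array([1.0, -1.0])
print(round(ppo_clip_objective(new, old, adv), 3))  # → 0.2
```

In the paper's setting, the actions would be AV accelerations and traffic-signal phases observed through SUMO, and the advantages would come from the fuel-consumption-based reward; those environment details are beyond this sketch.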

Keywords

eco-driving, hybrid traffic network, reinforcement learning, traffic flow control, fuel consumption, microscopic traffic simulator

Citation

Jamil, U., Malmir, M., Chen, A., Filipovska, M., Xie, M., Ding, C., & Jin, Y.-F. (2024). Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning. Science Progress, 107(3). doi:10.1177/00368504241263406

Department

Electrical and Computer Engineering
Computer Science