Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning

dc.contributor.author: Jamil, Umar
dc.contributor.author: Malmir, Mostafa
dc.contributor.author: Chen, Alan
dc.contributor.author: Filipovska, Monika
dc.contributor.author: Xie, Mimi
dc.contributor.author: Ding, Caiwen
dc.contributor.author: Jin, Yu-Fang
dc.creator.orcid: https://orcid.org/0000-0002-2346-556X
dc.date.accessioned: 2024-08-05T15:10:49Z
dc.date.available: 2024-08-05T15:10:49Z
dc.date.issued: 2024-07-23
dc.description.abstract: Eco-driving has garnered considerable research attention owing to its potential socio-economic impact, including enhanced public health and mitigated climate change effects through the reduction of greenhouse gas emissions. With more autonomous vehicles (AVs) expected on the road, devising an eco-driving strategy for hybrid traffic networks encompassing both AVs and human-driven vehicles (HDVs), coordinated with traffic lights, is a challenging task. The challenge stems partly from the insufficient infrastructure for collecting, transmitting, and sharing real-time traffic data among vehicles, facilities, and traffic control centers, and for the subsequent decision-making of the agents involved in traffic control. Additionally, the intricate nature of the existing traffic network, with its diverse array of vehicles and facilities, hinders the development of a mathematical model that accurately characterizes the traffic network. In this study, we utilized the Simulation of Urban Mobility (SUMO) simulator to tackle the first challenge through computational analysis. To address the second challenge, we employed a model-free reinforcement learning (RL) algorithm, proximal policy optimization, to decide the actions of AVs and traffic light signals in a traffic network. A novel eco-driving strategy was proposed by introducing different percentages of AVs into the traffic flow and coordinating them with traffic light signals using RL to control the overall speed of the vehicles, resulting in improved fuel consumption efficiency. Average rewards at different AV penetration rates (5%, 10%, and 20% of total vehicles) were compared with the situation without any AVs in the traffic flow (0% penetration rate). The 10% AV penetration rate converged to its average reward in the minimum time, leading to a significant reduction in fuel consumption and in the total delay of all vehicles.
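The abstract names proximal policy optimization (PPO) as the model-free RL algorithm used to control AV actions and traffic light signals. As an illustrative sketch only, independent of SUMO and not the authors' implementation, the clipped surrogate objective at the core of PPO can be written as:

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current policy and the data-collecting policy; advantages: estimated
    advantages for those actions. eps is the clipping range.
    """
    ratio = np.exp(new_logp - old_logp)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the elementwise minimum means a large policy change cannot
    # increase the objective beyond its clipped value, keeping each update
    # close to the behavior policy.
    return np.mean(np.minimum(unclipped, clipped))
```

In a traffic setting like the one described, the advantages would come from a learned value function over traffic-state observations, and the actions would be AV speed commands and signal phases; those details are specific to the paper and are not reproduced here.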
dc.description.department: Electrical and Computer Engineering
dc.description.department: Computer Science
dc.description.sponsorship: National Science Foundation
dc.description.sponsorship: US Department of Transportation for Transportation Consortium of South-Central States (TranSET)
dc.identifier.citation: Jamil, U., Malmir, M., Chen, A., Filipovska, M., Xie, M., Ding, C., & Jin, Y.-F. (2024). Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning. Science Progress, 107(3). doi:10.1177/00368504241263406
dc.identifier.issn: 2047-7163
dc.identifier.other: https://doi.org/10.1177/00368504241263406
dc.identifier.uri: https://hdl.handle.net/20.500.12588/6600
dc.language.iso: en
dc.publisher: SAGE Publications
dc.relation.ispartofseries: Science Progress; Volume 107, Issue 3
dc.rights: Attribution-NonCommercial 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/us/
dc.subject: eco-driving
dc.subject: hybrid traffic network
dc.subject: reinforcement learning
dc.subject: traffic flow control
dc.subject: fuel consumption
dc.subject: microscopic traffic simulator
dc.title: Developing an eco-driving strategy in a hybrid traffic network using reinforcement learning
dc.type: Article

Files

Original bundle

Name: jamil-et-al-2024-developing-an-eco-driving-strategy-in-a-hybrid-traffic-network-using-reinforcement-learning.pdf
Size: 3.77 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.86 KB
Description: Item-specific license agreed upon to submission