Implementing Deep Reinforcement Learning (DRL)-based Driving Styles for Non-Player Vehicles

We propose a new, hierarchical architecture for behavioral planning of vehicle models usable as realistic non-player vehicles in serious games related to traffic and driving. These agents, trained with deep reinforcement learning (DRL), decide their motion by taking high-level decisions, such as "keep lane", "overtake" and "go to rightmost lane".
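The abstract names three high-level manoeuvres that the DRL agent chooses among. As a minimal sketch (not the paper's actual code: the enum encoding and the epsilon-greedy selector are illustrative assumptions), a discrete high-level action space and a simple value-based action pick could look like this:

```python
import random
from enum import Enum

class HighLevelAction(Enum):
    # Hypothetical integer encoding of the manoeuvres named in the abstract.
    KEEP_LANE = 0
    OVERTAKE = 1
    GO_TO_RIGHTMOST_LANE = 2

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a high-level action from per-action value estimates.

    With probability epsilon, explore a random manoeuvre; otherwise
    exploit the manoeuvre with the highest estimated value.
    """
    if random.random() < epsilon:
        return random.choice(list(HighLevelAction))
    best = max(range(len(q_values)), key=lambda i: q_values[i])
    return HighLevelAction(best)
```

Low-level execution of the chosen manoeuvre (steering, throttle) would then be delegated to ADAS-like controllers, which is the hierarchical split the abstract describes.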

Full description

Bibliographic Details
Main Authors: Luca Forneris (Author), Alessandro Pighetti (Author), Luca Lazzaroni (Author), Francesco Bellotti (Author), Alessio Capello (Author), Marianna Cossu (Author), Riccardo Berta (Author)
Format: Article
Published: Serious Games Society, 2023-11-01T00:00:00Z.
Subjects:
Online Access: Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 doaj_c7fbc0928718475fb8d2c7f5fc82b4c2
042 |a dc 
100 1 0 |a Luca Forneris  |e author 
700 1 0 |a Alessandro Pighetti  |e author 
700 1 0 |a Luca Lazzaroni  |e author 
700 1 0 |a Francesco Bellotti  |e author 
700 1 0 |a Alessio Capello  |e author 
700 1 0 |a Marianna Cossu  |e author 
700 1 0 |a Riccardo Berta  |e author 
245 0 0 |a Implementing Deep Reinforcement Learning (DRL)-based Driving Styles for Non-Player Vehicles 
260 |b Serious Games Society,   |c 2023-11-01T00:00:00Z. 
500 |a 10.17083/ijsg.v10i4.638 
500 |a 2384-8766 
520 |a We propose a new, hierarchical architecture for behavioral planning of vehicle models usable as realistic non-player vehicles in serious games related to traffic and driving. These agents, trained with deep reinforcement learning (DRL), decide their motion by taking high-level decisions, such as "keep lane", "overtake" and "go to rightmost lane". This is similar to a driver's high-level reasoning and takes into account the availability of advanced driver assistance systems (ADAS) in current vehicles. Compared to a low-level decision-making system, our model performs better in terms of both safety and speed. As a significant advantage, the proposed approach reduces the number of training steps by more than one order of magnitude. This makes the development of new models much more efficient, which is key for implementing vehicles featuring different driving styles. We also demonstrate that, by simply tweaking the reinforcement learning (RL) reward function, it is possible to train agents characterized by different driving behaviors. Finally, we employed continual learning, starting the training procedure of a more specialized agent from a base model. This significantly reduced the number of training steps while keeping similar vehicular performance figures; however, the characteristics of the specialized agents are deeply influenced by those of the baseline agent. 
546 |a EN 
690 |a Reinforcement Learning 
690 |a Automotive Driving 
690 |a Serious Games 
690 |a Autonomous Agents 
690 |a Racing Games 
690 |a Driving Styles 
690 |a Education 
690 |a L 
690 |a Electronic computers. Computer science 
690 |a QA75.5-76.95 
690 |a Computer software 
690 |a QA76.75-76.765 
655 7 |a article  |2 local 
786 0 |n International Journal of Serious Games, Vol 10, Iss 4 (2023) 
787 0 |n http://journal.seriousgamessociety.org/index.php/IJSG/article/view/638 
787 0 |n https://doaj.org/toc/2384-8766 
856 4 1 |u https://doaj.org/article/c7fbc0928718475fb8d2c7f5fc82b4c2  |z Connect to this object online.
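The abstract reports that tweaking the RL reward function is enough to obtain agents with different driving styles. A minimal sketch of what such a parameterized reward could look like follows; the term structure, weight values, and style profiles are invented for illustration and are not taken from the paper:

```python
def driving_reward(speed, target_speed, headway_s, collided,
                   w_speed=1.0, w_safety=1.0):
    """Illustrative per-step reward for a highway-driving agent.

    w_speed and w_safety are the style knobs: re-weighting them, in the
    spirit of the abstract, yields more aggressive or more cautious agents.
    """
    # Reward for matching the target speed (1 when exact, falling off linearly).
    speed_term = 1.0 - abs(speed - target_speed) / target_speed
    # Penalty that grows as the time headway (s) to the lead vehicle shrinks.
    safety_term = -max(0.0, 2.0 - headway_s)
    # Large fixed penalty on collision.
    crash_term = -100.0 if collided else 0.0
    return w_speed * speed_term + w_safety * safety_term + crash_term

# Two hypothetical style profiles: one weighting speed, one weighting safety.
aggressive = dict(w_speed=2.0, w_safety=0.5)
cautious = dict(w_speed=0.5, w_safety=2.0)
```

Under such a scheme, the continual-learning step the abstract describes would amount to re-training with a modified weight profile starting from the base agent's network weights, rather than from scratch.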