Deep reinforcement learning for autonomous driving

Bibliographic Details
Main Author: Osman, Hassan Abdalla Abdelkarim (Author)
Format: Thesis
Language:English
Published: Kuala Lumpur : Kulliyah of Engineering, International Islamic University Malaysia, 2019
Subjects:
Online Access:http://studentrepo.iium.edu.my/handle/123456789/4434
Description
Summary:Recently, both the automotive industry and research communities have directed attention towards Autonomous Driving (AD) to tackle issues such as traffic congestion and road accidents. End-to-end driving has gained interest because sensory inputs are mapped directly to controls. Machine learning approaches, particularly Deep Learning (DL), have been used for end-to-end driving; however, DL requires expensive labelling. Another approach used is Deep Reinforcement Learning (DRL), but works that use DRL predominantly learn policies from a single input sensor modality, such as image pixels of the state. The state-of-the-art DRL algorithm is Proximal Policy Optimization (PPO). One shortcoming of using PPO for autonomous driving with inputs from multiple sensors is a lack of robustness to sensor defectiveness or sensor failure, which arises from naïve sensor fusion. This thesis investigates the use of a stochastic regularization technique named Sensor Dropout (SD) in an attempt to address this shortcoming. Training and evaluation are carried out on a car racing simulator called TORCS. The inputs to the agent were captured from different sensing modalities such as range-finders, proprioceptive sensors, and a front-facing RGB camera, and are used to control the car's steering, acceleration, and brakes. To simulate sensor defectiveness and sensor failure, Gaussian noise is added to sensor readings and inputs from sensors are blocked, respectively. Results show that using regularization requires longer training time with lower training performance. However, in settings where sensor readings are noisy, the PPO-SD agent displayed better driving behaviour, while the PPO agent suffered a performance drop of approximately 59%, in terms of rewards, compared to the PPO-SD agent. The same held in settings where sensor readings are blocked.
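
To make the two techniques mentioned in the abstract concrete, the following is a minimal Python sketch of the Sensor Dropout idea (randomly zeroing out whole sensor modalities during training so the policy cannot over-rely on any one of them) and of Gaussian noise injection used to simulate sensor defectiveness. The modality names, array shapes, drop probability, and noise level are illustrative assumptions, not values taken from the thesis.

import numpy as np

def sensor_dropout(modalities, drop_prob=0.2, rng=None):
    """Zero out whole modalities at random, keeping at least one active."""
    rng = rng or np.random.default_rng()
    keys = list(modalities.keys())
    keep = [k for k in keys if rng.random() >= drop_prob]
    if not keep:                      # never drop every modality at once
        keep = [rng.choice(keys)]
    return {k: (v if k in keep else np.zeros_like(v))
            for k, v in modalities.items()}

def add_sensor_noise(modalities, sigma=0.05, rng=None):
    """Simulate sensor defectiveness by adding Gaussian noise to readings."""
    rng = rng or np.random.default_rng()
    return {k: v + rng.normal(0.0, sigma, size=v.shape)
            for k, v in modalities.items()}

# Example observation with the modalities named in the abstract
# (shapes are hypothetical placeholders).
obs = {
    "rangefinder":    np.random.rand(19),         # range-finder distances
    "proprioceptive": np.random.rand(10),         # speed, wheel spin, etc.
    "camera":         np.random.rand(64, 64, 3),  # front-facing RGB image
}
train_obs = sensor_dropout(obs)     # regularized input during PPO-SD training
noisy_obs = add_sensor_noise(obs)   # noisy input used to evaluate robustness

The fused observation (e.g. the concatenated or encoded modalities) would then be fed to the PPO policy network as usual; only the input pre-processing differs between the PPO and PPO-SD agents described in the abstract.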
Physical Description:xiv, 70 leaves : illustrations ; 30cm.
Bibliography:Includes bibliographical references (leaves 64-68).