Deep reinforcement learning for autonomous driving /

Bibliographic Details
Main Author: Osman, Hassan Abdalla Abdelkarim (Author)
Format: Thesis
Language: English
Published: Kuala Lumpur : Kulliyah of Engineering, International Islamic University Malaysia, 2019
Subjects:
Online Access:http://studentrepo.iium.edu.my/handle/123456789/4434
LEADER 032320000a22002890004500
008 191030s2019 my a f m 000 0 eng d
040 |a UIAM  |b eng  |e rda 
041 |a eng 
043 |a a-my--- 
100 1 |a Osman, Hassan Abdalla Abdelkarim,  |e author 
245 1 0 |a Deep reinforcement learning for autonomous driving /  |c by Hassan Abdalla Abdelkarim Osman 
264 1 |a Kuala Lumpur :  |b Kulliyah of Engineering, International Islamic University Malaysia,  |c 2019 
300 |a xiv, 70 leaves :  |b illustrations ;  |c 30 cm. 
336 |2 rdacontent  |a text 
347 |2 rdaft  |a text file  |b PDF 
502 |a Thesis (MSMCT)--International Islamic University Malaysia, 2019. 
504 |a Includes bibliographical references (leaves 64-68). 
520 |a Recently, both the automotive industry and research communities have directed attention towards Autonomous Driving (AD) to tackle issues such as traffic congestion and road accidents. End-to-end driving has gained interest because sensory inputs are mapped directly to controls. Machine learning approaches, particularly Deep Learning (DL), have been used for end-to-end driving; however, DL requires expensive labelling. Another approach is Deep Reinforcement Learning (DRL), but works that use DRL predominantly learn policies from a single input sensor modality, such as the image pixels of the state. The state-of-the-art DRL algorithm is Proximal Policy Optimization (PPO). One shortcoming of using PPO for autonomous driving with inputs from multiple sensors is poor robustness to sensor defectiveness or sensor failure, owing to naïve sensor fusion. This thesis investigates the use of a stochastic regularization technique named Sensor Dropout (SD) to address this shortcoming. Training and evaluation are carried out on a car racing simulator called TORCS. The inputs to the agent were captured from different sensing modalities, such as range-finders, proprioceptive sensors and a front-facing RGB camera, and are used to control the car's steering, acceleration and brakes. To simulate sensor defectiveness and sensor failure, Gaussian noise is added to the sensor readings and inputs from sensors are blocked, respectively. Results show that using regularization requires longer training time and yields lower training performance. However, in settings where sensor readings are noisy, the PPO-SD agent displayed better driving behaviour: the plain PPO agent suffered an approximately 59% drop in performance, in terms of rewards, compared to the PPO-SD agent. The same held in settings where sensor readings were blocked. 
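The abstract describes two mechanisms: Sensor Dropout, which randomly masks one sensing modality during training so the policy does not over-rely on any single sensor, and fault simulation, where Gaussian noise perturbs readings (defectiveness) or a modality is zeroed out entirely (failure). A minimal sketch of these two ideas is below; it is not the thesis code, and the modality names, dictionary layout and default parameters are illustrative assumptions only:

```python
import random

def sensor_dropout(observations, drop_prob=0.3, rng=random):
    """Sensor Dropout sketch: with probability drop_prob, zero out one
    randomly chosen modality before the observation reaches the policy.
    `observations` maps a modality name to a flat list of readings."""
    obs = {name: list(vals) for name, vals in observations.items()}
    if rng.random() < drop_prob:
        victim = rng.choice(sorted(obs))       # pick one modality to mask
        obs[victim] = [0.0] * len(obs[victim])  # blocked sensor => zeros
    return obs

def add_gaussian_noise(readings, sigma=0.1, rng=random):
    """Simulate sensor defectiveness by perturbing each reading with
    zero-mean Gaussian noise of standard deviation sigma."""
    return [r + rng.gauss(0.0, sigma) for r in readings]

# Illustrative TORCS-like observation (shapes are stand-ins, not real):
obs = {
    "rangefinder": [1.0, 2.0, 3.0],     # distances to track edges
    "proprioceptive": [0.5, 0.1],       # e.g. speed, wheel spin
    "camera": [0.2] * 8,                # flattened stand-in for RGB pixels
}
noisy = add_gaussian_noise(obs["rangefinder"], sigma=0.05)  # defective sensor
dropped = sensor_dropout(obs, drop_prob=1.0)                # forced failure
```

Zeroing a modality (rather than deleting its entries) keeps the input dimensionality fixed, which matters because a PPO policy network expects a constant-size observation vector.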
596 |a 1 
655 7 |a Theses, IIUM local 
690 |a Dissertations, Academic  |x Department of Mechatronics Engineering  |z IIUM 
710 2 |a International Islamic University Malaysia.  |b Department of Mechatronics Engineering 
856 4 |u http://studentrepo.iium.edu.my/handle/123456789/4434 
900 |a sz to aaz 
999 |c 440432  |d 473224 
952 |0 0  |6 XX(559368.1)  |7 0  |8 THESES  |9 762000  |a IIUM  |b IIUM  |c MULTIMEDIA  |g 0.00  |o XX(559368.1)  |p 11100409665  |r 1900-01-02  |t 1  |v 0.00  |y THESIS 
952 |0 0  |6 XX(559368.1) CD  |7 5  |8 THESES  |9 858761  |a IIUM  |b IIUM  |c MULTIMEDIA  |g 0.00  |o XX(559368.1) CD  |p 11100409666  |r 1900-01-02  |t 1  |v 0.00  |y THESISDIG