Advanced driver assistant system based on thermal imaging and machine learning

Bibliographic Details
Main Author: Cheah, Sengli
Format: Thesis
Language:English
Published: 2020
Subjects:
Online Access:http://eprints.utm.my/id/eprint/93032/1/CheahShengliMSKE2020.pdf
Description
Summary: According to research conducted by the Malaysian Institute of Road Safety Research (MIROS), human error contributes to up to 80% of road accidents. One answer to this problem is the autonomous car equipped with an Advanced Driver Assistance System (ADAS) that replaces manual human control. ADAS relies on sensor fusion, in which data from multiple sensors such as cameras, radar and LIDAR are collected and processed to handle different traffic situations, since each sensor has its own strengths and weaknesses. The SAE J3016 level 2 and level 3 automated vehicles defined by the Society of Automotive Engineers (SAE) do not include thermal imaging, but thermal sensors are expected to be widely adopted in the future. A thermal sensor senses heat instead of light, which allows an ADAS to keep operating in low-light environments, cluttered environments and inclement weather such as rain, fog and snow, where visible-light cameras struggle. With the rise of computer vision and deep learning, an ADAS can be equipped with a thermal sensor and a Convolutional Neural Network to detect vehicles and pedestrians on the road. YOLOv3 is used in this research because of its lower computing-power requirement, which allows easy deployment on compact, low-power embedded platforms. In addition, a lightweight variant called YOLOv3 Tiny is also used to achieve faster inference speed. The thermal dataset provided by FLIR is used to train both models on a PC, and the Nvidia Jetson TX2 is selected as the target deployment platform. Performance evaluation is conducted with different network sizes and colour channels to benchmark detection speed and accuracy. The YOLOv3 model in this work achieves a mAP of 52.09%, and on the embedded platform the single-channel YOLOv3 Tiny reaches up to 27 frames per second.
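The single-channel evaluation mentioned in the abstract implies feeding the network a one-channel thermal frame rather than a three-channel RGB image. A minimal sketch of that preprocessing step is shown below; it is not the thesis's actual pipeline, and the 416×416 network size, per-frame min-max normalisation and nearest-neighbour resize are all illustrative assumptions (the thesis benchmarks several network sizes, and a real YOLO pipeline would typically use a letterbox resize).

```python
import numpy as np

def preprocess_thermal(frame, net_size=416):
    """Prepare a raw thermal frame as a single-channel YOLO-style input blob.

    `frame` is assumed to be a 2-D array of 16-bit radiometric counts, as in
    the FLIR thermal dataset; `net_size` is a hypothetical square network
    input size.
    """
    frame = frame.astype(np.float32)
    # Contrast-stretch the raw counts to [0, 1]. Thermal sensors use the
    # 16-bit range unevenly, so simple per-frame min-max normalisation is
    # a common baseline choice.
    lo, hi = frame.min(), frame.max()
    frame = (frame - lo) / max(hi - lo, 1e-6)
    # Nearest-neighbour resize to a square network input (a stand-in for
    # the aspect-preserving letterbox resize a real YOLO pipeline uses).
    h, w = frame.shape
    rows = np.arange(net_size) * h // net_size
    cols = np.arange(net_size) * w // net_size
    resized = frame[rows][:, cols]
    # Add batch and channel axes: NCHW layout with one colour channel.
    return resized[np.newaxis, np.newaxis, :, :]

raw = np.random.randint(0, 65535, (512, 640), dtype=np.uint16)
blob = preprocess_thermal(raw)
print(blob.shape)  # (1, 1, 416, 416)
```

Keeping a single channel shrinks the first convolutional layer's input and the per-frame data volume, which is one plausible reason the single-channel YOLOv3 Tiny configuration reaches the highest frame rate on the Jetson TX2.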