Improving robotic grasping system using deep learning approach


Bibliographic Details
Main Author: Mohannad K. H., Farag
Format: Thesis
Language: English
Published: 2020
Online Access:http://umpir.ump.edu.my/id/eprint/34100/1/Improving%20robotic%20grasping%20system%20using%20deep.pdf
Description
Summary: Traditional robots can only move along pre-planned trajectories, which limits the range of applications they can be engaged in. Despite its long history, the use of computer vision for grasp prediction and object detection remains an active research area. However, generating a full grasp configuration for a target object is the main challenge in planning a successful physical robotic grasp. Integrating computer vision with tactile sensing feedback has given robots a new capability to accomplish various tasks. However, recent studies have used tactile sensing with grasp detection models to improve prediction accuracy, not physical grasp success. Thus, this research addresses the problem of detecting slip events for grasped objects of different weights. The research aimed to develop a Deep Learning grasp detection model and a slip detection algorithm and to integrate them into one innovative robotic grasping system. Using the proposed four-step data augmentation technique, in which 625 new instances with different grasp labels were generated per original image, the achieved grasping accuracy was 98.2%, exceeding the best reported results by almost 0.5%. In addition, the two-stage transfer learning technique improved the results obtained in the second stage by 0.3% compared to the first stage. For the physical robot grasp, the proposed seven-dimensional grasp representation method allows autonomous prediction of the grasp size and depth. The developed model achieved a prediction time of 74.8 milliseconds, which makes it suitable for real-time robotic applications. By observing the real-time feedback of a force-sensing resistor, the proposed slip detection algorithm showed a quick response within 86 milliseconds.
These results allowed the system to maintain its hold on target objects by immediately increasing the grasping force. The integration of the Deep Learning and slip detection models showed a significant improvement of 18.4% in the experimental grasps conducted on a SCARA robot. In addition, the Zerocross-Canny edge detector improved the robot positioning error by 0.27 mm compared to related studies. The achieved results introduce an innovative robotic grasping system with a Grasp-NoDrop-Place scheme.
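The abstract describes a slip detection algorithm that monitors force-sensing resistor (FSR) feedback and immediately raises the grasping force when a slip event is detected. The following is a minimal sketch of that decision logic only; the function names, thresholds, and force values are illustrative assumptions, not the implementation from the thesis:

```python
def run_slip_controller(fsr_readings, grip_force=1.0,
                        drop_threshold=0.15, force_step=0.2, max_force=3.0):
    """Return (final grip force, number of slip events) after scanning readings.

    A slip is flagged when an FSR reading falls by more than `drop_threshold`
    (as a fraction) relative to the previous reading; the controller responds
    by stepping the grip force up, capped at `max_force`.
    """
    slips = 0
    prev = None
    for reading in fsr_readings:
        if prev is not None and prev > 0 and (prev - reading) / prev > drop_threshold:
            slips += 1
            grip_force = min(grip_force + force_step, max_force)  # tighten grip
        prev = reading
    return grip_force, slips
```

In the thesis, the reported reaction time of such a loop is 86 ms; the sketch above only illustrates the slip decision and force-increase logic, not the real-time sensor interface or robot control.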