An Initialization Method of Deep Q-network for Learning Acceleration of Robotic Grasp

2020 
Generally, self-supervised learning of robotic grasping uses a model-free reinforcement learning method such as a Deep Q-network (DQN). A DQN uses a high-dimensional Q-network to infer dense pixel-wise probability maps of affordances for grasping actions, which usually leads to a time-consuming training process. Inspired by the initialization strategies of optimization algorithms, we propose an initialization method for accelerating self-supervised learning of robotic grasping. It pre-trains the Q-network by supervised learning of affordance maps before grasp training begins. With the pre-trained Q-network, a robot can be trained through self-supervised trial and error in a purposeful manner, avoiding meaningless grasps in empty regions. The Q-network is pre-trained by supervised learning on a small dataset with coarse-grained labels. We evaluate the proposed method with Mean Square Error, Smooth L1, and Kullback-Leibler Divergence (KLD) as loss functions in the pre-training phase. The results indicate that the KLD loss yields accurate affordance predictions with little noise in empty regions. Moreover, our method significantly accelerates self-supervised learning in the early stage and is largely insensitive to the sparsity of objects in the workspace.
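
As a rough illustration of the pre-training step described above, the following sketch shows how a pixel-wise affordance Q-network could be pre-trained with a KLD loss on coarse-grained affordance labels. This is a hedged toy example, not the authors' implementation: the use of PyTorch, the network architecture, and all names (AffordanceQNet, kld_pretrain_step) are assumptions made for illustration.

# Minimal sketch (assumptions: PyTorch; toy architecture and hypothetical names,
# not taken from the paper) of supervised pre-training of a pixel-wise affordance
# Q-network with a KL-divergence loss, before self-supervised grasp training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffordanceQNet(nn.Module):
    """Toy fully convolutional Q-network: RGB-D input -> 1-channel affordance map."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel grasp-affordance logits
        )

    def forward(self, x):
        return self.net(x)

def kld_pretrain_step(model, optimizer, images, label_maps):
    """One supervised pre-training step on coarse-grained affordance labels.

    images:     (B, 4, H, W) RGB-D observations
    label_maps: (B, 1, H, W) coarse, non-negative affordance annotations
    """
    logits = model(images)                                  # (B, 1, H, W)
    b = logits.size(0)
    # Treat each affordance map as a probability distribution over pixels.
    log_pred = F.log_softmax(logits.view(b, -1), dim=1)
    target = label_maps.view(b, -1)
    target = target / target.sum(dim=1, keepdim=True).clamp_min(1e-8)
    loss = F.kl_div(log_pred, target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: pre-train on a small labeled set, then reuse the weights to
# initialize the Q-network for the self-supervised grasping loop.
model = AffordanceQNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(2, 4, 64, 64)
labels = torch.rand(2, 1, 64, 64)
print(kld_pretrain_step(model, optimizer, images, labels))

Normalizing each label map into a per-pixel distribution is one plausible way to apply a KLD loss to dense affordance maps; the paper's exact formulation may differ.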