Autonomous Navigation in Complex Environments using Memory-Aided Deep Reinforcement Learning

2021 
Mobile robots have gained increasing importance in industrial tasks such as commissioning, delivery, and operation in hazardous environments. The ability to navigate unknown and complex environments is paramount in industrial robotics. Reinforcement learning approaches have shown remarkable success in dealing with unknown situations, reacting appropriately without manually engineered guidelines or overconservative measures. However, these approaches are often restricted to short-range navigation and are prone to local minima due to the lack of a memory module. Thus, navigation in complex environments such as mazes, long corridors, or concave areas remains an open frontier. In this paper, we incorporate a variety of recurrent neural networks to cope with these challenges. We train a reinforcement-learning-based agent within the 2D simulation environment of our previous work and extend it with a memory module. The agent navigates solely on sensor observations, which are mapped directly to actions. We evaluate the performance on several complex environments and achieve improved results compared to memory-free baseline approaches.
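
As a rough illustration of the idea, the sketch below (not the authors' implementation) shows how an actor that maps raw range-sensor observations to velocity commands can be extended with a recurrent memory module such as a GRU, so that the hidden state carries navigation history across time steps. The observation dimension, hidden size, and action dimension are illustrative assumptions.

```python
# Minimal sketch, assuming a 360-beam 2D laser scan and a 2D velocity command
# (linear + angular). Not the paper's architecture; it only illustrates adding
# a memory module (GRU) to an otherwise memory-free actor network.
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    def __init__(self, scan_size=360, hidden_size=128, action_size=2):
        super().__init__()
        # Feature extractor for a single laser-scan observation
        self.encoder = nn.Sequential(
            nn.Linear(scan_size, 256), nn.ReLU(),
            nn.Linear(256, hidden_size), nn.ReLU(),
        )
        # Memory module: keeps a hidden state across time steps so locally
        # identical observations (e.g. inside a dead end) can be distinguished
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # Map the recurrent state to bounded velocity commands
        self.policy = nn.Linear(hidden_size, action_size)

    def forward(self, scans, hidden=None):
        # scans: (batch, time, scan_size); hidden: (1, batch, hidden_size) or None
        features = self.encoder(scans)
        out, hidden = self.gru(features, hidden)
        actions = torch.tanh(self.policy(out))
        return actions, hidden

# Usage: roll the policy forward one observation at a time, carrying the hidden state
actor = RecurrentActor()
h = None
scan = torch.rand(1, 1, 360)   # one simulated scan
action, h = actor(scan, h)     # h now encodes the observation history
```

In such a setup the recurrent hidden state is what lets the agent avoid the local minima that trap purely reactive, memory-free policies in mazes or long corridors.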