Deep Reinforcement Learning for Instruction Following Visual Navigation in 3D Maze-like Environments
Abstract - In this work, we address the problem of visual navigation by instruction following. In this task, the robot must interpret a natural language instruction in order to follow a predefined path in a possibly unknown environment. Although different approaches have been proposed in recent years, they are all based on the assumption that the environment contains objects or other elements, such as houses or offices, that can be used to formulate instructions. In contrast, we focus on situations where objects in the environment cannot be used to specify a navigation path. In particular, we consider 3D maze-like environments as our testbed because they can be very large and offer very intricate structures. We show that without reference points, visual navigation and instruction following become rather challenging, and that standard approaches cannot be applied successfully. For this reason, we propose a new architecture that explicitly learns both visual navigation and instruction understanding. We demonstrate with simulated experiments that our method can effectively follow instructions and navigate in previously unseen mazes of various sizes.
Code: available soon