A Multi-Task Deep Learning Approach for Robust Obstacle Detection

Our research group belongs to the Robotics community, and we mainly work on Machine Learning and Computer Vision to enable robots to understand and interact with their environment. In recent years, we have developed a strong interest in themes connected to Human-Robot Interaction.

In this work, we give a new twist to monocular obstacle detection. Most existing approaches rely either on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles. Despite their success, these methods are not specifically devised for monocular obstacle detection; in particular, they are not robust to changes in appearance or camera intrinsics, nor to texture-less scenarios. To overcome these limitations, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth. The multi-task nature of this strategy strengthens both tasks: obstacle detection gains more reliable bounding boxes and range measures, while depth estimation gains robustness to scenario changes. We call this architecture J-MOD². We prove the effectiveness of our approach with experiments on sequences with different appearances and focal lengths. Furthermore, we show its benefits in a set of simulated navigation experiments in which a MAV explores an unknown scenario and plans safe trajectories using our detection model.
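The core idea of the joint architecture can be illustrated with a minimal sketch: a shared encoder feeds two task heads, one regressing obstacle bounding boxes and one regressing depth, trained under a single weighted objective. Everything below (the linear "encoder", the head shapes, and the weighting factor `lam`) is a hypothetical toy stand-in for illustration, not the actual J-MOD² network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared encoder: a single linear layer mapping a flattened image
# feature vector to a representation used by BOTH task heads.
W_shared = rng.normal(size=(16, 8))
W_det = rng.normal(size=(8, 4))    # detection head -> 4 box parameters
W_depth = rng.normal(size=(8, 1))  # depth head -> scalar range estimate

def forward(x):
    h = np.tanh(x @ W_shared)  # shared features for both tasks
    boxes = h @ W_det          # predicted bounding box (x, y, w, h)
    depth = h @ W_depth        # predicted obstacle distance
    return boxes, depth

def joint_loss(x, box_gt, depth_gt, lam=0.5):
    """Weighted multi-task objective: L = L_det + lam * L_depth."""
    boxes, depth = forward(x)
    l_det = np.mean((boxes - box_gt) ** 2)      # detection regression loss
    l_depth = np.mean((depth - depth_gt) ** 2)  # depth regression loss
    return l_det + lam * l_depth

x = rng.normal(size=(1, 16))
loss = joint_loss(x, np.zeros((1, 4)), np.zeros((1, 1)))
print(loss >= 0)  # True: the joint objective is a non-negative scalar
```

Because gradients from both losses flow back through `W_shared`, the encoder is pushed toward features useful for both tasks; this sharing is what the abstract credits for more reliable boxes and ranges and for depth estimates that transfer better across scenarios.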