Learning to estimate confidence measure for ego-motion estimations based on data-driven strategies



Abstract— Over the past few years, we have witnessed a considerable diffusion of data-driven Visual Odometry (VO) approaches as viable alternatives to standard geometry-based strategies. Their success is mainly related to their improved robustness to non-ideal image conditions (e.g., blur, high or low contrast, texture-poor scenarios). However, most data-driven state-of-the-art (SotA) approaches do not provide any information about the uncertainty of their estimates, which is crucial for effectively integrating them into robotic navigation systems. Inspired by these considerations, we propose Uncertainty-Aware Visual Odometry (UA-VO), a novel Deep Neural Network (DNN) architecture that computes relative pose predictions by processing sequences of images and, at the same time, provides uncertainty measures for those estimations. The confidence measure computed by UA-VO considers both epistemic and aleatoric uncertainty and accounts for heteroscedasticity, i.e., it is sample-dependent. We assess the benefits of UA-VO with different types of experiments on two publicly available datasets. In addition, we run tests on a third, brand-new set of sequences that we gathered and made available to the community.
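As a rough illustration of the uncertainty decomposition mentioned in the abstract (not the paper's actual implementation), the following sketch combines epistemic uncertainty, estimated from the spread of several stochastic forward passes (e.g., MC-dropout), with a network-predicted heteroscedastic aleatoric variance. The function name and array shapes are hypothetical.

```python
import numpy as np

def combine_uncertainties(mc_means, mc_log_vars):
    """Fuse T stochastic forward passes into one pose prediction with a
    total per-dimension uncertainty, using the common decomposition
    total = epistemic + aleatoric.

    mc_means:    (T, D) relative-pose predictions from T passes
    mc_log_vars: (T, D) predicted per-sample log-variances (heteroscedastic)
    """
    mean = mc_means.mean(axis=0)                  # final pose estimate
    epistemic = mc_means.var(axis=0)              # spread across the T passes
    aleatoric = np.exp(mc_log_vars).mean(axis=0)  # data-dependent noise term
    return mean, epistemic + aleatoric
```

Because the aleatoric term is predicted per input, the resulting confidence is sample-dependent, matching the heteroscedasticity property the abstract describes.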


Code and Dataset

The code and the dataset will be made available soon.