Self-supervised monocular image depth learning and confidence estimation.

Chen, L., Tang, W., Wan, T.R. and John, N.W., 2020. Self-supervised monocular image depth learning and confidence estimation. Neurocomputing, 381 (March), 272-281.

Full text available as:

PDF (Accepted Version): Neurocomputing-WenTang.pdf, 9MB
Available under License Creative Commons Attribution Non-commercial No Derivatives.

DOI: 10.1016/j.neucom.2019.11.038

Abstract

We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground truth annotation data required for training Convolutional Neural Networks (CNNs), which is often a challenging problem for the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel fully differentiable patch-based cost function based on the Zero-Mean Normalized Cross-Correlation (ZNCC), which uses multi-scale patches as the matching and learning strategy. This approach greatly increases the accuracy and robustness of depth learning. Because the patch-based cost function naturally provides a 0-to-1 confidence, it is also used to self-supervise the training of a parallel network for confidence map learning and estimation, exploiting the fact that ZNCC is a normalised measure of similarity that can be treated as an approximation of the confidence of the depth estimate. The confidence map is therefore learned and estimated in a self-supervised manner by a network running in parallel to the DepthNet. Evaluations on the KITTI depth prediction benchmark and the Make3D dataset show that our method outperforms state-of-the-art results.
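For illustration, the following is a minimal NumPy sketch of the Zero-Mean Normalized Cross-Correlation (ZNCC) between two image patches, which is the similarity measure named in the abstract. It is not the authors' implementation: the function names are hypothetical, the multi-scale patch sampling, differentiable integration into the cost function, and the confidence network are all omitted, and the rescaling of the score to a 0-to-1 value is one plausible mapping rather than the paper's definition.

```python
# Sketch of ZNCC between two equally sized image patches (assumption: NumPy arrays).
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-8) -> float:
    """Return the ZNCC score of two equally sized patches, in [-1, 1]."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a_zm = a - a.mean()          # zero-mean version of patch A
    b_zm = b - b.mean()          # zero-mean version of patch B
    denom = np.sqrt((a_zm ** 2).sum() * (b_zm ** 2).sum()) + eps
    return float((a_zm * b_zm).sum() / denom)

def zncc_confidence(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Rescale the ZNCC score from [-1, 1] to a 0-to-1 confidence-like value."""
    return 0.5 * (zncc(patch_a, patch_b) + 1.0)

# Example: a patch compared with a brightness-shifted copy of itself
# yields a ZNCC close to 1, since the measure is invariant to mean offset.
p = np.random.rand(7, 7)
print(zncc(p, p + 0.3), zncc_confidence(p, p + 0.3))
```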

Item Type: Article
ISSN: 0925-2312
Uncontrolled Keywords: monocular depth estimation; deep convolutional neural networks; confidence map
Group: Faculty of Science & Technology
ID Code: 33253
Deposited By: Symplectic RT2
Deposited On: 17 Jan 2020 15:23
Last Modified: 14 Mar 2022 14:19
