Intrinsic Depth: Improving Depth Transfer with Intrinsic Images

We formulate the estimation of dense depth maps from video sequences as a problem of intrinsic image estimation. Our approach synergistically integrates the estimation of multiple intrinsic images, including depth, albedo, shading, optical flow, and surface contours. We build upon an example-based framework for depth estimation that uses label transfer from a database of RGB and depth pairs. We combine this with a method that extracts consistent albedo and shading from video. In contrast to raw RGB values, albedo and shading provide a richer, more physical foundation for depth transfer. Additionally, we train a new contour detector that predicts surface boundaries from albedo, shading, and pixel values, and we use it to improve the estimation of depth boundaries. We also integrate sparse structure from motion with our method to improve the metric accuracy of the estimated depth maps. We evaluate our Intrinsic Depth method quantitatively by estimating depth from videos in the NYU RGB-D and SUN3D datasets. We find that combining the estimation of multiple intrinsic images improves depth estimation relative to the baseline method.
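The label-transfer core described above can be illustrated with a minimal sketch, under hypothetical simplifications: the database stores one feature vector and one depth map per example, and the query simply receives the depth of its nearest neighbor in feature space. The actual method instead warps and fuses several candidates and jointly optimizes over depth, albedo, shading, flow, and contours; all names below are illustrative.

```python
import numpy as np

def transfer_depth(query_feat, db_feats, db_depths):
    """Toy depth transfer: copy the depth of the nearest database
    example in feature space. (Hypothetical simplification; the paper
    warps and fuses multiple retrieved candidates.)"""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return db_depths[np.argmin(dists)]

# Toy database: each feature vector could, e.g., concatenate albedo
# and shading statistics rather than raw RGB values.
db_feats = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
db_depths = np.array([1.0, 2.0, 3.0])  # one "depth map" per example

depth = transfer_depth(np.array([0.9, 1.1]), db_feats, db_depths)
# The query is closest to the second example, so its depth is transferred.
```

The point of the sketch is only the transfer step: matching in an albedo/shading feature space instead of RGB is what the abstract argues provides a more physical foundation.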

Author(s): Naejin Kong and Michael J. Black
Book Title: IEEE International Conference on Computer Vision (ICCV)
Pages: 3514--3522
Year: 2015
Month: December

Department(s): Perceiving Systems
Research Project(s): Intrinsic Depth
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

Event Name: International Conference on Computer Vision (ICCV)

Links: pdf, suppmat, YouTube, official video, poster
BibTex

@inproceedings{Kong:ICCV:2015,
  title = {Intrinsic Depth: Improving Depth Transfer with Intrinsic Images},
  author = {Kong, Naejin and Black, Michael J.},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  pages = {3514--3522},
  month = dec,
  year = {2015}
}