Layered segmentation and optical flow estimation over time


Conference Paper


Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.
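The abstract describes an optimizer that alternates discrete labeling of a sequence of depth-ordered MRFs with continuous motion refinement. The paper's own graph-cut "moves" are not reproduced here; as a toy illustration of the discrete half only, the sketch below computes the exact MAP layer assignment of a Potts MRF on a single 1D scanline by dynamic programming (standing in for graph cuts, which handle the 2D case), using synthetic unary costs:

```python
import numpy as np

def mrf_1d_map(data_cost, smooth_lambda):
    """Exact MAP labeling of a 1D Potts MRF via dynamic programming.

    data_cost: (n_pixels, n_labels) array of unary costs (e.g. how badly
               each pixel's motion fits each candidate layer).
    smooth_lambda: Potts penalty for adjacent pixels in different layers.
    Returns the minimum-energy label (layer index) per pixel.
    """
    n, n_labels = data_cost.shape
    cost = data_cost[0].copy()                 # best energy at pixel 0 per label
    back = np.zeros((n, n_labels), dtype=int)  # backpointers for recovery
    for i in range(1, n):
        # Potts transition: keep the previous label (cost unchanged) or
        # switch to the globally cheapest predecessor plus the penalty.
        stay = cost
        switch = cost.min() + smooth_lambda
        prev_best = int(cost.argmin())
        back[i] = np.where(stay <= switch, np.arange(n_labels), prev_best)
        cost = np.minimum(stay, switch) + data_cost[i]
    # Backtrack from the cheapest final label to recover the labeling.
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

# Six pixels, two layers: the left half prefers layer 0, the right layer 1.
D = np.array([[0., 2.], [0., 2.], [0., 2.], [2., 0.], [2., 0.], [2., 0.]])
print(mrf_1d_map(D, smooth_lambda=1.0))   # one clean boundary: [0 0 0 1 1 1]
print(mrf_1d_map(D, smooth_lambda=10.0))  # heavy smoothing: a single constant layer
```

With a small smoothness weight the boundary follows the data; with a large one, the Potts term suppresses the layer switch entirely, which is the basic segmentation-versus-smoothness trade-off the paper's richer multi-frame model builds on.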

Author(s): Sun, D. and Sudderth, E. and Black, M. J.
Book Title: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)
Pages: 1768--1775
Year: 2012
Publisher: IEEE

Department(s): Perceiving Systems
Research Project(s): Layers, Time and Segmentation
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference


@inproceedings{Sun:CVPR:2012,
  title = {Layered segmentation and optical flow estimation over time},
  author = {Sun, D. and Sudderth, E. and Black, M. J.},
  booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  pages = {1768--1775},
  publisher = {IEEE},
  year = {2012}
}