Video segmentation via object flow


Conference Paper


Video object segmentation is challenging due to fast-moving objects, deforming shapes, and cluttered backgrounds. Optical flow can be used to propagate an object segmentation over time but, unfortunately, flow is often inaccurate, particularly around object boundaries. Such boundaries are precisely where we want our segmentation to be accurate. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multiscale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process object flow and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme. Experiments on the SegTrack v2 and Youtube-Objects datasets show that the proposed algorithm performs favorably against other state-of-the-art methods.
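The core idea in the abstract, estimating flow independently in the segmented foreground and background and then using the object's flow to propagate the mask to the next frame, can be illustrated with a small toy sketch. This is an assumption-laden simplification, not the paper's actual multiscale objective: "flow" here is a single brute-force integer translation per region, and all function names are illustrative.

```python
import numpy as np

# Toy sketch of the "object flow" idea: estimate motion separately in the
# segmented foreground and the background, then propagate the segmentation
# to the next frame with the object's motion. The brute-force integer
# translation stands in for real optical flow; it is not the paper's method.

def estimate_translation(f0, f1, region, max_shift=2):
    """Find the integer shift (dy, dx) minimizing SSD over `region` pixels."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(f1, (-dy, -dx), axis=(0, 1))
            err = np.sum((f0 - shifted)[region] ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def object_flow_step(f0, f1, mask):
    fg = estimate_translation(f0, f1, mask)    # motion inside the object
    bg = estimate_translation(f0, f1, ~mask)   # motion of the background
    # propagate the segmentation to the next frame using the object's motion
    next_mask = np.roll(mask, fg, axis=(0, 1))
    return fg, bg, next_mask

rng = np.random.default_rng(0)
texture = rng.uniform(0, 100, (10, 10))        # static textured background
f0, f1 = texture.copy(), texture.copy()
f0[2:5, 2:5] = 100.0                           # object in frame 0
f1[3:6, 3:6] = 100.0                           # same object, moved by (1, 1)
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:5] = True                          # segmentation in frame 0

fg, bg, next_mask = object_flow_step(f0, f1, mask)
print(fg, bg)  # the object moved by (1, 1); the background is static
```

In the paper this alternation is iterated: the propagated segmentation refines the per-region flow, and the refined flow in turn improves the segmentation, especially at object boundaries where a single global flow field tends to fail.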

Author(s): Yi-Hsuan Tsai and Ming-Hsuan Yang and Michael J. Black
Book Title: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)
Year: 2016
Month: June

Department(s): Perceiving Systems
Research Project(s): Dense Optical Flow
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

Event Name: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016



@inproceedings{Tsai:CVPR:2016,
  title = {Video segmentation via object flow},
  author = {Tsai, Yi-Hsuan and Yang, Ming-Hsuan and Black, Michael J.},
  booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2016},
  month_numeric = {6}
}