Perceiving Systems, Computer Vision

Predicting 3D people from 2D pictures

2006

Conference Paper


We propose a hierarchical process for inferring the 3D pose of a person from monocular images. First we infer a learned view-based 2D body model from a single image using non-parametric belief propagation. This approach integrates information from bottom-up body-part proposal processes and deals with self-occlusion to compute distributions over limb poses. Then, we exploit a learned Mixture of Experts model to infer a distribution of 3D poses conditioned on 2D poses. This approach is more general than recent work on inferring 3D pose directly from silhouettes, since the 2D body model provides a richer representation that includes the 2D joint angles and the poses of limbs that may be unobserved in the silhouette. We demonstrate the method in a laboratory setting where we evaluate the accuracy of the 3D poses against ground truth data. We also estimate 3D body pose in a monocular image sequence. The resulting 3D estimates are sufficiently accurate to serve as proposals for the Bayesian inference of 3D human motion over time.
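
The second stage of the abstract, lifting a 2D pose to a distribution over 3D poses with a Mixture of Experts, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the dimensions, number of experts, and randomly initialised parameters are all illustrative assumptions standing in for quantities that would be learned from motion-capture training data.

import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 14 joints in 2D (28 values), 14 joints in 3D (42 values), 4 experts.
D2, D3, K = 28, 42, 4

# Randomly initialised expert regressors and gating weights stand in for learned parameters.
W_experts = rng.normal(scale=0.1, size=(K, D3, D2))   # per-expert linear maps from 2D to 3D pose
b_experts = rng.normal(scale=0.1, size=(K, D3))
W_gate = rng.normal(scale=0.1, size=(K, D2))          # gating network: which expert fits this 2D input
b_gate = np.zeros(K)

def predict_3d_distribution(x2d):
    """Return per-expert 3D pose means and mixing weights for a 2D pose vector."""
    logits = W_gate @ x2d + b_gate
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()                               # softmax gating weights
    means = np.einsum('kij,j->ki', W_experts, x2d) + b_experts
    return means, gates                                # mixture components over 3D poses

x2d = rng.normal(size=D2)                              # a dummy 2D pose vector
means, gates = predict_3d_distribution(x2d)
x3d_expected = gates @ means                           # mixture mean as a single point estimate
print(x3d_expected.shape)                              # (42,)

In the paper's setting the full mixture (not just its mean) is the useful output, since the component means act as multiple 3D hypotheses that can serve as proposals for tracking over time.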

Award: Best Paper
Author(s): Sigal, L. and Black, M. J.
Book Title: Proc. IV Conf. on Articulated Motion and Deformable Objects (AMDO)
Volume: LNCS 4069
Pages: 185--195
Year: 2006
Month: July

Department(s): Perceiving Systems
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

DOI: 10.1007/11789239_19

Award Paper: Best Paper

Links: pdf
pdf from publisher
Video

BibTex

@inproceedings{Sigal:AMDO:2006,
  title = {Predicting {3D} people from {2D} pictures},
  author = {Sigal, L. and Black, M. J.},
  booktitle = {Proc. IV Conf. on Articulated Motion and Deformable Objects (AMDO)},
  volume = {LNCS 4069},
  pages = {185--195},
  month = jul,
  year = {2006},
  doi = {10.1007/11789239_19}
}