Department Talks

Learning from Synthetic Humans

Talk
  • 04 May 2017 • 15:00–16:00
  • Gul Varol
  • N3.022 (Greenhouse)

Estimating human pose, shape, and motion from images and video is a fundamental challenge with many applications. Recent advances in 2D human pose estimation use large amounts of manually labeled training data for learning convolutional neural networks (CNNs). Such data is time-consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth, and motion is impractical. In this work we present SURREAL: a new large-scale dataset with synthetically generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground-truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
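
As a minimal sketch only (not the authors' code), the snippet below shows what training a CNN for human part segmentation on synthetic image/mask pairs can look like in PyTorch. The network size, the 15-class body-part label set, and the random stand-in batch are all assumptions for illustration.

    import torch
    import torch.nn as nn

    NUM_PARTS = 15  # assumed: background plus 14 body parts

    class TinySegNet(nn.Module):
        """Deliberately small encoder-decoder; the real work uses a larger CNN."""
        def __init__(self, num_classes=NUM_PARTS):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinySegNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Stand-in batch: in practice the images are synthetic renders, which
    # carry pixel-perfect segmentation masks (and depth) for free.
    images = torch.randn(4, 3, 64, 64)
    masks = torch.randint(0, NUM_PARTS, (4, 64, 64))

    optimizer.zero_grad()
    logits = model(images)           # (B, NUM_PARTS, H, W)
    loss = criterion(logits, masks)  # per-pixel classification loss
    loss.backward()
    optimizer.step()

The point the dataset enables is the data setup: every rendered frame comes with exact per-pixel labels, so no manual annotation is needed.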

Organizers: Dimitris Tzionas

  • Vittorio Ferrari

Vision is a crucial sense for computational systems to interact with their environments as biological systems do. A major task is interpreting images of complex scenes, by recognizing and localizing objects, persons and actions. This involves learning a large number of visual models, ideally autonomously.

In this talk I will present two ways of reducing the amount of human supervision required by this learning process. The first way is labeling images only by the object class they contain. Learning from cluttered images is very challenging in this weakly supervised setting. In the traditional paradigm, each class is learned from scratch. In our work, instead, knowledge that is generic across classes is first learned during a meta-training stage from images of diverse classes with given object locations, and is then used to support learning any new class without location annotation. Generic knowledge helps because during meta-training the system can learn about localizing objects in general. As demonstrated experimentally, this approach enables learning from more challenging images than previously possible, such as PASCAL VOC 2007, which contains extensive clutter and large scale and appearance variations between object instances.
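
To make the meta-training idea concrete, here is a schematic sketch (an illustration under assumed features and names, not the actual system): a class-generic objectness scorer, learned beforehand from classes with annotated locations, picks the most object-like window in each weakly labeled image, and a standard classifier for the new class is then trained on the selected windows.

    import numpy as np
    from sklearn.svm import LinearSVC

    def localize_with_generic_knowledge(window_feats, generic_scorer):
        """Pick the window that a class-generic scorer rates most object-like."""
        scores = window_feats @ generic_scorer  # class-independent objectness
        return int(np.argmax(scores))

    rng = np.random.default_rng(0)
    # Stand-in for the generic knowledge learned in the meta-training stage.
    generic_scorer = rng.normal(size=128)

    # Weakly supervised images of a new class: many candidate windows each,
    # labeled only "contains the class" (1) or "does not" (0) per image.
    X, y = [], []
    for image_label in [1, 1, 1, 0, 0, 0]:
        window_feats = rng.normal(size=(50, 128))  # 50 candidate windows
        best = localize_with_generic_knowledge(window_feats, generic_scorer)
        X.append(window_feats[best])  # pseudo-localization, no box labels used
        y.append(image_label)

    detector = LinearSVC().fit(np.array(X), np.array(y))  # new-class model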

The second way is the analysis of news items consisting of images and text captions. We associate names and action verbs in the captions with the face and body pose of the persons in the images. We introduce a joint probabilistic model for simultaneously recovering image-caption correspondences and learning appearance models for the face and pose classes occurring in the corpus. As demonstrated experimentally, this joint "face and pose" model solves the correspondence problem better than earlier models covering only the face.
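
The alternation behind such a joint model can be sketched as an EM-style loop (a generic illustration, not the paper's exact probabilistic formulation): assign caption names to detected persons given the current appearance models, then refit each name's model from its assignments. In the real work the appearance models cover both face and pose; here a single feature vector per person stands in for both, and the data are synthetic.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def fit_correspondences(items, names, dim=64, iters=10, seed=0):
        rng = np.random.default_rng(seed)
        models = {n: rng.normal(size=dim) for n in names}  # one model per name
        for _ in range(iters):
            assignments = []
            for caption_names, person_feats in items:
                # Assignment step: one-to-one matching of names to detections,
                # scored by similarity to the current appearance models.
                cost = -np.array([[models[n] @ f for f in person_feats]
                                  for n in caption_names])
                rows, cols = linear_sum_assignment(cost)
                assignments.append([(caption_names[r], c)
                                    for r, c in zip(rows, cols)])
            # Update step: each name's model becomes the mean of its features.
            for name in names:
                feats = [items[i][1][c]
                         for i, a in enumerate(assignments)
                         for n, c in a if n == name]
                if feats:
                    models[name] = np.mean(feats, axis=0)
        return models

    # Toy corpus: each item is (names in caption, features of detected persons).
    rng = np.random.default_rng(1)
    items = [(["name_a", "name_b"], rng.normal(size=(3, 64))),
             (["name_b"], rng.normal(size=(2, 64)))]
    models = fit_correspondences(items, names=["name_a", "name_b"])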

I will conclude with an outlook on the idea of visual culture, where new visual concepts are learned incrementally on top of all visual knowledge acquired so far. Besides generic knowledge, visual culture also includes knowledge specific to a class, knowledge of scene structures, and other forms of visual knowledge. Potentially, this approach could considerably extend current visual recognition capabilities and produce an integrated body of visual knowledge.


  • Jim Little

I will survey our work on tracking and measurement in sports video, waypoints on the path to activity recognition and understanding. I will highlight some of our recent work on rectification and player tracking, not just in hockey but more recently in basketball, where we have addressed player identification in both fully supervised and semi-supervised settings.
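
Rectification here means estimating the mapping from broadcast-frame pixels to rink or court coordinates, so player positions become measurable quantities. Below is a minimal sketch with OpenCV, assuming four known landmark correspondences (the coordinates are invented for illustration):

    import cv2
    import numpy as np

    # Pixel positions of known rink/court landmarks in one frame (assumed)...
    image_pts = np.float32([[120, 400], [820, 390], [900, 620], [60, 640]])
    # ...and the same landmarks in court coordinates (metres, assumed model).
    court_pts = np.float32([[0, 0], [28, 0], [28, 15], [0, 15]])

    H, _ = cv2.findHomography(image_pts, court_pts)

    # Map a tracked player's foot position from the frame onto the court model.
    player_px = np.float32([[[450, 500]]])  # OpenCV expects shape (N, 1, 2)
    player_court = cv2.perspectiveTransform(player_px, H)
    print(player_court)  # court-plane position, usable for measurement

Real systems estimate the homography per frame from many tracked correspondences; the single static frame here only shows the transform itself.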


  • Trevor Darrell

Methods for visual recognition have made dramatic strides in recent years on various online benchmarks, but performance in the real world still often falters. Classic gradient-histogram models make overly simplistic assumptions regarding image appearance statistics, both locally and globally. Recent progress suggests that new learning-based representations can improve recognition by devices embedded in the physical world.

I'll review new methods for domain adaptation that capture the visual domain shift between environments and improve recognition of objects in specific places when models are trained from generic online sources. I'll also discuss methods for cross-modal semi-supervised learning, which can leverage additional unlabeled modalities in a test environment.
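
As a hedged illustration of what capturing domain shift can mean in the simplest case, the sketch below uses CORAL-style second-order statistics alignment (a stand-in technique, not necessarily the methods of this talk): source features are whitened and then recolored with the target domain's covariance before a classifier is trained.

    import numpy as np

    def coral(source, target, eps=1e-5):
        """Whiten source features, then recolor with the target covariance."""
        cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
        ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

        def sqrt_m(m, inv=False):
            # Matrix square root via eigendecomposition (m is SPD here).
            vals, vecs = np.linalg.eigh(m)
            vals = 1.0 / np.sqrt(vals) if inv else np.sqrt(vals)
            return vecs @ np.diag(vals) @ vecs.T

        return source @ sqrt_m(cs, inv=True) @ sqrt_m(ct)

    rng = np.random.default_rng(0)
    web_feats = rng.normal(size=(200, 16))              # generic online source
    env_feats = 2.0 * rng.normal(size=(150, 16)) + 1.0  # specific environment
    adapted = coral(web_feats, env_feats)  # train on these, deploy on-site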

Finally, as time permits, I'll present recent results on learning hierarchical local image representations based on recursive probabilistic topic models, on learning strong object color models from sets of uncalibrated views using a new multi-view color constancy paradigm, and/or on monocular estimation of grasp affordances.


  • Stan Sclaroff

In the first part of the talk, I will describe methods that learn a single family of detectors for object classes that exhibit large within-class variation. One common solution is to use a divide-and-conquer strategy, where the space of possible within-class variations is partitioned, and different detectors are trained for different partitions.

However, these discrete partitions tend to be arbitrary in continuous spaces, and the classifiers have limited power when there are too few training samples in each subclass. To address this shortcoming, explicit feature sharing has been proposed, but it also makes training more expensive. We show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be solved jointly using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation. The multiplicative formulation enables feature sharing among foreground training samples implicitly; the optimal sharing emerges as a byproduct of SVM learning.

The resulting detector family is tuned to specific variations in the foreground. The effectiveness of this framework is demonstrated in experiments that involve detection, tracking, and pose estimation of human hands, faces, and vehicles in video.
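
The multiplicative form described above is easy to sketch with a precomputed-kernel SVM. Everything below is a toy stand-in (RBF kernels, random features, a scalar pose factor) meant only to show the structure K((x, theta), (x', theta')) = K_app(x, x') * K_pose(theta, theta'); it is not the talk's actual kernels or data.

    import numpy as np
    from sklearn.svm import SVC

    def rbf(a, b, gamma):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    def multiplicative_kernel(Xa, Ta, Xb, Tb):
        # K((x, theta), (x', theta')) = K_app(x, x') * K_pose(theta, theta')
        return rbf(Xa, Xb, gamma=0.5) * rbf(Ta, Tb, gamma=2.0)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))         # appearance features
    T = rng.uniform(0, 1, size=(100, 1))  # latent within-class (pose) factor
    y = rng.integers(0, 2, size=100)      # foreground (1) vs. background (0)

    G = multiplicative_kernel(X, T, X, T)
    svm = SVC(kernel="precomputed").fit(G, y)

    # Evaluating new windows at specific poses yields pose-tuned detectors:
    # the pose kernel reweights which training samples influence each score.
    G_test = multiplicative_kernel(X[:5], T[:5], X, T)
    print(svm.predict(G_test))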