Department Talks

Appearance Modeling for 4D Multi-view Representations

Talk
  • 15 December 2017 • 12:00–12:45
  • Vagia Tsiminaki
  • PS Seminar Room (N3.022)

The emergence of multi-view capture systems has yielded a tremendous amount of video sequences. The task of capturing spatio-temporal models from real-world imagery (4D modeling) should arguably benefit from this enormous visual information. To achieve highly realistic representations, both geometry and appearance need to be modeled with high precision. Yet even with the great progress in geometric modeling, the appearance aspect has not been fully explored and visual quality can still be improved. I will explain how we can optimally exploit the redundant visual information of the captured video sequences and provide a temporally coherent, super-resolved, view-independent appearance representation. I will further discuss how to exploit the interdependency of geometry and appearance as separate modalities to enhance visual perception, and finally how to decompose appearance representations into intrinsic components (shading & albedo) and super-resolve them jointly to allow for more realistic renderings.
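
As a rough illustration of the core idea, here is a minimal Python sketch that treats super-resolved texture estimation as a linear inverse problem: each observed frame is modeled as a warp-plus-downsample of one common texture, and a least-squares objective pools the redundant views. The operators in warps, the Tikhonov regulariser and the plain gradient descent are simplifying assumptions for illustration; the actual method models image formation and temporal coherence far more carefully.

    import numpy as np

    def estimate_texture(observations, warps, lam=0.1, lr=0.1, iters=200):
        # Model: y_i ~ A_i x, where x is the common high-resolution texture
        # (flattened), y_i an observed frame, and A_i a hypothetical matrix
        # that warps and downsamples x into view i. Minimising
        #   sum_i ||A_i x - y_i||^2 + lam * ||x||^2
        # pools all views into one super-resolved texture.
        x = np.zeros(warps[0].shape[1])
        for _ in range(iters):
            g = lam * x
            for A, y in zip(warps, observations):
                g += A.T @ (A @ x - y)    # per-view data-term gradient
            x -= lr * g                   # lr must suit the operator norms
        return x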

Organizers: Despoina Paschalidou

Reconstructing and Perceiving Humans in Motion

Talk
  • 30 November 2017 • 15:00
  • Dr. Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies. Currently, digital models typically lack realistic soft tissue and clothing, or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images, and depth and inertial sensors. We combine statistical machine learning techniques and physics-based simulation to create realistic models from data. We then use such models to extract information out of incomplete and noisy sensor data from monocular video, depth cameras or IMUs. I will give an overview of a selection of projects conducted in Perceiving Systems in which we build realistic models of human pose, shape, soft tissue and clothing. I will also present some of our recent work on 3D reconstruction of people models from monocular video, real-time fusion and online human body shape estimation from depth data, and recovery of human pose in the wild from video and IMUs. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.

Organizers: Melanie Feldhofer


  • Christoph Mayer
  • S2 Seminar Room (S 2.014)

Variational image processing translates image processing tasks into optimisation problems. The practical success of this approach depends on the type of optimisation problem and on the properties of the ensuing algorithm. A recent breakthrough was to realise that old first-order optimisation algorithms based on operator splitting are particularly suited for modern data analysis problems. Operator splitting techniques decouple complex optimisation problems into many smaller and simpler sub-problems. In this talk I will revisit the variational segmentation problem and a common family of algorithms to solve such optimisation problems. I will show that operator splitting leads to a divide-and-conquer strategy that makes it possible to derive simple and massively parallel updates suitable for GPU implementations. The technique decouples the likelihood from the prior term and allows one to use a data-driven model that estimates the likelihood from data, for example via deep learning. Using a different decoupling strategy together with general consensus optimisation leads to fully distributed algorithms especially suitable for large-scale segmentation problems. Motivating applications are 3D yeast-cell reconstruction and segmentation of histology data.
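
To make the splitting idea concrete, below is a minimal Python sketch of a primal-dual solver (Chambolle-Pock style) for binary TV-regularised segmentation, min over u in [0,1] of <u,f> + lam*TV(u). The data term f can come from any learned likelihood model, which is exactly the decoupling described above, and every update touches each pixel independently, hence the suitability for GPUs. This is a generic textbook instance, not the specific algorithms of the talk.

    import numpy as np

    def grad(u):
        # Forward differences with Neumann boundary (last row/column zero).
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        # Negative adjoint of grad.
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:] = px[:, 1:] - px[:, :-1]
        dy[0, :] = py[0, :]; dy[1:, :] = py[1:, :] - py[:-1, :]
        return dx + dy

    def tv_segment(f, lam=1.0, tau=0.25, sigma=0.25, iters=300):
        # f: per-pixel negative log-likelihood ratio from any data model.
        u = np.zeros_like(f); u_bar = u.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad(u_bar)                    # prior sub-problem:
            px += sigma * gx; py += sigma * gy      # dual ascent ...
            scale = np.maximum(1.0, np.hypot(px, py) / lam)
            px /= scale; py /= scale                # ... and projection
            u_new = np.clip(u + tau * (div(px, py) - f), 0.0, 1.0)
            u_bar = 2.0 * u_new - u                 # over-relaxation
            u = u_new
        return u > 0.5                              # threshold relaxation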

Organizers: Benjamin Coors


3D lidar mapping: an accurate and performant approach

Talk
  • 20 October 2017 • 11:30–12:30
  • Michiel Vlaminck
  • PS Seminar Room (N3.022)

In my talk I will present my work regarding 3D mapping using lidar scanners. I will give an overview of the SLAM problem and its main challenges: robustness, accuracy and processing speed. Regarding robustness and accuracy, we investigate a better point cloud representation based on resampling and surface reconstruction. Moreover, we demonstrate how it can be incorporated in an ICP-based scan matching technique. Finally, we elaborate on globally consistent mapping using loop closures. Regarding processing speed, we propose the integration of our scan matching in a multi-resolution scheme and a GPU-accelerated implementation using our programming language Quasar.
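
For readers unfamiliar with ICP, the following minimal Python sketch shows the basic point-to-point variant that scan matching builds on; the pipeline described in the talk adds resampling, surface reconstruction, a multi-resolution scheme and a GPU implementation in Quasar, none of which are reflected here.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, tgt, iters=30):
        # Align point cloud src (N,3) to tgt (M,3); returns R, t with
        # aligned = src @ R.T + t. Point-to-point variant without outlier
        # rejection -- a textbook baseline, not the talk's method.
        tree = cKDTree(tgt)
        R_tot, t_tot = np.eye(3), np.zeros(3)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)              # correspondence step
            m = tgt[idx]
            mu_s, mu_t = cur.mean(0), m.mean(0)
            H = (cur - mu_s).T @ (m - mu_t)
            U, _, Vt = np.linalg.svd(H)           # Kabsch: closed-form R
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = mu_t - R @ mu_s
            cur = cur @ R.T + t
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot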

Organizers: Simon Donne


  • Slobodan Ilic and Mira Slavcheva
  • PS Seminar Room (N3.022)

In this talk we will address the problem of 3D reconstruction of rigid and deformable objects from a single depth video stream. Traditional 3D registration techniques, such as ICP and its variants, are widespread and effective, but sensitive to initialization and noise due to the underlying correspondence estimation procedure. Therefore, we have developed SDF-2-SDF, a dense, correspondence-free method which aligns a pair of implicit representations of scene geometry, e.g. signed distance fields, by minimizing their direct voxel-wise difference. In its rigid variant, we apply it for static object reconstruction via real-time frame-to-frame camera tracking and subsequent multiview pose optimization, achieving higher accuracy and a wider convergence basin than ICP variants. Its extension to scene reconstruction, SDF-TAR, carries out the implicit-to-implicit registration over several limited-extent volumes anchored in the scene and runs simultaneous GPU tracking and CPU refinement, with a lower memory footprint than other SLAM systems. Finally, to handle non-rigidly moving objects, we incorporate the SDF-2-SDF energy in a variational framework, regularized by a damped approximately Killing vector field. The resulting system, KillingFusion, is able to reconstruct objects undergoing topological changes and fast inter-frame motion in near real time.
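
The central SDF-2-SDF idea, a direct voxel-wise difference between two signed distance fields, can be sketched in a few lines of Python. The rotation-vector pose parameterisation, the trilinear resampling via map_coordinates and the generic optimiser below are illustrative assumptions; the published method uses truncated, weighted SDFs and analytic derivatives.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.spatial.transform import Rotation

    def sdf2sdf_energy(phi_ref, phi_cur, pose):
        # E(pose) = 0.5 * sum over voxels (phi_ref(x) - phi_cur(R x + t))^2,
        # with pose = [rx, ry, rz, tx, ty, tz] in voxel units.
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        grid = np.stack(np.meshgrid(*map(np.arange, phi_ref.shape),
                                    indexing="ij"), -1).reshape(-1, 3)
        warped = grid @ R.T + pose[3:]        # rigidly move voxel centres
        samples = map_coordinates(phi_cur, warped.T, order=1, cval=1.0)
        return 0.5 * np.sum((phi_ref.ravel() - samples) ** 2)

    # Usage sketch: recover the relative pose with a generic optimiser.
    # from scipy.optimize import minimize
    # pose = minimize(lambda p: sdf2sdf_energy(phi_ref, phi_cur, p),
    #                 np.zeros(6)).x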

Organizers: Fatma Güney


  • Anton Van Den Hengel
  • Aquarium

Visual Question Answering is one of the applications of deep learning that is pushing towards real artificial intelligence. It turns the typical deep learning process around by defining the task to be carried out only after training has taken place, which changes the task fundamentally. We have developed a range of strategies for incorporating other information sources into deep-learning-based methods, and in the process have taken a step towards developing algorithms that learn how to use other algorithms to solve a problem, rather than solving it directly. This talk thus covers some of the high-level questions about the types of challenges deep learning can be applied to, and how we might separate the things it's good at from those it's not.

Organizers: Siyu Tang


  • Yeara Kozlov
  • Aquarium

Creating convincing human facial animation is challenging. Face animation is often hand-crafted by artists separately from body motion. Alternatively, if the face animation is derived from motion capture, it is typically performed while the actor is relatively still. Recombining the isolated face animation with body motion is non-trivial and often leads to uncanny results if the body dynamics are not properly reflected on the face (e.g. cheeks wiggling when running). In this talk, I will discuss the challenges of human soft tissue simulation and control. I will then present our method for adding physical effects to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method can combine facial animation and rigid body motion consistently while preserving the original animation as closely as possible. Our novel simulation framework uses the original animation as per-frame rest poses without adding spurious forces. We also propose the concept of blendmaterials to give artists an intuitive means to control the material properties that change with muscle activation.
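
As background, a linear blendshape rig combines a neutral mesh with weighted per-expression offsets; the Python sketch below shows this standard model. The blendmaterial idea can be pictured analogously, with material parameters blended by the same activations; the second function is a hypothetical illustration of that analogy, not the paper's formulation.

    import numpy as np

    def blendshape_face(neutral, deltas, weights):
        # Standard rig: vertices = neutral + sum_i w_i * delta_i.
        # neutral: (V,3); deltas: (K,V,3); weights: (K,).
        return neutral + np.tensordot(weights, deltas, axes=1)

    def blend_materials(base_stiffness, stiffness_deltas, weights):
        # Hypothetical blendmaterial analogue: per-vertex stiffness that
        # varies with the same blendshape activations (illustrative only).
        return base_stiffness + np.tensordot(weights, stiffness_deltas, axes=1)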

Organizers: Timo Bolkart


Deep Learning for stereo matching and related tasks

Talk
  • 12 July 2017 • 11:00 12:00
  • Matteo Poggi
  • PS Seminar Room (N3.022)

Recently, deep learning has proven successful also on low-level vision tasks such as stereo matching. Another recent trend in this field is confidence measures, which have become increasingly effective when coupled with random forest classifiers or CNNs. Despite their excellent accuracy in outlier detection, few other applications rely on them. In the first part of the talk, we'll take a look at the latest proposals in terms of confidence measures for stereo matching, as well as at some novel methodologies exploiting these very accurate cues. In the second part, we'll talk about GC-Net, a deep network currently representing the state of the art on the KITTI datasets, and its extension to motion stereo processing.
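
As a concrete example of a hand-crafted confidence measure of the kind that learned approaches build on, the Python sketch below computes the classical peak-ratio cue from a stereo cost volume; it is a generic baseline, not one of the talk's novel proposals.

    import numpy as np

    def peak_ratio_confidence(cost_volume):
        # cost_volume: (H, W, D) matching costs, lower is better.
        # Confidence c2/c1: ambiguous pixels (flat cost curves) score
        # near 1, distinctive matches score higher.
        costs = np.sort(cost_volume, axis=-1)
        c1, c2 = costs[..., 0], costs[..., 1]
        return c2 / np.maximum(c1, 1e-9)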

Organizers: Yiyi Liao


  • Seong Joon Oh
  • Aquarium

The growth of the internet and social media has spurred the sharing and dissemination of personal data at large scale. At the same time, recent developments in computer vision have enabled unprecedented effectiveness and efficiency in automated recognition. It is clear that visual data contains private information that can be mined, yet the privacy implications of sharing such data have been less studied in the computer vision community. In the talk, I will present some key results from our study of the implications of the development of computer vision on identifiability in social media, and an analysis of existing and new anonymisation techniques. In particular, we show that adversarial image perturbations (AIPs) introduce human-invisible perturbations on the input image that effectively mislead a recogniser. They are far more aesthetic and effective than, e.g., face blurring. The core limitation, however, is that AIPs are usually generated against specific target recogniser(s), and it is hard to guarantee the performance against uncertain, potentially adaptive recognisers. As a first step towards dealing with this uncertainty, we have introduced a game-theoretic framework to obtain a privacy guarantee for the user independent of the randomly chosen recogniser (within some fixed set).
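
For intuition, the Python sketch below crafts a fast-gradient-sign perturbation (Goodfellow et al.) in PyTorch; it is one simple way to produce an AIP against a known, differentiable recogniser, and by construction it suffers from exactly the target-specificity limitation the talk discusses. It is not necessarily the perturbation method used in the presented work.

    import torch
    import torch.nn.functional as F

    def fgsm_perturbation(model, image, label, eps=0.03):
        # image: (1,3,H,W) in [0,1]; label: (1,) ground-truth class.
        # One gradient-ascent step on the loss, sign-quantised and
        # bounded by eps, so the change stays invisible to humans.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + eps * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()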

Organizers: Siyu Tang


  • Matthias Niessner
  • PS Seminar Room (N3.022)

In recent years, commodity 3D sensors have become easily and widely available. These advances in sensing technology have spawned significant interest in using captured 3D data for mapping and semantic understanding of 3D environments. In this talk, I will give an overview of our latest research in the context of 3D reconstruction of indoor environments. I will further talk about the use of 3D data in the context of modern machine learning techniques. Specifically, I will highlight the importance of training data, and how we can efficiently obtain labeled and self-supervised ground-truth training datasets from captured 3D content. Finally, I will show a selection of state-of-the-art deep learning approaches, including discriminative semantic labeling of 3D scenes and generative reconstruction techniques.
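
As a reminder of how commodity-sensor reconstruction pipelines typically integrate depth maps, here is a minimal Python sketch of the classical weighted TSDF update (Curless and Levoy style); the systems covered in the talk build on far richer machinery, so this is background rather than their method.

    import numpy as np

    def integrate_tsdf(tsdf, weight, new_tsdf, new_weight, w_max=64.0):
        # Running weighted average per voxel; capping the weight keeps
        # the volume responsive to newly observed geometry.
        w = weight + new_weight
        fused = np.where(w > 0,
                         (tsdf * weight + new_tsdf * new_weight)
                         / np.maximum(w, 1e-9),
                         tsdf)
        return fused, np.minimum(w, w_max)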

Organizers: Despoina Paschalidou


  • Alexey Dosovitskiy
  • PS Seminar Room (N3.022)

Our world is dynamic and three-dimensional. Understanding the 3D layout of scenes and the motion of objects is crucial for successfully operating in such an environment. I will talk about two lines of recent research in this direction. One is on end-to-end learning of motion and 3D structure: optical flow estimation, binocular and monocular stereo, direct generation of large volumes with convolutional networks. The other is on sensorimotor control in immersive three-dimensional environments, learned from experience or from demonstration.

Organizers: Lars Mescheder, Aseem Behl