Department Talks

New Ideas for Stereo Matching of Untextured Scenes

Talk
  • 24 July 2018 • 14:00–15:00
  • Daniel Scharstein
  • Ground Floor Seminar Room (N0.002)

Two talks for the price of one! I will present my recent work on the challenging problem of stereo matching of scenes with little or no surface texture, attacking the problem from two very different angles. First, I will discuss how surface orientation priors can be added to the popular semi-global matching (SGM) algorithm, which significantly reduces errors on slanted weakly-textured surfaces. The orientation priors serve as a soft constraint during matching and can be derived in a variety of ways, including from low-resolution matching results and from monocular analysis and Manhattan-world assumptions. Second, we will examine the pathological case of Mondrian Stereo -- synthetic scenes consisting solely of solid-colored planar regions, resembling paintings by Piet Mondrian. I will discuss assumptions that allow disambiguating such scenes, present a novel stereo algorithm employing symbolic reasoning about matched edge segments, and discuss how similar ideas could be utilized in robust real-world stereo algorithms for untextured environments.
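To make the orientation prior concrete, the following is a minimal sketch of SGM-style cost aggregation along one scanline, where a per-pixel slant prior biases disparity transitions away from the usual fronto-parallel assumption. The cost volume, penalty values, and the `slant` input are illustrative assumptions, not the implementation presented in the talk.

```python
# A minimal sketch, assuming a 1-D scanline cost volume and a per-pixel
# slant prior giving the expected disparity change per pixel.
import numpy as np

def sgm_scanline_with_slant(cost, slant, P1=1.0, P2=8.0):
    """Aggregate matching costs along one scanline, biasing disparity
    transitions toward the change expected from the slant prior.
    With slant = 0 this reduces to standard SGM path aggregation."""
    W, D = cost.shape
    L = np.zeros_like(cost)
    L[0] = cost[0]
    d = np.arange(D)
    for x in range(1, W):
        prev = L[x - 1]
        expected = d + slant[x]                       # expected disparity per d_prev
        # Soft penalty on deviation from the prior, not a hard constraint.
        dev = np.abs(d[None, :] - expected[:, None])  # (d_prev, d_cur)
        penalty = np.where(dev < 0.5, 0.0,
                  np.where(dev < 1.5, P1, P2))
        L[x] = cost[x] + (prev[:, None] + penalty).min(axis=0) - prev.min()
    return L

# Toy usage: random costs with a fronto-parallel prior (slant = 0).
rng = np.random.default_rng(0)
costs = rng.random((64, 16))
agg = sgm_scanline_with_slant(costs, slant=np.zeros(64))
disparities = agg.argmin(axis=1)
```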

Organizers: Anurag Ranjan

Imitation of Human Motion Planning

Talk
  • 27 July 2018 • 12:00–12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion; the ability to plan their movements is therefore an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.
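As a toy illustration of the imitation-learning viewpoint, the sketch below fits a linear policy to demonstrated state-action pairs by least squares (behavioural cloning, one of the simplest forms of imitation learning). The linear policy and the "move toward the origin" expert are assumptions for illustration; they are far simpler than the motion-skill and prediction models discussed in the talk.

```python
# A minimal behavioural-cloning sketch, assuming demonstrations are given
# as state/action arrays; the linear policy is purely illustrative.
import numpy as np

def fit_linear_policy(states, actions):
    """Least-squares fit of actions ~ states @ W (plus a bias term)."""
    X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return W

def act(W, state):
    return np.append(state, 1.0) @ W

# Toy demonstrations: an expert stepping a 2-D point toward the origin.
rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(200, 2))
actions = -0.1 * states                      # expert action: step to origin
W = fit_linear_policy(states, actions)
print(act(W, np.array([0.5, -0.5])))         # ~ [-0.05, 0.05]
```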

  • Omar Costilla Reyes
  • Aquarium @ PS

Human footsteps can provide a unique behavioural pattern for robust biometric systems. Traditionally, security systems have been based on passwords or security access cards. Biometric recognition deals with the design of security systems for automatic identification or verification of a human subject (client) based on physical and behavioural characteristics. In this talk, I will present spatio-temporal raw and processed footstep data representations designed and evaluated on deep machine learning models based on a two-stream ResNet architecture, using the SFootBD database, the largest footstep database to date with more than 120 people and almost 20,000 footstep signals. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. We provide experimental results in 3 critical data-driven security scenarios, according to the amount of footstep data available for model training: airport security checkpoints (smallest training set), workspace environments (medium training set) and home environments (largest training set). In these scenarios we report state-of-the-art footstep recognition rates.
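As an illustration of the two-stream idea, here is a minimal PyTorch sketch that fuses a spatial and a temporal footstep stream by late concatenation. The input encoding, embedding sizes, and the single verification logit are assumptions for the sketch, not the actual SFootBD models described in the talk.

```python
# A minimal two-stream sketch, assuming 3-channel image-like spatial and
# temporal footstep representations; sizes are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStreamFootstepNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.spatial = resnet18(num_classes=embed_dim)   # e.g. pressure map
        self.temporal = resnet18(num_classes=embed_dim)  # e.g. time dynamics
        self.head = nn.Linear(2 * embed_dim, 1)          # client vs impostor

    def forward(self, x_spatial, x_temporal):
        f = torch.cat([self.spatial(x_spatial),
                       self.temporal(x_temporal)], dim=1)
        return self.head(f)  # one verification logit per footstep

model = TwoStreamFootstepNet()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```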

Organizers: Dimitris Tzionas


  • Silvia Zuffi
  • N3.022

Animals are widespread in nature and the analysis of their shape and motion is of importance in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. In our previous SMAL model, we learned animal shape from toy figurines, but toys are limited in number and realism, and not every animal is sufficiently popular for there to be realistic toys depicting it. What is available in large quantities are images and videos of animals from nature photographs, animal documentaries, and webcams. In this talk I will present our recent work on capturing the detailed 3D shape of animals from images alone. Our method extracts significantly more 3D shape detail than previous work and is able to model new species using only a few video frames. Additionally, we extract realistic texture maps from images, capturing both animal shape and appearance.
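As a simplified illustration of fitting a learned shape space to image evidence, the sketch below solves for the coefficients of a PCA-style linear shape model from 2D keypoints under orthographic projection. The linear model, known camera, and ridge regulariser are assumptions for the sketch; the actual method is far richer, using silhouettes, articulation, and texture.

```python
# An illustrative sketch, assuming a shape model mean + basis @ betas and
# a fixed orthographic camera; not the talk's actual pipeline.
import numpy as np

def fit_shape_to_keypoints(mean, basis, keypoints_2d, lam=1e-2):
    """Solve for shape coefficients so projected model keypoints match
    observed 2-D keypoints, with a ridge term keeping betas small."""
    # mean: (K, 3), basis: (K, 3, B), keypoints_2d: (K, 2)
    P = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)    # orthographic proj.
    A = np.einsum('ij,kjb->kib', P, basis).reshape(-1, basis.shape[-1])
    r = (keypoints_2d - mean @ P.T).reshape(-1)
    B = basis.shape[-1]
    return np.linalg.solve(A.T @ A + lam * np.eye(B), A.T @ r)

# Toy usage: recover known coefficients from synthesized keypoints.
rng = np.random.default_rng(2)
mean, basis = rng.normal(size=(10, 3)), rng.normal(size=(10, 3, 5))
true = rng.normal(size=5)
kp2d = (mean + basis @ true)[:, :2]
print(np.round(fit_shape_to_keypoints(mean, basis, kp2d, lam=1e-8) - true, 3))
```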


  • Prof. Constantin Rothkopf
  • Tübingen, 3rd Floor Intelligent Systems: Aquarium

Active vision has long put forward the idea that visual sensation and our actions are inseparable, especially when considering naturalistic extended behavior. Further support for this idea comes from theoretical work in optimal control, which demonstrates that sensing, planning, and acting in sequential tasks can only be separated under very restricted circumstances. The talk will present experimental evidence together with computational explanations of human visuomotor behavior in tasks ranging from classic psychophysical detection tasks to ball catching and visuomotor navigation. Along the way it will touch on topics such as the heuristics hypothesis and the learning of visual representations. The connecting theme will be that, from the switching of visuomotor behavior in response to changing task constraints down to cortical visual representations in V1, action and perception are inseparably intertwined in an ambiguous and uncertain world.

Organizers: Betty Mohler


Deriving a Tongue Model from MRI Data

Talk
  • 20 February 2018 • 14:00–14:45
  • Alexander Hewer
  • Aquarium

The tongue plays a vital part in everyday life, as we use it extensively during speech production. Due to this importance, we want to derive a parametric shape model of the tongue. This model enables us to reconstruct the full tongue shape from a sparse set of points, such as motion capture data. Moreover, we can use such a model in simulations of the vocal tract to perform articulatory speech synthesis or to create animated virtual avatars. In my talk, I describe a framework for deriving such a model from MRI scans of the vocal tract. In particular, this framework uses image denoising and segmentation methods to produce a point cloud approximating the vocal tract surface. In this context, I will also discuss how palatal contacts of the tongue can be handled, i.e., situations where the tongue touches the palate and thus no tongue boundary is visible. Afterwards, template matching is used to derive a mesh representation of the tongue from this cloud. The acquired meshes are finally used to construct a multilinear model.
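To illustrate the last step, here is a compact sketch of building a multilinear model by mode-wise truncated SVD (HOSVD) from registered meshes arranged in a (vertices × speakers × poses) tensor. The tensor layout and truncation ranks are illustrative assumptions, not the framework's actual construction.

```python
# A minimal sketch, assuming meshes already in correspondence and stacked
# into a tensor of shape (3*V, speakers, poses).
import numpy as np

def multilinear_model(data, rank_speaker, rank_pose):
    """Mode-wise truncated SVD over the speaker and pose modes."""
    D, S, P = data.shape
    U_s, _, _ = np.linalg.svd(data.transpose(1, 0, 2).reshape(S, -1),
                              full_matrices=False)
    U_p, _, _ = np.linalg.svd(data.transpose(2, 0, 1).reshape(P, -1),
                              full_matrices=False)
    U_s, U_p = U_s[:, :rank_speaker], U_p[:, :rank_pose]
    # Core tensor: project the data onto both truncated bases.
    core = np.einsum('dsp,si,pj->dij', data, U_s, U_p)
    return core, U_s, U_p

def reconstruct(core, w_speaker, w_pose):
    """A tongue shape for given speaker and pose coefficients."""
    return np.einsum('dij,i,j->d', core, w_speaker, w_pose)

# Toy usage: 50 vertices, 8 speakers, 6 poses.
data = np.random.default_rng(3).normal(size=(150, 8, 6))
core, U_s, U_p = multilinear_model(data, rank_speaker=4, rank_pose=3)
shape = reconstruct(core, U_s[0], U_p[0])   # ~ mesh of speaker 0, pose 0
```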

Organizers: Timo Bolkart


Appearance Modeling for 4D Multi-view Representations

Talk
  • 15 December 2017 • 12:00–12:45
  • Vagia Tsiminaki
  • PS Seminar Room (N3.022)

The emergence of multi-view capture systems has yielded a tremendous number of video sequences. The task of capturing spatio-temporal models from real-world imagery (4D modeling) should arguably benefit from this enormous visual information. In order to achieve highly realistic representations, both geometry and appearance need to be modeled with high precision. Yet, even with the great progress in geometric modeling, the appearance aspect has not been fully explored and visual quality can still be improved. I will explain how we can optimally exploit the redundant visual information of the captured video sequences and provide a temporally coherent, super-resolved, view-independent appearance representation. I will further discuss how to exploit the interdependency of both geometry and appearance as separate modalities to enhance visual perception, and finally how to decompose appearance representations into intrinsic components (shading & albedo) and super-resolve them jointly to allow for more realistic renderings.
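The core idea of fusing redundant observations into a super-resolved estimate can be illustrated with a 1D toy problem: several blurred, shifted, downsampled views of the same signal are stacked into one joint least-squares system. The known shifts and the simple 3-tap blur are assumptions for the sketch; real appearance super-resolution works on texture maps with estimated warps.

```python
# A toy multi-frame super-resolution sketch, assuming known shifts and a
# simple blur-then-sample observation model.
import numpy as np

def observation_matrix(n_hi, factor, shift, width=3):
    """A small box blur followed by factor-x sampling, offset by `shift`."""
    n_lo = n_hi // factor
    A = np.zeros((n_lo, n_hi))
    for i in range(n_lo):
        for j in range(width):
            A[i, (i * factor + shift + j) % n_hi] = 1.0 / width
    return A

rng = np.random.default_rng(4)
x_true = np.sin(np.linspace(0, 4 * np.pi, 64))          # the HR "texture"
As = [observation_matrix(64, 4, s) for s in range(4)]   # 4 shifted views
ys = [A @ x_true + 0.001 * rng.normal(size=16) for A in As]

# One joint least-squares system: redundancy across views makes the
# super-resolved signal recoverable although each view is 4x undersampled.
A_all, y_all = np.vstack(As), np.concatenate(ys)
x_sr, *_ = np.linalg.lstsq(A_all, y_all, rcond=None)
print(np.abs(x_sr - x_true).mean())                     # small residual
```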

Organizers: Despoina Paschalidou


Reconstructing and Perceiving Humans in Motion

Talk
  • 30 November 2017 • 15:00
  • Dr. Gerard Pons-Moll

For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies. Currently, digital models typically lack realistic soft tissue and clothing or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images, and depth and inertial sensors. We combine statistical machine learning techniques and physics-based simulation to create realistic models from data. We then use such models to extract information out of incomplete and noisy sensor data from monocular video, depth or IMUs. I will give an overview of a selection of projects conducted in Perceiving Systems in which we build realistic models of human pose, shape, soft tissue and clothing. I will also present some of our recent work on 3D reconstruction of people models from monocular video, real-time fusion and online human body shape estimation from depth data, and recovery of human pose in the wild from video and IMUs. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.

Organizers: Melanie Feldhofer


  • Christoph Mayer
  • S2 Seminar Room (S 2.014)

Variational image processing translates image processing tasks into optimisation problems. The practical success of this approach depends on the type of optimisation problem and on the properties of the ensuing algorithm. A recent breakthrough was to realise that old first-order optimisation algorithms based on operator splitting are particularly suited for modern data analysis problems. Operator splitting techniques decouple complex optimisation problems into many smaller and simpler sub-problems. In this talk I will revisit the variational segmentation problem and a common family of algorithms for solving such optimisation problems. I will show that operator splitting leads to a divide-and-conquer strategy that allows us to derive simple and massively parallel updates suitable for GPU implementations. The technique decouples the likelihood from the prior term and allows the use of a data-driven likelihood model estimated from data, for example with deep learning. Using a different decoupling strategy together with general consensus optimisation leads to fully distributed algorithms especially suitable for large-scale segmentation problems. Motivating applications are 3D yeast-cell reconstruction and segmentation of histology data.
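As a concrete instance of such a splitting, below is a minimal NumPy sketch of a first-order primal-dual scheme (Chambolle-Pock) for the convex relaxation of two-region segmentation, min_u <u, f> + lam * TV(u) with u in [0, 1]: the dual step handles the prior, and the pointwise primal step handles the (possibly learned) data term f. The data term and parameter values are illustrative assumptions.

```python
# A minimal primal-dual sketch, assuming a precomputed per-pixel data
# term f (e.g. from a learned likelihood); parameters are illustrative.
import numpy as np

def grad(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

def segment(f, lam=1.0, tau=0.25, sigma=0.5, iters=300):
    u = np.clip(-f, 0, 1); u_bar = u.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        # Dual ascent on p: the decoupled TV/prior sub-problem.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.hypot(px, py) / lam)
        px, py = px / scale, py / scale
        # Pointwise primal step on u: every pixel updates independently,
        # hence massively parallel and GPU-friendly.
        u_old = u
        u = np.clip(u + tau * (div(px, py) - f), 0, 1)
        u_bar = 2 * u - u_old
    return u > 0.5

# Toy usage: a bright square on a dark noisy background.
rng = np.random.default_rng(5)
img = rng.normal(0.2, 0.1, size=(64, 64)); img[16:48, 16:48] += 0.6
f = (img - 0.8) ** 2 - (img - 0.2) ** 2   # negative where u=1 is preferred
mask = segment(f, lam=0.5)
```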

Organizers: Benjamin Coors


3D Lidar Mapping: An Accurate and Performant Approach

Talk
  • 20 October 2017 • 11:30–12:30
  • Michiel Vlaminck
  • PS Seminar Room (N3.022)

In my talk I will present my work regarding 3D mapping using lidar scanners. I will give an overview of the SLAM problem and its main challenges: robustness, accuracy and processing speed. Regarding robustness and accuracy, we investigate a better point cloud representation based on resampling and surface reconstruction. Moreover, we demonstrate how it can be incorporated in an ICP-based scan matching technique. Finally, we elaborate on globally consistent mapping using loop closures. Regarding processing speed, we propose the integration of our scan matching into a multi-resolution scheme and a GPU-accelerated implementation using our programming language Quasar.
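For reference, the scan-matching core can be reduced to a few lines: a minimal point-to-point ICP with nearest-neighbour correspondences and a closed-form Kabsch alignment. This sketch assumes SciPy's KDTree and omits the resampling, surface reconstruction, multi-resolution scheme, and GPU acceleration discussed in the talk.

```python
# A minimal point-to-point ICP sketch; a basic loop, not the talk's system.
import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    """Best rigid transform mapping points P onto corresponding Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=30):
    tree = cKDTree(target)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour matches
        R, t = kabsch(src, target[idx])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Toy usage: recover a known small rotation + translation.
rng = np.random.default_rng(6)
target = rng.normal(size=(500, 3))
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0], [0, 0, 1]])
t_true = np.array([0.05, -0.02, 0.10])
source = (target - t_true) @ R_true         # inverse of x -> R x + t
R, t = icp(source, target)
print(np.round(t, 3))                       # ~ t_true
```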

Organizers: Simon Donne


  • Slobodan Ilic and Mira Slavcheva
  • PS Seminar Room (N3.022)

In this talk we will address the problem of 3D reconstruction of rigid and deformable objects from a single depth video stream. Traditional 3D registration techniques, such as ICP and its variants, are widespread and effective, but sensitive to initialization and noise due to the underlying correspondence estimation procedure. Therefore, we have developed SDF-2-SDF, a dense, correspondence-free method which aligns a pair of implicit representations of scene geometry, e.g. signed distance fields, by minimizing their direct voxel-wise difference. In its rigid variant, we apply it to static object reconstruction via real-time frame-to-frame camera tracking and posterior multiview pose optimization, achieving higher accuracy and a wider convergence basin than ICP variants. Its extension to scene reconstruction, SDF-TAR, carries out the implicit-to-implicit registration over several limited-extent volumes anchored in the scene and runs simultaneous GPU tracking and CPU refinement, with a lower memory footprint than other SLAM systems. Finally, to handle non-rigidly moving objects, we incorporate the SDF-2-SDF energy in a variational framework, regularized by a damped approximately Killing vector field. The resulting system, KillingFusion, is able to reconstruct objects undergoing topological changes and fast inter-frame motion in near-real time.
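The correspondence-free objective can be illustrated with a 2D toy example: two signed distance fields of a disc, aligned by minimising their direct voxel-wise difference over a translation. The disc geometry, translation-only motion, and finite-difference gradients are assumptions for the sketch; the actual method optimises full rigid (and non-rigid) motion over truncated SDFs.

```python
# A toy SDF-to-SDF alignment sketch, assuming 2-D discs and translation
# only; not the talk's actual implementation.
import numpy as np

def disc_sdf(center, radius, n=96):
    ys, xs = np.mgrid[0:n, 0:n]
    return np.hypot(xs - center[0], ys - center[1]) - radius

def energy(offset, phi_ref, center, radius):
    """Direct voxel-wise difference of the two implicit representations."""
    phi_cur = disc_sdf(center + offset, radius)
    return 0.5 * np.sum((phi_cur - phi_ref) ** 2)

# Reference SDF, and a current frame whose disc moved by (6, -4) pixels.
phi_ref = disc_sdf(np.array([48.0, 48.0]), 20.0)
center = np.array([42.0, 52.0])

# Gradient descent on the translation with finite-difference gradients.
offset, step, eps = np.zeros(2), 1e-4, 0.5
for _ in range(200):
    g = np.array([
        (energy(offset + eps * e, phi_ref, center, 20.0)
         - energy(offset - eps * e, phi_ref, center, 20.0)) / (2 * eps)
        for e in np.eye(2)])
    offset -= step * g
print(offset)   # ~ [6, -4]: the translation aligning the two SDFs
```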

Organizers: Fatma Güney


  • Anton Van Den Hengel
  • Aquarium

Visual Question Answering is one of the applications of Deep Learning that is pushing towards real Artificial Intelligence. It turns the typical deep learning process around by only defining the task to be carried out after the training has taken place, which changes the task fundamentally. We have developed a range of strategies for incorporating other information sources into deep learning-based methods, and in the process have taken a step towards developing algorithms which learn how to use other algorithms to solve a problem, rather than solving it directly. This talk thus covers some of the high-level questions about the types of challenges Deep Learning can be applied to, and how we might separate the things it's good at from those it's not.

Organizers: Siyu Tang