Department Talks
  • Cordelia Schmid

We first address the problem of large-scale image classification. We present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual-words approach for any given vector dimension. We also show and interpret the importance of an appropriate vector normalization.
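To make the aggregation step concrete, here is a minimal sketch of Fisher-vector encoding with the power and L2 normalizations discussed above; the GMM size, descriptor dimensions, and random inputs are illustrative stand-ins, not the talk's actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode local descriptors as gradients of a GMM's log-likelihood
    with respect to its means (a simplified Fisher vector)."""
    T, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)            # (T, K) soft assignments
    mu = gmm.means_                                   # (K, D)
    sigma = np.sqrt(gmm.covariances_)                 # diagonal std, (K, D)
    w = gmm.weights_                                  # (K,)
    # Gradient with respect to the means, one D-dim block per component.
    diff = (descriptors[:, None, :] - mu) / sigma     # (T, K, D)
    fv = (gamma[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    fv = fv.ravel()                                   # (K * D,)
    # The normalizations highlighted in the talk:
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalization

# Toy usage with random "SIFT-like" descriptors.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train)
image_descriptors = rng.normal(size=(200, 64))
print(fisher_vector(image_descriptors, gmm).shape)    # (8 * 64,)
```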

Furthermore, we discuss how to learn with stochastic gradient descent given a large number of classes and images, and show results on ImageNet10k. We then present a weakly supervised approach for learning human actions modeled as interactions between humans and objects.
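As a rough illustration of the training regime (not the talk's actual implementation), here is a toy one-vs-rest SGD loop on a regularized hinge loss; in a real ImageNet10k setting one would stream features from disk and sample negatives rather than touch every (class, image) pair.

```python
import numpy as np

def sgd_one_vs_rest(X, y, n_classes, epochs=5, lam=1e-5, eta0=0.1):
    """SGD on a regularized hinge loss, one binary problem per class.
    A toy sketch for small in-memory data."""
    n, d = X.shape
    W = np.zeros((n_classes, d))
    t = 0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            t += 1
            eta = eta0 / (1.0 + eta0 * lam * t)      # decaying step size
            x, label = X[i], y[i]
            targets = np.where(np.arange(n_classes) == label, 1.0, -1.0)
            margins = targets * (W @ x)
            active = margins < 1.0                   # hinge loss is active
            W *= (1.0 - eta * lam)                   # L2 regularization step
            W[active] += eta * targets[active, None] * x
    return W
```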

Our approach is human-centric: we first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated (only) with the action label.

Finally, we present work on learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for the class. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos (a selection step sketched below).
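One possible reading of the joint selection step, sketched under the assumption that each tube comes with an appearance descriptor and a motion-based score (both hypothetical stand-ins here), is a simple coordinate descent that trades per-tube evidence against cross-video similarity:

```python
import numpy as np

def select_tubes(tube_feats, tube_scores, iters=10):
    """Pick one candidate tube per video so the chosen tubes are both
    individually plausible and mutually similar in appearance.
    tube_feats[v] is an (n_tubes, dim) array; tube_scores[v] is (n_tubes,).
    The unary/pairwise trade-off below is an illustrative choice."""
    n_videos = len(tube_feats)
    choice = [int(np.argmax(s)) for s in tube_scores]  # init: best per video
    for _ in range(iters):
        for v in range(n_videos):
            others = [tube_feats[u][choice[u]]
                      for u in range(n_videos) if u != v]
            mean_other = np.mean(others, axis=0)
            # Tube score minus distance to the other videos' selections.
            costs = [tube_scores[v][k] - np.linalg.norm(f - mean_other)
                     for k, f in enumerate(tube_feats[v])]
            choice[v] = int(np.argmax(costs))
    return choice
```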


  • Pradeep Krishna Yarlagadda

The grand goal of Computer Vision is to generate an automatic description of an image based on its visual content. Category-level object detection is an important building block towards such a capability. The first part of this talk deals with three established object detection techniques in Computer Vision, their shortcomings, and how they can be improved.

i) Hough Voting methods efficiently handle the high complexity of multi-scale, category-level object detection in cluttered scenes. However, the primary weakness of this approach is that mutually dependent local observations independently vote for intrinsically global object properties such as object scale. We model the feature dependencies by presenting an objective function that combines various intimately related problems in Hough Voting.

ii) Shape is a highly prominent characteristic that human vision utilizes for detecting objects. However, shape poses significant challenges for object detection in cluttered scenes: object form is an emergent property that cannot be perceived locally but becomes available only once the whole object has been detected. We therefore address the detection of objects and the assembly of their shape simultaneously in a Max-Margin Multiple Instance Learning framework, while avoiding fragile bottom-up grouping in query images altogether.

iii) Chamfer matching is a widely used technique for detecting objects because of its speed. However, it treats objects as a mere sum of the distance transforms of all their contour pixels, and spurious matches in background clutter are a significant problem. We address these two issues by a) applying a discriminative approach to the distance transform computation in chamfer matching and b) estimating the accidentalness of a foreground template match with a small dictionary of simple background contours (see the sketch below).
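For reference, here is a toy version of plain chamfer matching via a distance transform, which exhibits exactly the "sum over all contour pixels" behavior criticized above; the discriminative regularization itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_scores(edge_map, template_points):
    """Plain chamfer matching: at each placement, average the distance
    transform under the template's contour pixels.
    edge_map: binary edge image; template_points: (N, 2) int array of
    (row, col) contour offsets. Brute-force loop, toy-scale only."""
    dt = distance_transform_edt(~edge_map.astype(bool))  # dist to nearest edge
    H, W = edge_map.shape
    th = template_points[:, 0].max() + 1
    tw = template_points[:, 1].max() + 1
    scores = np.full((H - th + 1, W - tw + 1), np.inf)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = dt[r + template_points[:, 0],
                              c + template_points[:, 1]].mean()
    return scores   # low value = good (possibly spurious) match
```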

The second part of the talk explores the question: what insights can automatic object detection and intra-category object relationships bring to art historians? It turns out that techniques from Computer Vision have helped art historians discover different artistic workshops within an Upper German manuscript, understand the variations of art within a particular school of design, and study transitions across artistic styles via a 1-d ordering of objects. Obtaining such insights manually is tedious, and Computer Vision has made the art historians' job easier.

Related Publications:

1. Pradeep Yarlagadda and Björn Ommer, From Meaningful Contours to Discriminative Object Shape, ECCV 2012.

2. Pradeep Yarlagadda, Angela Eigenstetter and Björn Ommer, Learning Discriminative Chamfer Regularization, BMVC 2012.

3. Pradeep Yarlagadda, Antonio Monroy and Björn Ommer, Voting by Grouping Dependent Parts, ECCV 2010.

4. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer, Recognition and Analysis of Objects in Medieval Images, ACCV (e-heritage) 2010.

5. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer, Top-down Analysis of Low-level Object Relatedness Leading to Semantic Understanding of Medieval Image Collections, Computer Vision and Image Analysis of Art, SPIE 2010.


  • Andreas Geiger

Navigating a car safely through complex environments is considered a relatively easy task for humans. Computer algorithms, however, cannot come close to matching human performance and often rely on 3D laser scanners or detailed maps. The reason is that the level and accuracy of current computer vision and scene understanding algorithms are still far from those of a human being. In this talk I will argue that pushing these limits requires solving a set of core computer vision problems, ranging from low-level tasks (stereo, optical flow) to high-level problems (object detection, 3D scene understanding).

First, I will introduce the KITTI datasets and benchmarks with accurate ground truth for evaluating stereo, optical flow, SLAM and 3D object detection/tracking on realistic video sequences. Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.

Second, I will propose a novel generative model for 3D scene understanding that reasons jointly about the scene layout (topology and geometry of streets) as well as the location and orientation of objects. Using context from this model significantly improves the accuracy with which state-of-the-art object detectors estimate object orientation.

Finally, I will give an outlook on how prior information in the form of large-scale community-driven maps (OpenStreetMap) can be used in the context of 3D scene understanding.


  • Stefan Roth

Markov random fields (MRFs) have found widespread use as models of natural image and scene statistics. Despite progress in modeling image properties beyond gradient statistics with high-order cliques, and in learning image models from example data, existing MRFs exhibit only a limited ability to actually capture natural image statistics.
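To fix notation, here is a minimal sketch of the energy of a filter-based MRF prior of the Fields-of-Experts type with classic Student-t experts; the filters and weights below are random placeholders rather than learned ones.

```python
import numpy as np
from scipy.signal import convolve2d

def foe_energy(image, filters, alphas):
    """Fields-of-Experts energy: each expert applies a heavy-tailed
    Student-t potential to one linear filter response at every clique:
    E(x) = sum_i alpha_i * sum_c log(1 + 0.5 * (J_i * x)_c^2)."""
    E = 0.0
    for J, a in zip(filters, alphas):
        resp = convolve2d(image, J, mode="valid")   # filter responses
        E += a * np.log1p(0.5 * resp ** 2).sum()    # Student-t potential
    return E

# Toy usage with random 3x3 filters standing in for learned ones.
rng = np.random.default_rng(1)
filters = [rng.normal(size=(3, 3)) for _ in range(8)]
alphas = np.ones(8)
print(foe_energy(rng.normal(size=(32, 32)), filters, alphas))
```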

In this talk I will present recent work that investigates this limitation of previous filter-based MRF models, including Fields of Experts (FoEs). We found that these limitations are due to inadequacies in the learning procedure and suggest various modifications to address them. These "secrets of FoE learning" allow training more suitable potential functions, whose shape approaches that of a Dirac delta function, as well as models with larger and more filters.

Our experiments not only indicate a substantial improvement of the models' ability to capture relevant statistical properties of natural images, but also demonstrate a significant performance increase in a denoising application to levels previously unattained by generative approaches. This is joint work with Qi Gao.


  • Hedvig Kjellström

The great majority of object analysis methods are based on visual object properties: objects are categorized according to how they appear in images. Visual appearance is measured in terms of image features (e.g., SIFTs) extracted from images or video. However, besides appearance, objects also have many properties that can be of interest, e.g., for a robot that wants to employ them in activities: temperature, weight, surface softness, and also the functionalities or affordances of the object, i.e., how it is intended to be used. One example, recently addressed in the vision community, is chairs. Chairs can look vastly different, but have one thing in common: they afford sitting. At the Computer Vision and Active Perception Lab at KTH, we study the problem of inferring non-observable object properties in a number of ways. In this presentation I will describe some of this work.


  • Anuj Srivastava

Shape analysis and modeling of 2D and 3D objects has important applications in many branches of science and engineering. The general goals in shape analysis include: derivation of efficient shape metrics, computation of shape templates, representation of dominant shape variability in a shape class, and development of probability models that characterize shape variation within and across classes. While past work on shape analysis is dominated by point representations -- finite sets of ordered or triangulated points on objects' boundaries -- the emphasis has lately shifted to continuous formulations.

The shape analysis of parametrized curves and surfaces introduces an additional shape invariance, the re-parametrization group, in addition to the standard invariances of rigid motions and global scaling. Treating re-parametrization as a tool for registering points across objects, we incorporate this group in shape analysis in the same way orientation is handled in Procrustes analysis. For shape analysis of parametrized curves, I will describe an elastic Riemannian metric and a mathematical representation, called the square-root velocity function (SRVF), that allows optimal registration and analysis using simple tools.
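For reference, the SRVF of a parametrized curve β and the resulting elastic distance can be written as follows (a standard formulation; the talk's notation may differ slightly):

```latex
q(t) \;=\; \frac{\dot{\beta}(t)}{\sqrt{\lVert \dot{\beta}(t) \rVert}},
\qquad
d(\beta_1, \beta_2) \;=\;
\inf_{\gamma \in \Gamma,\; O \in SO(n)}
\Big\lVert\, q_1 - O\,(q_2 \circ \gamma)\sqrt{\dot{\gamma}} \,\Big\rVert_{L^2}
```

Under this representation the elastic metric reduces to the standard L2 metric, so geodesics and sample means can be computed with simple tools; the optimal re-parametrization γ is typically found by dynamic programming.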

This framework provides proper metrics, geodesics, and sample statistics of shapes. These sample statistics are further useful in statistical modeling of shapes in different shape classes. Then I will describe some preliminary extensions of these ideas to the shape analysis of parametrized surfaces, and demonstrate these ideas using applications from medical image analysis, protein structure analysis, 3D face recognition, and human activity recognition in videos.


  • Edward H. Adelson

We can modify the optical properties of surfaces by “coating” them with a micron-thin membrane supported by an elastomeric gel. Using an opaque, matte membrane, we can make reflected-light micrographs with a distinctive SEM-like appearance. These have modest magnification (e.g., 50X), but they reveal fine surface details not normally seen with an optical microscope.

The system, which we call “GelSight,” removes optical complexities such as specular reflection, albedo, and subsurface scattering, and isolates the shading information that signals 3D shape. One can then see the topography of optically challenging subjects like sandpaper, machined metal, and living human skin. In addition, one can capture 3D surface geometry through photometric stereo. This leads to a non-destructive contact-based optical profilometer that is simple, fast, and compact.
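As a generic illustration of the photometric-stereo step (not GelSight's own calibration), classic Lambertian photometric stereo recovers albedo-scaled surface normals per pixel by least squares from images taken under known directional lights:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Lambertian photometric stereo: per pixel, intensity I = L @ n
    up to an albedo scale, so we solve for G = albedo * n by least
    squares over K lighting conditions.
    images: (K, H, W) stack; lights: (K, 3) unit light directions."""
    K, H, W = images.shape
    L = np.asarray(lights)                        # (K, 3)
    I = images.reshape(K, -1)                     # (K, H*W)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-12)                # unit normals
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```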


  • Edward H. Adelson

Humans can easily see 3D shape from a single 2D image, exploiting multiple kinds of information. This has given rise to multiple subfields (in both human vision and computer vision) devoted to the study of shape-from-shading, shape-from-texture, shape-from-contours, and so on.

The proposed algorithms for each type of shape-from-x remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable, even when a given shape is rendered in multiple styles (including non-photorealistic styles such as line drawings).

We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches. We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape.

Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and can also get shape from shading, even with non-Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape. Collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.
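A minimal sketch of the exemplar-lookup idea, with hypothetical patch arrays: image patches are paired with 3D shape (e.g., normal-map) patches at training time, and at test time the shape patch of the nearest image patch is reused. The actual system's descriptors and the assembly of retrieved patches into a coherent surface are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_index(train_image_patches):
    """Index flattened training image patches for nearest-neighbor lookup."""
    flat = train_image_patches.reshape(len(train_image_patches), -1)
    return NearestNeighbors(n_neighbors=1).fit(flat)

def lookup_shape(index, train_shape_patches, query_patch):
    """Return the shape patch paired with the closest training image patch."""
    _, idx = index.kneighbors(query_patch.reshape(1, -1))
    return train_shape_patches[idx[0, 0]]
```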


  • E.J. Chichilnisky

A central aspect of visual processing in the retina is the existence of nonlinear subunits within the receptive fields of retinal ganglion cells. These subunits have been implicated in visual computations such as segregation of object motion from background motion. However, relatively little is known about the spatial structure of subunits and its emergence from nonlinear interactions in the interneuron circuitry of the retina.

We used physiological measurements of functional circuitry in the isolated primate retina at single-cell resolution, combined with novel computational approaches, to explore the neural computations that produce subunits. Preliminary results suggest that these computations can be understood in terms of convergence of photoreceptor signals via specific types of interneurons to ganglion cells.
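As a toy stand-in for the kind of model under discussion, an LN-LN "subunit" cascade for a ganglion cell might look like the following; the filters, pooling weights, and nonlinearities here are illustrative assumptions, not the fitted physiology.

```python
import numpy as np

def subunit_model(stimulus, subunit_filters, w,
                  f=lambda z: np.maximum(z, 0.0)):
    """Toy subunit (LN-LN) cascade: linear subunit filters, a pointwise
    rectifying nonlinearity per subunit, weighted pooling across
    subunits, then a softplus output nonlinearity.
    stimulus: (d,) vector; subunit_filters: list of (d,) vectors; w: (S,)."""
    drive = np.array([f(k @ stimulus) for k in subunit_filters])
    return np.log1p(np.exp(w @ drive))   # firing-rate-like output
```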


  • Ruth Rosenholtz

Considerable research has demonstrated that the visual representation is not equally faithful throughout the visual field; it appears to be coarser in peripheral vision, perhaps as a strategy for dealing with an information bottleneck in visual processing. In the last few years, a convergence of evidence has suggested that in peripheral and unattended regions, the information available consists of summary statistics.

For a complex set of statistics, such a representation can provide a rich and detailed percept of many aspects of a visual scene. However, such a representation is also lossy; we would expect the inherent ambiguities and confusions to have profound implications for vision.

For example, a complex pattern, viewed peripherally, might be poorly represented by its summary statistics, leading to the degraded recognition experienced under conditions of visual crowding. Difficult visual search might occur when summary statistics cannot adequately discriminate between a target-present and a distractor-only patch of the stimuli. Certain illusory percepts might arise from valid interpretations of the available (lossy) information. It is precisely the visual tasks on which a statistical representation has a significant impact that provide the evidence for such a representation in early vision. I will summarize recent evidence, drawn from such tasks, that early vision computes summary statistics.
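As a crude illustration of a lossy summary-statistic representation (far simpler than the texture statistics used in this literature), one can pool means and variances of a few filter channels over local regions, discarding the spatial detail within each region:

```python
import numpy as np
from scipy.ndimage import sobel

def pooled_statistics(image, pool=16):
    """Per pooling region, keep the mean and variance of the image and
    of two oriented derivative channels. Everything finer than the
    pooling region is lost, which is the point of the illustration."""
    channels = [image, sobel(image, axis=0), sobel(image, axis=1)]
    stats = []
    for ch in channels:
        for r in range(0, image.shape[0] - pool + 1, pool):
            for c in range(0, image.shape[1] - pool + 1, pool):
                block = ch[r:r + pool, c:c + pool]
                stats.extend([block.mean(), block.var()])
    return np.array(stats)
```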