

2010


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

webpage pdf slides supplementary code [BibTex]


A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used; however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.

pdf data poster Project Page [BibTex]


Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

pdf [BibTex]


Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

pdf [BibTex]


Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

project page pdf [BibTex]


Contour people: A parameterized model of 2D articulated human shape

Freifeld, O., Weiss, A., Zuffi, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, (CVPR), pages: 639-646, IEEE, June 2010 (inproceedings)

Abstract
We define a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts forms the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change and part rotation. This latter model also incorporates a learned non-rigid deformation model. The result is a 2D articulated model that is compact to represent, simple to compute with and more expressive than previous models. We demonstrate the value of such a model in 2D pose estimation and segmentation. Given an initial pose from a standard pictorial-structures method, we refine the pose and shape using an objective function that segments the scene into foreground and background regions. The result is a parametric, human-specific, image segmentation.

pdf slides video of CVPR talk Project Page [BibTex]


Secrets of optical flow estimation and their principles

(2020 Longuet-Higgins Prize)

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

pdf Matlab code code copyright notice [BibTex]


Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring spatially variant object motion. A principal challenge of this problem is how to estimate the point spread function (PSF) of the spatially variant blur. Based on a projective motion blur model, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method with several challenging image examples.

Publisher site [BibTex]


Tracking people interacting with objects

Kjellstrom, H., Kragic, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 747-754, June 2010 (inproceedings)

pdf Video [BibTex]


Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie (Model-Based Real-Time Motion Estimation in Fluorescence Endoscopy)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, 2010 (inproceedings)

pdf [BibTex]


Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities, and in some cases even dynamic change; the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle the aforementioned challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints and can thereby handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.

pdf link (url) DOI [BibTex]


Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

webpage+code pdf supplementary poster [BibTex]


Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain, i.e., acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios, in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means of understanding and integrating the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to facilitate easing of their suffering, provide efficient monitoring and contribute to a better quality of life.

Paper Project Page DOI [BibTex]


Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates to the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish, as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is structured upon advanced facial expression analysis, used to form a dynamic pain meter, which, in turn, is used to trigger biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for its visualization, in an effort to provide the amputee with the feeling of its existence and control and, thus, maximize his/her PLP relief. The novel Epione-PLP management approach integrates edge technology within the context of personalized health, and it could be used to facilitate easing of PLP patients' suffering, provide efficient progress monitoring and contribute to an increase in their quality of life.

Paper Project Page link (url) [BibTex]


An automated action initiation system reveals behavioral deficits in MyosinVa deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Dense Marker-less Three Dimensional Motion Capture

Hauberg, S., Jensen, B. R., Engell-Norregaard, M., Erleben, K., Pedersen, K. S.

In Virtual Vistas: Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

Conference site [BibTex]


Stick It! Articulated Tracking using Spatial Rigid Object Priors

Hauberg, S., Pedersen, K. S.

In Computer Vision – ACCV 2010, 6494, pages: 758-769, Lecture Notes in Computer Science, (Editors: Kimmel, Ron and Klette, Reinhard and Sugimoto, Akihiro), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


Gaussian-like Spatial Priors for Articulated Tracking

Hauberg, S., Sommer, S., Pedersen, K. S.

In Computer Vision – ECCV 2010, 6311, pages: 425-437, Lecture Notes in Computer Science, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


Reach to grasp actions in rhesus macaques: Dimensionality reduction of hand, wrist, and upper arm motor subspaces using principal component analysis

Vargas-Irwin, C., Franquemont, L., Shakhnarovich, G., Yadollahpour, P., Black, M., Donoghue, J.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Layered image motion with explicit occlusions, temporal consistency, and depth ordering

Sun, D., Sudderth, E., Black, M. J.

In Advances in Neural Information Processing Systems 23 (NIPS), pages: 2226-2234, MIT Press, 2010 (inproceedings)

Abstract
Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.

main paper supplemental material paper and supplemental material in one pdf file Project Page [BibTex]


Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations

Sommer, S., Lauze, F., Hauberg, S., Nielsen, M.

In Computer Vision – ECCV 2010, 6316, pages: 43-56, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site PDF [BibTex]


GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

Friborg, R. M., Hauberg, S., Erleben, K.

In The CVGPU Workshop at the European Conference on Computer Vision (ECCV), 2010 (inproceedings)

PDF [BibTex]


Unsupervised learning of a low-dimensional non-linear representation of motor cortical neuronal ensemble activity using Spatio-Temporal Isomap

Kim, S., Tsoli, A., Jenkins, O., Simeral, J., Donoghue, J., Black, M.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Vision-Based Automated Recognition of Mice Home-Cage Behaviors

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

Workshop: Visual Observation and Analysis of Animal and Insect Behavior, in conjunction with International Conference on Pattern Recognition (ICPR), 2010 (conference)

pdf [BibTex]


Hands in action: real-time 3D reconstruction of hands in interaction with objects

Romero, J., Kjellström, H., Kragic, D.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 458-463, 2010 (inproceedings)

Pdf Project Page [BibTex]


Orientation and direction selectivity in the population code of the visual thalamus

Stanley, G., Jin, J., Wang, Y., Desbordes, G., Black, M., Alonso, J.

COSYNE, 2010 (conference)

[BibTex]


3D Knowledge-Based Segmentation Using Pose-Invariant Higher-Order Graphs

Wang, C., Teboul, O., Michel, F., Essafi, S., Paragios, N.

In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2010 (inproceedings)

pdf [BibTex]


Computational Mechanisms for the motion processing in visual area MT

Jhuang, H., Serre, T., Poggio, T.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Spatio-Temporal Modeling of Grasping Actions

Romero, J., Feix, T., Kjellström, H., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 2103-2108, 2010 (inproceedings)

Pdf Project Page [BibTex]


Estimating Shadows with the Bright Channel Cue

Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.

In Color and Reflectance in Imaging and Computer Vision Workshop (CRICV) (in conjunction with ECCV 2010), 2010 (inproceedings)

pdf [BibTex]


Dense non-rigid surface registration using high-order graph matching

Zeng, Y., Wang, C., Wang, Y., Gu, X., Samaras, D., Paragios, N.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010 (inproceedings)

pdf [BibTex]

2000


Stochastic tracking of 3D human figures using 2D image motion

(Winner of the 2010 Koenderink Prize for Fundamental Contributions in Computer Vision)

Sidenbladh, H., Black, M. J., Fleet, D.

In European Conference on Computer Vision, ECCV, pages: 702-718, LNCS 1843, Springer Verlag, Dublin, Ireland, June 2000 (inproceedings)

Abstract
A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image gray level differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.

pdf code [BibTex]


Functional analysis of human motion data

Ormoneit, D., Hastie, T., Black, M. J.

In Proc. 5th World Congress of the Bernoulli Society for Probability and Mathematical Statistics and 63rd Annual Meeting of the Institute of Mathematical Statistics, Guanajuato, Mexico, May 2000 (inproceedings)

[BibTex]


Stochastic modeling and tracking of human motion

Ormoneit, D., Sidenbladh, H., Black, M. J., Hastie, T.

Learning 2000, Snowbird, UT, April 2000 (conference)

abstract [BibTex]


A framework for modeling the appearance of 3D articulated figures

Sidenbladh, H., De la Torre, F., Black, M. J.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 368-375, Grenoble, France, March 2000 (inproceedings)

pdf [BibTex]

1999


Edges as outliers: Anisotropic smoothing using local image statistics

Black, M. J., Sapiro, G.

In Scale-Space Theories in Computer Vision, Second Int. Conf., Scale-Space ’99, pages: 259-270, LNCS 1682, Springer, Corfu, Greece, September 1999 (inproceedings)

Abstract
Edges are viewed as statistical outliers with respect to local image gradient magnitudes. Within local image regions we compute a robust statistical measure of the gradient variation and use this in an anisotropic diffusion framework to determine a spatially varying "edge-stopping" parameter σ. We show how to determine this parameter for two edge-stopping functions described in the literature (Perona-Malik and the Tukey biweight). Smoothing of the image is related to the local texture: in regions of low texture, small gradient values may be treated as edges, whereas in regions of high texture, large gradient magnitudes are necessary before an edge is preserved. Intuitively these results have similarities with human perceptual phenomena such as masking and "popout". Results are shown on a variety of standard images.

pdf [BibTex]


Probabilistic detection and tracking of motion discontinuities

(Marr Prize, Honorable Mention)

Black, M. J., Fleet, D. J.

In Int. Conf. on Computer Vision, ICCV-99, pages: 551-558, ICCV, Corfu, Greece, September 1999 (inproceedings)

pdf [BibTex]


Artscience Sciencart

Black, M. J., Levy, D., PamelaZ,

In Art and Innovation: The Xerox PARC Artist-in-Residence Program, pages: 244-300, (Editors: Harris, C.), MIT Press, 1999 (incollection)

Abstract
One of the effects of the PARC Artist In Residence (PAIR) program has been to expose the strong connections between scientists and artists. Both do what they do because they need to do it. They are often called upon to justify their work in order to be allowed to continue to do it. They need to justify it to funders, to sponsoring institutions, corporations, the government, the public. They publish papers, teach workshops, and write grants touting the educational or health benefits of what they do. All of these things are to some extent valid, but the fact of the matter is: artists and scientists do their work because they are driven to do it. They need to explore and create.

This chapter attempts to give a flavor of one multi-way "PAIRing" between performance artist PamelaZ and two PARC researchers, Michael Black and David Levy. The three of us paired up because we found each other interesting. We chose each other. While most artists in the program are paired with a single researcher Pamela jokingly calls herself a bigamist for choosing two PAIR "husbands" with different backgrounds and interests.

There are no "rules" to the PAIR program; no one told us what to do with our time. Despite this we all had a sense that we needed to produce something tangible during Pamela's year-long residency. In fact, Pamela kept extending her residency because she did not feel as though we had actually made anything concrete. The interesting thing was that all along we were having great conversations, some of which Pamela recorded. What we did not see at the time was that it was these conversations between artists and scientists that are at the heart of the PAIR program and that these conversations were changing the way we thought about our own work and the relationships between science and art.

To give these conversations their due, and to allow the reader into our PAIR interactions, we include two of our many conversations in this chapter.

[BibTex]


Explaining optical flow events with parameterized spatio-temporal models

Black, M. J.

In IEEE Proc. Computer Vision and Pattern Recognition, CVPR’99, pages: 326-332, IEEE, Fort Collins, CO, 1999 (inproceedings)

pdf video [BibTex]

1998


The Digital Office: Overview

Black, M., Berard, F., Jepson, A., Newman, W., Saund, E., Socher, G., Taylor, M.

In AAAI Spring Symposium on Intelligent Environments, pages: 1-6, Stanford, March 1998 (inproceedings)

pdf [BibTex]


A framework for modeling appearance change in image sequences

Black, M. J., Fleet, D. J., Yacoob, Y.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 660-667, Mumbai, India, January 1998 (inproceedings)

Abstract
Image "appearance" may change over time due to a variety of causes such as 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) "iconic changes" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these "appearance changes" in an image sequence as a "mixture" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.

pdf video [BibTex]


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 120-127, Mumbai, India, January 1998 (inproceedings)

Abstract
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.

pdf [BibTex]


Motion feature detection using steerable flow fields

Fleet, D. J., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-98, pages: 274-281, IEEE, Santa Barbara, CA, 1998 (inproceedings)

Abstract
The estimation and detection of occlusion boundaries and moving bars are important and challenging problems in image sequence analysis. Here, we model such motion features as linear combinations of steerable basis flow fields. These models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models. We estimate the subspace coefficients of the motion feature models directly from spatiotemporal image derivatives using a robust regression method. From the subspace coefficients we detect the presence of a motion feature and solve for the orientation of the feature and the relative velocities of the surfaces. Our method does not require the prior computation of optical flow and recovers accurate estimates of orientation and velocity.

pdf [BibTex]


Visual surveillance of human activity

Davis, L. S., Fejes, S., Harwood, D., Yacoob, Y., Haritaoglu, I., Black, M.

In Asian Conference on Computer Vision, ACCV, 1998 (inproceedings)

pdf [BibTex]


A Probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions

Black, M. J., Jepson, A. D.

In European Conf. on Computer Vision, ECCV-98, pages: 909-924, Freiburg, Germany, 1998 (inproceedings)

pdf [BibTex]


Recognizing temporal trajectories using the Condensation algorithm

Black, M. J., Jepson, A. D.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 16-21, Nara, Japan, 1998 (inproceedings)

pdf [BibTex]


Looking at people in action - An overview

Yacoob, Y., Davis, L. S., Black, M., Gavrila, D., Horprasert, T., Morimoto, C.

In Computer Vision for Human–Machine Interaction, (Editors: R. Cipolla and A. Pentland), Cambridge University Press, 1998 (incollection)

publisher site google books [BibTex]

1991


Dynamic motion estimation and feature extraction over long image sequences

Black, M. J., Anandan, P.

In Proc. IJCAI Workshop on Dynamic Scene Understanding, Sydney, Australia, August 1991 (inproceedings)

[BibTex]