

2020


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
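
The core idea, clothing as an additive displacement term on the body mesh, can be sketched as follows; `clothing_decoder`, the vertex count, and all dimensions are hypothetical stand-ins for the real SMPL/CAPE components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a SMPL-like body mesh: V vertices in 3D.
V = 100
body_verts = rng.normal(size=(V, 3))       # minimally-clothed body surface

def clothing_decoder(z, pose):
    # Hypothetical decoder stand-in: maps a latent clothing sample plus the
    # pose condition to per-vertex displacements (CAPE learns this mapping).
    W = np.random.default_rng(1).normal(scale=0.01, size=(V * 3, z.size + pose.size))
    return (W @ np.concatenate([z, pose])).reshape(V, 3)

z = rng.normal(size=(8,))                  # sample from the latent clothing space
pose = rng.normal(size=(72,))              # SMPL-style pose vector
disp = clothing_decoder(z, pose)

# Clothing as an additional term on the body model: dressed = body + offsets.
dressed_verts = body_verts + disp
print(dressed_verts.shape)                 # (100, 3)
```

Sampling different `z` values with the same body dresses one shape in different clothing styles, which is the generative behavior the abstract describes.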

arxiv project page [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully-automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires that (1) the generated human bodies are semantically plausible within the 3D environment, e.g. people sitting on the sofa or cooking near the stove; and (2) the generated human-scene interaction is physically feasible, such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interaction. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games, and in VR/AR.

PDF link (url) [BibTex]



Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.
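
The diffuse physics-based image formation model plus residual correction can be sketched as follows; the normals, albedo, and zero residual here are toy stand-ins for what the paper's network predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Diffuse (Lambertian) image formation for one directional light:
# shading = albedo * max(0, n . l). Per-pixel normals and albedo are the
# intrinsic components the input image is decomposed into.
H = W = 8
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
albedo = rng.uniform(0.2, 0.9, size=(H, W))
light = np.array([0.0, 0.0, 1.0])          # target light direction

diffuse = albedo * np.clip(normals @ light, 0.0, None)

# Non-diffuse effects (cast shadows, specular highlights) enter as a
# predicted residual correction added to the diffuse render; a network
# would predict this, here it is a zero placeholder.
residual = np.zeros_like(diffuse)
relit = diffuse + residual
print(relit.shape)   # (8, 8)
```

Relighting then amounts to re-rendering the same intrinsic components under a new `light` and adding the predicted residual.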

Paper [BibTex]



VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE.

arXiv code [BibTex]



From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
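
The ex-post density estimation step can be sketched as follows: after training, fit a density to the latent codes of the deterministic autoencoder and sample from it. Here a single full-covariance Gaussian stands in for the density model (the paper uses a GMM; this is the one-component case), and `decoder` is a hypothetical stand-in for the trained decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are latent codes z = Enc(x) from a trained deterministic AE.
Z = rng.normal(loc=[1.0, -2.0], scale=[0.5, 1.5], size=(1000, 2))

# Ex-post density estimation: fit a density over the codes after training.
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False)

# Generative sampling: draw new codes and feed them to the (frozen) decoder.
z_new = rng.multivariate_normal(mu, cov, size=5)

def decoder(z):
    # Hypothetical decoder stand-in.
    return np.tanh(z @ np.array([[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]]))

samples = decoder(z_new)
print(samples.shape)   # (5, 3)
```

The same step can be bolted onto an already-trained VAE to improve its sample quality, since it only needs access to encoded training codes and the decoder.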

arXiv [BibTex]



Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

pdf [BibTex]


2006


Finding directional movement representations in motor cortical neural populations using nonlinear manifold learning

Kim, S., Simeral, J., Jenkins, O., Donoghue, J., Black, M.

World Congress on Medical Physics and Biomedical Engineering 2006, Seoul, Korea, August 2006 (conference)

[BibTex]



A non-parametric Bayesian approach to spike sorting

Wood, F., Goldwater, S., Black, M. J.

In International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages: 1165-1169, New York, NY, August 2006 (inproceedings)

pdf [BibTex]



Predicting 3D people from 2D pictures

(Best Paper)

Sigal, L., Black, M. J.

In Proc. IV Conf. on Articulated Motion and Deformable Objects (AMDO), LNCS 4069, pages: 185-195, July 2006 (inproceedings)

Abstract
We propose a hierarchical process for inferring the 3D pose of a person from monocular images. First we infer a learned view-based 2D body model from a single image using non-parametric belief propagation. This approach integrates information from bottom-up body-part proposal processes and deals with self-occlusion to compute distributions over limb poses. Then, we exploit a learned Mixture of Experts model to infer a distribution of 3D poses conditioned on 2D poses. This approach is more general than recent work on inferring 3D pose directly from silhouettes since the 2D body model provides a richer representation that includes the 2D joint angles and the poses of limbs that may be unobserved in the silhouette. We demonstrate the method in a laboratory setting where we evaluate the accuracy of the 3D poses against ground truth data. We also estimate 3D body pose in a monocular image sequence. The resulting 3D estimates are sufficiently accurate to serve as proposals for the Bayesian inference of 3D human motion over time.

pdf pdf from publisher Video [BibTex]



Specular flow and the recovery of surface structure

Roth, S., Black, M.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 2, pages: 1869-1876, New York, NY, June 2006 (inproceedings)

Abstract
In scenes containing specular objects, the image motion observed by a moving camera may be an intermixed combination of optical flow resulting from diffuse reflectance (diffuse flow) and specular reflection (specular flow). Here, with few assumptions, we formalize the notion of specular flow, show how it relates to the 3D structure of the world, and develop an algorithm for estimating scene structure from 2D image motion. Unlike previous work on isolated specular highlights we use two image frames and estimate the semi-dense flow arising from the specular reflections of textured scenes. We parametrically model the image motion of a quadratic surface patch viewed from a moving camera. The flow is modeled as a probabilistic mixture of diffuse and specular components and the 3D shape is recovered using an Expectation-Maximization algorithm. Rather than treating specular reflections as noise to be removed or ignored, we show that the specular flow provides additional constraints on scene geometry that improve estimation of 3D structure when compared with reconstruction from diffuse flow alone. We demonstrate this for a set of synthetic and real sequences of mixed specular-diffuse objects.
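
The probabilistic mixture idea can be sketched with the E-step that assigns each pixel's flow to the diffuse or specular component; the flow predictions, noise level, and mixing prior below are synthetic stand-ins, not the paper's actual parametric models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed flow at N pixels, plus predictions from two parametric models:
# a diffuse-flow model and a specular-flow model (synthetic stand-ins here).
N = 500
is_spec = rng.random(N) < 0.3                       # 30% specular pixels
flow_obs = np.where(is_spec[:, None],
                    2.0 + 0.1 * rng.normal(size=(N, 2)),
                    -1.0 + 0.1 * rng.normal(size=(N, 2)))
pred_diffuse = np.full((N, 2), -1.0)
pred_specular = np.full((N, 2), 2.0)

def gauss_ll(resid, sigma):
    # Log-likelihood of residuals under an isotropic Gaussian (up to a constant).
    return -0.5 * np.sum(resid**2, axis=1) / sigma**2

# E-step of the mixture: per-pixel probability of belonging to the specular flow.
sigma, prior_spec = 0.2, 0.5
ll_d = gauss_ll(flow_obs - pred_diffuse, sigma) + np.log(1 - prior_spec)
ll_s = gauss_ll(flow_obs - pred_specular, sigma) + np.log(prior_spec)
resp_spec = 1.0 / (1.0 + np.exp(np.clip(ll_d - ll_s, -500.0, 500.0)))

# The M-step would re-fit each model's parameters (here, the quadratic
# surface patch) weighted by these responsibilities.
accuracy = np.mean((resp_spec > 0.5) == is_spec)
print(accuracy)   # 1.0 on this well-separated toy data
```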

pdf [BibTex]



An adaptive appearance model approach for model-based articulated object tracking

Balan, A., Black, M. J.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 1, pages: 758-765, New York, NY, June 2006 (inproceedings)

Abstract
The detection and tracking of three-dimensional human body models has progressed rapidly but successful approaches typically rely on accurate foreground silhouettes obtained using background segmentation. There are many practical applications where such information is imprecise. Here we develop a new image likelihood function based on the visual appearance of the subject being tracked. We propose a robust, adaptive, appearance model based on the Wandering-Stable-Lost framework extended to the case of articulated body parts. The method models appearance using a mixture model that includes an adaptive template, frame-to-frame matching and an outlier process. We employ an annealed particle filtering algorithm for inference and take advantage of the 3D body model to predict self occlusion and improve pose estimation accuracy. Quantitative tracking results are presented for a walking sequence with a 180 degree turn, captured with four synchronized and calibrated cameras and containing significant appearance changes and self-occlusion in each view.

pdf [BibTex]



Measure locally, reason globally: Occlusion-sensitive articulated pose estimation

Sigal, L., Black, M. J.

In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, 2, pages: 2041-2048, New York, NY, June 2006 (inproceedings)

pdf [BibTex]



Statistical analysis of the non-stationarity of neural population codes

Kim, S., Wood, F., Fellows, M., Donoghue, J. P., Black, M. J.

In BioRob 2006, The first IEEE / RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, pages: 295-299, Pisa, Italy, February 2006 (inproceedings)

pdf [BibTex]



How to choose the covariance for Gaussian process regression independently of the basis

Franz, M., Gehler, P.

In Proceedings of the Workshop Gaussian Processes in Practice, 2006 (inproceedings)

pdf [BibTex]



The rate adapting Poisson model for information retrieval and object recognition

Gehler, P. V., Holub, A. D., Welling, M.

In Proceedings of the 23rd international conference on Machine learning, pages: 337-344, ICML ’06, ACM, New York, NY, USA, 2006 (inproceedings)

project page pdf DOI [BibTex]



Tracking complex objects using graphical object models

Sigal, L., Zhu, Y., Comaniciu, D., Black, M. J.

In International Workshop on Complex Motion, LNCS 3417, pages: 223-234, Springer-Verlag, 2006 (inproceedings)

pdf pdf from publisher [BibTex]



Hierarchical Approach for Articulated 3D Pose-Estimation and Tracking (extended abstract)

Sigal, L., Black, M. J.

In Learning, Representation and Context for Human Sensing in Video Workshop (in conjunction with CVPR), 2006 (inproceedings)

pdf poster [BibTex]



Nonlinear physically-based models for decoding motor-cortical population activity

Shakhnarovich, G., Kim, S., Black, M. J.

In Advances in Neural Information Processing Systems 19, NIPS-2006, pages: 1257-1264, MIT Press, 2006 (inproceedings)

pdf [BibTex]



A comparison of decoding models for imagined motion from human motor cortex

Kim, S., Simeral, J., Donoghue, J. P., Hochberg, L. R., Friehs, G., Mukand, J. A., Chen, D., Black, M. J.

Program No. 256.11. 2006 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Atlanta, GA, 2006, Online (conference)

[BibTex]



Denoising archival films using a learned Bayesian model

Moldovan, T. M., Roth, S., Black, M. J.

In Int. Conf. on Image Processing, ICIP, pages: 2641-2644, Atlanta, 2006 (inproceedings)

pdf [BibTex]



Efficient belief propagation with learned higher-order Markov random fields

Lan, X., Roth, S., Huttenlocher, D., Black, M. J.

In European Conference on Computer Vision, ECCV, II, pages: 269-282, Graz, Austria, 2006 (inproceedings)

pdf pdf from publisher [BibTex]



Products of “Edge-perts”

Gehler, P., Welling, M.

In Advances in Neural Information Processing Systems 18, pages: 419-426, (Editors: Weiss, Y. and Schölkopf, B. and Platt, J.), MIT Press, Cambridge, MA, 2006 (incollection)

pdf [BibTex]



Modeling neural control of physically realistic movement

Shakhnarovich, G., Kim, S., Donoghue, J. P., Hochberg, L. R., Friehs, G., Mukand, J. A., Chen, D., Black, M. J.

Program No. 256.12. 2006 Abstract Viewer and Itinerary Planner, Society for Neuroscience, Atlanta, GA, 2006, Online (conference)

[BibTex]


2005


A quantitative evaluation of video-based 3D person tracking

Balan, A. O., Sigal, L., Black, M. J.

In The Second Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, VS-PETS, pages: 349-356, October 2005 (inproceedings)

pdf [BibTex]



Inferring attentional state and kinematics from motor cortical firing rates

Wood, F., Prabhat, Donoghue, J. P., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 1544-1547, September 2005 (inproceedings)

pdf [BibTex]



Motor cortical decoding using an autoregressive moving average model

Fisher, J., Black, M. J.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 1469-1472, September 2005 (inproceedings)

pdf [BibTex]



Fields of Experts: A framework for learning image priors

Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, 2, pages: 860-867, June 2005 (inproceedings)

pdf [BibTex]



On the spatial statistics of optical flow

(Marr Prize, Honorable Mention)

Roth, S., Black, M. J.

In International Conf. on Computer Vision, pages: 42-49, 2005 (inproceedings)

pdf [BibTex]



Modeling neural population spiking activity with Gibbs distributions

Wood, F., Roth, S., Black, M. J.

In Advances in Neural Information Processing Systems 18, pages: 1537-1544, 2005 (inproceedings)

pdf [BibTex]



Energy-based models of motor cortical population activity

Wood, F., Black, M.

Program No. 689.20. 2005 Abstract Viewer/Itinerary Planner, Society for Neuroscience, Washington, DC, 2005 (conference)

abstract [BibTex]


1999


Edges as outliers: Anisotropic smoothing using local image statistics

Black, M. J., Sapiro, G.

In Scale-Space Theories in Computer Vision, Second Int. Conf., Scale-Space ’99, pages: 259-270, LNCS 1682, Springer, Corfu, Greece, September 1999 (inproceedings)

Abstract
Edges are viewed as statistical outliers with respect to local image gradient magnitudes. Within local image regions we compute a robust statistical measure of the gradient variation and use this in an anisotropic diffusion framework to determine a spatially varying "edge-stopping" parameter σ. We show how to determine this parameter for two edge-stopping functions described in the literature (Perona-Malik and the Tukey biweight). Smoothing of the image is related to the local texture and in regions of low texture, small gradient values may be treated as edges whereas in regions of high texture, large gradient magnitudes are necessary before an edge is preserved. Intuitively these results have similarities with human perceptual phenomena such as masking and "popout". Results are shown on a variety of standard images.
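
A minimal sketch of estimating the edge-stopping parameter from gradient statistics, using the MAD as the robust scale estimate; the toy image, the global (rather than windowed) statistics, and the 5-sigma outlier threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy image: a smooth ramp plus a step edge, with additive noise.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img[:, 32:] += 1.0
img += 0.02 * rng.normal(size=img.shape)

grad = np.abs(np.diff(img, axis=1))        # horizontal gradient magnitudes

def robust_sigma(g):
    # MAD-based robust scale; 1.4826 makes it consistent with a Gaussian std.
    med = np.median(g)
    return 1.4826 * np.median(np.abs(g - med))

sigma_e = robust_sigma(grad)

# Gradients far beyond the robust scale are "edges" (statistical outliers),
# where anisotropic diffusion should stop; the factor 5 is an assumption.
edge_mask = grad > 5.0 * sigma_e
print(edge_mask[:, 31].all())              # True: the step column is flagged
```

Computing `robust_sigma` over sliding windows instead of the whole image gives the spatially varying σ the abstract describes: low-texture regions get a small σ, high-texture regions a large one.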

pdf [BibTex]



Probabilistic detection and tracking of motion discontinuities

(Marr Prize, Honorable Mention)

Black, M. J., Fleet, D. J.

In Int. Conf. on Computer Vision, ICCV-99, pages: 551-558, Corfu, Greece, September 1999 (inproceedings)

pdf [BibTex]



Artscience Sciencart

Black, M. J., Levy, D., PamelaZ

In Art and Innovation: The Xerox PARC Artist-in-Residence Program, pages: 244-300, (Editors: Harris, C.), MIT-Press, 1999 (incollection)

Abstract
One of the effects of the PARC Artist In Residence (PAIR) program has been to expose the strong connections between scientists and artists. Both do what they do because they need to do it. They are often called upon to justify their work in order to be allowed to continue to do it. They need to justify it to funders, to sponsoring institutions, corporations, the government, the public. They publish papers, teach workshops, and write grants touting the educational or health benefits of what they do. All of these things are to some extent valid, but the fact of the matter is: artists and scientists do their work because they are driven to do it. They need to explore and create.

This chapter attempts to give a flavor of one multi-way "PAIRing" between performance artist PamelaZ and two PARC researchers, Michael Black and David Levy. The three of us paired up because we found each other interesting. We chose each other. While most artists in the program are paired with a single researcher Pamela jokingly calls herself a bigamist for choosing two PAIR "husbands" with different backgrounds and interests.

There are no "rules" to the PAIR program; no one told us what to do with our time. Despite this we all had a sense that we needed to produce something tangible during Pamela's year-long residency. In fact, Pamela kept extending her residency because she did not feel as though we had actually made anything concrete. The interesting thing was that all along we were having great conversations, some of which Pamela recorded. What we did not see at the time was that it was these conversations between artists and scientists that are at the heart of the PAIR program and that these conversations were changing the way we thought about our own work and the relationships between science and art.

To give these conversations their due, and to allow the reader into our PAIR interactions, we include two of our many conversations in this chapter.

[BibTex]



Explaining optical flow events with parameterized spatio-temporal models

Black, M. J.

In IEEE Proc. Computer Vision and Pattern Recognition, CVPR’99, pages: 326-332, IEEE, Fort Collins, CO, 1999 (inproceedings)

pdf video [BibTex]


1997


Robust anisotropic diffusion and sharpening of scalar and vector images

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Int. Conf. on Image Processing, ICIP, Vol. 1, pages: 263-266, Santa Barbara, CA, October 1997 (inproceedings)

Abstract
Relations between anisotropic diffusion and robust statistics are described. We show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator, that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in the image. We extend the framework to vector-valued images and show applications to robust image sharpening.
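
The two edge-stopping functions can be written down directly. The Perona-Malik diffusivity decays smoothly with gradient magnitude, while the Tukey biweight version vanishes entirely beyond the scale parameter, which is what preserves sharper boundaries; the specific constants below follow the standard forms and should be treated as a sketch:

```python
import numpy as np

# Edge-stopping functions g(x): diffusivity as a function of gradient magnitude.
def g_perona_malik(x, sigma):
    return 1.0 / (1.0 + (x / sigma) ** 2)

def g_tukey(x, sigma):
    # Tukey's biweight: diffusion stops completely for |x| > sigma,
    # so large gradients (edges) are preserved rather than smoothed.
    return np.where(np.abs(x) <= sigma, 0.5 * (1.0 - (x / sigma) ** 2) ** 2, 0.0)

x = np.linspace(0, 3, 7)
print(g_perona_malik(x, 1.0))   # decays smoothly, never reaches zero
print(g_tukey(x, 1.0))          # exactly zero for |x| > 1
```

In a diffusion update, each pixel is smoothed by neighbor differences weighted by `g`, so a zero diffusivity at an edge means the edge never blurs no matter how many iterations run.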

pdf publisher site [BibTex]



Robust anisotropic diffusion: Connections between robust statistics, line processing, and anisotropic diffusion

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

In Scale-Space Theory in Computer Vision, Scale-Space’97, pages: 323-326, LNCS 1252, Springer Verlag, Utrecht, the Netherlands, July 1997 (inproceedings)

pdf [BibTex]



Learning parameterized models of image motion

Black, M. J., Yacoob, Y., Jepson, A. D., Fleet, D. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, pages: 561-567, Puerto Rico, June 1997 (inproceedings)

Abstract
A framework for learning parameterized models of optical flow from image sequences is presented. A class of motions is represented by a set of orthogonal basis flow fields that are computed from a training set using principal component analysis. Many complex image motions can be represented by a linear combination of a small number of these basis flows. The learned motion models may be used for optical flow estimation and for model-based recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, and articulated human motion.
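
Learning the orthogonal basis flow fields with PCA can be sketched as follows; the synthetic affine training flows and the grid size are stand-ins for the real training sets (mouth motions, limbs, motion discontinuities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: M example flow fields on an HxW grid (2 components each),
# flattened to vectors. Here: random affine flows as synthetic training data.
H = W = 16
ys, xs = np.mgrid[0:H, 0:W].astype(float)
M = 50
flows = []
for _ in range(M):
    a = rng.normal(scale=0.1, size=6)      # random affine motion parameters
    u = a[0] + a[1] * xs + a[2] * ys
    v = a[3] + a[4] * xs + a[5] * ys
    flows.append(np.concatenate([u.ravel(), v.ravel()]))
F = np.array(flows)

# PCA via SVD: orthogonal basis flow fields learned from the training set.
mean_flow = F.mean(axis=0)
U, S, Vt = np.linalg.svd(F - mean_flow, full_matrices=False)
basis = Vt[:6]                             # a small number of basis flows

# A new flow is represented by a few linear coefficients on that basis.
coeffs = basis @ (F[0] - mean_flow)
recon = mean_flow + coeffs @ basis
print(np.allclose(recon, F[0], atol=1e-6))   # True: affine flows lie in this basis
```

At estimation time the paper solves directly for `coeffs` from image derivatives rather than first computing dense flow; the representation is the same.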

pdf [BibTex]



Analysis of gesture and action in technical talks for video indexing

Ju, S. X., Black, M. J., Minneman, S., Kimber, D.

In IEEE Conf. on Computer Vision and Pattern Recognition, pages: 595-601, CVPR-97, Puerto Rico, June 1997 (inproceedings)

Abstract
In this paper, we present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing and we use active contours to automatically track these potential gestures. Given the constrained domain we define a simple "vocabulary" of actions which can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.

pdf [BibTex]



Modeling appearance change in image sequences

Black, M. J., Yacoob, Y., Fleet, D. J.

In Advances in Visual Form Analysis, pages: 11-20, Proceedings of the Third International Workshop on Visual Form, Capri, Italy, May 1997 (inproceedings)

abstract [BibTex]



Recognizing human motion using parameterized models of optical flow

Black, M. J., Yacoob, Y., Ju, S. X.

In Motion-Based Recognition, pages: 245-269, (Editors: Mubarak Shah and Ramesh Jain), Kluwer Academic Publishers, Boston, MA, 1997 (incollection)

pdf [BibTex]


1995


Robust estimation of multiple surface shapes from occluded textures

Black, M. J., Rosenholtz, R.

In International Symposium on Computer Vision, pages: 485-490, Miami, FL, November 1995 (inproceedings)

pdf [BibTex]



The PLAYBOT Project

Tsotsos, J. K., Dickinson, S., Jenkin, M., Milios, E., Jepson, A., Down, B., Amdur, E., Stevenson, S., Black, M., Metaxas, D., Cooperstock, J., Culhane, S., Nuflo, F., Verghese, G., Wai, W., Wilkes, D., Ye, Y.

In Proc. IJCAI Workshop on AI Applications for Disabled People, Montreal, August 1995 (inproceedings)

abstract [BibTex]



Recognizing facial expressions under rigid and non-rigid facial motions using local parametric models of image motion

Black, M. J., Yacoob, Y.

In International Workshop on Automatic Face- and Gesture-Recognition, Zurich, July 1995 (inproceedings)

video abstract [BibTex]



Tracking and recognizing rigid and non-rigid facial motions using local parametric models of image motion

Black, M. J., Yacoob, Y.

In Fifth International Conf. on Computer Vision, ICCV’95, pages: 374-381, Boston, MA, June 1995 (inproceedings)

Abstract
This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performs with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
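
The local parametric models are affine flow fields whose parameters have intuitive motion readings. The descriptor names below (divergence, curl, deformation) are standard combinations of the affine parameters; the link to particular expressions (e.g. mouth opening) is only illustrative:

```python
import numpy as np

# Local affine flow model over a face region: u(x,y), v(x,y) linear in x, y.
def affine_flow(params, xs, ys):
    a0, a1, a2, a3, a4, a5 = params
    return a0 + a1 * xs + a2 * ys, a3 + a4 * xs + a5 * ys

def motion_descriptors(params):
    # Intuitive readings of the affine parameters used for recognition.
    _, a1, a2, _, a4, a5 = params
    return {
        "divergence": a1 + a5,     # isotropic expansion, e.g. mouth opening
        "curl": a4 - a2,           # in-plane rotation of the region
        "deformation": a1 - a5,    # horizontal vs vertical stretch
    }

xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
params = (0.0, 0.2, 0.0, 0.0, 0.0, 0.2)   # pure expansion of the patch
u, v = affine_flow(params, xs, ys)
print(motion_descriptors(params)["divergence"])   # 0.4
```

Recognition then reduces to classifying the temporal trajectories of these few parameters per facial region, which is why the representation is so concise.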

pdf video publisher site [BibTex]



A computational model for shape from texture for multiple textures

Black, M. J., Rosenholtz, R.

Investigative Ophthalmology and Visual Science Supplement, Vol. 36, No. 4, pages: 2202, March 1995 (conference)

abstract [BibTex]


1991


Dynamic motion estimation and feature extraction over long image sequences

Black, M. J., Anandan, P.

In Proc. IJCAI Workshop on Dynamic Scene Understanding, Sydney, Australia, August 1991 (inproceedings)

[BibTex]



Robust dynamic motion estimation over time

(IEEE Computer Society Outstanding Paper Award)

Black, M. J., Anandan, P.

In Proc. Computer Vision and Pattern Recognition, CVPR-91, pages: 296-302, Maui, Hawaii, June 1991 (inproceedings)

Abstract
This paper presents a novel approach to incrementally estimating visual motion over a sequence of images. We start by formulating constraints on image motion to account for the possibility of multiple motions. This is achieved by exploiting the notions of weak continuity and robust statistics in the formulation of the minimization problem. The resulting objective function is non-convex. Traditional stochastic relaxation techniques for minimizing such functions prove inappropriate for the task. We present a highly parallel incremental stochastic minimization algorithm which has a number of advantages over previous approaches. The incremental nature of the scheme makes it truly dynamic and permits the detection of occlusion and disocclusion boundaries.
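
The robust-statistics ingredient can be illustrated with a redescending error norm. The Lorentzian below is one choice used in this line of work (the exact norm in this paper may differ), and its influence function shows why gross outliers, such as pixels belonging to a second motion, stop dominating the estimate:

```python
import numpy as np

# Lorentzian robust error norm rho and its influence function psi = d rho / dx.
# Unlike the quadratic norm, psi redescends: large residuals have
# vanishing influence on the motion estimate instead of growing linearly.
def rho(x, sigma):
    return np.log(1.0 + 0.5 * (x / sigma) ** 2)

def psi(x, sigma):
    return 2.0 * x / (2.0 * sigma**2 + x**2)

x = np.array([0.1, 1.0, 10.0, 100.0])
print(psi(x, 1.0))   # influence peaks near sigma, then decays toward zero
```

Minimizing a sum of such non-convex `rho` terms is exactly why the paper needs a stochastic minimization scheme rather than plain gradient descent.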

pdf video abstract [BibTex]
