

2010


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

webpage pdf slides supplementary code [BibTex]

A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used, however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.
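As a generic illustration of the eigen-clothing idea (PCA on contour deformations), the following sketch uses hypothetical synthetic data; it is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a clothing deformation,
# i.e. the offset of the clothed contour from the naked body contour,
# flattened to a vector (n_points * 2 coordinates).
deformations = rng.normal(size=(100, 60))

# PCA via SVD of the mean-centered data -> "eigen clothing" basis.
mean_def = deformations.mean(axis=0)
centered = deformations - mean_def
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5                # keep a low-dimensional clothing model
basis = Vt[:k]       # top-k eigen-clothing directions

# Project a new deformation onto the basis; the coefficients are the
# low-dimensional clothing representation (usable, e.g., for recognition).
new_def = deformations[0]
coeffs = basis @ (new_def - mean_def)
reconstruction = mean_def + coeffs @ basis
print(coeffs.shape)
```

In the paper the deformation statistics are further modeled with a Beta distribution; the sketch stops at the linear basis.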

pdf data poster Project Page [BibTex]

Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

pdf [BibTex]

Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

pdf [BibTex]

Decoding complete reach and grasp actions from local primary motor cortex populations

(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))

Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., Black, M. J., Donoghue, J. P.

J. of Neuroscience, 30(29):9659-9669, July 2010 (article)

pdf pdf from publisher Movie 1 Movie 2 Project Page [BibTex]

Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

project page pdf [BibTex]

Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring spatially variant object motion. A principal challenge of this problem is estimating the point spread function (PSF) of the spatially variant blur. Based on the projective motion blur model, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method with several challenging image examples.

Publisher site [BibTex]

Tracking people interacting with objects

Kjellstrom, H., Kragic, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 747-754, June 2010 (inproceedings)

pdf Video [BibTex]

Contour people: A parameterized model of 2D articulated human shape

Freifeld, O., Weiss, A., Zuffi, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, (CVPR), pages: 639-646, IEEE, June 2010 (inproceedings)

pdf slides video of CVPR talk Project Page [BibTex]

Secrets of optical flow estimation and their principles

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

pdf Matlab code code copyright notice [BibTex]

Guest editorial: State of the art in image- and video-based human pose and motion estimation

Sigal, L., Black, M. J.

International Journal of Computer Vision, 87(1):1-3, March 2010 (article)

pdf from publisher [BibTex]

HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion

Sigal, L., Balan, A., Black, M. J.

International Journal of Computer Vision, 87(1):4-27, Springer Netherlands, March 2010 (article)

Abstract
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.
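The Sequential Importance Resampling component of the baseline tracker is a standard algorithm; a minimal generic sketch on a 1D toy state (hypothetical motion and observation models, not the HumanEva baseline itself) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, weights, observation, motion_std=0.1, obs_std=0.2):
    """One Sequential Importance Resampling step on a 1D toy state."""
    n = len(particles)
    # 1. Resample proportionally to the current weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Propagate through a simple diffusion motion prior.
    particles = particles + rng.normal(0.0, motion_std, size=n)
    # 3. Re-weight with a Gaussian observation likelihood.
    w = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    w = w / w.sum()
    return particles, w

particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for obs in [0.2, 0.25, 0.3]:          # toy observation sequence
    particles, weights = sir_step(particles, weights, obs)
estimate = float(np.sum(weights * particles))
```

In the articulated-tracking setting the state would instead be a high-dimensional pose vector and the likelihood an image-based observation model.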

pdf pdf from publisher [BibTex]

Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie (Model-Based Real-Time Motion Estimation in Fluorescence Endoscopy)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, 2010 (inproceedings)

pdf [BibTex]

Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities, and in some cases even dynamic change; the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle the aforementioned challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints and can thereby handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.

pdf link (url) DOI [BibTex]

Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

webpage+code pdf supplementary poster [BibTex]

Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain, i.e., acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios, in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means of understanding and integrating the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to ease their suffering, provide efficient monitoring and contribute to a better quality of life.

Paper Project Page DOI [BibTex]

Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates to the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish, as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is structured upon advanced facial expression analysis, used to form a dynamic pain meter, which, in turn, is used to trigger biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for its visualization, in an effort to give the amputee the feeling of its existence and control, and thus maximize his/her PLP relief. The novel Epione-PLP management approach integrates edge technology within the context of personalized health, and it could be used to ease PLP patients' suffering, provide efficient progress monitoring and contribute to an increase in their quality of life.

Paper Project Page link (url) [BibTex]

Automated Home-Cage Behavioral Phenotyping of Mice

Jhuang, H., Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.

Nature Communications, 2010 (article)

software, demo pdf [BibTex]

An automated action initiation system reveals behavioral deficits in MyosinVa deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]

Dense Marker-less Three Dimensional Motion Capture

Hauberg, S., Jensen, B. R., Engell-Norregaard, M., Erleben, K., Pedersen, K. S.

In Virtual Vistas: Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

Conference site [BibTex]

Stick It! Articulated Tracking using Spatial Rigid Object Priors

Hauberg, S., Pedersen, K. S.

In Computer Vision – ACCV 2010, 6494, pages: 758-769, Lecture Notes in Computer Science, (Editors: Kimmel, Ron and Klette, Reinhard and Sugimoto, Akihiro), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]

Gaussian-like Spatial Priors for Articulated Tracking

Hauberg, S., Sommer, S., Pedersen, K. S.

In Computer Vision – ECCV 2010, 6311, pages: 425-437, Lecture Notes in Computer Science, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]

Reach to grasp actions in rhesus macaques: Dimensionality reduction of hand, wrist, and upper arm motor subspaces using principal component analysis

Vargas-Irwin, C., Franquemont, L., Shakhnarovich, G., Yadollahpour, P., Black, M., Donoghue, J.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]

Layered image motion with explicit occlusions, temporal consistency, and depth ordering

Sun, D., Sudderth, E., Black, M. J.

In Advances in Neural Information Processing Systems 23 (NIPS), pages: 2226-2234, MIT Press, 2010 (inproceedings)

Abstract
Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.

main paper supplemental material paper and supplemental material in one pdf file Project Page [BibTex]


Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations

Sommer, S., Lauze, F., Hauberg, S., Nielsen, M.

In Computer Vision – ECCV 2010, 6316, pages: 43-56, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site PDF [BibTex]

GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

Friborg, R. M., Hauberg, S., Erleben, K.

In The CVGPU workshop at European Conference on Computer Vision (ECCV) 2010, 2010 (inproceedings)

PDF [BibTex]

Visual Object-Action Recognition: Inferring Object Affordances from Human Demonstration

Kjellström, H., Romero, J., Kragic, D.

Computer Vision and Image Understanding, pages: 81-90, 2010 (article)

Pdf [BibTex]

Unsupervised learning of a low-dimensional non-linear representation of motor cortical neuronal ensemble activity using Spatio-Temporal Isomap

Kim, S., Tsoli, A., Jenkins, O., Simeral, J., Donoghue, J., Black, M.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]

3D Knowledge-Based Segmentation Using Pose-Invariant Higher-Order Graphs

Wang, C., Teboul, O., Michel, F., Essafi, S., Paragios, N.

In International Conference, Medical Image Computing and Computer Assisted Intervention (MICCAI), 2010 (inproceedings)

pdf [BibTex]

Vision-Based Automated Recognition of Mice Home-Cage Behaviors

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

Workshop on Visual Observation and Analysis of Animal and Insect Behavior, in conjunction with the International Conference on Pattern Recognition (ICPR), 2010 (conference)

pdf [BibTex]

Hands in action: real-time 3D reconstruction of hands in interaction with objects

Romero, J., Kjellström, H., Kragic, D.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 458-463, 2010 (inproceedings)

Pdf Project Page [BibTex]

Orientation and direction selectivity in the population code of the visual thalamus

Stanley, G., Jin, J., Wang, Y., Desbordes, G., Black, M., Alonso, J.

COSYNE, 2010 (conference)

[BibTex]

Estimating Shadows with the Bright Channel Cue

Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.

In Color and Reflectance in Imaging and Computer Vision Workshop (CRICV) (in conjunction with ECCV 2010), 2010 (inproceedings)

pdf [BibTex]

Dense non-rigid surface registration using high-order graph matching

Zeng, Y., Wang, C., Wang, Y., Gu, X., Samaras, D., Paragios, N.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010 (inproceedings)

pdf [BibTex]

Computational Mechanisms for Motion Processing in Visual Area MT

Jhuang, H., Serre, T., Poggio, T.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]

Spatio-Temporal Modeling of Grasping Actions

Romero, J., Feix, T., Kjellström, H., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 2103-2108, 2010 (inproceedings)

Pdf Project Page [BibTex]


2003


Image statistics and anisotropic diffusion

Scharr, H., Black, M. J., Haussecker, H.

In Int. Conf. on Computer Vision, pages: 840-847, October 2003 (inproceedings)

pdf [BibTex]

A switching Kalman filter model for the motor cortical coding of hand motion

Wu, W., Black, M. J., Mumford, D., Gao, Y., Bienenstock, E., Donoghue, J. P.

In Proc. IEEE Engineering in Medicine and Biology Society, pages: 2083-2086, September 2003 (inproceedings)

pdf [BibTex]

Learning the statistics of people in images and video

Sidenbladh, H., Black, M. J.

International Journal of Computer Vision, 54(1-3):183-209, August 2003 (article)

Abstract
This paper addresses the problems of modeling the appearance of humans and distinguishing human appearance from the appearance of general scenes. We seek a model of appearance and motion that is generic in that it accounts for the ways in which people's appearance varies and, at the same time, is specific enough to be useful for tracking people in natural scenes. Given a 3D model of the person projected into an image we model the likelihood of observing various image cues conditioned on the predicted locations and orientations of the limbs. These cues are taken to be steered filter responses corresponding to edges, ridges, and motion-compensated temporal differences. Motivated by work on the statistics of natural scenes, the statistics of these filter responses for human limbs are learned from training images containing hand-labeled limb regions. Similarly, the statistics of the filter responses in general scenes are learned to define a “background” distribution. The likelihood of observing a scene given a predicted pose of a person is computed, for each limb, using the likelihood ratio between the learned foreground (person) and background distributions. Adopting a Bayesian formulation allows cues to be combined in a principled way. Furthermore, the use of learned distributions obviates the need for hand-tuned image noise models and thresholds. The paper provides a detailed analysis of the statistics of how people appear in scenes and provides a connection between work on natural image statistics and the Bayesian tracking of people.
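The foreground/background likelihood ratio described in the abstract can be sketched generically; the histograms below are stand-ins for the learned filter-response distributions, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical learned distributions: histograms of a steered filter
# response for limb ("foreground") and general-scene ("background") pixels.
bins = np.linspace(-1, 1, 21)
fg_samples = rng.normal(0.3, 0.2, 5000)   # stand-in for limb-edge responses
bg_samples = rng.normal(0.0, 0.4, 5000)   # stand-in for scene responses
p_fg, _ = np.histogram(fg_samples, bins=bins, density=True)
p_bg, _ = np.histogram(bg_samples, bins=bins, density=True)
eps = 1e-6  # guard against empty histogram bins

def log_likelihood_ratio(responses):
    """Sum of log p_fg/p_bg over the filter responses along a limb."""
    idx = np.clip(np.digitize(responses, bins) - 1, 0, len(p_fg) - 1)
    return float(np.sum(np.log((p_fg[idx] + eps) / (p_bg[idx] + eps))))

# Responses near the foreground mode score higher than background-like ones.
score_fg = log_likelihood_ratio(np.array([0.3, 0.32, 0.28]))
score_bg = log_likelihood_ratio(np.array([-0.6, -0.7, -0.65]))
```

Summing log ratios over limbs corresponds to the Bayesian cue combination the abstract describes.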

pdf pdf from publisher code DOI [BibTex]

A framework for robust subspace learning

De la Torre, F., Black, M. J.

International Journal of Computer Vision, 54(1-3):117-142, August 2003 (article)

Abstract
Many computer vision, signal processing and statistical problems can be posed as problems of learning low dimensional linear or multi-linear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications. Methods for learning linear models can be seen as a special case of subspace fitting. One drawback of previous learning methods is that they are based on least squares estimation techniques and hence fail to account for “outliers” which are common in realistic training sets. We review previous approaches for making linear learning methods robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Subspace Learning (RSL) for linear models within a continuous optimization framework based on robust M-estimation. The framework applies to a variety of linear learning problems in computer vision including eigen-analysis and structure from motion. Several synthetic and natural examples are used to develop and illustrate the theory and applications of robust subspace learning in computer vision.
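Robust subspace fitting with M-estimation is commonly implemented by iteratively reweighted least squares; the sketch below is a generic IRLS-style illustration (hypothetical data and weighting), not the paper's RSL implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: a rank-2 matrix corrupted by a few gross pixel outliers.
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
X[rng.integers(0, 50, 40), rng.integers(0, 30, 40)] += 20.0

def robust_subspace(X, rank=2, n_iter=30, sigma=1.0):
    """Subspace fit with Geman-McClure-style reweighting (IRLS sketch)."""
    W = np.ones_like(X)                                   # per-pixel weights
    B = np.linalg.svd(X, full_matrices=False)[2][:rank]   # initial basis
    for _ in range(n_iter):
        # Weighted least-squares coefficients for each sample.
        C = np.stack([np.linalg.lstsq((B * W[i]).T, X[i] * W[i],
                                      rcond=None)[0] for i in range(len(X))])
        R = X - C @ B                        # residuals
        W = sigma**2 / (sigma**2 + R**2)     # downweight outlier pixels
        # Re-fit the basis from data with outliers blended toward the model.
        B = np.linalg.svd(W * X + (1 - W) * (C @ B),
                          full_matrices=False)[2][:rank]
    return B, C

B, C = robust_subspace(X)
```

The downweighting step is what distinguishes this from plain least-squares PCA, which would let the outliers dominate the fit.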

pdf code pdf from publisher Project Page [BibTex]

Guest editorial: Computational vision at Brown

Black, M. J., Kimia, B.

International Journal of Computer Vision, 54(1-3):5-11, August 2003 (article)

pdf pdf from publisher [BibTex]

Robust parameterized component analysis: Theory and applications to 2D facial appearance models

De la Torre, F., Black, M. J.

Computer Vision and Image Understanding, 91(1-2):53-71, July 2003 (article)

Abstract
Principal component analysis (PCA) has been successfully applied to construct linear models of shape, graylevel, and motion in images. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we consider person-specific facial appearance models (PSFAM), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time-consuming and error-prone hand alignment and cropping process. Instead, the main contribution of this paper is to introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques achieving sub-pixel accuracy. We illustrate the use of the 2D PSFAM model with preliminary experiments relevant to applications including video-conferencing and avatar animation.

pdf [BibTex]

A Gaussian mixture model for the motor cortical coding of hand motion

Wu, W., Mumford, D., Black, M. J., Gao, Y., Bienenstock, E., Donoghue, J. P.

Neural Control of Movement, Santa Barbara, CA, April 2003 (conference)

abstract [BibTex]

Connecting brains with machines: The neural control of 2D cursor movement

Black, M. J., Bienenstock, E., Donoghue, J. P., Serruya, M., Wu, W., Gao, Y.

In 1st International IEEE/EMBS Conference on Neural Engineering, pages: 580-583, Capri, Italy, March 2003 (inproceedings)

pdf [BibTex]

A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions

Gao, Y., Black, M. J., Bienenstock, E., Wu, W., Donoghue, J. P.

In 1st International IEEE/EMBS Conference on Neural Engineering, pages: 189-192, Capri, Italy, March 2003 (inproceedings)

pdf [BibTex]

Accuracy of manual spike sorting: Results for the Utah intracortical array

Wood, F., Fellows, M., Vargas-Irwin, C., Black, M. J., Donoghue, J. P.

Program No. 279.2. 2003, Abstract Viewer and Itinerary Planner, Society for Neuroscience, Washington, DC, 2003, Online (conference)

abstract [BibTex]

Specular flow and the perception of surface reflectance

Roth, S., Domini, F., Black, M. J.

Journal of Vision, 3 (9): 413a, 2003 (conference)

abstract poster [BibTex]

Attractive people: Assembling loose-limbed models using non-parametric belief propagation

Sigal, L., Isard, M. I., Sigelman, B. H., Black, M. J.

In Advances in Neural Information Processing Systems 16, NIPS, pages: 1539-1546, (Editors: S. Thrun and L. K. Saul and B. Schölkopf), MIT Press, 2003 (inproceedings)

Abstract
The detection and pose estimation of people in images and video is made challenging by the variability of human appearance, the complexity of natural scenes, and the high dimensionality of articulated body models. To cope with these problems we represent the 3D human body as a graphical model in which the relationships between the body parts are represented by conditional probability distributions. We formulate the pose estimation problem as one of probabilistic inference over a graphical model where the random variables correspond to the individual limb parameters (position and orientation). Because the limbs are described by 6-dimensional vectors encoding pose in 3-space, discretization is impractical and the random variables in our model must be continuous-valued. To approximate belief propagation in such a graph we exploit a recently introduced generalization of the particle filter. This framework facilitates the automatic initialization of the body-model from low level cues and is robust to occlusion of body parts and scene clutter.

pdf (color) pdf (black and white) [BibTex]
