I am working at the intersection of natural language processing and computer vision. I am particularly interested in analyzing the connection between human motion and language.
Daniel's research focused on understanding the link between semantics and vision. He believed that our intelligence and our ability to perceive our surroundings are strongly influenced by language and meaning. He was also very interested in human emotions, facial expressions, sentiment analysis, multimodal learning, transfer learning, and 3D modelling of human bodies and faces, amongst other topics.
I joined the Max Planck ETH Center for Learning Systems as a Ph.D. student in September 2019, where I am supervised by Michael Black and Marc Pollefeys.
I am interested in learning human-object interaction, starting with human-ground interaction (walking/running); later on, I would like to extend this to more complex hand-manipulated objects.
Yinghao Huang is a PhD candidate at the Max Planck Institute for Intelligent Systems, supervised by Director Michael J. Black. His research interests fall in the areas of machine learning, computer vision, and computer graphics. More specifically, he focuses on human body modelling, 3D human shape and pose estimation, and related topics.
Perception is a fundamental part of intelligence: perception is necessary to acquire knowledge, and knowledge is necessary to understand perception. Therefore, computer vision is one of the most important aspects of realizing intelligent systems. My research interest lies in computer vision and its combination with machine learning, which, to my mind, will enable the realization of intelligent systems. Currently, I am working on optical flow and how to incorporate high-level information to alleviate this ill-posed problem.
I am a PhD student supervised by Dr. Michael Black and Dr. Siyu Tang. My research lies at the intersection of computer vision, graphics, and machine learning. Currently, my focus is on developing deep learning algorithms for non-Euclidean domains and their applications, such as building realistic 3D-mesh-based human body models. In particular, I aim to develop novel 3D clothing models.
Peter Vincent Gehler
I work on decomposing photographs into their intrinsic layers of reflectance and shading, using deep learning methods for fast inference. In addition, I have started working on interactive semantic segmentation using CNNs.
My work spans both the research aspect of creating the world's most realistic human body models and the development of computationally efficient, scalable software that enables learning such models from large-scale data sets. I completed an MSc in Statistics at Imperial College London, an MSc in Artificial Intelligence at the University of Manchester, and a BEng in Mechatronics and Robotics at the University of Liverpool.
I'm a second-year PhD student in the Perceiving Systems department. I'm developing multi-aerial-vehicle intelligence for practical research applications, with prototypes built here at the institute. My current work involves integrating detections from real-time deep neural networks into cooperative multi-vehicle sensor fusion.
My research is based in preclinical imaging at the Werner Siemens Imaging Center, with a focus on novel molecular imaging techniques. My research involves awake, unrestrained rodents and measurements of a more faithful neurophysiological response (to drugs, stimuli, treatments, etc.). I am interested in building a model for tracking and capturing the rodents most commonly used in preclinical applications.
I am interested in modeling and capturing human body and hand motion, with a focus on human-environment interaction and haptic motion capture. More specifically, my research focuses on precise body and hand motion capture from IMUs, images, or other modalities, using deep learning and machine learning to capture interactions and feedback from the environment.