University of Tübingen, December 2019 (phdthesis)
The motion of the world is inherently dependent on the world's spatial structure and geometry. Classical optical flow methods therefore try to model this geometry to solve for the motion. Recent deep learning methods take a completely different approach: they predict optical flow by learning from labelled data. Although deep networks have shown state-of-the-art performance on classification problems in computer vision, they have not been as effective at solving optical flow. A key reason is that deep learning methods do not explicitly model the structure of the world in a neural network; instead, they expect the network to learn about the structure from data. We hypothesize that it is difficult for a network to learn about motion without any constraint on the structure of the world. Therefore, we explore several approaches to explicitly model the geometry of the world and its spatial structure in deep neural networks.
The spatial structure in images can be captured by representing them at multiple scales. To represent multiple scales of images in deep neural networks, we introduce the Spatial Pyramid Network (SPyNet). Such a network can leverage global information for estimating large motions and local information for estimating small motions. We show that SPyNet significantly improves over previous optical flow networks while also being the smallest and fastest neural network for motion estimation, achieving a 97% reduction in model parameters over previous methods while being more accurate.
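The coarse-to-fine principle behind such a pyramid can be sketched independently of any particular network. In the minimal NumPy sketch below (an illustration, not SPyNet's implementation), `residual_estimator` is a hypothetical placeholder for the learned per-level predictor: the flow estimated at a coarser level is upsampled, doubled, and refined by a residual at each finer level.

```python
import numpy as np

def downsample(img):
    # Halve resolution by 2x2 average pooling (a simple stand-in for
    # the downsampling used when building an image pyramid).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def upsample_flow(flow):
    # Nearest-neighbour upsampling; vectors are doubled because one
    # pixel at a coarse level spans two pixels at the finer level.
    return 2.0 * flow.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_flow(img1, img2, levels, residual_estimator):
    # Build image pyramids from fine (index 0) to coarse.
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    # Start with zero flow at the coarsest level; at each finer level,
    # upsample the current estimate and add a predicted residual.
    flow = np.zeros(pyr1[-1].shape + (2,))
    for lvl in range(levels - 1, -1, -1):
        if lvl < levels - 1:
            flow = upsample_flow(flow)
        flow = flow + residual_estimator(pyr1[lvl], pyr2[lvl], flow)
    return flow
```

With a learned network as the per-level residual estimator, large motions are resolved at coarse scales where they span only a few pixels, so each level only needs to predict a small correction.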
The spatial structure of the world extends to people and their motion. Humans have a very well-defined structure, and this information is useful in estimating optical flow for humans. To leverage this information, we create a synthetic dataset for human optical flow using a statistical human body model and motion capture sequences. We use this dataset to train deep networks and see significant improvement in the ability of the networks to estimate human optical flow.
The structure and geometry of the world affect its motion. Learning about the structure of the scene together with the motion can therefore benefit both problems. To facilitate this, we introduce Competitive Collaboration, in which several neural networks are constrained by geometry and jointly learn about structure and motion in the scene without any labels. We show that jointly learning single-view depth prediction, camera motion, optical flow and motion segmentation using Competitive Collaboration achieves state-of-the-art results among unsupervised approaches.
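The geometric constraint coupling depth, camera motion and optical flow can be illustrated with a pinhole-camera toy (a sketch under simplifying assumptions, not the Competitive Collaboration implementation): given per-pixel depth and a camera motion (R, t), the rigid flow follows from back-projecting each pixel to 3D, applying the motion, and re-projecting.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    # Optical flow induced by camera motion (R, t) given per-pixel
    # depth, under a pinhole camera with intrinsics K. This is the
    # geometric link between depth, egomotion and flow.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)],
                   axis=-1).reshape(-1, 3).T          # 3 x N homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)  # back-project to 3D
    pts2 = R @ pts + t.reshape(3, 1)                     # apply camera motion
    proj = K @ pts2                                      # re-project
    uv = proj[:2] / proj[2]
    return (uv - pix[:2]).T.reshape(h, w, 2)
```

A static camera (identity rotation, zero translation) induces zero flow everywhere; any mismatch between this rigid flow and an independently estimated flow is evidence of independently moving objects, which is what motion segmentation exploits.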
Our findings provide support for our hypothesis that explicit constraints on structure and geometry of the world lead to better methods for motion estimation.
Zhao, M., Mohler, B. J., Bartels, A., Bülthoff, I., NeuroImage, 202(15):116085, November 2019 (article)
Karlapalem, K., Bülthoff, H. H., IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article)
Hesse, N., Arens, M., Hofmann, U., Schroeder, S., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)
Kenny, S., Honda, C., ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article)
Hesse, N., Bülthoff, H. H., IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article)
Hahn, C. A., O’Toole, A. J., Psychological Science, 27(11):1486-1497, November 2016 (article)
ETH Zurich, July 2016 (phdthesis)
Hill, M. Q., Hahn, C. A., O’Toole, A., ACM Trans. Graph. (Proc. SIGGRAPH), 35(4):54:1-54:14, July 2016 (article)
Ballan, L., Aponte, P., Pollefeys, M., International Journal of Computer Vision (IJCV), 118(2):172-193, June 2016 (article)
Marcard, T. V., Rosenhahn, B., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(8):1533-1547, January 2016 (article)
Bülthoff, H., Robotics and Autonomous Systems, 83, pages: 275-286, 2016 (article)
Machann, J., Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 0(0):1-8, 2016 (article)
Feix, T., Schmiedmayer, H., Dollar, A., Kragic, D., IEEE Transactions on Human-Machine Systems, 46(1):66-77, 2016 (article)
Brubaker, M. A., Urtasun, R., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016 (article)
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 33(6):220:1-220:13, ACM, New York, NY, USA, November 2014 (article)
Piryankova, I., Stefanucci, J., de la Rosa, S., ACM Transactions on Applied Perception for the Symposium on Applied Perception, 11(3):13:1-13:18, September 2014 (article)
Xavier, J., Santos-Victor, J., Lima, P., Computer Vision and Image Understanding, 125, pages: 172-183, August 2014 (article)
ACM Transactions on Graphics (Proc. SIGGRAPH), 33(4):52:1-52:11, ACM, New York, NY, July 2014 (article)
Lauer, M., Wojek, C., Stiller, C., Urtasun, R., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)
Brown University, Department of Computer Science, May 2014 (phdthesis)
Homer, M. L., Perge, J. A., Harrison, M. T., Cash, S. S., Hochberg, L. R., IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22(2):239-248, March 2014 (article)
Tai, Y., Shin, J. S., IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(2):209-221, IEEE Computer Society, February 2014 (article)
Peruch, F., Bonazza, M., Cappelleri, V., Peserico, E., IEEE Transactions on Biomedical Engineering, 61(2):557-565, February 2014 (article)
Andriluka, M., Milan, A., Schindler, K., Roth, S., Schiele, B., Scene Understanding Workshop (SUNw, CVPR workshop), 2014 (unpublished)
Foster, J., Nuyujukian, P., Gao, H., Walker, R., Ryu, S., Meng, T., Murmann, B., Shenoy, K., Journal of Neural Engineering, 11(4):046020, 2014 (article)
Roth, S., International Journal of Computer Vision (IJCV), 106(2):115-137, 2014 (article) (journal version of the Longuet-Higgins Prize paper on "Secrets of Optical Flow")
Andriluka, M., Schiele, B., International Journal of Computer Vision, 110, pages: 58-69, 2014 (article)
Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P., International Journal for Numerical Methods in Biomedical Engineering, 30(2):232-248, 2014 (article)
Lim, C. W., Su, Y., Yeo, S. Y., Ng, G. M., Nguyen, V. T., Zhong, L., Tan, R. S., Poh, K. K., Chai, P., PLOS ONE, 9(4), 2014 (article)
Roth, S., International Journal of Computer Vision (IJCV), 82(2):205-229, April 2009 (article)
Zhong, L., Su, Y., Yeo, S. Y., Tan, R. S., Ghista, D., Kassab, G., American Journal of Physiology – Heart and Circulatory Physiology, 296(3):H573-H584, 2009 (article)
Yeo, S. Y., Zhong, L., Su, Y., Tan, R. S., Ghista, D., Medical & Biological Engineering & Computing, 47(3):313-322, 2009 (article)
Ostrovsky, Y., Sinha, P., Journal of Vision, 7(9):315, ARVO, June 2007 (article)
Donoghue, J., Hochberg, L., Nurmikko, A., Simeral, J., Friehs, G., Medicine & Health Rhode Island, 90(1):12-15, January 2007 (article)
Roth, S., International Journal of Computer Vision, 74(1):33-50, 2007 (article)
Donoghue, J. P., Nurmikko, A., Hochberg, L., Journal of Physiology, Special Issue on Brain Computer Interfaces, 579, pages: 603-611, 2007 (article)