I am working as a student assistant within the AirCap (Aerial Outdoor Motion Capture) project. The project's goal is to develop a 3D shape and motion capture system for outdoor scenarios using multiple cooperative UAVs. Possible applications of this system include autonomous search-and-rescue robot teams and autonomous systems for crowd supervision.
My duties include integrating sensors into the current distributed system and maintaining the software and hardware repositories. Currently, I am taking part in writing a controller module for the blimp, which will be integrated into the current UAV system.
I completed my 4-year B.Sc. in Electrical and Computer Engineering at the University of Belgrade, School of Electrical Engineering, Signals and Systems Department. Currently, I am pursuing an M.Sc. in Neural Information Processing at the International Max Planck Research School of Cognitive and Systems Neuroscience, University of Tuebingen. My curriculum focuses on Machine Learning, Neural Data Analysis, Computational Vision, and Rehabilitation Robotics.
IEEE Robotics and Automation Letters, June 2018 (article) Accepted
Multi-camera tracking of humans and animals in outdoor environments is
a relevant and challenging problem. Our approach to it involves a team
of cooperating micro aerial vehicles (MAVs) with on-board cameras only.
However, deep neural networks (DNNs) often fail to detect objects
that are small or far from the camera, which is typical of scenarios
with aerial robots. Thus, the core problem addressed in this paper is how to
achieve on-board, online, continuous and accurate vision-based
detections using DNNs for visual person tracking through MAVs. Our
solution leverages cooperation among multiple MAVs and active selection
of the most informative image regions. We demonstrate the efficiency of
our approach through simulations with up to 16 robots and real robot
experiments involving two aerial robots tracking a person, while
maintaining an active perception-driven formation. ROS-based source
code is provided for the benefit of the community.
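The idea of actively selecting informative image regions can be illustrated with a minimal sketch. Assuming each MAV maintains a predicted pixel position of the tracked person together with an uncertainty estimate, it can crop a region around that prediction before running the DNN detector, so that small or distant targets occupy a larger fraction of the network's input. The function name and parameters below are hypothetical illustrations, not the paper's actual implementation:

```python
def informative_crop(img_w, img_h, cx, cy, sigma, k=3.0, min_size=64):
    """Compute a crop rectangle around a predicted target position.

    (cx, cy): predicted pixel position of the target (hypothetical tracker output)
    sigma:    scalar position uncertainty in pixels
    k:        how many sigmas of uncertainty the crop should cover
    min_size: lower bound on crop size so the DNN input is never tiny
    Returns (x0, y0, x1, y1), clipped to the image bounds.
    """
    # Half-width grows with uncertainty but never falls below min_size/2.
    half = max(min_size // 2, int(k * sigma))
    x0 = max(0, int(cx) - half)
    y0 = max(0, int(cy) - half)
    x1 = min(img_w, int(cx) + half)
    y1 = min(img_h, int(cy) + half)
    return x0, y0, x1, y1
```

For example, with a confident prediction near the image center the crop stays tight around the target, while a large uncertainty expands the crop (clipped at the image border), trading detector input resolution for coverage.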
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.