I am interested in robotics and intelligent control systems. My goal is to develop an intelligent robotic system in which a robot learns to perform a task quickly and safely. This is an exciting new field where AI meets well-established traditional control theory.
To achieve this goal, several aspects need to be taken into account: system stability, motion planning, path following, and so on. To solve these tasks efficiently, we leverage learning algorithms for dynamic modeling, uncertainty estimation, and policy learning. To test these ideas, we built an autonomous blimp as a test bed for comparing the performance of different learning algorithms.
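As a toy illustration of what "dynamic modeling with uncertainty estimation" can mean in the simplest case, the sketch below fits a linear dynamics model x_{t+1} ≈ A x_t + B u_t to synthetic trajectory data by least squares and uses the one-step residual spread as a crude uncertainty estimate. All names, dimensions, and numbers are made up for the example; the actual blimp models are of course richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth dynamics, used only to generate synthetic trajectory data.
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.0], [0.1]])

# Roll out a trajectory with random controls and small process noise.
T = 500
x = np.zeros((T + 1, 2))
u = rng.uniform(-1.0, 1.0, size=(T, 1))
for t in range(T):
    x[t + 1] = A_true @ x[t] + B_true @ u[t] + 0.01 * rng.standard_normal(2)

# Least-squares fit: stack regressors [x_t, u_t] and solve for [A; B].
Z = np.hstack([x[:-1], u])                      # shape (T, 3)
theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T

# One-step residuals give a simple (aleatoric) uncertainty estimate.
resid = x[1:] - Z @ theta
sigma = resid.std(axis=0)                       # per-dimension noise scale
```

In practice one would replace the linear model with, e.g., a Gaussian process or an ensemble of neural networks, which provide model (epistemic) uncertainty as well.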
I completed my Master's degree at the Max Planck Institute for Intelligent Systems and in Automation and IT at the Technical University of Cologne. Before that, I completed my Bachelor's degree in Electrical Engineering at National Tsing Hua University, Taiwan.
Autonomous MoCap systems, like AirCap, rely on robots with on-board cameras that can localize and navigate autonomously. More importantly, these robots must detect, track and follow the subject (human or animal) in real time. Thus, a key component of such a system is motion planning and control of multiple...
IEEE Robotics and Automation Letters, 5(4):6678-6685, IEEE, October 2020. Also accepted and presented at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (article)
In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and to generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions.
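The clipped surrogate objective at the heart of PPO can be sketched as follows. This is a minimal NumPy illustration of the standard PPO loss, not the implementation from the letter; all names and numbers are made up.

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    """
    ratio = np.exp(new_logp - old_logp)                  # importance ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # Take the pessimistic (lower) bound per sample, negate for a loss.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy check: one action became more likely (ratio 1.2, positive advantage),
# one much less likely (ratio 0.5, negative advantage, clipped at 0.8).
loss = ppo_clip_loss(np.log([1.2, 0.5]), np.log([1.0, 1.0]),
                     np.array([1.0, -1.0]))
print(loss)  # -0.2
```

The clipping keeps each policy update close to the behavior policy, which is what makes PPO stable enough to train decentralized control policies in parallelized synthetic environments.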
Our goal is to understand the principles of perception, action, and learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.