Hang is a Postdoctoral Researcher at the Robotics, Perception and Learning Group, KTH Royal Institute of Technology. Hang’s interests lie at the intersection of robotics and machine learning, and he is enthusiastic about finding and integrating problem structures, such as task representations, dynamical systems, and optimization-based control, to facilitate learning-based robotics. Hang obtained his PhD from EPFL and IST, University of Lisbon, under the supervision of Prof. Aude Billard, Prof. Ana Paiva, and Prof. Francisco S. Melo. Prior to that, Hang completed his master’s and bachelor’s studies at Shanghai Jiao Tong University. He also worked as a software engineer at Siemens.
Exploiting efficient representations is key to learning and automating real-world robotic tasks. In this talk, I will present recent results at RPL on learning state representations and modeling action spaces for robotic agents to address challenging manipulation tasks. In the first part, I will present a framework for learning a compact representation that encodes high-dimensional and complex states/observations, e.g. configurations of a clothing item. I will show a variant of the variational autoencoder which imposes a low-dimensional latent space with an improved structure. This allows us to build a roadmap in the embedding space and perform effective action planning in a cloth folding task. In the second part, the talk will focus on incorporating well-established robot control results into the design of reinforcement learning action spaces. I will introduce a policy in the form of a stable variable impedance controller and a variant of the Cross Entropy Method that guarantees stable exploration when learning contact-rich skills. Our results demonstrate superior performance in simulated and real-world peg-in-hole tasks, despite admitting a more restricted class of behaviors compared to baseline neural network policies.
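For background, the Cross Entropy Method mentioned in the abstract is a sampling-based optimizer that repeatedly refits a Gaussian to the best-scoring candidates. The sketch below shows only the generic CEM loop on a toy objective, not the stability-constrained variant or impedance-controller policy from the talk; the function names and parameters are illustrative assumptions.

```python
import numpy as np

def cross_entropy_method(score_fn, dim, iters=50, pop=100, elite_frac=0.1, seed=0):
    """Generic CEM: sample candidates from a Gaussian, keep the elite
    fraction with the highest scores, and refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([score_fn(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]  # best-scoring candidates
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + 1e-6  # small floor to avoid premature collapse
    return mean

# Toy objective: maximize the negative squared distance to a target point.
target = np.array([0.5, -0.3])
best = cross_entropy_method(lambda x: -np.sum((x - target) ** 2), dim=2)
```

In the talk's setting, the sampled candidates would instead be controller parameters, with the variant restricting sampling so that every explored controller remains stable.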