Talking Robotics

Organizers: Patrícia Alves-Oliveira,
Silvia Tulli, Miguel Vasco
contact us: talkingrobotics at gmail dot com — support us: buymeacoffee


Speaker

Daniel is a postdoc at UC Berkeley, advised by Anca Dragan and Ken Goldberg. His research interests include robot learning, reward inference, AI safety, and multi-agent systems. He received his Ph.D. in computer science from the University of Texas at Austin, where he worked with Scott Niekum on safe imitation learning. Prior to starting his Ph.D., Daniel worked for the Air Force Research Lab's Information Directorate, where he studied bio-inspired swarms and multi-agent planning.

Speaker Links: Google Scholar - Website - LinkedIn - Twitter - GitHub


Abstract

In this talk, he will discuss recent work on developing robots that use human input to enable robust learning. In particular, he will focus on autonomous systems that learn reward functions and policies from human demonstrations and preferences. One problem that arises when learning from human input is that there is often large uncertainty over the human's true intent and the corresponding desired robot behavior. To address this problem, he will discuss research on how robots can maintain efficient and accurate representations of their uncertainty, how they can use these representations to generate risk-averse solutions, and how they can actively query for additional human feedback to reduce uncertainty over the human's intent and improve the robustness of their learned policies.
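One common instantiation of this pipeline (a generic illustration, not necessarily the speaker's method) is Bayesian reward inference from pairwise preferences: maintain a particle posterior over linear reward weights, update it with a Bradley-Terry likelihood for each observed preference, actively query the pair the posterior is most unsure about, and act risk-aversely by maximizing a low quantile of reward under the posterior. The sketch below assumes a hypothetical 2-D feature space and a simulated human oracle; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate trajectory is summarized by a feature
# vector phi, and the (unknown) human reward is linear: r = w . phi.
true_w = np.array([1.0, -0.5])          # hidden "human intent" (assumed, for simulation)
candidates = rng.normal(size=(6, 2))    # feature vectors of candidate trajectories

# Particle approximation of the posterior over reward weights w.
w_samples = rng.normal(size=(500, 2))
log_post = np.zeros(len(w_samples))

def bradley_terry_loglik(w, phi_a, phi_b, pref_a):
    """Log-likelihood that the human prefers a over b under weights w."""
    p_a = 1.0 / (1.0 + np.exp(-(w @ phi_a - w @ phi_b)))
    return np.log(p_a if pref_a else 1.0 - p_a)

def update(phi_a, phi_b, pref_a):
    """Condition the particle posterior on one pairwise preference."""
    global log_post
    log_post = log_post + np.array(
        [bradley_terry_loglik(w, phi_a, phi_b, pref_a) for w in w_samples])

def posterior_weights():
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

def active_query():
    """Pick the pair whose preference outcome the posterior is most unsure about."""
    p = posterior_weights()
    best, best_score = None, -1.0
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            diffs = w_samples @ (candidates[i] - candidates[j])
            prob = float(p @ (1.0 / (1.0 + np.exp(-diffs))))
            score = 1.0 - abs(prob - 0.5)   # closer to 0.5 = more informative
            if score > best_score:
                best, best_score = (i, j), score
    return best

def risk_averse_choice(alpha=0.05):
    """Choose the candidate maximizing the alpha-quantile of reward under
    the posterior (a simple risk-averse criterion)."""
    p = posterior_weights()
    idx = rng.choice(len(w_samples), size=2000, p=p)   # resample by weight
    rewards = w_samples[idx] @ candidates.T            # shape (2000, 6)
    return int(np.argmax(np.quantile(rewards, alpha, axis=0)))

# Simulate a few rounds of querying a (noiseless) human oracle.
for _ in range(5):
    i, j = active_query()
    prefers_i = bool(true_w @ candidates[i] > true_w @ candidates[j])
    update(candidates[i], candidates[j], prefers_i)

choice = risk_averse_choice()
```

Each query shrinks the posterior toward weights consistent with the human's answers, and the low-quantile objective avoids candidates whose reward is high only under unlikely weight hypotheses.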


Papers covered during the talk