Micah Carroll
Speaker
Micah Carroll is an AI PhD student at UC Berkeley advised by Professors Anca Dragan and Stuart Russell. Originally from Italy, Micah graduated with a Bachelor's in Statistics from Berkeley in 2019. He has worked at Microsoft Research and at the Center for Human-Compatible AI (CHAI). His research interests lie in human-AI systems: in particular, measuring the effects of social media on users and improving techniques for human modeling and human-AI collaboration. You can find him on his website or on Twitter.
Speaker Links: Website - Google Scholar
Abstract
Current paradigms for AI generally treat human objectives as static. However, human objectives change over time, and AI actions can themselves influence how they change. In this work, we argue that when objectives change, the problem of assisting humans with their objectives becomes ill-posed: there are multiple conflicting notions of optimal behavior, and neglecting the dynamics of changing objectives makes an implicit (and often unfavorable) choice among them. We further argue that failing to model the aspects of human behavior related to changing objectives can lead to poor intent inference and behavior prediction, both of which are fundamental to successful assistance. We describe several possible optimization objectives, along with their strengths and limitations, and delineate challenges related to assistance. Our findings caution against applying off-the-shelf AI approaches to settings with dynamic rewards, and stress the importance of developing specialized AI frameworks for this purpose.
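One way to make the ill-posedness concrete, as a minimal sketch with hypothetical notation not taken from the abstract: assume a Markov decision process whose reward R_{\theta_t} is parameterized by the human's preferences \theta_t, which can drift over time and can be influenced by the agent's actions. Three natural candidate objectives for a policy \pi are

\[
J_{\text{initial}}(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{T} \gamma^{t} R_{\theta_0}(s_t, a_t)\Big], \qquad
J_{\text{real-time}}(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{T} \gamma^{t} R_{\theta_t}(s_t, a_t)\Big], \qquad
J_{\text{final}}(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{T} \gamma^{t} R_{\theta_T}(s_t, a_t)\Big].
\]

If \theta_t is constant, the three objectives coincide. Once the agent's actions can influence \theta_t, they can prescribe conflicting behavior: for instance, the real-time and final objectives can reward steering the human toward preferences that are easy to satisfy, so picking any single objective by default amounts to an implicit choice among conflicting notions of optimality.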