Talking Robotics


Organizers: Patrícia Alves-Oliveira, Silvia Tulli, Miguel Vasco, Joana Campos
contact us: talkingrobotics at gmail dot com



Carl Mueller is a Ph.D. student of computer science at the University of Colorado - Boulder, advised by Professor Bradley Hayes within the Collaborative Artificial Intelligence and Robotics Laboratory. He graduated from the University of California - Santa Barbara with a degree in Biopsychology and, after a circuitous route through the pharmaceutical industry, ended up in tech, founding his own company building intelligent chat agents for business analytics. Drawn to human-computer interaction, Carl moved to Colorado to start graduate school, where he discovered robotics to be a wonderful platform for AI/ML/HCI research. His predominant focus is a subfield of Human-Robot Interaction called Robot Learning from Demonstration (LfD), also known as Imitation Learning or Programming by Demonstration, with a specific emphasis on how human users can provide richer information by communicating behavioral constraints on the task the robot is learning. To this end, Carl's research develops novel interface designs for LfD, expands upon traditional LfD methods via constraint integration, and explores constrained motion planning methods to ensure automated movements are consistent with human intent.

Speaker Links: Website - Google Scholar


Historically, robots have been exclusive to industries that require consistency, precision, and long-term operation. Tasks that are dynamic or require operation in close proximity to human workers render traditional robots inflexible, costly, and unsafe. A field of research that addresses these limitations is Robot Learning from Demonstration (LfD). Robot LfD methods enable users to teach a robot desired skills by demonstrating them, forgoing the need for programming expertise. Learning methods allow the robotic system to construct a control model from captured robot state data that supports successful execution of the demonstrated skill. However, one limitation of traditional LfD methods is that they rely on information mediums that limit the communication of pertinent skill information. Such mediums often consist of robot configuration data, which only poorly captures more abstract, but equally important, information about a skill. For example, when demonstrating a cup-carrying task, robot configuration data only loosely and implicitly captures the skill requirement that the cup must remain in an upright orientation. The major theme of my research is enabling human users to communicate additional information to the robot learning system through 'concept constraints'. Concept constraints are abstract behavioral restrictions, grounded as geometric and kinodynamic planning predicates, that prohibit or limit the behavior of the robot, resulting in more robust, generalizable, and safe skill execution. In this talk, I will discuss how conceptual constraints are integrated into existing LfD methods, how unique interfaces can further enhance the communication of such constraints, and how the grounding of these constraints requires constrained motion planning techniques.

Papers covered during the talk