Bradley Hayes

Date:

Speaker

Bradley Hayes is an Assistant Professor of Computer Science at the University of Colorado Boulder, where he runs the Collaborative AI and Robotics (CAIRO) Lab and serves as co-director of the university's Autonomous Systems Interdisciplinary Research Theme. Brad's research develops techniques to create and continuously validate autonomous systems that learn from, teach, and collaborate with humans to improve coordination, safety, and capability at scale. His work leverages novel approaches at the intersection of human-robot interaction and explainable artificial intelligence, giving autonomous systems the ability to generalize skills while limiting risk, to act safely while remaining productive around humans, and, more generally, to make human-autonomy teams more powerful than the sum of their parts. His efforts toward safe, reliable, and responsible autonomy, in particular his habit of systematically putting humans and autonomous systems into often entertaining and occasionally productive situations, have been featured by TEDx, Popular Science, Wired, and MIT Technology Review, and have been recognized with nominations and awards from the University of Colorado Boulder and the HRI, AAMAS, and RO-MAN communities. Brad also serves as CTO at Circadence, building high-fidelity simulation, test, and evaluation environments for cyber-physical systems at nation-state scale.

Relevant video and article

Abstract

Clear and frequent communication is a foundational aspect of collaboration. Effective communication not only enables and sustains the shared situational awareness necessary for adaptation and coordination during human-robot teaming, but is also often a requirement given the opaque nature of decision-making in autonomous systems. In this talk I will share some of our recent work using visual (augmented reality) and semantic (spoken language) modalities to improve safety and capability in human-robot teams, introducing insights into human behavior and compliance in safety-critical, partially observable situations. Finally, I will ground these contributions in a call to action within an assistive technology application area for which we are actively building a network of collaborators.

Papers:

  • (HRI 2019) Explanation-based Reward Coaching to Improve Human Performance via Reinforcement Learning (link)
  • (AAMAS 2022) Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming (link)
  • (ICRA 2021) ARC-LfD: Using Augmented Reality for Interactive Long-Term Robot Skill Maintenance via Constrained Learning from Demonstration (link)
  • (ICRA 2023) Human Non-Compliance with Robot Spatial Ownership Communicated via Augmented Reality: Implications for Human-Robot Teaming Safety (link)
  • (IROS 2022) A Novel Perceptive Robotic Cane with Haptic Navigation for Enabling Vision-Independent Participation in the Social Dynamics of Seat Choice (link)
  • (AAMAS 2023) ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane (link)