Lindsay Sanneman

Date:

Speaker

Lindsay Sanneman is a postdoctoral associate in the Department of Aeronautics and Astronautics at MIT and a member of the Interactive Robotics Group and the Algorithmic Alignment Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Her research focuses on the development of models, metrics, and algorithms for explainable AI (XAI) and AI alignment in complex human-autonomy interaction settings. Since 2018, she has been a member of MIT’s Work of the Future task force and has visited over 50 factories worldwide alongside an interdisciplinary team of social scientists and engineers to study the adoption of robotics in manufacturing. She has also been a Siegel Research Fellow and has presented her work at diverse venues, including the Industry Studies Association, the Federal Aviation Administration (FAA), and the UN Department of Economic and Social Affairs.

Speaker Links: Website - Google Scholar

Abstract

Aligning robot objectives with those of humans can greatly enhance robots’ ability to act flexibly and to meet human goals safely and reliably across diverse contexts, from space exploration to robotic manufacturing. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate their objectives comprehensively, accurately, and in forms that are readily usable for robot planning. Value alignment is an open challenge in artificial intelligence that aims to address this problem by enabling robots and autonomous agents to infer human goals and values through interaction. Providing humans with direct and explicit feedback about this value-learning process through explainable AI (XAI) can enable them to teach robots about their goals more efficiently and effectively. In this talk, I will introduce the Transparent Value Alignment (TVA) paradigm, which captures this two-way communication and inference process, and will discuss foundations for the design and evaluation of XAI within this paradigm. First, I will present a novel suite of metrics for assessing alignment, validated through human-subject experiments that apply approaches from cognitive psychology. Next, I will discuss the Situation Awareness Framework for Explainable AI (SAFE-AI), a human-factors-based framework for the design and evaluation of XAI across diverse contexts, including alignment. Finally, I will propose design guidance for XAI in the TVA context, grounded in results from a set of human studies comparing a broad range of explanation techniques across multiple domains. I will conclude by briefly discussing our current and future work on information-theoretic approaches to automatically generating abstract explanations that address XAI design tradeoffs, as well as the application of our proposed alignment metrics to large language models.
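
As a minimal illustration of the value-alignment loop described in the abstract (not drawn from the talk or the papers below), the sketch that follows assumes a linear reward model and pairwise preference feedback: a simulated robot maintains a Bayesian belief over the human's hidden reward weights, updates it from the human's choices, and then reports its current best estimate back to the human, the kind of explicit feedback an XAI component would surface in the TVA setting. The Bradley-Terry feedback model and all names are illustrative assumptions, not the speaker's implementation.

```python
# Illustrative sketch only: infer a human's hidden linear reward weights from
# pairwise preference feedback, then report the current estimate back to the human.
import numpy as np

rng = np.random.default_rng(0)

# Hidden human reward weights over 3 task features (unknown to the robot).
true_w = np.array([0.7, -0.2, 0.5])

# Robot's belief: candidate weight vectors with (initially uniform) log-probabilities.
candidates = rng.normal(size=(500, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
log_belief = np.zeros(len(candidates))

def preference_loglik(w, feat_a, feat_b, human_prefers_a):
    """Bradley-Terry log-likelihood of the human's choice under weights w."""
    logit = feat_a @ w - feat_b @ w
    p_a = 1.0 / (1.0 + np.exp(-logit))
    return np.log(p_a if human_prefers_a else 1.0 - p_a)

for _ in range(30):
    # Robot proposes two candidate behaviors, summarized by their feature counts.
    feat_a, feat_b = rng.normal(size=3), rng.normal(size=3)
    # Simulated human answers noisily according to the hidden reward.
    p_a = 1.0 / (1.0 + np.exp(-(feat_a - feat_b) @ true_w))
    prefers_a = rng.random() < p_a
    # Bayesian update of the robot's belief over reward weights.
    log_belief += np.array([preference_loglik(w, feat_a, feat_b, prefers_a)
                            for w in candidates])
    log_belief -= log_belief.max()

# "Transparency" step: report the robot's current estimate of the human's values.
belief = np.exp(log_belief)
belief /= belief.sum()
estimate = belief @ candidates
print("Robot's estimated reward weights:", np.round(estimate, 2))
print("True (hidden) human reward weights:", true_w)
```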

Papers

  • Sanneman, Lindsay, and Julie A. Shah. “The situation awareness framework for explainable AI (SAFE-AI) and human factors considerations for XAI systems.” International Journal of Human–Computer Interaction, 2022.
  • Sanneman, Lindsay, and Julie A. Shah. “An empirical study of reward explanations with human-robot interaction applications.” IEEE Robotics and Automation Letters, 2022.
  • Sanneman, Lindsay, and Julie A. Shah. “Validating metrics for reward alignment in human-autonomy teaming.” Computers in Human Behavior, 2023.