Damien Rudaz
Date:
Speaker
Damien Rudaz is a postdoctoral researcher at the University of Copenhagen and a former User Experience researcher at the robotics company SoftBank Robotics. Rather than treating the inner workings of human-robot interactions as black boxes, his research investigates the finely tuned micro-interactional practices through which a robot emerges, momentarily, as a “social agent” in the presence of humans. Using the micro-analytic approach of Ethnomethodological Conversation Analysis, Damien analyzes large video corpora of naturalistic encounters between humans and humanoid robots (in a museum, an office building, etc.). During his doctoral work, the detailed exploration of these data allowed him to show that, to this day, interactional work (e.g., repairing the interaction, monitoring for mistakes, etc.) still falls mainly upon human participants, even in interactions with new voice agents based on recent language models. Notably, he showed how, in multiparty interactions, a robot’s conduct is often framed a posteriori by the audience as meaningful for the person directly interacting with the robot. This “pre-chewing” of a robot’s conduct by a third party (after the robot has spoken, gestured, moved, etc.), which re-configures it as a relevant contribution for the main speaker, is one facet of the work to make technology “work”.
Speaker Links: X - Google Scholar
Abstract
A large literature shows that many “seen-but-unnoticed” practices typical of human conversation are still not relevantly produced even by the most recent voice agents. Yet many of the subtle micro-adjustments that make a conversation feel “smooth” and “natural” are difficult for humans to articulate: we often remain unaware of their existence until they are missing. In this talk, I will detail some of these (absent) practices and show how humans compensate for their absence by doing more of the interactional heavy lifting. I will argue that this “work to make technology work”, which weighs on humans, is often obscured by terms such as “social” or “conversational” robots. Moreover, an additional difficulty for voice agents is that they operate in informational ecologies that are sometimes deeply different from those in which ordinary human conversation takes place. For example, when a robot displays what it “hears” on its tablet (i.e., the result of its automatic speech recognition), this information is likely to strongly reconfigure the interaction and the actions that human participants ascribe to the robot. I will discuss some consequences of this form of “robot transparency” for conversation. Namely: what would happen if, in ordinary conversation, our interlocutors displayed on their foreheads a transcript of what they hear as we speak?
Papers covered during the talk
Rudaz, D., & Licoppe, C. (2024). “Playing the Robot’s Advocate”: Bystanders’ Descriptions of a Robot’s Conduct in Public Settings. Discourse & Communication, 18(4). link
Rudaz, D., Tatarian, K., Stower, R., & Licoppe, C. (2023). From Inanimate Object to Agent: Impact of Pre-Beginnings on the Emergence of Greetings with a Robot. ACM Transactions on Human-Robot Interaction, 12(3). link
Rudaz, D., & Licoppe, C. (2023, August). Public Speech Recognition Transcripts as a Configuring Parameter in Human-Agents Interactions. IEEE RO-MAN 2023, Busan. link