Glenda Hannibal is currently a PhD student in the Trust Robots Doctoral College and the Human-Computer Interaction group at TU Wien, while also working as an expert in the HUMAINT project at the European Commission. Her research focuses mainly on combining insights from philosophy and sociology with topics in social robotics and HRI. Glenda holds a BA and an MA in Philosophy from Aarhus University and has previously worked at the University of Vienna.
The topic of trust in human-robot interaction (HRI) is considered important for the successful uptake of advanced robotic systems in everyday life and society. In this talk, I will present my research on the nature of trust in HRI, which is motivated by my disciplinary background in philosophy. In the first part of the talk, I will offer some theoretical considerations to reframe the current discussion of trust in HRI. Specifically, I will examine existing conceptualizations of trust in HRI and explain why they are problematic for the specific case of agent-like robotic systems: the conceptualization of trust as reliance is too weak, while trust as an interpersonal relation is too strong. I will suggest an emphasis on vulnerability as the precondition of trust as a way to avoid this conceptual challenge. To conclude the theoretical part, I will reflect on how this strategy translates into empirical work. In the second part of the talk, I will present the results of two empirical studies that explored different aspects of vulnerability in trust in HRI from both a robot-centered and a human-centered perspective. In discussing how these studies contribute to current work on trust in HRI, I will also address some of the methodological issues often raised as major challenges. To end the talk, I will briefly discuss some broader reflections on studying trust in HRI that could not be included here but remain highly relevant.