Glenda Hannibal's research focuses on combining insights from philosophy and sociology with both theoretical and empirical work in human-robot interaction (HRI). She has previously worked in the Department of Sociology at the University of Vienna and as an expert for the HUMAINT project at the European Commission. Glenda holds a BA and MA in Philosophy from Aarhus University and is currently a PhD student in the Trust Robots Doctoral College and Human-Computer Interaction group at TU Wien.



The practical value of studying trust in HRI rests on the assumption that, in the long term, people will accept, interact, and collaborate more with robots that they trust or consider trustworthy. In this talk, I will present my research on vulnerability as a precondition of trust in HRI. In the first part, I will present some theoretical perspectives to re-frame the current discussion. I will argue that while the most commonly cited definitions of trust used in HRI recognize vulnerability as an essential element of trust, it is nevertheless often considered somewhat problematic. This is unfortunate, as I will show that an emphasis on vulnerability is in fact the key to our understanding of trust in HRI, and that previous empirical studies on vulnerability in HRI have failed to grasp its significance. To conclude the theoretical part, I will reflect on how this strategy translates into empirical work. In the second part of my talk, I will present the results of two empirical studies I have undertaken to explore trust in HRI in relation to vulnerability. To study human vulnerability, I will present work from an online survey, and to study robot vulnerability, I will present work from expert interviews. When discussing how these studies contribute to current work on trust in HRI, I will end the talk by reflecting on a few ethical aspects related to this theme.

Paper covered during the talk