Our current research is focused on enabling situated multi-party human-robot interactions. In general, our work combines elements from machine learning, artificial intelligence, social psychology, and design.

Social Group Phenomena

We study group phenomena that are typical of human social encounters in the context of Human-Robot Interaction (HRI). Our hypothesis is that by studying these phenomena in HRI, we will be able to devise mechanisms that help robots cope with the complexity of multi-party interactions.

Our work in this direction has validated the idea that face formations, or F-formations, a term coined by A. Kendon, emerge in human-robot interactions [HRI, 2014; HRI LBR, 2015]. We have also proposed methods for studying group interactions with robots [CSCW WS, 2017] and studied social influence in HRI [CTS, 2011; HRI, 2018]. We are also excited about prosocial computing, especially prosocial robotics.

Social Perception

Our research investigates fundamental principles and algorithms to enable autonomous perception of human behavior and group social phenomena. For example, we have worked on enabling robots to identify spatial patterns of behavior that are typical of social group conversations [IROS, 2015]. Additionally, we are interested in developing methods to enable situated, interactive agents to reason about verbal [EMNLP, 2018] and non-verbal human behavior [HRI, 2017].
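To make the idea of detecting spatial patterns of group conversations concrete, here is a minimal sketch of one common approach: clustering people into conversational groups by projecting a candidate o-space center in front of each person and merging people whose centers nearly coincide. This is an illustrative toy, not the method from [IROS, 2015]; the `STRIDE` and `MAX_DIST` parameters and the greedy clustering are assumptions made for this example.

```python
import math

# Hypothetical example: each person is a tuple (x, y, heading) in meters/radians.
STRIDE = 1.0    # assumed distance from a person to the shared o-space center
MAX_DIST = 0.5  # assumed tolerance for two votes to share an o-space

def o_space_vote(person):
    """Project a candidate o-space center in front of the person."""
    x, y, theta = person
    return (x + STRIDE * math.cos(theta), y + STRIDE * math.sin(theta))

def detect_groups(people):
    """Greedily cluster people whose o-space votes nearly coincide."""
    votes = [o_space_vote(p) for p in people]
    groups = []
    assigned = [False] * len(people)
    for i in range(len(people)):
        if assigned[i]:
            continue
        group = [i]
        assigned[i] = True
        for j in range(i + 1, len(people)):
            if not assigned[j] and math.dist(votes[i], votes[j]) <= MAX_DIST:
                group.append(j)
                assigned[j] = True
        groups.append(group)
    return groups

# Two people facing each other form one group; a third facing away is alone.
people = [(0.0, 0.0, 0.0), (2.0, 0.0, math.pi), (5.0, 5.0, 0.0)]
print(detect_groups(people))  # → [[0, 1], [2]]
```

Real systems must additionally handle noisy pose estimates and people who are merely passing by, but the geometric intuition, that conversational partners orient toward a shared space, is the same.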

Autonomous Social Robot Behavior

We believe that machine learning is a key enabler of autonomous, social robot behavior in unstructured human environments. Thus, we explore data-driven learning methods, often in combination with model-based approaches, to enable robots to act appropriately in social settings. For example, we have used reinforcement learning to generate non-verbal robot behavior during group conversations [RO-MAN, 2016]. In collaboration with the Stanford Vision & Learning Lab, we are currently working on social robot navigation (see the JackRabbot website for additional information).
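As a toy illustration of reinforcement learning for non-verbal behavior (not the method from [RO-MAN, 2016]), consider a robot choosing a gaze target among N people in a conversation. The state, action, and reward definitions below are assumptions made for this sketch: the state is the current speaker, the action is whom to look at, and the robot receives a reward of 1 for gazing at the speaker.

```python
import random

N = 3  # number of people in the group (hypothetical)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def train(episodes=2000, seed=0):
    """Tabular Q-learning: q[state][action], state = speaker, action = gaze."""
    rng = random.Random(seed)
    q = [[0.0] * N for _ in range(N)]
    speaker = rng.randrange(N)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            gaze = rng.randrange(N)
        else:
            gaze = max(range(N), key=lambda a: q[speaker][a])
        reward = 1.0 if gaze == speaker else 0.0  # assumed reward signal
        next_speaker = rng.randrange(N)  # speaking turns change at random
        # Standard Q-learning update.
        q[speaker][gaze] += ALPHA * (
            reward + GAMMA * max(q[next_speaker]) - q[speaker][gaze]
        )
        speaker = next_speaker
    return q

q = train()
# After training, the greedy policy gazes at whoever is currently speaking.
policy = [max(range(N), key=lambda a: q[s][a]) for s in range(N)]
print(policy)
```

In practice, the state would include far richer social signals (attention, gestures, group membership) and the rewards would come from models of human conversational norms rather than a hand-coded rule, but the update loop has this same shape.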