Social Learning in Physical and Virtual Worlds
Personal robots are an emerging technology with the potential to have a significant positive impact across broad applications in the public sector, including eldercare, healthcare, and education. Given the richness and complexity of human life, it is widely recognized that personal robots must be able to adapt to and learn within the human environment, from ordinary citizens, over the long term. Although tremendous advances have been made in machine learning theory and techniques, existing frameworks do not adequately consider the human factors involved in developing robots that learn from people who lack technical expertise but bring a lifetime of experience in learning socially with others. We refer to this area of inquiry as Socially Situated Robot Learning (SSRL).

This work is motivated by our desire to develop social robots that can successfully learn what matters to the average citizen, over the long term, from the kinds of interactions that people naturally offer.

To make progress, we need to better understand the human factors associated with teaching robots. For instance, we must characterize issues such as limited patience, long-term engagement, ambiguous human input, transparency of the learning process, the mental models people form of the robot learner, and how the robot's behavior influences the human teaching process. Although early explorations have provided a glimpse into these issues, the research community needs to understand them far more deeply. Such findings will be critical to informing the development of more effective robot learners that better support the human teaching process and more successfully learn what people intend to teach, in a transparent and comprehensible way.

We approach this problem from a human-robot interaction (HRI) perspective, empirically investigating what the average citizen wants to teach robots and the process by which they go about doing so. We shall implement the technologies required to conduct human subjects experiments involving mass participation of the general public at the Boston Museum of Science. Our first milestone is a two-week pilot at the Museum in the summer of 2009. With further funding, we hope to conduct longer-duration experiments in the real world.

From this initial experiment, we shall amass a corpus comprising the complete set of multi-modal interactions that capture the activities and experiences that transpire between people and our robotic systems. We recognize that there are a number of reasons why it is difficult to conduct a longitudinal robot learning experiment with the general public.
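To make the notion of a multi-modal corpus concrete, the sketch below shows one possible way such interaction records might be structured so that physical and virtual sessions can be logged and analyzed with the same tools. All class and field names here are hypothetical, illustrative choices, not the project's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record types for the multi-modal interaction corpus:
# each session time-stamps what the person did, what the robot
# perceived, and how the robot responded.

@dataclass
class InteractionEvent:
    timestamp: float   # seconds from session start
    modality: str      # e.g. "speech", "gesture", "touch", "robot_action"
    payload: dict      # raw sensor data or action parameters

@dataclass
class Session:
    session_id: str
    platform: str      # "physical" or "virtual"
    events: List[InteractionEvent] = field(default_factory=list)

    def log(self, timestamp: float, modality: str, payload: dict) -> None:
        """Append one time-stamped multi-modal event to the session."""
        self.events.append(InteractionEvent(timestamp, modality, payload))

    def by_modality(self, modality: str) -> List[InteractionEvent]:
        """Filter the session's events by a single modality channel."""
        return [e for e in self.events if e.modality == modality]
```

Keeping one uniform event stream per session, tagged by platform, is what would let the same visualization and analysis tools run over both the Museum installation and the virtual world.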

- One issue is time. Physical robots are a limited resource: only a few people can interact with a robot at any given time, and robots must behave at human time scales. Issues of maintenance and robustness often restrict robots to running for relatively short periods. Commercial robot platforms have been used to conduct longitudinal experiments with human subjects, but not in the context of interactions that advance robot learning objectives. To date, the most successful platforms for extended learning interactions with the general public are computer and video game characters.

- Another issue is engagement. It is difficult to develop robotic systems that people are willing to interact with over an extended period of time. This is particularly difficult when designing robots that are expected to learn from natural multi-modal interactions. Even learning simple tasks or concepts from human subjects is difficult for robots because of the challenging perceptual problems introduced by the variability and richness of human behavior. One way that researchers have surmounted the engagement issue is by tele-operating the robot in a Wizard-of-Oz fashion.

- The sheer technical difficulty of having robots learn from human subjects in the real world is significant, given the complexity, unstructured nature, and inherent uncertainty of interactions with the real world and real people. This, of course, is exacerbated by the fact that these interactions are poorly understood (which motivates this work in the first place). Promisingly, appropriate use of simulation in tandem with physical robotic systems has been an effective way to make progress on difficult problems that require learning over extensive trials but must ultimately perform in the physical world.

To surmount these hurdles, we propose a methodology that combines virtual world robots with state-of-the-art social robots and a sliding autonomy interface. Real robots interact with Museum visitors in a playroom environment inspired by a toddler preschool scenario - a sort of "robot romper room".
The virtual world installation mirrors the real-world interactions, using virtual robots in a corresponding virtual playroom. The virtual part of this experiment can also run for a much longer duration than the physical installation.

These two technological platforms are being designed to work together (leveraging their respective strengths while mitigating their limitations) to enable us to acquire a rich corpus of human behavioral data and learning experiences for the real and virtual robots. The design of the virtual installation, the visualization and analysis tools, the real-world robot installation, the sliding autonomy interface, and the cognitive architecture to support learning via face-to-face interactions or tele-operation are the technical cornerstones of this effort.
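One of these cornerstones, the sliding autonomy interface, blends autonomous behavior with Wizard-of-Oz tele-operation. The sketch below shows one minimal way such a switch could work: the robot acts on its own policy when confident and otherwise falls back to a safe behavior while flagging the human operator, whose commands always take priority. This is an illustrative sketch under our own assumptions, not the project's implementation; all names and the threshold value are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple

@dataclass
class Action:
    name: str
    source: str  # "autonomous", "teleop", or "fallback"

class SlidingAutonomyController:
    """Hypothetical sliding-autonomy switch between a learned policy
    and a human tele-operator (Wizard-of-Oz style)."""

    def __init__(self,
                 policy: Callable[[Dict], Tuple[str, float]],
                 confidence_threshold: float = 0.7):
        self.policy = policy                       # observation -> (action, confidence)
        self.threshold = confidence_threshold
        self.teleop_command: Optional[str] = None  # set by the operator's interface

    def step(self, observation: Dict) -> Action:
        # An explicit operator command always overrides autonomy.
        if self.teleop_command is not None:
            cmd, self.teleop_command = self.teleop_command, None
            return Action(cmd, source="teleop")
        action, confidence = self.policy(observation)
        if confidence >= self.threshold:
            return Action(action, source="autonomous")
        # Low confidence: execute a safe attention-holding behavior;
        # in a full system this would also alert the operator.
        return Action("look_at_teacher", source="fallback")
```

Logging the `source` of each action alongside the corpus data is what would let us later separate genuinely autonomous learning episodes from operator-assisted ones.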

We shall analyze these interactions to extract lessons for how the general public approaches the task of teaching social robots. This shall inform the development of new extensions to our cognitive-affective learning architecture to improve the robot's ability to learn from these socially situated interactions. Once we have this corpus, it also opens new questions: what methods and techniques can consolidate what is learned by each of the robots (virtual and physical), and how can this body of knowledge and skills be transferred and shared by all? These are fascinating and important questions, and there are many other scientific questions that this corpus will enable us to investigate.

The Boston Museum of Science is an ideal collaborator given that access to the general public is crucial for this proposed activity. The Museum receives 1.5 million visitors per year from a wide range of audiences (including underserved communities), and has extensive experience in providing high-quality technology learning experiences that achieve the Museum's educational mission. This strategic partnership gives us the opportunity to involve the general public in multiple ways: as participants in our experiments, or as trained team members who help us staff the experiment. In either case, we hope this outreach activity not only helps us achieve our scientific goals, but also educates and excites K-12 students to consider careers in the STEM (science, technology, engineering, mathematics) areas.

This project is funded by a grant from Microsoft Research, 9/01/2007-9/01/2009.