Papers
C. Breazeal, C. Kidd, A. L. Thomaz, G. Hoffman, M. Berlin (2005). “Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork.” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 708-713.

C. Breazeal, A. Brooks, D. Chilongo, J. Gray, G. Hoffman, C. Kidd, H. Lee, J. Lieberman, A. Lockerd (2004). “Working Collaboratively with Humanoid Robots.” In Proceedings of the IEEE/RAS Fourth International Conference on Humanoid Robots (Humanoids 2004), Los Angeles, CA. 253-272.

G. Hoffman and C. Breazeal (2004). “Collaboration in Human-Robot Teams.” In Proceedings of the AIAA First Intelligent Systems Technical Conference, Chicago, IL.


Teamwork
Using joint intention theory as our theoretical framework, our approach integrates learning and collaboration through a goal-based task structure. In any collaboration, agents work together as a team to solve a common problem. Team members share a goal and a common plan of execution (Grosz 1996). Bratman's analysis of Shared Cooperative Activity (SCA) introduces the idea of meshing individual sub-plans into a joint activity. In our work, we generalize this concept to a process of dynamically meshing sub-plans between human and robot.

Bratman also defines prerequisites for an activity to be considered shared and cooperative: he stresses the importance of mutual responsiveness, commitment to the joint activity, and commitment to mutual support.

Cohen et al. support these guidelines and provide the notion of joint stepwise execution. Their theory also predicts that an efficient and robust collaboration scheme in a changing environment requires an open channel of communication. Sharing information through communication acts is critical given that each teammate often has only partial knowledge relevant to solving the problem, different capabilities, and possibly diverging beliefs about the state of the task.

Our work with our humanoid robot, Leonardo, integrates these ideas and uses collaborative discourse with accompanying gestures and social cues to teach the robot a structurally complex task. Having learned the representation for the task, the robot then performs it shoulder-to-shoulder with a human partner, using social communication acts to dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot. We have evaluated the efficacy of these non-verbal cues in a human subjects study and found that they improve the robustness, transparency, and efficiency of human-robot teamwork.
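To make the idea of a goal-based task structure with dynamic sub-plan meshing concrete, here is a minimal, hypothetical sketch. The goal-tree representation, the `mesh_step` turn-taking loop, and the capability predicate are illustrative assumptions, not the actual Leonardo implementation; the real system uses gestures and social cues where this sketch returns strings.

```python
# Hypothetical sketch of a goal-based task structure with dynamic
# sub-plan meshing between robot and human. Names and structure are
# assumptions for illustration, not the actual Leonardo system.

from dataclasses import dataclass, field
from typing import Callable, Iterator, List


@dataclass
class Goal:
    """A node in the task's goal tree; leaves are executable actions."""
    name: str
    subgoals: List["Goal"] = field(default_factory=list)
    done: bool = False

    def pending_leaves(self) -> Iterator["Goal"]:
        # Yield unfinished leaf goals in order: a simple stand-in for
        # selecting the next executable sub-plan.
        if self.done:
            return
        if not self.subgoals:
            yield self
        else:
            for g in self.subgoals:
                yield from g.pending_leaves()


def mesh_step(task: Goal, robot_can_do: Callable[[Goal], bool]) -> str:
    """One turn of stepwise joint execution: the robot takes the next
    pending sub-goal it is capable of; otherwise it emits a
    communication act requesting the human partner's help."""
    for goal in task.pending_leaves():
        if robot_can_do(goal):
            goal.done = True
            return f"robot does {goal.name}"
        return f"robot asks human to do {goal.name}"
    return "task complete"


# Example: a two-block task where the robot can only reach the red block.
task = Goal("stack", [Goal("place-red"), Goal("place-blue")])
reachable = lambda g: g.name == "place-red"
print(mesh_step(task, reachable))  # robot does place-red
print(mesh_step(task, reachable))  # robot asks human to do place-blue
```

The point of the sketch is that the allocation of sub-goals is decided at execution time from each partner's capabilities, rather than being fixed in advance, which is what lets the same learned task representation support different human-robot teamings.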
This video highlights the collaborative and communication skills of Leonardo.
This video illustrates Leo learning how to perform a collaborative task with a human (e.g., building a sailboat and a smiley face out of colored blocks).