Website Interface
We have developed a web interface that enables a remote operator (e.g., an educator, grandparent, or friend) to view the state of the robot and trigger its behaviors. The web interface combines a website with an application for streaming audio and video to and from the robot. The website includes a diagram that shows the robot’s body pose trajectory over time (via the potentiometer sensors), several buttons that execute different actions (movement and sound), an interface for entering text that the robot speaks via speech synthesis, and various check-boxes to toggle several of the aforementioned technologies on and off. There are two video streams: one from the robot’s video camera and another from the 3D virtual model of the robot. A small animated cartoon indicates the robot’s whole-body gesture recognition state (e.g., whether it is being picked up, rocked, bounced, etc.). The operator can also talk to the user (i.e., the child interacting directly with the Huggable) through the robot’s speaker and listen to the user via the robot’s microphones.
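
As a rough illustration of how the GUI controls described above might translate into commands sent to the robot, the sketch below shows one possible message format. The field names, JSON encoding, socket transport, and the send_command helper are assumptions for illustration, not the protocol actually used by the Huggable system.

    import json
    import socket


    def send_command(sock: socket.socket, command: dict) -> None:
        """Serialize one operator command and send it to the robot, newline-delimited."""
        sock.sendall((json.dumps(command) + "\n").encode("utf-8"))


    # Examples of the kinds of commands the GUI elements could issue:
    speak  = {"type": "speak", "text": "Hi! Want to read a story together?"}
    wave   = {"type": "action", "name": "wave_arm"}
    toggle = {"type": "toggle", "feature": "gesture_recognition", "enabled": False}
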
Stale Panorama
One challenge of teleoperation is providing the remote operator with adequate situational awareness of the context surrounding the robot. In particular, the robot’s camera has a much narrower field of view than human peripheral vision, which gives the operator a sense of tunnel vision. This is also an issue in the social interaction between the user (child) and the Huggable, because the user assumes the Huggable has a human-like field of view.

To address this issue, we have implemented a “stale panorama” interface. To build the stale panorama, the robot autonomously captures video frames and the associated head angles as it looks around the room. The captured frames are then projected onto a much larger canvas. The result is a collage of still images that presents the remote operator with a panorama of the environment.
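
A minimal sketch of this frame-to-canvas projection is given below, assuming a fixed camera field of view and that each frame arrives with the pan/tilt angles of the head at capture time. The constants, resolutions, and the paste_frame helper are illustrative assumptions, not values from the Huggable codebase.

    import numpy as np

    FRAME_W, FRAME_H = 320, 240          # camera resolution (assumed)
    FOV_X, FOV_Y = 60.0, 45.0            # camera field of view in degrees (assumed)
    PAN_RANGE, TILT_RANGE = 180.0, 90.0  # head motion range in degrees (assumed)

    # Pixels per degree, derived from the frame size and field of view.
    PPD_X = FRAME_W / FOV_X
    PPD_Y = FRAME_H / FOV_Y

    # Canvas large enough to hold frames across the full head motion range.
    CANVAS_W = int((PAN_RANGE + FOV_X) * PPD_X)
    CANVAS_H = int((TILT_RANGE + FOV_Y) * PPD_Y)
    canvas = np.zeros((CANVAS_H, CANVAS_W, 3), dtype=np.uint8)

    def paste_frame(frame: np.ndarray, pan_deg: float, tilt_deg: float) -> None:
        """Project one captured frame onto the panorama canvas.

        pan/tilt are the head angles (degrees) at the moment of capture,
        measured from the centre of the head's motion range.
        """
        # Convert head angles to the top-left pixel position of the frame.
        cx = int((pan_deg + PAN_RANGE / 2) * PPD_X)
        cy = int((-tilt_deg + TILT_RANGE / 2) * PPD_Y)   # canvas y grows downward
        canvas[cy:cy + FRAME_H, cx:cx + FRAME_W] = frame
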

The remote operator receives a real-time video stream from the robot’s eye camera, displayed in the “gaze window” of the GUI. The operator can direct the robot’s gaze to a new location by dragging the “target gaze” window to a different position in the stale panorama. The robot then performs inverse kinematics (IK) to reposition its head so that the real-time video feed shown in the gaze window aligns with the target gaze window.
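
Continuing the panorama sketch above, one simple way to turn the dragged target gaze window back into head angles is to invert that projection before handing the result to the IK/motor layer. The function below reuses the illustrative constants defined earlier and is an assumption about the mapping, not the system’s actual IK routine.

    def target_to_head_angles(tx: float, ty: float) -> tuple[float, float]:
        """Map the centre of the dragged 'target gaze' window (canvas pixels)
        back to head pan/tilt angles, inverting paste_frame's projection."""
        pan = (tx - FRAME_W / 2) / PPD_X - PAN_RANGE / 2
        tilt = TILT_RANGE / 2 - (ty - FRAME_H / 2) / PPD_Y
        # Clamp to the head's mechanical limits before commanding the motors.
        pan = max(-PAN_RANGE / 2, min(PAN_RANGE / 2, pan))
        tilt = max(-TILT_RANGE / 2, min(TILT_RANGE / 2, tilt))
        return pan, tilt
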
Wearable Interface
We have also developed a wearable interface that consists of a set of motion capture devices that the remote operator wears to stream his/her body posture and gestures to the Huggable.

We support two methods: direct control and gesture recognition. The human operator wears a set of orientation sensor units on the head and arms while holding a Wii Remote and a Nunchuk, one in each hand. Gestures performed with the Wii controllers can be recognized as abstract gestures such as ‘waving arms’, ‘holding arms up’, or ‘holding arms forward’ and sent to the robot, which mimics the same gestures. Alternatively, the orientation information captured by the arm-strapped sensors can be used to directly drive the robot’s arms and neck.
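
The sketch below illustrates the direct-control path only: mapping one arm-strapped orientation sensor onto the corresponding robot joints with limit clamping. The joint names, limits, and the idea that the sensor reports roll/pitch/yaw in degrees are assumptions for illustration, not the Huggable API.

    def orientation_to_joint_angles(roll: float, pitch: float, yaw: float,
                                    joint_limits: dict[str, tuple[float, float]]) -> dict[str, float]:
        """Map one arm sensor's orientation onto the corresponding robot joints,
        clamping each angle to the joint's mechanical range."""
        raw = {"shoulder_pitch": pitch, "shoulder_roll": roll, "shoulder_yaw": yaw}
        return {name: max(lo, min(hi, raw[name]))
                for name, (lo, hi) in joint_limits.items() if name in raw}

    # Hypothetical limits; the resulting dict would then be streamed to the robot.
    LIMITS = {"shoulder_pitch": (-90.0, 90.0), "shoulder_roll": (-45.0, 120.0)}
    cmd = orientation_to_joint_angles(roll=30.0, pitch=-10.0, yaw=5.0, joint_limits=LIMITS)
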
Sympathetic Interface
The sympathetic interface consists of a waldo-like device that maps onto the Huggable robot’s body and joint angles. As the remote operator moves the waldo, its joint angle positions are streamed in real time to command the joint motors of the Huggable robot. For instance, the operator can directly control the gaze and pointing direction of the robot by moving the waldo-Huggable’s arms and neck.
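
A minimal sketch of such a streaming loop is shown below, assuming the waldo’s joint potentiometers can be read as angles and that the robot exposes a call for setting joint positions. The read_waldo_angles and set_joint_positions callables and the 50 Hz rate are placeholders, not the actual implementation.

    import time

    STREAM_HZ = 50  # update rate (assumed)

    def stream_waldo(read_waldo_angles, set_joint_positions, running=lambda: True):
        """Read the waldo joint angles and forward them to the robot in real time."""
        period = 1.0 / STREAM_HZ
        while running():
            angles = read_waldo_angles()   # e.g. {"neck_pan": 12.0, "left_arm": -30.0}
            set_joint_positions(angles)    # command the Huggable's joint motors
            time.sleep(period)
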

Papers
R. Toscano (2008). “Building a Semi-Autonomous Sociable Robot Platform for Robust Interpersonal Telecommunication”. M.Eng. thesis, Department of Electrical Engineering and Computer Science.

J.K. Lee, R. Toscano, D. Stiehl, C. Breazeal (2008). “The Design of a Semi-Autonomous Robot Avatar for Family Communication and Education”. Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN-08). Munich, Germany.