Social Robotic Companion

    The Huggable™ is a new type of robotic companion being developed at the MIT Media Lab for healthcare, education, and social communication applications. The Huggable™ is designed to be much more than a fun interactive robotic companion; it is designed to function as an essential member of a triadic interaction. The Huggable™ is therefore not designed to replace any particular person in a social network, but rather to enhance that human social network.
    It features a full-body sensitive skin with over 1,500 sensors, quiet back-drivable actuators, video cameras in the eyes, microphones in the ears, an inertial measurement unit, a speaker, and an embedded PC with 802.11g wireless networking. An important design goal of the Huggable™ is to make the technology invisible to the user. You should not think of the Huggable™ as a robot but rather as a richly interactive teddy bear. The actuators are designed to be silent and back-drivable so that as the Huggable™ moves, you do not hear or feel gears. The movements, gestures, and expressions of the bear convey a personality-rich character, not a robotic artifact. A soft silicone-based skin covers the entire bear to give it a more lifelike feel and heft, so you do not feel the technology underneath. Holding the Huggable™ feels more like holding a puppy than a pillow-like plush doll.
    We worked with various Media Lab sponsors to create a series of Huggables for real-world applications and trials. We also collaborated with Microsoft Research, using Microsoft Robotics Studio to develop the communication avatar implementation.

    Website Interface

    We have developed a web interface to enable a remote operator (e.g., educator, grandparent, friend, etc.) to view the state of the robot and evoke its behaviors. The web interface is a combination of a website and an application for streaming audio and video to and from the robot. The website includes a diagram that shows the robot’s body pose trajectory over time (via the potentiometer sensors), several buttons to execute different actions (movement and sound), an interface to enter text for the robot to speak via speech synthesis, and various check-boxes to toggle ON/OFF several of the aforementioned technologies. There are two video streams, one incoming from the robot’s video camera and another from the 3D virtual model of the robot. There is also a small animated cartoon used to indicate the whole-body gesture recognition state of the robot (e.g., whether it is being picked up, rocked, bounced, etc.). The operator can also talk to the user (i.e., the child interacting directly with the Huggable) through the robot’s speaker and listen to the user via the robot’s microphones.
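    As a rough illustration of how such an interface can drive the robot, the sketch below shows the kind of operator-to-robot command messages a website like this might emit (trigger an animation or sound, speak text via speech synthesis, toggle a subsystem). This is a minimal sketch in Python; the message format, field names, and the make_* helper functions are assumptions for illustration, not the Huggable's actual protocol.

# Minimal sketch (not the actual Huggable implementation) of operator -> robot
# command messages that a web interface like the one described above might
# send. All message types, field names, and helper names are illustrative
# assumptions.
import json


def make_action_command(action_name):
    """Ask the robot to play a prerecorded movement or sound."""
    return json.dumps({"type": "action", "name": action_name})


def make_speech_command(text):
    """Ask the robot to speak the operator's text via speech synthesis."""
    return json.dumps({"type": "speak", "text": text})


def make_toggle_command(subsystem, enabled):
    """Toggle one of the robot's subsystems on or off (a checkbox on the site)."""
    return json.dumps({"type": "toggle", "subsystem": subsystem, "enabled": enabled})


if __name__ == "__main__":
    print(make_action_command("wave"))
    print(make_speech_command("Hello! Shall we read a story together?"))
    print(make_toggle_command("gesture_recognition", True))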

    Stale Panorama

    One challenge of teleoperation is providing the remote operator with adequate situational awareness of the context surrounding the robot. In particular, the robot’s camera often has a much narrower field of view than human peripheral vision, which gives the operator a sense of tunnel vision. This is also an issue in the social interaction between the user (child) and the Huggable, because the user assumes the Huggable has a human-like field of view. To cope with this issue, we have implemented a “stale panorama” interface. To build the stale panorama, the robot autonomously captures video frames and the associated head angles as it looks around the room. The captured frames are then projected onto a much larger canvas. The result is a collage of still images that presents the remote operator with a panorama of the environment.
    The remote operator receives a real-time video stream from the robot’s eye camera, as indicated by the “gaze window” on the GUI. The operator can direct the robot’s gaze to a new location by dragging the “target gaze” window to a different location in the stale panorama. The robot then performs inverse kinematics (IK) to reposition its head so that the real-time video feed shown through the gaze window aligns with the target gaze window.
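    The sketch below illustrates, under assumed numbers, the core bookkeeping behind such a stale panorama: each still frame is pasted onto the canvas at a position derived from the head pan/tilt angles at capture time, and dragging the target gaze window reduces to the inverse mapping from a canvas point back to pan/tilt targets for the head (the full IK in the real system also accounts for body pose). The camera field of view, frame size, canvas size, and function names are illustrative assumptions.

# Minimal sketch of a "stale panorama" under assumed parameters: still frames
# captured at known head pan/tilt angles are pasted onto a larger canvas, and
# dragging the target gaze window is the inverse angle lookup. The field of
# view, frame size, and canvas size are illustrative, not the Huggable's
# actual values.
import numpy as np

FOV_H_DEG, FOV_V_DEG = 45.0, 35.0      # assumed camera field of view
FRAME_W, FRAME_H = 320, 240            # assumed camera frame size (pixels)
PX_PER_DEG_X = FRAME_W / FOV_H_DEG
PX_PER_DEG_Y = FRAME_H / FOV_V_DEG
CANVAS_W, CANVAS_H = 1600, 800         # panorama canvas size (pixels)


def frame_origin(pan_deg, tilt_deg):
    """Top-left canvas pixel of a frame captured at the given head angles."""
    x = CANVAS_W / 2 + pan_deg * PX_PER_DEG_X - FRAME_W / 2
    y = CANVAS_H / 2 - tilt_deg * PX_PER_DEG_Y - FRAME_H / 2
    return int(x), int(y)


def paste_frame(canvas, frame, pan_deg, tilt_deg):
    """Overwrite the canvas region covered by this (now 'stale') frame."""
    x, y = frame_origin(pan_deg, tilt_deg)
    canvas[y:y + FRAME_H, x:x + FRAME_W] = frame
    return canvas


def target_to_head_angles(target_x, target_y):
    """Canvas point under the dragged target gaze window -> pan/tilt command."""
    pan = (target_x - CANVAS_W / 2) / PX_PER_DEG_X
    tilt = -(target_y - CANVAS_H / 2) / PX_PER_DEG_Y
    return pan, tilt


canvas = np.zeros((CANVAS_H, CANVAS_W, 3), dtype=np.uint8)
frame = np.full((FRAME_H, FRAME_W, 3), 128, dtype=np.uint8)
canvas = paste_frame(canvas, frame, pan_deg=20.0, tilt_deg=5.0)
print(target_to_head_angles(1000, 350))   # head angles that would center this point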

    Wearable Interface

    We have also developed a wearable interface consisting of a set of motion capture devices that the remote operator wears to stream his or her body posture and gestures to the Huggable. It supports two methods: direct control and gesture recognition. The operator wears orientation sensor units on the head and arms while holding a Wii Remote in one hand and a Nunchuk in the other. Gestures performed with the Wii controllers can be recognized as abstract gestures such as ‘waving arms’, ‘holding arms up’, and ‘holding arms forward’ and sent to the robot so that it mimics the same gestures. Alternatively, the orientation information captured by the arm-strapped sensors can be used to directly drive the robot’s arms and neck.
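    The following minimal sketch contrasts the two modes under simple assumptions: direct control maps an arm-mounted orientation reading straight onto robot joint targets (with assumed joint limits), while gesture recognition reduces the readings to a small set of abstract gesture labels. The thresholds, joint names, and limits are placeholders; the actual system's Wii Remote gesture classifier is not reproduced here.

# Minimal sketch of the two wearable-control modes under simple assumptions.
# Joint names, joint limits, and gesture thresholds are placeholders.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))


def direct_control(arm_pitch_deg, arm_yaw_deg):
    """Map an arm-mounted orientation sensor reading straight onto robot joints."""
    return {
        "shoulder_pitch": clamp(arm_pitch_deg, -90.0, 90.0),  # assumed limits
        "shoulder_yaw": clamp(arm_yaw_deg, -45.0, 45.0),
    }


def recognize_gesture(left_pitch_deg, right_pitch_deg):
    """Reduce both arm readings to a coarse abstract gesture label."""
    if left_pitch_deg > 60 and right_pitch_deg > 60:
        return "holding arms up"
    if abs(left_pitch_deg) < 20 and abs(right_pitch_deg) < 20:
        return "holding arms forward"
    return "unknown"


print(direct_control(30.0, -10.0))
print(recognize_gesture(70.0, 75.0))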

    Sympathetic Interface

    The sympathetic interface consists of a waldo-like device that maps to the Huggable robot’s body and joint angles. As the remote operator moves the waldo of the Huggable, joint angle positions are streamed in real time to command the joint motors of the Huggable robot. For instance, the operator can directly control the gaze and pointing direction of the robot by moving the waldo-Huggable’s arms and neck.
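    A minimal sketch of the sympathetic interface's control loop is shown below: the waldo's joint angles are sampled and streamed to the robot at a fixed rate, with light smoothing so that motor commands do not jump. The joint names, streaming rate, and the read/send functions are hypothetical stand-ins, not the actual Huggable implementation.

# Minimal sketch of the sympathetic interface's streaming loop: waldo joint
# angles are sampled and sent to the robot at a fixed rate, with light
# smoothing so motor commands do not jump. Joint names, rate, and the
# read/send functions are hypothetical stand-ins.
import time

JOINTS = ["neck_pan", "neck_tilt", "left_arm", "right_arm"]
SEND_HZ = 30        # assumed streaming rate
ALPHA = 0.3         # smoothing factor (0 = frozen, 1 = no smoothing)


def read_waldo_angles():
    """Stand-in for reading the waldo's joint potentiometers."""
    return {joint: 0.0 for joint in JOINTS}


def send_joint_targets(targets):
    """Stand-in for transmitting motor targets to the robot."""
    print(targets)


def stream(duration_s=1.0):
    smoothed = read_waldo_angles()
    end_time = time.time() + duration_s
    while time.time() < end_time:
        raw = read_waldo_angles()
        for joint in JOINTS:
            smoothed[joint] += ALPHA * (raw[joint] - smoothed[joint])
        send_joint_targets(smoothed)
        time.sleep(1.0 / SEND_HZ)


stream()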

    Robot Communication Avatar

    In a social communication application, the triad includes the Huggable™, a remote family member, and the child. For instance, the family member may be a parent who is away on a business trip, or a grandparent who lives far from the child. The Huggable™ enables a richer, multi-modal interaction, supporting communication and play through touch, shared space, vision, and speech. The remote family member interacts with the child through the Huggable™, controlling the semi-autonomous robot via a website and seeing and hearing the child through the eyes and ears of the Huggable™.

    Early Education Companion

    In a distance education application, the triad includes the Huggable™, the student, and the teacher. Here the Huggable™ serves as a semi-autonomous robotic communication avatar that a remotely located teacher controls via the internet to interact with a student in an educational activity. The teacher can see the child through the Huggable’s™ cameras, hear the child through the microphones, talk to the child through the speaker, and gesture and express itself through the animations the Huggable™ can perform. The Huggable™ can locally process how the child is touching it, picking it up, etc., and relay this information back to the educator.

    Therapeutic Companion

    In a healthcare application, the interaction triad includes the Huggable™, a member of the hospital or nursing home staff, and the patient or resident. Here the fully autonomous Huggable™ interacts with the patient to provide the therapeutic benefits of a companion animal, and it can also communicate behavioral data about this interaction to the nursing staff to assist them in promoting the improved well-being of the patient. To serve as a new type of robotic companion for therapeutic applications, our design goals are:

    • To be viscerally and emotionally pleasing to interact with, both with respect to how it feels to touch and how it responds to people.
    • To provide measurable health benefit to people, especially health benefits that arise from touch and social support.
    • To be a useful tool for the nursing staff or other care providers that augments existing animal assisted therapy programs (if present).
    • To be a computationally flexible platform that allows us to explore other health-related applications.

    One important and novel capability we are developing for the Huggable™ is its ability to participate in active relational and affective touch-based interactions with a person. Social-relational touch interactions play a particularly important role in the ability of companion animals to provide health benefits to people. Touch can convey a wide variety of communicative intents: an animal can be tickled, petted, scratched, patted, rubbed, hugged, or held in one’s arms or lap, to name just a few. To be effective, a therapeutic robotic companion must also be able to understand and appropriately respond to how a person touches it, e.g., by communicating with the right kind of emotive expression or by performing an appropriate touch response such as nuzzling.
    We have carried out initial experiments to assess the ability of the skin and somatic perceptual algorithms to classify the affective content of touch. A neural network was implemented to recognize nine classes of affective touch – tickling, poking, scratching, slapping, petting, patting, rubbing, squeezing, and contact. These nine classes were then grouped into six response types – teasing pleasant, teasing painful, touch pleasant, touch painful, punishment light, and punishment painful. The response type defines how the Huggable™ interprets the intent of the touch and what behavior to perform in response. For example, a pleasant touch should elicit a happy reaction, while strong punishment should result in a pain response.
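    The sketch below shows the shape of this pipeline under stated assumptions: features derived from the sensitive skin pass through a small neural network that selects one of the nine touch classes, which is then looked up in a class-to-response table. The feature dimensionality and network size are assumptions, the weights are random, and the class-to-response assignments are placeholders rather than the grouping used in our experiments.

# Minimal sketch of the affective-touch pipeline: skin-derived features pass
# through a small neural network that picks one of the nine touch classes,
# which is then mapped to a response type. Feature size and network shape are
# assumptions, the weights are random, and the class-to-response table is a
# placeholder, not the grouping used in the actual experiments.
import numpy as np

TOUCH_CLASSES = ["tickling", "poking", "scratching", "slapping", "petting",
                 "patting", "rubbing", "squeezing", "contact"]
RESPONSE_TYPES = ["teasing pleasant", "teasing painful", "touch pleasant",
                  "touch painful", "punishment light", "punishment painful"]

N_FEATURES, N_HIDDEN = 32, 16      # assumed size of the skin feature vector

rng = np.random.default_rng(0)
W1 = rng.normal(size=(N_FEATURES, N_HIDDEN))
W2 = rng.normal(size=(N_HIDDEN, len(TOUCH_CLASSES)))


def classify_touch(features):
    """One forward pass: skin features -> touch class label."""
    hidden = np.tanh(features @ W1)
    scores = hidden @ W2
    return TOUCH_CLASSES[int(np.argmax(scores))]


# Placeholder grouping of touch classes into response types (illustrative only).
RESPONSE_OF_CLASS = {cls: RESPONSE_TYPES[2] for cls in TOUCH_CLASSES}   # "touch pleasant"
RESPONSE_OF_CLASS["slapping"] = RESPONSE_TYPES[5]                       # "punishment painful"

features = rng.normal(size=N_FEATURES)
touch = classify_touch(features)
print(touch, "->", RESPONSE_OF_CLASS[touch])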

    “The Huggable”: Static Display of V2 Huggable Prototype.  Star Wars: Where Science Meets Imagination.  International Touring Exhibit, 2008.
    “The Huggable”: Interactive Demonstration of Third Generation Prototype at the San Raffaele Del Monte Tabor Foundation (HSR), Milan, Italy, May 6-7, 2008.
    “The Huggable”: Interactive Demonstration of Second Generation Prototype at the Space Between: Making Connections during Palliative Care Conference Sponsored by the Highland Hospice, Inverness, Scotland, November 8th-9th, 2007.
    “The Huggable”: Interactive Demonstration of Second Generation Prototype at the “Our Cyborg Future?” Exhibition as part of the Designs of the Time 2007 Festival, Newcastle, UK, October 19th, 2007.
    “The Huggable”: Interactive Demonstration of Second Generation Prototype at the AARP Life@50+ Conference, Boston, MA, September 6th-8th, 2007.
    “The Huggable”: Interactive Demonstration of Second Generation Prototype at the Robots at Play Festival, Odense, Denmark, August 23rd-25th, 2007.
    “The Huggable”: Static Display and Interactive Touch Sensor Panel as part of the “Our Cyborg Future?” Exhibition as part of the Designs of the Time 2007 Festival, Newcastle, UK, August 10th-October 27th, 2007.
    “The Huggable”: Booth at the World Healthcare Innovation and Technology Congress Washington, DC, November 1st-3rd, 2006.
    “The Huggable”: Interactive Technology Demonstration at Disney New Technology Forum: Best of SIGGRAPH 2006 at the Walt Disney Studios in Burbank, CA September 8th, 2006.
    “The Huggable”: Interactive Technology Demonstration in Emerging Technologies Pavilion at SIGGRAPH 2006, Boston, MA, July 30th-August 3rd, 2006.
    “The Huggable”: Static Display and Interactive Touch Sensor Panel as part of the “Tech’ing it to the Next Level: Highlights from iCampus, the MIT-Microsoft Alliance” Exhibition, MIT Museum, Cambridge, MA, May 23rd – December, 2006.
    “The Huggable”: Booth and Focus Groups at The Digital Future – Creativity without Boundaries Conference, Aviemore, Scotland, May 11th, 2006.
    “The Huggable”: Technology Demonstration at the IEEE Consumer Communications and Networking Conference, Las Vegas, NV, Jan 9-10, 2006.  W.D. Stiehl, J. Lieberman, C. Breazeal, L. Basel, R. Cooper, H. Knight, L. Lalla, A. Maymin, and S. Purchase.
    “The Huggable”: Technology Demonstration at Microsoft Research Faculty Summit, Microsoft Conference Center, Redmond, WA, July 19th, 2005.

    The Huggable™ is funded by the Things That Think (TTT) and Digital Life (DL) consortia, a Microsoft iCampus grant, a Highlands & Islands Enterprise grant, and generous donations from MIT philanthropists.

    Team Huggable™ Alumni

    Yingdan Gu
    Yi Wang
    Dimitrios Poulopoulos
    Jonathan Salinas
    Daniel Fuentes
    Dennis Miaw
    Justin Kosslyn
    Cheng Hau Tong
    Aseem Kishore
    Iris Cheung
    Levi Lalla
    Louis Basel
    Scott Purchase
    Michael Wolf
    Kuk-Hyun Han

    Visiting Contributors

    Jimmy Samaha
    Hanna Barnes
    Daniel Bernhardt

    Personal Robots Group

    Professor Cynthia Breazeal

    Graduate Students

    Dan Stiehl
    Jun Ki Lee
    Robert Toscano

    UROPs

    Allan Maymin
    Helen O’Keefe
    Heather-Marie C Knight
    Kristopher Dos Santos
    Nikki Akraboff
    Dave Foster
    Maria Prus
    Lily Liu
    Herman Mutiso
    Nick Grabenstein
    Jessica Colom

    External Collaborators

    The Distance Lab (Highlands & Islands Enterprise)

    Advisory Board

    Don Steinbrecher
    Irving Wladawsky-Berger

    Plush Bear Design

    Tammy Hendricks
    Stacey Dyer

    Animators

    Fardad Faridi
    Jason Wiser
