Mapping Emotions to Avatar Gestures

July 5, 2010

Here is another project from the Spatial Media Group at the University of Aizu, brought to us by the same team responsible for the Wonderland–CVE Bridge described in a previous Wonderblog post.

Mapping Emotions to Avatar Gestures

By Rasika Ranaweera, Doctoral student, University of Aizu, Japan

Representing emotions in collaborative virtual environments is important for making them realistic. Facial expressions have previously been used to display avatar emotions in such environments. Avatars in Wonderland can be animated with a limited set of predefined gestures, but there is no barrier to integrating new animations along with new artwork or combinations of existing content. Our goal is to introduce emotional communication to Wonderland to make it more realistic and natural. Rather than introducing new gestures, we mapped existing gestures to the avatars of the virtual world. We also use user-friendly keywords to trigger avatar gestures; for example, :cheer: triggers the “cheer” animation.

Avatar cheering image

Avatars in a conversation with emotion embedded textchat

The following table shows the emoticon-to-gesture/animation mapping in our system.

Emoticon   Emotion    Representation/Animation
:S         Anger      TakeDamage
:(         Dislike    No
3:(        Fear       TakeDamage
:D         Joy        Laugh
{:(        Sadness    GoHome
:O         Surprise   Cheer
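The mapping above can be sketched as a small lookup that scans a chat line for an emoticon and returns the animation to trigger. This is an illustrative sketch, not the project's actual code; the class and method names are made up. Note that longer emoticons (3:( and {:() are checked before :( so that the shorter token does not shadow them.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of emoticon-to-gesture lookup, following the
// table above. Longer emoticons are registered first because ":(" is
// a substring of "3:(" and "{:(".
class EmoticonMapper {
    private static final Map<String, String> GESTURES = new LinkedHashMap<>();
    static {
        GESTURES.put("3:(", "TakeDamage"); // Fear
        GESTURES.put("{:(", "GoHome");     // Sadness
        GESTURES.put(":S",  "TakeDamage"); // Anger
        GESTURES.put(":(",  "No");         // Dislike
        GESTURES.put(":D",  "Laugh");      // Joy
        GESTURES.put(":O",  "Cheer");      // Surprise
    }

    /** Returns the animation for the first emoticon found, or null if none. */
    static String gestureFor(String chatLine) {
        for (Map.Entry<String, String> e : GESTURES.entrySet()) {
            if (chatLine.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }
}
```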

Watch the demo video to see the gesture mapping in action.

This project has been written up in the following paper:

Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen, and Nicholas Nagel. Mapping selected emotions to avatar gesture. In IWAC: 1st Int. Workshop on Aware Computing, Japan Society for Fuzzy Theory and Intelligent Informatics, Sep. 2009, Aizu-Wakamatsu.


Wonderland–CVE Bridge

June 25, 2010

I have received a number of guest blog posts from students in Japan and China. This article is the first in a series describing some interesting work being done at the University of Aizu in Japan.

Wonderland–CVE Bridge

By Rasika Ranaweera, Doctoral student, University of Aizu, Japan

The Wonderland–CVE Bridge allows any device that can connect to our CVE server to communicate with Wonderland, regardless of architectural differences. The Wonderland client includes an avatar system, which is responsible for controlling avatars, including their gestures. We used the avatar client plug-in, which initializes avatars, to also initialize the Wonderland–CVE Bridge.

CVE Bridge image

Bridge in action with NTT DoCoMo iappli i·Con mobile interface

What is CVE?

The CVE is a Java client-server protocol developed by the Spatial Media Group at the University of Aizu. Clients connect to a session server via channels, and clients that need to communicate with each other subscribe to the same channel. The CVE server captures updates from session clients and multicasts them (actually via replicated unicast) to the other clients on the relevant channels. An extensible “extra parameter” can also be used to exchange information beyond position details. This architecture allows multimodal displays to collaborate with each other, including mobile applications, motion platforms, map interfaces, spatial sound, and stereographic displays.
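The channel-based relay described above can be sketched as follows. This is a minimal illustration of the subscribe/publish pattern with replicated unicast, under assumed names; it is not the actual CVE API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the CVE relay pattern: clients subscribe to
// named channels, and the server re-sends each update to every other
// subscriber on the same channel ("replicated unicast"). All names
// here are assumptions, not the real CVE classes.
class ChannelServer {
    interface Client { void receive(String update); }

    private final Map<String, List<Client>> channels = new HashMap<>();

    void subscribe(String channel, Client c) {
        channels.computeIfAbsent(channel, k -> new ArrayList<>()).add(c);
    }

    /** Forward an update to every other subscriber of the channel. */
    void publish(String channel, Client sender, String update) {
        for (Client c : channels.getOrDefault(channel, List.of())) {
            if (c != sender) {
                c.receive(update);
            }
        }
    }
}
```

The "extra parameter" mentioned above would ride along in the update payload, letting clients exchange more than position data over the same channels.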

CVE System Schematic

CVE Bridge System Schematic

CVE Clients

Mobile Interfaces

We have tested the bridge communication with mobile phone interfaces programmed with J2ME on the NTT DoCoMo iappli mobile phone platform. For instance, “Musaic” controls narrowcasting in virtual concerts including local spatialization, µVR4U2C features panoramic images and intensity modulated music, i·Con controls narrowcasting in virtual chat spaces, and i·Con-s models a chromastereoptic spiral spring for such panoramic interfaces.

Rotary Motion Platform

We imported a roller coaster model into a Wonderland world and let an avatar walk along its path. Then, with some modifications, we recorded the avatar's location periodically and saved it to an XML file. With another application, called “Script Interpreter,” the location events can be streamed to the CVE server to trigger the motion platform, so that a user-delegated avatar can ride the roller coaster while the real user feels the proprioceptive sensation through the motion platform.
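The recording step above can be sketched as periodic position sampling serialized to XML for later replay. The element and attribute names below are hypothetical, chosen only to illustrate the shape of such a file.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Hypothetical sketch: sample an avatar's position periodically and
// serialize the samples to a simple XML document that a script
// interpreter could later stream to the CVE server.
class PathRecorder {
    static class Sample {
        final long timeMs;
        final float x, y, z;
        Sample(long timeMs, float x, float y, float z) {
            this.timeMs = timeMs; this.x = x; this.y = y; this.z = z;
        }
    }

    private final List<Sample> samples = new ArrayList<>();

    /** Called periodically with the avatar's current position. */
    void record(long timeMs, float x, float y, float z) {
        samples.add(new Sample(timeMs, x, y, z));
    }

    /** Serialize the recorded path; element names are illustrative. */
    String toXml() {
        StringBuilder sb = new StringBuilder("<path>\n");
        for (Sample s : samples) {
            sb.append(String.format(Locale.US,
                "  <sample t=\"%d\" x=\"%.2f\" y=\"%.2f\" z=\"%.2f\"/>%n",
                s.timeMs, s.x, s.y, s.z));
        }
        return sb.append("</path>\n").toString();
    }
}
```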

Multiplicity

“Multiplicity” is a workstation-based narrowcasting interface. An arbitrary number of avatars can be instantiated and dynamically associated with their respective users at runtime. Attributes of the narrowcasting privacy functions extend the figurative avatars to denote the invoked filters. Multiplicity features “mute,” “deafen,” “select,” and “attend” functions, which control which avatars are heard in a shared conference space. The bridge can now synchronize Wonderland avatars, and in the future we intend to implement the narrowcasting functions in the Wonderland avatar system.
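The four filters can be read as an audibility predicate: mute and deafen explicitly exclude a source or sink, while select and attend explicitly include one, implicitly excluding unselected peers. The sketch below encodes that reading; it is our interpretation of the narrowcasting model, not Multiplicity's actual code, and all names are placeholders.

```java
import java.util.Collection;

// Sketch of a narrowcasting audibility predicate for "mute", "deafen",
// "select", and "attend" (an illustrative reading, not Multiplicity's
// real implementation).
class Narrowcasting {
    static class Avatar {
        final String name;
        boolean muted, deafened, selected, attended;
        Avatar(String name) { this.name = name; }
    }

    /** True if audio from 'source' should reach 'sink'. */
    static boolean audible(Avatar source, Avatar sink, Collection<Avatar> all) {
        boolean anySelected = all.stream().anyMatch(a -> a.selected);
        boolean anyAttended = all.stream().anyMatch(a -> a.attended);
        // A source speaks unless muted, or implicitly excluded because
        // some other avatar is selected and it is not.
        boolean sourceActive = !source.muted && (!anySelected || source.selected);
        // A sink hears unless deafened, or implicitly excluded because
        // some other avatar is attended and it is not.
        boolean sinkActive = !sink.deafened && (!anyAttended || sink.attended);
        return sourceActive && sinkActive;
    }
}
```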

Map Interfaces

Map interfaces such as “PriMap” and “2.5D Map” display the locations of sources controlled by different channels, and also allow users to select and move the sources and sinks. In addition to sources and sinks, we can control the avatars and have their locations updated and displayed on the maps.

About the Team

  • Rasika Ranaweera – Doctoral student at the University of Aizu, Japan, whose research interests include multi-modal interfaces, immersive virtual environments, and spatial sound.
  • Senaka Amarakeerthi – Doctoral student at the University of Aizu, Japan, whose research interests include emotion recognition and emotion expression in virtual environments.
  • Prof. Michael Cohen – Professor in the Dept. of Computer and Information Systems, University of Aizu, and Director of the Spatial Media Group.
  • Prof. Michael Frishkopf – Assoc. Prof. in the Dept. of Music, Associate Director of the Canadian Centre for Ethnomusicology, and Associate Director for Multimedia at FolkwaysAlive!
  • Dr. Nick Nagel – Senior fellow, architect, researcher, and developer at the Media Grid Institute, and lecturer at the Boston College of Advancing Studies.

For More Information

For more information about this project, please refer to the following publication:

Rasika Ranaweera, Nick Nagel, and Michael Cohen. Wonderland–CVE bridge. In 12th Int. Conf. on Humans and Computers, Dec. 2009, Hamamatsu.

