January 7, 2011
Cheering for the new gesture HUD
After a week on vacation, I was excited to find an unexpected holiday gift in my inbox: code contributions.
From fixes for typos in the web UI, to animation updates, voice bridge updates, and even a whole new gesture HUD (!), my inbox was brimming with patches.
I’ve tried to acknowledge contributions in the various change logs (which you can find here, here and here), but I wanted to say a more public “thank you!” to all the contributors.
I would also like to add a word of encouragement to people who have already made changes to Open Wonderland code: please send us your patches! Whether they are simple or complicated, related to the core or to modules, we would all love to see the results of your hard work.
If you have a change you would like to contribute, I recommend you file a request for enhancement in the issue tracker, and attach your patch there to make it available to everyone. This will also allow the module owners to review the patch for inclusion in the Wonderland trunk (we just ask you to sign our contributor agreement).
Jonathan Kaplan, Open Wonderland Architect
July 5, 2010
Here is another project from the Spatial Media Group at the University of Aizu brought to us by the same team responsible for the Wonderland-CVE Bridge described in a previous Wonderblog post.
Mapping Emotions to Avatar Gestures
By Rasika Ranaweera, Doctoral student, University of Aizu, Japan
Representing emotions in collaborative virtual environments is important for making them realistic. Facial expressions of avatars have previously been used to display emotions in such environments. Avatars in Wonderland can be animated with a limited set of predefined gestures, but nothing prevents integrating new animations, whether built from new artwork or from combinations of existing content. Our goal is to introduce emotional communication to Wonderland to make it more realistic and natural. In this case, rather than introducing new gestures for the avatars of the virtual world, we mapped existing ones to emotions. We also used user-friendly keywords to trigger avatar gestures; for example, :cheer: triggers the “cheer” animation.
Avatars in a conversation with emotion-embedded text chat
The following table shows the emoticon-to-gesture/animation mapping used in our system.
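The keyword-triggering idea described above can be sketched as a small lookup over chat text. This is only an illustrative sketch, not the project's actual code: the class name, the `gestureFor` method, and all mappings except :cheer: → “cheer” are hypothetical, and a real integration would forward the result to Wonderland's avatar animation system.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch: scan a text-chat line for emoticon keywords
 * (e.g. ":cheer:") and look up the avatar gesture to trigger.
 */
public class EmotionGestureMapper {

    // Assumed keyword-to-gesture table; only :cheer: comes from the post,
    // the rest are placeholders standing in for the paper's mapping table.
    private static final Map<String, String> GESTURES = Map.of(
            ":cheer:", "cheer",
            ":wave:", "wave",
            ":laugh:", "laugh",
            ":bow:", "bow");

    // Matches colon-delimited keywords such as ":cheer:" in a chat line.
    private static final Pattern KEYWORD = Pattern.compile(":(\\w+):");

    /**
     * Returns the gesture name for the first recognized emoticon in the
     * chat line, or null if the line contains no mapped emoticon.
     */
    public static String gestureFor(String chatLine) {
        Matcher m = KEYWORD.matcher(chatLine);
        while (m.find()) {
            String gesture = GESTURES.get(m.group());
            if (gesture != null) {
                return gesture; // would be handed to the animation system
            }
        }
        return null;
    }
}
```

For instance, `gestureFor("Great job! :cheer:")` would return `"cheer"`, while a line with no mapped emoticon returns null and leaves the avatar's animation unchanged.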
Watch the demo video to see the gesture mapping in action.
This project has been written up in the following paper:
Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen, and Nicholas Nagel. Mapping selected emotions to avatar gesture. In IWAC: 1st Int. Workshop on Aware Computing, Japan Society for Fuzzy Theory and Intelligent Informatics. Sep. 2009, Aizu-Wakamatsu.