Wonderland Webcaster, part one

April 26, 2011

In this blog post, Christian O’Connell, a student at the University of Essex in the UK, provides a brief overview of a new module: the Wonderland Webcaster.

The uses of virtual worlds sometimes extend beyond individual interaction: users can benefit from presentations and events without requiring an in-world presence.

The Webcaster module integrates RTMP – Adobe’s media streaming protocol – with Open Wonderland. Built on the Xuggler libraries, the Webcaster relays in-world video footage from a camera cell to a Red5 media server, from which it can be viewed anywhere with a suitable Flash or RTMP-capable client.
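
Under the hood the pattern is straightforward: grab each frame rendered by the camera cell, encode it, and push it to the media server. Below is a minimal sketch of that loop using Xuggler’s mediatool API – it is not the module’s actual code, and the file name, frame size, frame rate, and captureFrame() helper are all assumptions for illustration. For simplicity the sketch writes a local FLV file; the real module targets Red5 with an rtmp:// URL instead, which also means telling Xuggler to use the FLV container format explicitly, since an RTMP URL has no file extension to guess from.

import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class WebcastSketch {

    public static void main(String[] args) throws InterruptedException {
        final int width = 640, height = 480;   // assumed capture size

        // The writer infers the FLV container from the file extension;
        // FLV1 is one codec that Flash/RTMP clients can decode.
        IMediaWriter writer = ToolFactory.makeWriter("webcast.flv");
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_FLV1, width, height);

        long start = System.nanoTime();
        for (int i = 0; i < 300; i++) {        // ~10 seconds at ~30 fps
            BufferedImage frame = captureFrame(width, height);
            writer.encodeVideo(0, frame, System.nanoTime() - start,
                    TimeUnit.NANOSECONDS);
            Thread.sleep(33);
        }
        writer.close();
    }

    // Stand-in for grabbing the camera cell's rendered image; here it
    // just returns a blank frame in a pixel format Xuggler accepts.
    private static BufferedImage captureFrame(int w, int h) {
        return new BufferedImage(w, h, BufferedImage.TYPE_3BYTE_BGR);
    }
}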

The advantage of this functionality is that it brings the experience of seeing a Wonderland virtual world to a much wider audience without adding load to the Darkstar server, and it makes content more accessible to those who might otherwise shy away from virtual world platforms.

A demo of the module in action can be seen below. On the left is a regular web browser with a Flash plugin, displaying a web page served by the Wonderland web server. On the right is a user controlling the Webcaster cell (which appears in-world much like a webcam).

The Webcaster has a modular design that makes it highly extensible; it can easily be adapted to add features such as stream recording and RTP support. Currently available in the unstable modules directory, the Webcaster requires an installation of the Red5 server and the Xuggler core update. The current version does not include audio – this is still under development – so expect to see additional features in time.

Instructions:

  1. Follow the instructions for adding video support to Wonderland. This will create a new binary that can be started in the usual way, i.e. java -jar dist/Wonderland.jar my.run.properties.
  2. Deploy the webcaster module to the Wonderland server.
  3. Start a Wonderland client using Java Web Start.
  4. Insert a webcaster object, open its HUD control panel, and click the 'Start Capture' button.
  5. Direct your browser to http://<wonderland-webserver>/webcaster/webcaster/ and connect to the server <wonderland-webserver> using the stream 'wonderland' (without the quotes; this should be the same stream name that you see in the HUD control panel for the webcaster). Click the start button.

You should see the video from the webcaster object rendered in the browser.
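
Since the relay is standard RTMP, the Flash page is just one way in – any RTMP-capable player should be able to subscribe to the stream directly. Assuming Red5’s usual rtmp://<host>/<application>/<stream> addressing (the application name and port here are assumptions; check your Red5 configuration), the stream above would live at something like rtmp://<wonderland-webserver>/webcaster/wonderland.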


Using the Kinect as an Input Device for Open Wonderland

April 25, 2011

By Matthew Schmidt

Microsoft’s Kinect is a device that needs little introduction. This 3D camera is capable of tracking body movements and gestures, and packs a respectable webcam as well. Shortly after its release for the Xbox 360, some industrious developers were able to create drivers for the device. What followed was a veritable explosion of innovation, with reports of Kinect hacking cropping up in fields as diverse as medicine and robotics. There is even a blog dedicated to hacking the Kinect, aptly titled “Kinect Hacks.” Clearly, the Kinect is a powerful device that has the potential to impact how we think about human-computer interaction.

The University of Missouri iSocial project, where I work as a post-doctoral fellow, was kind enough to purchase a Kinect for our team to experiment with. My initial experiments were simple, focused mainly on getting the device to connect to an Ubuntu Linux computer and display its output. Since we were very early adopters and the open source drivers were still immature, this was more of a chore than I anticipated, but I was ultimately able to get it to work.

Success! Kinect connected and outputting 3D (left) and webcam (right) images.

Following this, I began to look for ways to use the Kinect to interface with Open Wonderland. The rest of this blog post outlines how I was able to do just this.

There are a few prerequisites. You will need a Microsoft Kinect and a newer computer running Windows 7. While other versions of Windows may work (and perhaps even other operating systems), I have not experimented with them.

The software I use to interface with the Kinect is FAAST, which stands for “Flexible Action and Articulated Skeleton Toolkit.” You can download the software at the FAAST website. Note that in order to use FAAST, you will need to download OpenNI v1.0.0.25, PrimeSense NITE v1.3.0.18, and hardware drivers for the Kinect. Instructions and links for doing this are located at the FAAST website.

To start capturing input from the Kinect, you must first calibrate it: stand in front of the Kinect and position your body in the skeleton calibration pose.

Calibration pose

Once the software recognizes your skeleton, you are ready to begin experimenting with moving your avatar around in Open Wonderland using the Kinect as an input device.

When you load the FAAST software for the first time, the default key bindings will work for controlling your avatar in Open Wonderland, since they are mapped to W, A, S and D. You will not, however, be able to use your mouse, select objects, or do anything advanced. To move your avatar forward, simply lean forward; to move backwards, lean backwards; to rotate left, lean left; and to rotate right, lean right. The video below shows me using the default key bindings to move my avatar.

As an aside, I’m using the free BB FlashBack screen recording software for screen capture.

More advanced functionality can be achieved by reading the documentation on the FAAST homepage and editing the FAAST.cfg file by hand. You can use the FAAST forums to ask questions and see others’ configuration files. I provide the configuration file that I used below. Here’s a video of me using it.

My next steps include coming up with a way to translate real-world gestures into avatar gestures and refining the parameters which control the avatar.

While this research is still in the early phases, it is clear that the Kinect indeed holds promise as a human interface device for Open Wonderland. There is still a long way to go, however, and problems to solve before the Kinect is ready for everyday use. In particular, the software that controls the Kinect and translates body movement into avatar movement will need to improve significantly. And, hopefully, a completely open source solution that is more feature-rich than libfreenect will become available.


# FAAST 0.07 configuration file

[Sensor]

sensor_resolution 0
mirror_mode 0
smoothing_factor 0
skeleton_mode 0
focus_gesture 0

[Mouse]

mouse_enabled 1
mouse_control 0
mouse_body_part 1
mouse_origin 1
mouse_left_bound 12
mouse_right_bound 12
mouse_bottom_bound 12
mouse_forward_threshold 12
mouse_top_bound 12
mouse_relative_speed 30
mouse_movement_threshold 2
mouse_multiple_monitors 0

[Actions]

# mappings from input events to output events
# format: event_name threshold output_type event

left_arm_forwards 24 mouse_click left_button

# hold the WASD movement keys while the corresponding body action is detected
turn_left 30 key_hold a
turn_right 30 key_hold d
walk 3 key_hold w
lean_backwards 15 key_hold s
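
For reference, each line under [Actions] follows the format in the comment above: an input action, a threshold, an output type, and an output event. So turn_left 30 key_hold a holds the ‘a’ key (rotate left in Wonderland) while the turn exceeds the threshold, and left_arm_forwards 24 mouse_click left_button clicks the left mouse button when the left arm is extended. The threshold sets how pronounced a movement must be before the binding fires; see the FAAST documentation for the units each action type uses.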


Two new Admin modules

April 13, 2011

The development of the Web-based poster module that I described in an earlier blog posting led me to consider what other kinds of cells would benefit from a similar approach.

One of the guiding principles of the MiRTLE project is that the technology does not interfere with existing teaching practice, and thus requires no additional skills of the teacher. I’ve been working with my colleagues in Virtual Learning Labs to create a pre-packaged version of MiRTLE that “just works” — with no interaction from the teacher. The MiRTLE installation consists of virtual teaching rooms, each of which contains a TightVNC Viewer cell (to render the teacher’s PC into the world) and a Webcam Viewer cell (to render a view of the classroom into the world). Given that the settings for these cells depend on the institution in which MiRTLE is installed, we needed a way to manage them without end-user interaction: hence the creation of two new modules for use by a systems administrator — one to configure the settings of TightVNC Viewer cells and the other to configure the settings of Webcam Viewer cells.

The code for both modules is in the “unstable” directory of the wonderland-modules SVN repository, and the binaries are now in the module warehouse (note that each requires its respective prerequisite module to be installed before it can be installed into a Wonderland server).

Screenshots of the web pages of the two modules are shown below:

Web page showing TightVNC Viewers

Web page showing Webcam viewers


Items of Interest

April 2, 2011

I’ve received a number of items of interest in my inbox this week that I thought I’d share, including April Fools humor, a pointer to free modeling software, an interview opportunity, and some upcoming virtual world conferences.

April 1 Humor

First, on the fun side: April Fools Dilbert Cartoon

Free Modeling Software for Educators

Any students and faculty reading this should be sure to check out the:

Autodesk Education Community

I was told that if you have a .edu email address, you can register as a member of the Education Community and download 30 different Autodesk applications for free, including three modeling programs that work with Wonderland: Maya, 3ds Max, and Softimage. The person who told me about this program said that if you are a student or educator in a country that does not use .edu email addresses, you can write to Autodesk and ask to have your school added to the participating schools list.

Interview Opportunity for Educators

I received this notice from Edita Kaye, Founder of the Association of Virtual Worlds:

There is increasing interest both in the media and among publishers about all things virtual. I am delighted, as the Founder of the Association of Virtual Worlds, to be involved in an exciting project: a series of articles for national publication on the topic of ‘The Virtual Teacher’, dealing with the skills and challenges facing educators and educational institutions in an increasingly virtual world. It is planned that this series will subsequently become part of a book.

I would like to interview two types of educators. First, educators who have personal teaching experience with the application of virtual worlds, games, 3D immersive environments and social networks in their school or classroom. Second, educators who can discuss the role of virtual worlds, online games, 3D immersive environments, and social networks in the future of education, schools, and educators.

If you would like to be interviewed for this project, or know of anyone who could contribute their thoughts, experiences or case studies, please send a brief note to me directly at edita@associationofvirtualworlds.com.

Upcoming Virtual World Conferences

If you are looking for a place to publish your virtual world project, check out the call for abstracts just announced for the Researching Learning in Immersive Virtual Environments 2011 (ReLIVE11) conference to take place in the UK in September:

ReLIVE11 Call For Abstracts

In addition to submitting abstracts for in-person presentations and workshops, you can also submit abstracts for virtual world events to take place at the “Virtual Festival” the day before the conference. These events can take place either in the organizers’ Second Life venue or in any other virtual world. It would be great to host a few sessions in Wonderland.

The ReLIVE web site mentions another in-world conference taking place a bit earlier in September:

The Virtual World Conference

The organizers haven’t updated this web site yet, but it’s something to keep an eye on as it’s geared toward both business and education collaboration.

