Printscreen Plugin with a Photo Gallery

August 21, 2013

By Dominik Alessandri & Christian Wietlisbach
Hochschule Luzern Technik & Architektur

As part of our bachelor thesis, along with our Kinect module described last month, we developed a printscreen plugin for Open Wonderland. When the plugin is installed on a server, each connected user can choose whether screenshots are saved locally or on the server. If saved on the server, the pictures can be displayed in an automatically updated photo gallery. The gallery is kept up to date by a shell script running as a background task on the server. We had to implement it this way because we did not find a way to save screenshots directly into the ‘docRoot’ folder of the ‘run’ directory. The shell script uses rsync to keep the two folders in sync.

When a user is connected to a server with the printscreen plugin installed, they can display the plugin’s controls by selecting ‘Window -> Printscreen Folder’:

printscreen_display_window

The Printscreen Folder dialog allows the user to choose where to save the images:

printscreen_settings

When pressing the ‘o’ key, a new screenshot is taken and saved either locally or on the server. The reason the key binding uses ‘o’ is simple: the Print Screen key is not forwarded from the client to the module. The key event is captured at a level above the module, where the “Print Screen” key is filtered out, so the module never receives the event; instead the message “Key is not allowed” appears. Version 3 of jMonkeyEngine will support taking screenshots by default, making it easier to capture the screen.
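
Purely for illustration, the sketch below shows the general idea of binding a screenshot action to the ‘o’ key using a plain AWT key listener and java.awt.Robot. It is a stand-alone example, not the plugin’s actual implementation, which hooks into the Open Wonderland client and can also upload the image to the server.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.JFrame;

// Stand-alone sketch: capture the screen when 'o' is pressed and save a PNG.
public class PrintscreenSketch {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Press 'o' to take a screenshot");
        frame.addKeyListener(new KeyAdapter() {
            @Override
            public void keyPressed(KeyEvent e) {
                if (e.getKeyCode() == KeyEvent.VK_O) {
                    try {
                        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                        BufferedImage shot = new Robot().createScreenCapture(screen);
                        File out = new File("screenshot-" + System.currentTimeMillis() + ".png");
                        ImageIO.write(shot, "png", out);
                        System.out.println("Saved " + out.getName());
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            }
        });
        frame.setSize(400, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}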

On the server-side, the screenshots are saved in /.wonderland-server/0.5/run/content/images/screenshot/. This folder should be created by an administrator when installing the plugin.

To run the photo gallery, the content of the file ‘lightbox.zip’ needs to be extracted to /.wonderland-server/0.5/run/docRoot/lightbox/. This photo gallery shows all images stored in /.wonderland-server/0.5/run/docRoot/screenshot/. To update the photo gallery, we need a background task that copies the files from /.wonderland-server/0.5/run/content/images/screenshot/ to /.wonderland-server/0.5/run/docRoot/screenshot/. For example, the shell script ‘copyScreenshot.sh’ can do the job.
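
The actual setup uses a shell script with rsync for this background task. Purely to illustrate the idea of a periodic one-way copy between the two folders, here is a minimal Java sketch; the 30-second interval is an arbitrary assumption and this is not the ‘copyScreenshot.sh’ script itself.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

// Illustration only: periodically copy screenshots from the content folder
// into the docRoot folder so the photo gallery can see them.
public class ScreenshotSync {
    public static void main(String[] args) {
        Path source = Paths.get("/.wonderland-server/0.5/run/content/images/screenshot");
        Path target = Paths.get("/.wonderland-server/0.5/run/docRoot/screenshot");

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try (Stream<Path> files = Files.list(source)) {
                files.forEach(file -> {
                    try {
                        // Copy each screenshot, overwriting any older copy.
                        Files.copy(file, target.resolve(file.getFileName()),
                                StandardCopyOption.REPLACE_EXISTING);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, 0, 30, TimeUnit.SECONDS); // interval chosen arbitrarily for the sketch
    }
}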

An example of this photo gallery can be seen here: http://147.88.213.71:8080/lightbox/

Photo-gallery

You will need the following files to get the printscreen plugin running:


Avatar Control with Microsoft Kinect

August 1, 2013

By Dominik Alessandri & Christian Wietlisbach
Hochschule Luzern Technik & Architektur

As part of our bachelor thesis, we developed a module called ‘kinect-control’. This module lets you control your avatar with body gestures.

The module runs mainly on the client, but the necessary information is transferred from the server to the client as needed. All a user needs is a connected Kinect device and the Kinect SDK Beta 2 installed; this means the module is only available for Windows x86/x64 clients. All other requirements are shipped from the server to the client at login, which makes the module easy for interested parties to use.

To set up a Wonderland server to use the kinect-control module, the administrator needs to change two files on the server:

/.wonderland-server/0.5/run/deploy/wonderland-web-front.war/app/win32/wonderland_native.jar
/.wonderland-server/0.5/run/deploy/wonderland-web-front.war/app/win64/wonderland_native.jar

These two files must be replaced. The extended files can be downloaded from

http://147.88.213.71:8080/modules/KinectControl/win32/wonderland_native.jar
http://147.88.213.71:8080/modules/KinectControl/win64/wonderland_native.jar

These new files contain the DLL ‘KinectDLL.dll’ which is necessary for the connection between Open Wonderland and the Kinect.

If you are running this module on the client side, the first thing you need to do is connect the Kinect device to your Open Wonderland client. This can be done using the Kinect Controller dialog. After installing the kinect-control module, in Wonderland, click on ‘Window -> Kinect Controller’:

kinect_display_window

The dialog contains two buttons: ‘Start Kinect’ and ‘Stop Kinect’:

kinect_settings

If you have a Kinect device connected to your PC, you can click the ‘Start Kinect’ button. After a while, your Kinect device moves to its initial position and the window displays ‘Connection Works’:

kinect_settings_connection_works

You can adjust the angle of your Kinect device by sliding the slider to the desired position:

kinect_settings_angle_slider

Now you are ready to move your avatar by using your body. Place yourself in front of the Kinect device and control your avatar as follows:

  • Walk: Just move your legs up and down
  • Turn right: Hold right arm to the right side
  • Turn left: Hold left arm to the left side
  • Fly up: Hold right arm up
  • Fly down: Hold left arm up

It is possible to extend the gestures recognized by this module. To do this, modify the file ‘gesturesBDA.txt’, located in ‘kinect-control-client.jar’ inside ‘kinect-control.jar’, using the ‘KinectDTW’ software. Once the file contains your new gesture, you need to map the gesture to a keyboard input.

The file ‘keyMapping.txt’ contains the mappings from gestures to keyboard inputs. It is located on the server in /.wonderland-server/0.5/run/content/modules/installed/kinect-control/client/. The structure of the file is as follows:

[Name of gesture]=[isMouse[0/1]];[keyCode1[decimal]];[keyCode2[decimal]];[time[millis]]

Example 1:

@Run=0;16;87;2500
Description:
When the gesture @Run is recognized, keys 16 (Shift) and 87 (W) are pressed for 2.5 seconds.

Example 2:

@Walk=0;87;0;3000
Description:
When the gesture @Walk is recognized, key 87 (W) is pressed for 3 seconds.

For a list of all key codes, consult http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes/.
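
To make the format concrete, here is a minimal Java sketch that parses one keyMapping.txt entry and replays it with java.awt.Robot (the key codes in the examples above match the java.awt.event.KeyEvent virtual-key codes). The class and method names are illustrative assumptions, not the module’s actual code.

import java.awt.AWTException;
import java.awt.Robot;

// Sketch only: parse one keyMapping.txt line and replay it as key presses.
public class KeyMappingExample {

    // e.g. "@Run=0;16;87;2500"
    static void replay(String line) throws AWTException, InterruptedException {
        String[] parts = line.substring(line.indexOf('=') + 1).split(";");
        boolean isMouse = parts[0].equals("1");    // 0 = keyboard, 1 = mouse
        int keyCode1 = Integer.parseInt(parts[1]);
        int keyCode2 = Integer.parseInt(parts[2]); // 0 means "no second key"
        long millis = Long.parseLong(parts[3]);

        if (isMouse) {
            return; // mouse handling omitted in this sketch
        }
        Robot robot = new Robot();
        robot.keyPress(keyCode1);
        if (keyCode2 != 0) {
            robot.keyPress(keyCode2);
        }
        Thread.sleep(millis);                      // hold the keys for the given duration
        if (keyCode2 != 0) {
            robot.keyRelease(keyCode2);
        }
        robot.keyRelease(keyCode1);
    }

    public static void main(String[] args) throws Exception {
        replay("@Run=0;16;87;2500"); // Shift+W for 2.5 seconds
    }
}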

You will need the following files to get the kinect module running:

A video of the running module can be seen on YouTube:


OWL Chatbot Module for a Virtual Campus

November 26, 2012

By Nisarg Naik

The research project summarized in this article was conducted as part of the requirements for the degree of MSc in Computer Science at Nottingham Trent University in the UK. For this project, I created an Open Wonderland Chatbot Module.

A chatbot is a virtual character that simulates an intelligent conversation with humans. The main purpose of this project was to extend Open Wonderland’s non-player character (NPC) functionality to interact with human avatars. The chatbot is embedded in a virtual campus simulated learning environment (see screenshots of the virtual campus). Its main purpose is to communicate with users and guide them in solving their problems.

Clifton Campus

Virtual campus in which chatbot is integrated.

Here is an example of an integrated chatbot window in which a user has engaged in a conversation with the chatbot from inside Open Wonderland.

Chatbot window

While chatbots can be used to simulate conversations that convince people the bot is a real person, they can also be used as an advanced search engine to retrieve factual knowledge for users.

Technical Details

Initially, my intention was to develop a basic conversational agent program which could perform keyword-matching to scan for user inputs and generate replies. In the final project, however, I instead integrated a highly developed existing artificial intelligence engine. This engine is based on ALICE (Artificial Linguistic Internet Computer Entity) which uses an AIML-based (Artificial Intelligence Markup Language) interpreter to query and retrieve information. To integrate this technology into Open Wonderland, I experimented with both a Java-based AIML interpreter known as Program-D and a web-based AIML interpreter called Pandorabots. I ended up using Pandorabots for the Chatbot module prototype.

Pandorabots uses the XML-RPC (remote procedure call) communication protocol. It uses XML to encode data, and it works by sending an HTTP request to the server. A client can interact with Pandorabots using a Bot ID. The main advantage of using Pandorabots is that it is easy for users to create and add knowledge to their chatbots by uploading AIML files.
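
As a rough illustration of that request/response exchange, the sketch below posts a user input and a Bot ID to the classic Pandorabots ‘talk-xml’ interface over HTTP. The endpoint and parameter names should be treated as assumptions rather than a definitive description of the service.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch only: send a question to a Pandorabots bot and return the raw XML reply.
public class PandorabotsClient {

    public static String ask(String botId, String input) throws Exception {
        URL url = new URL("http://www.pandorabots.com/pandora/talk-xml"); // assumed endpoint
        String body = "botid=" + URLEncoder.encode(botId, "UTF-8")
                    + "&input=" + URLEncoder.encode(input, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }

        // The response is XML; a real client would parse out the bot's reply element.
        StringBuilder xml = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                xml.append(line);
            }
        }
        return xml.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(ask("your-bot-id", "Where is the library?"));
    }
}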

The diagram below illustrates where the Chatbot module fits into the structure of the Virtual Campus once it is complete.

Virtual Campus system diagram.

As envisioned, the Virtual Campus will include seven buildings covering many disciplines and functions, including business, arts, technology, team collaboration, a socializing room, a student club, and student services. Students will be able to access virtual resources from various specialized virtual departments and obtain relevant information directly and easily. The Virtual Campus will also include non-academic rooms for entertainment.

The chatbot functionality will be used throughout the Virtual Campus as an automated guide or instructor, able to interact with students to solve their problems. The AIML integration will allow us to interface with a variety of additional knowledge bases, such as WolframAlpha and DBpedia, allowing students to retrieve information from the chatbot without leaving the virtual world environment.

A big advantage of using Open Wonderland is that it is a great multi-user virtual environment engine that provides many functionalities and in-world applications that students and lecturers can use in the Virtual Campus. A well-organized virtual campus can be a highly effective way for students and lecturers to collaborate. To support this collaboration, we are planning to use the PDF viewer, an in-world web browser, the Microsoft Office document viewer, text chat, VoIP audio, webcam video integration, the Screen Sharer and VNC Viewer for desktop sharing, NetBeans for programming projects, in-world music players, wall posters, and the multi-user whiteboard for discussions and sketching. These tools, embedded in the virtual world, are similar to desktop-based applications.

For more information, please refer to my full dissertation, “Integration of a Chatbot Engine on a Multiuser Virtual Environment to Enhance Educational Framework for a Virtual Campus.”

I have also made the source code for the Chatbot module available, as well as the 3D models used to create the Virtual Campus.


Summary of +Spaces Development in Wonderland

July 10, 2012

By Bernard Horan

One of the goals of the +Spaces project is to prototype three applications in Open Wonderland (OWL) by which citizens can be engaged in policy discussion: polls, debates and role-play simulations. In the earlier blog posts, I focused on the way in which the applications appeared to users and the interaction features they provided. In this blog post, I provide a brief description of the other less visible features of the applications and the way in which, from an architectural standpoint, the applications are implemented in Open Wonderland.

The additional invisible responsibilities of each application are:

  • to receive updates from the +Spaces middleware
  • to communicate user actions to the +Spaces middleware

Each application is implemented as an OWL ‘cell’, packaged in an OWL module. However, there are some additional supporting modules that provide cells and other functionality used by the three applications. I’ll go through each application, beginning with the Polling Booth, and then briefly describe the other functionality before summarising our work.

Polling Booth

The Polling Booth is the simplest of the applications and is implemented as a 2D cell. The client implementation is a 2D Swing application that displays the current users of the Polling Booth, along with metadata describing the poll, and a button by which the user may take the poll. The user interaction of voting in a poll is implemented as a heads-up-display (HUD) component—this is because the activity is private to the user (see Figure 1, below). The description of the poll, such as its title and sequence of questions, is represented as an instance of a Java class that takes the role of a data model, with no behaviour. The Java Poll class depends on some other data-model-like classes such as BinaryPollQuestion and FreeTextAnswer.

Figure 1: +Spaces Polling Booth
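
A minimal sketch of such a behaviour-free data model is shown below; the fields are assumptions chosen for illustration, not the project’s actual classes.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a behaviour-free data model in the spirit of the
// Poll class described above.
public class Poll implements Serializable {
    public String title;
    public String description;
    public final List<BinaryPollQuestion> questions = new ArrayList<BinaryPollQuestion>();

    // One yes/no question of the poll.
    public static class BinaryPollQuestion implements Serializable {
        public String questionText;
    }

    // A free-text answer a user may give alongside a vote.
    public static class FreeTextAnswer implements Serializable {
        public String text;
    }
}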

The server implementation of the Polling Booth cell acts as a gateway between the client implementation and the +Spaces middleware; thus it has two responsibilities: communication with the client and communication with the +Spaces middleware. The server communication with the client follows the regular OWL architecture, relying on subclasses of App2DCellClientState, App2DCellServerState and CellMessage. However, server communication with the +Spaces middleware follows the example of the connectionsample module, which relies on the use of subclasses of BaseConnection, Message and ResponseMessage and implementations of ServerPlugin and ClientConnectionHandler.

The communication flow is as follows (it is common to all three +Spaces OWL applications); a self-contained toy sketch of the server-side step appears after the list:

  • The +Spaces middleware creates a connection to the OWL server, and calls a method on the connection to set the data model of a cell (identified by its cell id) to be an instance of class Poll. The connection sends a message (an instance of a subclass of Message) to the OWL server and waits for a response.
  • On the OWL server, the instance of (an implementation of ClientConnectionHandler) receives the message. It then looks up the instance of CellMO according to the cell id (via the method CellManagerMO.getCell()), casts it to be of the correct class and calls a method to set its poll data model. The CellMO instance, in turn, sends an instance of (a subclass of) CellMessage to its clients, causing the clients to update. Finally, the ClientConnectionHandler returns an instance of (a subclass of) ResponseMessage to indicate if the operation to set the poll was a success.
  • From the client, when the user interacts with the Polling Booth, the answers (and other actions) are sent from the Cell to the CellMO on the server via an instance of (a subclass of) CellMessage. The Polling Booth CellMO then calls a method on the instance of (an implementation of) ClientConnectionHandler to send the user’s actions to the instance of (a subclass of) BaseConnection, where it is received and passed on to the +Spaces middleware via a listener pattern.
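
To make the server-side step above concrete, here is a self-contained toy sketch in which simplified stand-ins play the roles of the connection handler, the CellMO and the response message. Apart from the CellManagerMO.getCell() lookup described in the post, none of these names are the actual OWL or +Spaces API.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy sketch of the "set poll" flow: middleware message -> handler -> cell -> clients.
public class PollFlowSketch {

    // Stand-in for the server-side cell registry (CellManagerMO in OWL).
    static final Map<Integer, PollCellMO> cells = new HashMap<>();

    // Stand-in for the cell's server-side object (a CellMO subclass in OWL).
    static class PollCellMO {
        Consumer<String> clientChannel;            // where CellMessages would go
        void setPoll(String poll) {
            // Updating the data model triggers an update message to the clients.
            clientChannel.accept("POLL_CHANGED:" + poll);
        }
    }

    // Stand-in for the ClientConnectionHandler: receive the middleware's message,
    // look up the cell by its id, update it, and report success or failure.
    static String handleSetPoll(int cellId, String poll) {
        PollCellMO cell = cells.get(cellId);
        if (cell == null) {
            return "ERROR: no such cell";          // ResponseMessage (failure)
        }
        cell.setPoll(poll);
        return "OK";                               // ResponseMessage (success)
    }

    public static void main(String[] args) {
        PollCellMO cell = new PollCellMO();
        cell.clientChannel = msg -> System.out.println("client received: " + msg);
        cells.put(42, cell);
        System.out.println("middleware received: " + handleSetPoll(42, "New transport policy"));
    }
}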

Debating Chamber

The Debating Chamber application is also implemented as a cell, taking on two roles: it acts as a container for other elements of the chamber; and it presents a ‘carpet’ to the user on which he or she may express a vote.

Figure 2: +Spaces Debating Chamber

When the CellMO that represents the Debating Chamber is inserted into the world, instances of other subclasses of CellMO are added as its children, as shown above in Figure 2. These are:

  • Two instances of StickyNoteCellMO—these post-its are used to display the ‘statement’ and ‘description’ of the debate.
  • An instance of ModelCellMO—this is used to hold the 3D model of the chamber (the OWL auditorium model).
  • An instance of AudioRecorderCellMO—this is used to produce an audio recording of the debate (it adds the instance of the Debating Chamber CellMO as a listener to be informed when a user terminates a recording).
  • An instance of SharedActionCellMO—this is used to render messages received from other virtual spaces such as Twitter and Facebook (this is based on the twitter viewer module that is in the OWL modules source code repository and rendered as a pigeon on the client).
  • An instance of LinkListCellMO—this is used to render the list of URLs that users may double-click on to open a browser. This cell is part of a new module that is based on the Sticky Note cell: it is rendered similarly, with a list replacing the text area.

The CellMO that represents the Debating Chamber maintains managed references to each of these children (except for the instances of ModelCellMO and AudioRecorderCellMO). For example the Debating Chamber CellMO has a field labelled ‘debateDescriptionCellRef’ which is an instance of ManagedReference<StickyNoteCellMO>. The references are used as follows:

  • When the setDebate() method is called on the Debating Chamber CellMO, it calls the method setText() on each instance of StickyNoteCellMO, with the parameter representing the description or statement, appropriately. It also calls the method setLinks() on the instance of LinkListCellMO, with a list of links (URL-displayname pairs). The result of these method calls is to update the display of the contents of the cells on the clients.
  • From the client, when the user interacts with the Debating Chamber, the user votes are sent from the Cell to the CellMO on the server via an instance of (a subclass of) CellMessage. From then on, the mechanism follows the same pattern as the Polling Booth.
  • When the +Spaces middleware reports a comment from Twitter or Facebook, it is received as a Message by the implementation of ClientConnectionHandler associated with the Debating Chamber CellMO. This, in turn, calls the method notifySharedMessage() on the Debating Chamber CellMO, which then calls the method addRecord() on the instance of SharedActionCellMO, eventually causing the clients to display the contents of the comment.

In terms of its communication with the +Spaces middleware, the implementation of the Debating Chamber application follows the pattern set by the Polling Booth. That is, it reports user actions and receives comments from other virtual spaces via the OWL server.

Simulation Agora

The ‘Simulation Agora’ is the name we give to the virtual space that hosts the role-playing simulation. It is the most complex of the applications and is also implemented as a cell, somewhat similar to the Debating Chamber. However, unlike the Debating Chamber it does not provide any direct user interaction features for 3D objects; instead it represents the floor of the agora (rendered as an EU flag) and provides a HUD control panel to a privileged user role by which the stages of the simulation can be managed.

Figure 3: +Spaces Simulation Agora

Figure 3 illustrates some of the elements of the Simulation Agora and the HUD control panel that is accessible to the moderator (or any member of the admin user group).

The HUD control panel is implemented as a Swing tabbed pane. One of the tabs provides the moderator with the means to step forwards and backwards through the stages of a ‘template’ that describes the activities of a role-play simulation (more information about the use of role-play templates is provided in an earlier blog posting). Each of the two templates is implemented in Java in the form of a (pseudo) state machine: as the moderator progresses the template from one stage to the next, the state machine calls methods on the CellMO that represents the Simulation Agora to insert and remove child cells and to update the state of those child cells. Without going into too much detail, the sequence of stages of the state machine is as follows (a minimal sketch of such a stage enumeration follows the list):

  1. Meet Participants—this is the starting stage of the simulation
  2. Ice Breaker Vote—this is an optional stage, specified by the policy maker, in which participants may vote on a question related to the policy
  3. Ice Breaker Results—the results of the ice breaker vote are presented
  4. Assign Roles—the moderator assigns the participants to the roles described in the template
  5. Prepare Statements—the participants create a number of statements according to the description of their role (the number of statements is specified by the template)
  6. Read Statements—the participants read out their statements
  7. Mark & Group Statements—the participants and the moderator collaborate to group their statements and to assign votes to statements
  8. Present Results—the participants discuss the results of marking and grouping the statements
  9. Final Vote—if there was an ice-breaker vote, then the participants are given the opportunity to vote again to determine if their views on the policy have changed
  10. End Simulation—the results of the final vote are presented
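
A minimal sketch of such a stage enumeration, with the forwards/backwards stepping used by the moderator, might look as follows; the calls the real state machine makes on the Simulation Agora CellMO at each transition are omitted.

// Sketch only: the template stages as an enum with forwards/backwards stepping.
public enum SimulationStage {
    MEET_PARTICIPANTS,
    ICE_BREAKER_VOTE,
    ICE_BREAKER_RESULTS,
    ASSIGN_ROLES,
    PREPARE_STATEMENTS,
    READ_STATEMENTS,
    MARK_AND_GROUP_STATEMENTS,
    PRESENT_RESULTS,
    FINAL_VOTE,
    END_SIMULATION;

    // The moderator steps forwards through the stages...
    public SimulationStage next() {
        return ordinal() < values().length - 1 ? values()[ordinal() + 1] : this;
    }

    // ...and backwards, clamped at the first and last stage.
    public SimulationStage previous() {
        return ordinal() > 0 ? values()[ordinal() - 1] : this;
    }
}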

The children of the cell that represents the Simulation Agora are:

  • An instance of PosterCellMO—this shows the current stage of the simulation by rendering an appropriate image. This cell remains visible at all times.
  • An instance of a simulation carpet cell—this in turn has a child Poster cell that displays the question that the carpet is to be used to answer. The simulation carpet is used to capture the participants’ vote on the ice breaker vote and the final vote. It is added and removed as necessary by the state machine, in response to the moderator’s interaction with the HUD control panel.
  • Two instances of PosterCellMO—to present the results of the ice-breaker vote and final vote. The state machine adds these cells when appropriate and sets their contents to present the results of the preceding votes.
  • A cell that is used by participants to add ‘statements’—this is a cell developed specifically for the project. It is a 2D cell, in which the client implementation is provided by a 2D Swing application that uses an instance of JDesktopPane. The cell is added as a child when the state machine is at the ‘Prepare Statements’ stage.
  • Pairs of cells to represent each user role. Each role is associated with a poster cell describing the role and, next to it, a ‘podium’ cell that acts as a placemark. The placemarks enable the participants to jump to a description of their roles. These pairs are added at the ‘Assign Roles’ stage by the state machine.
  • A cell to represent a countdown timer. This is added and removed when necessary by the state machine.
  • A shared action cell, as described in the Debating Chamber application.

The CellMO that represents the Simulation Agora maintains references to the CellIDs of each of these children. For example the Simulation Agora CellMO has a field labelled ‘simulationDescriptionCellID’ which is a reference to the CellID of the instance of PosterCellMO that displays the description of the simulation. The fields are used as follows:

  • When the setSimulation() method is called on the Simulation Agora CellMO, it follows more or less the same pattern as the Debating Chamber, except that it also retrieves the template identifier from the simulation and creates a new instance of the state machine that represents that template.
  • When the moderator steps through the stages of the template, the CellIDs are used to identify the appropriate cells to delete or update, as described in the stages above.
  • From the client, when the user interacts with the Simulation Agora, the user votes and statements are sent from the Cell to the CellMO on the server via an instance of (a subclass of) CellMessage. From then on, the mechanism follows the same pattern as the Polling Booth and Debating Chamber.
  • When the +Spaces middleware reports a comment from Twitter or Facebook, it follows the same pattern as the Debating Chamber.

The other two tabs on the HUD control panel enable the moderator to inform the server when ‘something interesting happens.’ That is, something worth noting by the data analysis service; for example, when a video recording of the session is replayed, viewers are able to jump to appropriate timestamps to view the users’ activities (and listen to their comments). This is implemented using the regular OWL architecture—a message from the Simulation Agora cell to the OWL server which is then forwarded to the +Spaces middleware. All other communication between the +Spaces middleware, the OWL server and the OWL clients is as described in the two applications above.

Other +Spaces Functionality

In addition to the cells that provide the functionality for the Polling Booth, Debating Chamber and Simulation Agora, we also implemented a module with some extra features:

  • A means of reporting user activity to the +Spaces middleware:
    • two users come in range of each other (implemented using a client plugin)
    • a user ‘chats’ (using the text chat)
    • a user is forcibly disconnected
    • a user is forcibly muted
    • a user logs in or logs out
    • a user begins speaking
  • a servlet which provides a means of capturing profile information about a user (which is then forwarded to the +Spaces middleware)

Summary

In summary, we have developed 19 OWL modules for the +Spaces project, the following of which have been (or will be) donated to the OWL source code repository:

  • countdown timer
  • office-converter
  • postercontrol
  • tightvnccontrol
  • twitter
  • webcamcontrol
  • webcaster

However, we could not have completed any of the Wonderland aspects of the +Spaces project without the use of many of the existing modules and the wonderful support of the OWL community. So, on behalf of the project, I’d like to express my thanks to all the members of the community.


Wonderland Webcaster, part two

May 14, 2012

By Bernard Horan

Some time ago, I described work on an Open Wonderland webcaster that had been undertaken by one of our students at the University of Essex. One of the motivations behind the Webcaster was to provide access to those users who merely want to observe (and listen to) the activities in an Open Wonderland (OWL) virtual world instead of actively participating. In particular, we wanted to enable access to a set of users in the +Spaces project, such as policy makers, who wish to watch a session in which a policy is debated or acted out in a role play simulation and not be able to influence the session.

Since the first video-only version of the Webcaster, we’ve experimented with several different approaches to extend the Webcaster to provide audio, one of which is described below. After several unsuccessful attempts, we now have a successful prototype implementation that combines the video broadcast functionality with an existing OWL virtual phone object to provide a unified audio-video webcast.

The video above shows the OWL webcaster being used by two clients.

  • One client is a regular OWL webstart client, into which a Webcaster ‘object’ has been inserted.
  • The other client is a Web Browser that has two standard plugins: Flash and Java. The Browser opens a URL (provided by the Webcaster) hosted on the OWL web server and downloads the HTML content. The content includes a Flash object and a Java applet. (The Java applet is provided by doddlephone–a web-based SIP client.)

The deployment requirements are as follows:

  • The SIP client requires that authentication is enabled on the OWL server
  • The following ports must be open on the server host (and accessible through a firewall): 5080 and 1935.

The Webcaster requires further testing with more concurrent users before release to the OWL module warehouse–we’ll post a message on the forum when it’s ready. However, as ever, the source code is available in the unstable directory of the wonderland-modules SVN repository.

Technical Details

In this section I’ll briefly describe the video-only webcaster, the failed experiment at combining audio and video, and an idea for how the current implementation could be improved.

1. Video-only Webcaster

The video functionality of the webcaster requires a full OWL client to capture the graphics rendered by the in-world webcaster ‘camera’. The captured graphics are transmitted to a Red5 RTMP Server that is likely running on the same host as the OWL server. From there, lightweight clients can connect to receive a Flash ‘stream’ to play in a Flash player (or Web browser plugin). In this case, the firewall has to be configured to enable the OWL client to transmit graphics to the Red5 server, as well as for Flash clients to stream from the Red5 server. This architecture is represented by Figure 1 below.

Architecture for Video-only Webcaster

Figure 1

2. Combining Audio & Video on the OWL Client

The OWL Client receives audio from the voicebridge that it plays through the client’s audio-out drivers, such as its speakers or a headset. The audio it receives is attenuated/spatialised according to the location of the avatar that is associated with the user of the OWL client, but does not include the audio from the microphone connected to the hardware on which the OWL client is running.

Our attempt to combine audio and video on the client first required us to mix the audio from the voicebridge with the audio from the client microphone and then combine that with the captured graphics, before sending it on to the Red5 server. From that point, the Flash clients would be ignorant of the change in architecture and would receive a combined audio/video ‘stream’ to play. This architecture is represented in Figure 2 below.

Figure 2

However, this approach failed due to several problems, including:

  • The audio was received from the perspective of the avatar of the user associated with the client, NOT from the perspective of the webcaster object in the virtual world.
  • The frequency of the received audio could be changed by the user, which made mixing with the audio from the microphone unstable.
  • The extra processing that was required on the OWL client to mix the audio and then combine with the graphics was significant.

Thus, this approach was abandoned as infeasible. An alternative approach, mixing the audio on the voicebridge before sending it to the OWL client, was also considered. This suffered from the drawback that there was no existing mechanism to send the mixed audio to the OWL client–it would have required an additional connection through a firewall. Again, this was abandoned due to the problems of managing ad hoc connections.

3. An Alternative Approach?

The current approach works around the audio problem by using an existing virtual phone object. However, we recognise that it is a less than elegant solution–we would still like to be able to provide a combined audio and video Flash stream to Web Browser clients. Ideally, we’d like the architecture illustrated in Figure 3. Here the video is sent from an OWL client to the Red5 server, where it is mixed with the spatialised audio that comes from the voicebridge, from the perspective of the webcaster. We have no idea if this would be possible, as it requires expertise in developing Red5 applications. However, if any readers would like to take this on as an experiment, please reply via a comment below. Or, if you have experience of Red5 and think this is a daft idea, let us know via the comments!

Figure 3


View Office documents in Open Wonderland

September 12, 2011

One of the great features of Open Wonderland is its application-sharing facility. And one of the benefits this provides is the ability to use OpenOffice in-world to create and edit documents that are compatible with Microsoft’s ubiquitous Office suite.

But what if you just want to show a document in-world without the overhead of application-sharing? Or, what if you’re running your Open Wonderland server on a host, like Windows, that doesn’t provide application sharing?

Well, the long answer is to open your document, save it (or print it) to a PDF, and then drag-and-drop the PDF document into your Open Wonderland client. For the +Spaces project we needed something easier; something that would enable users to easily share their documents with other users in the context of a debate about policy proposals.

A new module in the Open Wonderland Module Warehouse streamlines this process by allowing you to drag-and-drop any text document, spreadsheet or presentation supported by OpenOffice (in addition to its own file formats, of course). The office-converter module relies on a little-advertised feature of OpenOffice: its ability to run in headless server mode. The module bundles the open source JODConverter software to provide a web-based API to OpenOffice and adds the necessary hooks into the Open Wonderland client and server so that it all works seamlessly.
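
For readers curious what driving a headless OpenOffice through JODConverter looks like, here is a minimal sketch assuming the JODConverter 3.x Java API (package and class names may differ between versions); the module wraps this kind of conversion behind a web-based API rather than calling it directly like this.

import java.io.File;

import org.artofsolving.jodconverter.OfficeDocumentConverter;
import org.artofsolving.jodconverter.office.DefaultOfficeManagerConfiguration;
import org.artofsolving.jodconverter.office.OfficeManager;

// Sketch only: convert an Office document to PDF via a headless OpenOffice.
public class OfficeToPdf {
    public static void main(String[] args) throws Exception {
        OfficeManager officeManager = new DefaultOfficeManagerConfiguration().buildOfficeManager();
        officeManager.start();                      // starts OpenOffice in headless server mode
        try {
            OfficeDocumentConverter converter = new OfficeDocumentConverter(officeManager);
            converter.convert(new File("report.doc"), new File("report.pdf"));
        } finally {
            officeManager.stop();
        }
    }
}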

In the movie below, I demonstrate how to drag-and-drop common Microsoft Office documents into a Wonderland client.

And here is an interesting use case that extends the functionality of the office converter by combining it with the PDFSpreader module.


Google 3D Warehouse Integration

August 19, 2011

By Alexios Akrimpai
University of Essex, UK

Hi everyone, my name is Alexios, and I am (was) a member of the Frontrunners project for Open Wonderland (OWL) at Essex University under the supervision of John Pisokas.

My project was to develop a module for OWL that helps virtual world designers quickly and conveniently browse 3D objects from the Google 3D Warehouse (GW) and manipulate them in the ‘world’.

The functionality of the module includes:

  1. Easy search and page navigation within the Wonderland client
  2. Download and install 3D objects in Collada format (Google SketchUp format not yet supported)
  3. Save and browse 3D models locally
  4. Model details available on selection
  5. Search history

The internal design of the module is very simple. GW provides an RSS feed that the module uses to search for models and retrieve their information (e.g. author, description, download URL, etc.). Once this information is available, the software displays it in a nice format for the user and allows easy manipulation (e.g. download, install) of the models.
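
The sketch below shows the general shape of reading such an RSS feed and pulling out per-model metadata; the feed URL is a placeholder, not the real 3D Warehouse endpoint, and the element names are assumptions.

import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch only: read model metadata from an RSS search feed.
public class WarehouseFeedReader {

    public static void main(String[] args) throws Exception {
        String feedUrl = "http://example.com/3dwarehouse/rss?q=chair"; // placeholder URL

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new URL(feedUrl).openStream());

        NodeList items = doc.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            System.out.println("title:  " + text(item, "title"));
            System.out.println("author: " + text(item, "author"));
            System.out.println("link:   " + text(item, "link"));
        }
    }

    private static String text(Element item, String tag) {
        NodeList nodes = item.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : "";
    }
}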

The only major drawback of the module at the moment is that Google SketchUp models are not yet supported. This may change in the near future if I, or someone else, develops an independent, lightweight ‘SketchUp to Collada’ converter, or adds support for the SketchUp format directly to OWL.

The source code of the module is available in a GitHub repository called Google-Warehouse-Explorer as well as in the Wonderland unstable module directory. I am pretty sure that the module contains a few small bugs that I missed. I am also planning some improvements for the next version. Please do not hesitate to let me know of any errors you find, or of any improvements and/or additional features that you think will make the module more useful and user friendly.


Composing Music with JFugue Editor Module

July 7, 2011

By Hans Beemsterboer

Five years ago, I was looking for a tool to compose music online. All music tools seemed to be available only as standalone applications. Therefore, I started to develop a web application for online music composition. For this purpose, the JFugue library was perfectly suitable. I’ve called the resulting web application Jammidi.

One thing still missing from the Jammidi website is voice and text chat. When I read that the piano module contributed to OWL last year uses JFugue, I started to think about composing music within OWL, where all of OWL’s collaborative power can be used for composing music. This idea resulted in the JFugue Editor module. This first version of the JFugue Editor integrates two open source projects: JFugue and OWL.
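
As a taste of what the editor builds on, here is a minimal JFugue example, assuming the JFugue 4 API that was current at the time (in JFugue 5 the Player class moved to the org.jfugue.player package).

import org.jfugue.Player;

// Sketch only: play a simple JFugue music string.
public class JFugueExample {
    public static void main(String[] args) {
        Player player = new Player();
        // A C major scale; JFugue music strings describe notes, octaves and durations as plain text.
        player.play("C5q D5q E5q F5q G5q A5q B5q C6h");
    }
}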

As hinted at in the video, it would be nice in the future to also integrate with the LilyPond project. Then, besides composing music online, the JFugue Editor could also be used for teaching music online.


Using the Kinect as an Input Device for Open Wonderland

April 25, 2011

By Matthew Schmidt

Microsoft’s Kinect is a device that needs little introduction. This 3D camera is capable of tracking body movements and gestures, and packs a capable webcam as well. Shortly after its release for the Xbox 360, some industrious developers were able to create drivers for the device. What followed was a veritable explosion of innovation, with reports of Kinect hacking cropping up in diverse fields such as medicine and robotics. There is even a blog dedicated to hacking the Kinect, aptly titled “Kinect Hacks.” Clearly, the Kinect is a powerful device that has the potential to impact how we think about human computer interaction.

The University of Missouri iSocial project, where I work as a post-doctoral fellow, was kind enough to purchase a Kinect for our team to experiment with. My initial experiments were simple and were focused mainly on getting the device to connect to a Ubuntu Linux computer and display its output. Since we were very early adopters and the open source drivers were still in very early development, this was more of a chore than I anticipated, but I was ultimately able to get it to work.

Success! Kinect connected and outputting 3D (left) and webcam (right) images.

Following this, I began to look for ways to use the Kinect to interface with Open Wonderland. The rest of this blog post outlines how I was able to do just this.

There are a few prerequisites. You will need a Microsoft Kinect and a newer computer running Windows 7. While other versions of Windows may work (and perhaps even other operating systems), I have not experimented with them.

The software I use to interface with the Kinect is FAAST, which stands for “Flexible Action and Articulated Skeleton Toolkit.” You can download the software from the FAAST website. Note that in order to use FAAST, you will need to download OpenNI v1.0.0.25, PrimeSense NITE v1.3.0.18, and hardware drivers for the Kinect. Instructions and links for doing this are located on the FAAST website.

To start capturing input from the Kinect, you must first calibrate it. To do this, stand in front of the Kinect and position your body in the skeleton calibration pose.

Calibration pose

Once the software recognizes your skeleton, you are ready to begin experimenting with moving your avatar around in Open Wonderland using the Kinect as an input device.

When you load the FAAST software for the first time, the default key bindings will work for controlling your avatar in Open Wonderland, since they are mapped to W, A, S and D. You will not, however, be able to use your mouse, select objects, or do anything advanced. In order to move your avatar forward, simply lean forward. To move backwards, lean backwards. To rotate left, lean left. And to rotate right, lean right. The video below is a demonstration of me using the default key bindings to move my avatar.

As an aside, I’m using the free BB FlashBack screen recording software for screen capture.

More advanced functionality can be achieved by reading the documentation on the FAAST homepage and editing the FAAST.cfg file by hand. You can use the FAAST forums to ask questions and see others’ configuration files. I provide the configuration file that I used below. Here’s a video of me using it.

My next steps include coming up with a way to translate real-world gestures into avatar gestures and refining the parameters which control the avatar.

While this research is still in the early phases, it is clear that the Kinect indeed holds promise as a potential human interface device for Open Wonderland. There is still a long way to go, however, and problems to solve before the Kinect will be usable enough for everyday use. In particular, the software which controls the Kinect and translates body movement into avatar movement will need to improve significantly. And, hopefully, a completely open source solution that is more feature-rich than libfreenect will become available.


# FAAST 0.07 configuration file

[Sensor]

sensor_resolution 0
mirror_mode 0
smoothing_factor 0
skeleton_mode 0
focus_gesture 0

[Mouse]

mouse_enabled 1
mouse_control 0
mouse_body_part 1
mouse_origin 1
mouse_left_bound 12
mouse_right_bound 12
mouse_bottom_bound 12
mouse_forward_threshold 12
mouse_top_bound 12
mouse_relative_speed 30
mouse_movement_threshold 2
mouse_multiple_monitors 0

[Actions]

# mappings from input events to output events
# format: event_name threshold output_type event

left_arm_forwards 24 mouse_click left_button

turn_left 30 key_hold a
turn_right 30 key_hold d
walk 3 key_hold w
lean_backwards 15 key_hold s


Wonderland Wednesday EZScript Demo

February 13, 2011

At the February 9th Wonderland Wednesday session, Ryan Babiuch (aka Jagwire), a member of the iSocial team at the University of Missouri, demonstrated his new EZScript module. This module integrates JavaScript as a capability that can be applied to any in-world object. While this is still a work in progress and not yet available in the Module Warehouse, it is now functional enough to experiment with. Here’s a demo of how it works:

And here’s Ryan presenting at the Wonderland Wednesday session:

For those who want to experiment, Ryan has provided code below for the four examples shown in the videos, plus another script that places a message in the HUD (heads-up display) when the object is clicked. The source code for the EZScript module is currently in the wonderland-modules/unstable directory hosted on Google Code. The full path is: wonderland-modules/unstable/EZScript. If you want to try out EZScript on your own server, you can use Subversion to check out the code.

EZScript Examples

// Hover
for(var x = 0; x < 2; x++) {
   animateMove(cell, 0, 0.5, 0, 1);
   animateMove(cell, 0, -0.5, 0, 1);
}


// Play animation already associated with an object
AnimateCell(cell);


// Spin 
ScriptContext.enableProximityEvents();
function s() {
   spin(cell, 5, 5);
}
ScriptContext.onApproach(s, false);

// Move NPC
ScriptContext.clearCallbacks();
ScriptContext.enableMouseEvents();

var x = 0;

function walk() {
   MoveNPC(cell, x, 0, 0);
}
ScriptContext.onClick(walk, false);

// Show message in HUD
ScriptContext.enableMouseEvents();
function show() {
   ShowHUDMessage("Welcome to Open Wonderland!");
}
ScriptContext.onClick(show, false);

