Project Wonderland Technical Overview Video

October 29, 2008

Earlier this month, Jordan wrote about our machinima experiences. Now you can see the results in the new Project Wonderland Technical Overview video. This 8-part video highlights 9 different Wonderland community projects and features all 8 of the Wonderland core team members in Sun Labs.

Thumbnail from Technical Overview video

In the first segment, aimed at a non-technical audience, I take you on a tour of 4 education and 3 business-related Wonderland virtual worlds. Jonathan Kaplan then provides an overview of the Wonderland version 0.4 architecture and sets the stage for Jordan Slott’s discussion of extensibility, Joe Provino’s description of audio features, Deron Johnson’s explanation of X11 application sharing, and Nigel Simpson’s account of collaboration-aware applications. The last two segments of the video are devoted to version 0.5. Paul Byrne first provides an overview of the new 0.5 architecture, including two community projects slated for inclusion, and Doug Twilleager ends the program with a preview of the new graphics and avatar systems.

I want to add special thanks to the 9 early adopters in the Wonderland open source community who were willing to share video footage with us to include in this production. These include four education projects:

  • Barcelona Memories, Great Northern Way Campus
  • Virtual Northstar, St. Paul College
  • iSocial, University of Missouri
  • Molecule Visualization, Free University of Berlin

Three business-related projects:

  • ProjectVS, Applied Minds
  • Virtual Academy, VEGA
  • 6th Space, Malden Labs

And two significant enhancements to the Wonderland environment:

  • WonderDAC (discretionary access control), Tim Wright
  • Scripting, Morris Ford

I look forward to compiling another Community Showcase to feature the great virtual worlds and new Wonderland features that many others of you are working on now.


Foundations for a new Whiteboard

October 28, 2008

In addition to the work undertaken by our interns in Burlington to create a new virtual team room in Wonderland (as described in last week’s blog), we’ve had the pleasure of working with an intern based in the UK, James Barratt. James worked on a couple of topics: providing a feature to web-upload documents for sharing with shared applications such as OpenOffice; and creating the foundations of a new whiteboard, based on Scalable Vector Graphics (SVG). We asked James to give us some background to the whiteboard; here’s his blog:

A new whiteboard application has been developed for Wonderland. It aims to improve upon the existing whiteboard by using the SVG standard, which promises many new capabilities.

Its current functionality, as demonstrated in the following video, allows multiple users to draw text, lines and basic shapes, which may be filled. It is also possible to select individual elements, which can then be moved to a new location or removed from the whiteboard entirely.

In SVG, text, lines and shapes are represented as XML elements, which have various attributes. The ease with which these attributes can be changed provides great scope for future manipulation, including resizing, changing colour and adding transparency. SVG also enables elements to be grouped.

Using SVG also means the state of the whiteboard could be saved and opened at a later time, or used elsewhere, perhaps displayed on a web page.
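
As a rough illustration of what this looks like in code, here is a sketch using the standard Java XML APIs (not the whiteboard’s actual implementation): SVG shapes are created as DOM elements, given attributes, grouped, and then serialized so the whiteboard state could be saved or reused.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

/**
 * Illustrative sketch (not the whiteboard's actual code) of building SVG
 * content as DOM elements and serializing it so the state could be saved
 * or displayed elsewhere.
 */
public class SvgSketch {

    static final String SVG_NS = "http://www.w3.org/2000/svg";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        Element svg = doc.createElementNS(SVG_NS, "svg");
        svg.setAttribute("width", "800");
        svg.setAttribute("height", "600");
        doc.appendChild(svg);

        // A <g> element groups shapes so they can be moved or styled together.
        Element group = doc.createElementNS(SVG_NS, "g");
        svg.appendChild(group);

        // Shapes are just elements with attributes; changing an attribute
        // later is how you would resize, recolour, or add transparency.
        Element rect = doc.createElementNS(SVG_NS, "rect");
        rect.setAttribute("x", "10");
        rect.setAttribute("y", "10");
        rect.setAttribute("width", "120");
        rect.setAttribute("height", "80");
        rect.setAttribute("fill", "steelblue");
        rect.setAttribute("fill-opacity", "0.5");
        group.appendChild(rect);

        // Serialize the document: this string is the saved whiteboard state.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```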

To view a video of the new SVGWhiteboard in use within Wonderland, please go here: http://screencast.com/t/245k9W5QhH

The source code for the SVG whiteboard is available in the Wonderland Modules Incubator project on java.net.


Mixed-Reality Team Room and GVU Demo Showcase

October 27, 2008

Last week, Jordan and I returned from a two-day trip to Atlanta to visit Georgia Tech. We were there to attend a corporate sponsor event and to meet with Blair MacIntyre, a professor in the GVU Center, and his students. Blair heads the Augmented Environments Laboratory. He and his students are embarking on a Wonderland project focused on prototyping a mixed-reality team room. The goal of the project is to figure out how to design a physical team room and a virtual team room in concert, so that local and remote people can interact as effectively as possible. We had a lively brainstorming session, focusing on various options for how the physical world and the virtual world should meet. Stay tuned for more details on this project as the Georgia Tech team ramps up and decides on a focus for the project.

The second day of the visit was devoted to an afternoon of demos. I snapped a few photos of the projects we found interesting and relevant to virtual worlds. Starting with projects in Blair’s group, the "pit" was one of the most compelling. This setup involved wearing goggles and walking around an installation. To the naked eye, this installation looks like a green floor surrounded by wooden planks. It doesn’t look in the least bit scary. But put on those goggles, and suddenly you’re standing on the edge of a roof looking WAY down into a room below.

Jordan wearing augmented reality goggles in the "pit" View of "pit" as seen through the goggles Nicole looking down into "pit" with goggles

The screen in the middle shows what Jordan and I were looking at, but it does not come close to capturing the sense of immersion that you feel looking down through the goggles. I managed to walk out over the pit onto one of the flat planks, but only after tapping my foot in the "air" to make sure it was really solid ground. Even after doing this, I couldn’t quite bring myself to walk on the wobbly plank. Jordan didn’t even make it out onto the flat plank! It was quite an amazing illusion.

All of the other augmented reality demos involved handheld devices rather than goggles. The handhelds are used as "portals" which you look through to see the real-life scene augmented with virtual content. In the first demo, they created a space in Second Life and a parallel space on a table in the real world. The space in the real world had some lego blocks in it (below left). These were modeled in the Second Life space. The Second Life space had some additional objects. When you viewed the physical space through the handheld "portal," you could see the extra Second Life objects superimposed on the physical space (below right). For example, in the close-up below you can see a Second Life character standing on one of the lego blocks and behind that you can see a sign that only exists in the virtual space. As you move the handheld around, you are panning around both the physical and virtual spaces.

Augmented reality table Augmented reality table closeup

Two other demos involved augmented reality games. In the first (below left), you could move the handheld "portal" over a water scene. When you did this, a boat appeared on the screen that seemed to be floating in the water. You could navigate the boat around the water and look for fish swimming. After chasing down a fish, you could press a button to throw out a line and press it again to reel in the fish. The other (below right) was an augmented reality board game that used physical playing pieces on a table, which, when put together, formed a path that virtual characters could drive on. You could also physically draw on the playing pieces to direct the action in the game. The monitor in the photo is showing what one of the players is seeing on the handheld screen.

Augmented reality fishing game Augmented reality board game

Another demo in this vein used cardboard trading cards printed with characters and special markers. When you look at a single card through the handheld portal, you see the character pop up off the card and dance on top of it in 3D. When a second card is placed next to the first, the second character animates and interacts with the first. I can really imagine kids enjoying this.

Augmented reality trading cards

In the Computer Graphics lab, there was a demo of motion capture using a full-body suit. You can’t see it clearly in the photo below, but the person wearing the body suit is controlling a stylized avatar interacting with virtual monkey bars. The live person uses hand and arm motions to climb up a pole and then grab hold of the bars and swing from one to the other. When he reaches the last bar, he has to swing his upper body back and forth to build up enough momentum to swing himself up onto a platform.

Stylized avatars Motion capture suit

Right now, this is done with passive sensors and cameras, but it was fun to imagine the suit with Sun SPOTs instead. I was quite taken with the abstract avatar used in this animation. It looks very similar to the two avatars above that are taken from a different application by this same group. Those interested in avatars and avatar motion might be interested to see other projects from Professor Karen Liu’s group.

Two other demos I liked that are not directly related to virtual worlds were a tabletop game (below left) similar to Twister and an enhanced fish tank (below right). Here they had a live fish tank with a computer next to it. The screen showed video of the live tank with the names of the fish superimposed over the moving video of the fish.

Twister-like game on a table surface Augmented fish tank

There were other interesting demos as well, which you can read about on the GVU Center’s Demo Showcase page.


A Virtual Team Room in Wonderland

October 20, 2008

For the past eight weeks, Sun Labs in Burlington, MA has had the honor of hosting four teams of interns from Worcester Polytechnic Institute (WPI) in Worcester, MA. The Wonderland project had one of the teams: Joshua Dick and Gerard Dwan, both seniors at WPI. For those who have been inside our MPK20 demo building, our "team room" is quite sparse: besides some pretty graphics, there is, in fact, nothing in it. Josh and Gerard’s project was to think about what might go in this team room specifically to support the activities of students in the software engineering class at WPI, and to build some components to realize that vision. Curious what they came up with? Well, Josh, Gerard, take it away…

Guest blog contributed by WPI Students Joshua Dick and Gerard Dwan. You may watch a video demo about this blog at http://www.youtube.com/watch?v=IrahHyFTDWA&fmt=18.

A Virtual Team Room 

We, Josh and Gerard, are seniors at Worcester Polytechnic Institute (WPI) who have been developing components for Project Wonderland for the past two months, as part of our senior project. This post is adapted from a presentation that we gave at Sun Labs after completing the project.

WPI offers a course called Software Engineering, which allows students to emulate working on a team in the software industry. In the course, the professor defines a project for the class, and students form (competing) teams to work on that project. During team meetings, students brainstorm, assess tasks, evaluate their progress and (of course) code! It is the team meeting aspect of this course that our project aims to improve.

Students don’t always have the proper accommodations to conduct meetings. Students come from all over campus, there are (at times) no places to meet, and scheduling is often part of the problem. Using a virtual team room could potentially eliminate all of these issues. Throughout the project, we had a vision of a Virtual Team Room in Wonderland. The Virtual Team Room is a place where students taking WPI’s Software Engineering course can meet to collaborate in a 3D space 24×7. They can assess how their team is progressing as well as plan for the future. In addition, course faculty can easily monitor student activity and progress, while offering students relevant advice. Our virtual team room could make use of components that are already built into Wonderland: things like its phone and voice capabilities, whiteboards, and PDF Viewers.

So, we started brainstorming ideas for new components that could address our particular set of problems. We thought of things like a virtual library for all of the required reading/on-line materials, task orbs to show project progress, and other (more crazy) ideas.

HTML Viewer

The first component that we decided to move forward with is the HTML Viewer, which was our ‘training wheels’ project that we used to get comfortable with developing for Wonderland. The HTML Viewer can display web pages in a way that is far more lightweight than the way it is currently done in Wonderland, with no dependencies on external web browsers or X11. It displays a single web page as an in-world poster, which could show the latest Hudson build statistics or other information in the team room.

The HTML viewer is simple to use, and works similarly to Wonderland’s existing PDF Viewer. (Incidentally, the HTML Viewer is based on the PDF Viewer’s code.) Users can open web pages, zoom in and out, and refresh pages in their local Wonderland client by using the appropriate HUD buttons. Like the PDF Viewer, the HTML viewer can also be toggled between synchronized and desynchronized modes using the appropriate HUD button. When a page is opened in any client that is in synchronized mode, all other clients that are in synchronized mode begin to render that same page. When in desynchronized mode, a client can open a new web page independently (without affecting other clients), and can then resynchronize at a later time to see whatever page everyone else is currently seeing. It should be noted that all web page rendering is done on each Wonderland client locally, rather than on the Wonderland server.
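
To give a flavor of how lightweight client-side rendering can be, here is a minimal sketch of the general approach (our assumption only, not the HTML Viewer’s actual code) that uses Swing’s built-in HTML support to paint a page into an off-screen image, which a client could then texture onto an in-world poster:

```java
import javax.swing.JEditorPane;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;
import java.util.Scanner;

/**
 * Hypothetical sketch: render a web page to an image entirely on the client,
 * with no external browser or X11 dependency. Illustrative only.
 */
public class SimpleHtmlRenderer {

    public static BufferedImage render(String url, int width, int height) throws IOException {
        // Fetch the page source (a real viewer would handle redirects, encodings, errors, ...).
        String html;
        Scanner in = new Scanner(new URL(url).openStream(), "UTF-8");
        try {
            html = in.useDelimiter("\\A").next();
        } finally {
            in.close();
        }

        // Swing's JEditorPane can lay out basic HTML without any native browser.
        JEditorPane pane = new JEditorPane("text/html", html);
        pane.setSize(width, height);

        // Paint the laid-out page into an off-screen image.
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();
        pane.print(g);   // print() paints the component without needing it on screen
        g.dispose();
        return image;
    }
}
```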

 

HTML Viewer Screenshot   HTML Viewer Architecture

We feel that our lightweight HTML viewer is valuable both for our vision of the Virtual Team Room and for Wonderland in general, in situations where a full-featured web browser is unnecessary to simply display a web page passively.

Here are some features we would have liked to add to the HTML viewer if we had more time to work on it:

  1. ‘Web browser-like’ behavior with clickable links
  2. Panning/scrolling of web sites, possibly using mouse dragging
  3. Refreshing the currently displayed web site on a timer, automatically (The current HTML Viewer does have a manual refresh feature.)

We spent a total of two weeks working on the HTML viewer from start to finish.

WonderBlocks

After completing the HTML Viewer, we removed our aforementioned training wheels and did some more brainstorming, considering components that would uniquely utilize Wonderland’s 3D space. We came up with an idea that we decided to call WonderBlocks. At first we wanted WonderBlocks to be a utility to display tasks and their dependencies inside the Virtual Team Room. Over time, however, WonderBlocks evolved into a 3D diagramming and data visualization tool for Wonderland. WonderBlocks can be used to display any kind of relational data, from tasks and their dependencies to Flickr tag clouds.

WonderBlocks’ current functionality is accessed through its HUD, which is triggered either by clicking any Block or connection, or by simply walking up to the WonderBlocks in-world. There are buttons in the HUD for:

  1. Creating Blocks, which can be assigned a name, color, shape, size, and position.
  2. Creating connections between Blocks, which can be assigned a name and direction. When in Connect Mode, click two Blocks in succession to connect them.
  3. Editing Blocks and connections, which allows the user to change the properties that were assigned to the Blocks/connections when they were first created. When in Edit Mode, click any Block or connection to edit it.
  4. Deleting Blocks and connections. When in Delete Mode, click any Block or connection to delete it. Deleting a Block also deletes any connections it’s associated with.

The HUD is dismissed only when the user clicks its close button, no matter how the HUD is initially triggered. This way, the user can easily manipulate WonderBlocks from any angle and distance, without the HUD automatically vanishing.
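
For readers curious how such a diagram might be represented under the hood, here is a hypothetical sketch of a Block and connection data model; all class and field names are illustrative and are not taken from the actual WonderBlocks source:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Hypothetical sketch of a WonderBlocks-style diagram model.
 * Names are illustrative, not from the actual module.
 */
public class BlockDiagram {

    public enum Shape { CUBE, SPHERE, CYLINDER }

    /** A Block has a name, color, shape, size, and position. */
    public static class Block {
        String name;
        Shape shape;
        int rgbColor;
        float size;
        float x, y, z;   // position within the cell
    }

    /** A directed, named connection between two Blocks. */
    public static class Connection {
        Block from, to;
        String label;
    }

    private final List<Block> blocks = new ArrayList<Block>();
    private final List<Connection> connections = new ArrayList<Connection>();

    public void add(Block b) { blocks.add(b); }

    public void connect(Block from, Block to, String label) {
        Connection c = new Connection();
        c.from = from;
        c.to = to;
        c.label = label;
        connections.add(c);
    }

    /** Deleting a Block also deletes any connections it is associated with. */
    public void delete(Block b) {
        blocks.remove(b);
        for (Iterator<Connection> it = connections.iterator(); it.hasNext();) {
            Connection c = it.next();
            if (c.from == b || c.to == b) {
                it.remove();
            }
        }
    }
}
```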

WonderBlocks Screenshot   WonderBlocks Architecture

In the future, we hope that Wonderland developers will utilize and continue to improve WonderBlocks. Our ideas for expansion include:

  1. Changing the 3D drawing method for scalability. Right now, whenever the diagram changes, the entire diagram is redrawn, rather than only the parts that changed. This may be slow for very large diagrams.
  2. ‘Prettier’ connections utilizing Java3D cylinders rather than unshaded lines.
  3. Custom metadata for Blocks/connections. Right now, the only data that can be associated with WonderBlocks components is the data we outwardly present to the user (color, shape, size, etc.). It would be nice to be able to hover over a Block or connection and see custom, user-assigned data associated with that Block or connection.
  4. The ability to drag Blocks in Edit Mode.
  5. All user-triggered changes to WonderBlocks diagrams are logged by both the Wonderland client and server, using a consistent format. If some sort of log parser were to be written, Software Engineering course faculty could easily see how students are actually using WonderBlocks, and potentially gauge their work.
  6. The ability to create different views of the same data. If WonderBlocks is being used to display project tasks, for example, it would be nice to be able to arrange the diagram by completion status, by deadline, or by owner. This ability largely depends on idea 3 being implemented first.
  7. Use WonderBlocks with external data sources.

About idea 7: Currently, WonderBlocks can be used to manually construct 3D diagrams. However, we’ve taken steps to ensure that WonderBlocks can eventually be used to visualize preexisting data from elsewhere. Picture these potential uses:

  1. WonderBlocks could potentially connect to Facebook and log into someone’s account, and create a 3D diagram of their Facebook friends and the connections between them.
  2. WonderBlocks could connect to the Flickr photography web site, and generate a diagram representing Flickr tags and their relationships (similar tags, etc.) Perhaps clicking on a WonderBlock could display photos associated with the tag (or group of tags) that the Block could potentially represent.
  3. For our original vision of the Virtual Team Room for Software Engineering students, WonderBlocks could connect to the SourceForge software development management system and create a diagram of project tasks and their corresponding dependencies and status. Maybe Block shape could represent task type/category, and Block color (green, yellow, and red in this case) could represent completion status of a task.

As you can see, there is a very wide range of possibilities and uses for WonderBlocks.

We spent about five weeks working on WonderBlocks from start to finish.

Summary

The HTML Viewer and WonderBlocks are the two components that we created to work towards our vision of a Virtual Team Room in Wonderland. We feel that both components are valuable contributions to the Wonderland community, and we hope that they’ll both be used and expanded upon in ways that we never imagined.

The source code for the HTML Viewer and WonderBlocks is available at the following locations in the Wonderland Modules Incubator Subversion repository:

AN IMPORTANT NOTE: There are free libraries required for the HTML Viewer that are not included in the repository because of licensing issues. Please see the README file in the HTML Viewer repository for more information.


Machinima

October 11, 2008

Machinima (/məˈʃɪnəmə/), a portmanteau of machine cinema, is a collection of associated production techniques whereby computer-generated imagery (CGI) is rendered using real-time, interactive 3-D engines instead of professional 3D animation software. -Wikipedia

Over the past couple of weeks, we’ve (well, mostly Nicole actually) spent a fair bit of time generating some machinima (the group here at Sun Labs is producing a video for an internal review that happens at this time each year). We decided to give a technical description of Wonderland in Wonderland–the team members as avatars talking about their part of the project. It turned out to be about 40 minutes long, with each team member speaking for about 5 minutes. Much credit (as always) is also due to the community: we gratefully included snippets of worlds that folks like yourself have created. (And yes, we are hoping to release this video to the public, so stay tuned.)

Machinima requires some unique skills that we had to develop. I’m still not sure I’m very good at this myself, but I wanted to at least share some experiences.

Camera Client

In our movie, we wanted our characters to face the camera and speak (so the camera perspective is in the third-person) — this is a problem if you simply do a "movie capture" of your screen, because you’ll always see the back of your avatar’s head! To capture an avatar facing the camera and speaking, we need to use two clients: one as the avatar and one as a camera. Here, the ‘c’ key to control the camera view is really useful: it hides the avatar for the camera client so the speaker is all that you’d see. Since we want the avatar to appear large on the screen, we found adjusting the viewing angle helped: typically we used viewing angles between 40 and 60 degrees. All of this is in fact the purpose that Bernard’s Movie Recorder serves–it’s a specialized client that lets you shoot movies in third-person. For the times when you truly wish to capture the scene in first-person (seeing the back of your avatar’s head), a "movie capture" of your normal Wonderland client is what you’ll use.

Jordan facing front

Having the camera move during filming was also challenging: despite being able to control the speed of motion with the ‘+’ and ‘-’ keys, I found the movement to be somewhat jerky at times and difficult to control effectively. I often had to practice the movement several times before getting it right. (As I’m writing this I’m realizing the scripting work by community member Morris Ford may really help out here — a "movie camera motion" script could control all of this. I’m not sure if this violates the spirit of machinima though, where each participant should be controlled by a live human!). We often had to shoot the movie in short segments–needless to say there was a lengthy post-processing editing step in Final Cut Express.

That leads me to my next point: placemarks were extremely useful and solved the issue of the director’s cry "places everyone". Especially when filming a complex scene, it was imperative for all actors to be able to start at the same location over and over again, particularly when these scenes were spliced together during the final editing step.

Audio

Distance attenuation is a great feature in Wonderland, except when trying to film a movie! Since we recorded the audio from the camera client, the volume of the audio depended upon the distance between the camera and the speaking avatar (not to mention the audio gain of the speaker’s microphone). This means the audio level can fluctuate greatly, which is not really a desired effect in a movie. Take, for instance, the pictures below, where the camera starts out far away from my avatar and zooms in over time. In this case we had to record the audio track separately from the video track, keeping the distance between the avatar and camera fixed while recording the audio. Actually, the audio levels have proved to be quite a challenging aspect of doing machinima in Wonderland.
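
To illustrate why the recorded level varies, here is a toy gain calculation. The exact falloff curve Wonderland’s audio system uses may well differ; this sketch only shows the effect of camera distance on volume:

```java
/**
 * Toy illustration of distance attenuation: the gain applied to a speaker's
 * audio falls off with distance from the listener (here, the camera client).
 * The actual falloff curve used by Wonderland may differ; this only shows
 * why zooming the camera changes the recorded level.
 */
public class AttenuationDemo {

    /** Simple inverse-distance model with full volume inside a reference radius. */
    static double gain(double distance, double fullVolumeRadius) {
        if (distance <= fullVolumeRadius) {
            return 1.0;
        }
        return fullVolumeRadius / distance;
    }

    public static void main(String[] args) {
        // A camera 2 m away records the avatar at full volume...
        System.out.println(gain(2.0, 2.0));   // 1.0
        // ...but from 20 m away the same speech comes through at 10% gain,
        // which is why we recorded the audio track at a fixed distance.
        System.out.println(gain(20.0, 2.0));  // 0.1
    }
}
```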

Jordan far away Jordan near

Avatar Expressiveness

Our current avatars are noticeably expressionless, although they do naturally sway a bit. The only indication that an avatar is talking is their name flashing above their head. This indication is as important for machinima with a single avatar present as it is for distinguishing who is speaking amongst a crowd of avatars. Even with this indication, a scene with a stationary avatar speaking can get boring after a while, so motion is very important, as Nicole discovered. This is where editing and splicing together scenes shot from different perspectives really helps. What would be even better is if we could have multiple invisible, motion-scripted cameras filming a scene at the same time.

Conclusions?

Overall, I’m not certain you’ll see a major motion picture filmed in Wonderland anytime soon, but we were pleased that we could film 40 minutes’ worth of cinema in it. Certainly our brand new avatars in v0.5 will help a great deal in making our characters more, well… charismatic.

A great community project would be to enhance Wonderland to make machinima easier: clients could support both first and third-person points of view. This becomes easier in v0.5, which lets you attach your client camera to any attach point in the world, not just the several camera positions possible in v0.4 today. You should also be able to "turn off" audio attenuation, similar to how the Cone of Silence operates: all avatars within a certain radius are set to full volume. And finally, although it may violate the spirit of machinima, I’m looking forward to seeing how scripting can help us automate the camera motion throughout a scene.

 


Using a Sun SPOT to control a Wonderland Client

October 3, 2008

I thought I’d bring to your attention some work we’ve done to get the Sun SPOT and Wonderland working together. Last year we made source code available that lets you use a Sun SPOT as a controller, using the accelerometer in the Sun SPOT to guide your avatar. However, to be frank, the code was very well hidden.

So, we’ve updated the code and added the ability to control some primitive avatar gestures, using the switches on a Sun SPOT. I can imagine that this doesn’t necessarily sound terribly impressive, but… the reason we’re pursuing this is that some of our colleagues in the MiRTLE project at the University of Essex in the UK have been working on connecting a combined thumb-sized ‘bio-sensor’ to a Sun SPOT. Here’s an illustration of an early prototype, connected to a rev B Sun SPOT. (The bio-sensor combines a galvanic skin response sensor, a temperature sensor and an infrared pulse sensor.)
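
As a rough sketch of what the Wonderland-side mapping might look like (hypothetical names throughout; the tilt and switch readings are assumed to have already arrived from the SPOT over its radio/USB link, rather than being read with the Sun SPOT SDK here), tilt values can be turned into movement commands and switch presses into gestures:

```java
/**
 * Hypothetical sketch of mapping Sun SPOT readings to avatar control.
 * The tilt values (in degrees) and switch states are assumed to have
 * already been received from the SPOT; names are illustrative only.
 */
public class SpotAvatarController {

    /** Dead zone so small hand tremors don't move the avatar. */
    private static final double DEAD_ZONE_DEGREES = 10.0;

    public enum Command { NONE, FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT }
    public enum Gesture { NONE, WAVE, NOD }

    /** Map fore/aft and left/right tilt to a movement command. */
    public Command movementFor(double tiltForwardDeg, double tiltSideDeg) {
        if (Math.abs(tiltForwardDeg) >= Math.abs(tiltSideDeg)) {
            if (tiltForwardDeg > DEAD_ZONE_DEGREES)  return Command.FORWARD;
            if (tiltForwardDeg < -DEAD_ZONE_DEGREES) return Command.BACKWARD;
        } else {
            if (tiltSideDeg > DEAD_ZONE_DEGREES)  return Command.TURN_RIGHT;
            if (tiltSideDeg < -DEAD_ZONE_DEGREES) return Command.TURN_LEFT;
        }
        return Command.NONE;
    }

    /** Map the SPOT's two switches to primitive avatar gestures. */
    public Gesture gestureFor(boolean switch1Pressed, boolean switch2Pressed) {
        if (switch1Pressed) return Gesture.WAVE;
        if (switch2Pressed) return Gesture.NOD;
        return Gesture.NONE;
    }
}
```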

B-SPOT & bio-sensor

The goal is to use the bio-sensor to sense the user’s emotional state (in terms of arousal and valence) and then use that information to change the appearance/posture/movement/etc of your avatar. And why would we want to do this? Well, we think that one of the current problems with using VWs for education is that there’s no implicit non-verbal communication (other than requiring users to explicitly type emoticons and the like). Our hypothesis is that we can replicate some of this non-verbal communication using this kind of technology.

Oh, and we’ve updated the source code and tidied it up so everyone can use it. Check out the sunSpot directory of the CVS repository in the Wonderland Incubator project. And to see it in action, take a look at the video.

Thanks to our colleagues Xristos Kalkanis and Malcolm Lear in the Department of Computing and Electronic Systems at the University of Essex for their help.

