Last week, Jordan and I were in Washington, DC to attend the 3D Training, Learning and Collaboration (3D TLC) conference and to demo Wonderland v0.5 at the Federal Consortium for Virtual Worlds Conference vendor day. We stayed at the Kellogg Conference Hotel on the campus of Gallaudet University, a world-renowned school for students who are deaf. I also arrived in DC early enough before the conference to catch one of the two weekends a year that the White House grounds are open to the public. Along with thousands of others, I was able to get a glimpse of the back of the White House and take a peek at the Rose Garden.
3D TLC Highlights
Lots of other people have blogged about the 3D TLC conference, so check the list I’ve included if you’d like to learn more details about the overall conference. In this post, I’m just going to focus on the bits I thought were particularly relevant to Wonderland.
The highlight of the conference for me was the presentation by Mitzi Montoya, a professor of marketing innovation at North Carolina State University. Mitzi and her collaborator, Indiana University Professor Anne Massey, are developing a measurement tool to assess "perceived virtual presence." This tool includes 8-10 measures and touches on how people relate to objects and other people in the virtual world. This article, "Divining Value: Dr. Mitzi Montoya measures the perceived value of virtual reality," provides a bit more detail. Their preliminary study results show that in both training and collaboration tasks, presence does have an impact on performance.

Mitzi then went on to describe her current work. She is in the process of conducting a controlled study of students learning MATLAB in a Wonderland world. In talking with her after her presentation, Mitzi explained that she had done most of her work in Second Life, but Wonderland allows her to run MATLAB inside the virtual world using their own "virtual lab" VNC-like software. Since the MATLAB exercises have correct answers, she will be able to assess which student teams are most successful in completing the experimental tasks under different conditions. She is also considering running another study using Wonderland that involves a virtual maze. In this setup, one collaborator is inside the maze, and the other collaborator is on top of the maze. The person on top needs to direct the person in the maze to the exit.
In addition to Mitzi’s mention of Wonderland, both Sun presenters – Mary Smaragdis and Robin Williams – showed some video clips of Wonderland v0.5. Beyond discussing Sun’s use of Second Life for internal training and external events, they emphasized Wonderland’s extensibility, the ability to drag and drop content into the world, the ability to connect to enterprise resources, and object-level security.
Here are some other blogs about the conference for more information:
- Ian Hughes (E-Predator) Washington 3DTLC, twitter, education and progress
- 3D: TLC Conference Panel Discussion
- Where Virtual Worlds are Going: Why is the New What, How is for Tomorrow (Report on 3DTLC)
- Jeff Lowe – Terminology Tossed Salad (3DTLC pt 1)
- 3D Training Learning & Collaboration: A virtual worlds conference LIVE from DC
- Live Twitter stream from conference
A Few Interesting Technologies
A company called The Venue Network was demoing a web-based 3D environment called VenueGen. The functionality was quite limited, but it seemed like it could be effective for large presentations that require a low barrier to entry for new users. What particularly interested me, however, was the research they had done on gestures. They came up with the concept of gesture archetypes, described in detail in their Gestures Whitepaper. The concept is fairly simple. Each time you click on the wave icon, for example, your avatar waves slightly differently. If you double-click on any of the gesture icons, the gesture is exaggerated, so you get a big wave instead of an ordinary wave. And if you click and hold on the icon, the gesture continues until you release the mouse button. I’m not sure this user interface is ideal, but The Venue Network designers’ emphasis on gestures is right on target. Varying the way gestures are animated and enabling gestures of different magnitudes is likely to add a significant amount of realism to the environment.
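The gesture-archetype idea above – a random variant per trigger, an exaggerated version on double-click, and a looping version on click-and-hold – can be sketched in a few lines of Java. All class, method, and clip names here are invented for illustration; this is not VenueGen's actual API.

```java
import java.util.List;
import java.util.Random;

/**
 * Hypothetical sketch of a "gesture archetype": each trigger picks a
 * slightly different variant, a double-click exaggerates the motion,
 * and holding the icon loops it until release. Illustration only.
 */
public class GestureArchetype {
    public enum Trigger { CLICK, DOUBLE_CLICK, HOLD }

    private final String name;
    private final List<String> variants; // e.g. animation clip ids
    private final Random random = new Random();

    public GestureArchetype(String name, List<String> variants) {
        this.name = name;
        this.variants = variants;
    }

    /** Choose a variant and a magnitude based on how the icon was triggered. */
    public String select(Trigger trigger) {
        // Picking a random variant is what makes each wave look a bit different.
        String variant = variants.get(random.nextInt(variants.size()));
        switch (trigger) {
            case DOUBLE_CLICK: return variant + ":exaggerated";
            case HOLD:         return variant + ":looping";
            default:           return variant + ":normal";
        }
    }

    public String name() { return name; }
}
```

A client would map mouse events on each gesture icon to the corresponding `Trigger` and hand the resulting clip id to the avatar animation system.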
Another product that caught my attention was PowerU, demoed at the Federal Consortium event. Until I reviewed my notes, I didn’t realize that this product is related to VenueGen in that both products are based on Icarus Studios game technology. PowerU is a collaboration between Icarus Studios and the American Research Institute, a training services provider. Check out the "PowerU Tools Suite" and the "In World Motion Capture" videos available on the PowerU home page (click the "Next >>" link under the video thumbnail to find the different videos). The software runs on a Linux server, but the client-side software is Windows-only. The tools are all Collada-based, so it would certainly be interesting to see if there was a way to use the output of their world builder or terrain generator in Wonderland. Maybe some day! An interesting bit of gossip that came from this session is that the new movie called "Avatar," directed by James Cameron, was filmed largely using this technology.
One more technology I want to highlight is Vastpark, also demoed at the Federal Consortium event. Vastpark has an architectural model with some similarities to Wonderland. Their Immersive Media Markup Language (IMML) is akin to our Wonderland File System (WFS); both are based on XML. They also have a plugin system for extensibility. That said, Vastpark is not platform-independent: it currently runs on Windows, with a Mac client planned. One cool feature is their Continuum event recorder mechanism. It looked fairly easy to capture 3D content and share the recording with others.
Our Project Wonderland v0.5 Demo
We conducted two hour-long sessions on Wonderland at the Federal Consortium Vendor Day pre-conference event. We showed lots of video of v0.4 to give the audience a sense of Wonderland’s features and what people in the open source community have been doing with the toolkit. We then did a live demo of v0.5. Despite never once getting through the demo the night before without a major failure, the demo worked extremely well in both sessions. Jordan was able to get drag and drop of .kmz 3D models working just in time, so we were able to demo that, as well as drag and drop of images and an .svg whiteboard document. We showed how multiple users can edit the whiteboard at the same time, how the tool palette can be used to add interactive components to the world such as the cool new video player, and how the in-world tools can be used to manipulate 3D objects.
This audience was particularly interested in the new security features. We showed how you can add a security component to any object in the world by right-clicking on an object, selecting "Properties" from the context menu, and then clicking on the "+" below the Capabilities list in the Cell Property Editor window. This brings up the Capabilities dialog with the Cell Security Component option:
We then selected the group called "users," which is a default group set up to encompass all users of the system other than the owner of the object. We then clicked "Edit…" and changed the View permission setting from "Granted" to "Denied":
Doing this caused the object to immediately disappear on the second client we were using. We also showed how the group feature could be used to set up access control groups so that security permissions can be set differently for different groups of users. To create a user group, select "Groups…" from the Wonderland Edit menu and then click the "Add…" button in the Edit Groups window. This allows you to enter a name for the group and add users:
This group will then show up in the Permissions list on the Cell Security Component property sheet.
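The behavior we demoed – the object vanishing on the second client the moment "users" had View set to "Denied" – boils down to a per-group permission check on each cell. Here is a minimal Java sketch of that deny-overrides logic; the class and method names are invented for illustration and do not correspond to the actual Wonderland security API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical illustration of per-cell View permissions: if any group
 * a user belongs to has View denied on an object, that object is hidden
 * from the user's client. Illustration only, not the real Wonderland API.
 */
public class CellViewPolicy {
    public enum Permission { GRANTED, DENIED }

    // Per-group View permission on one cell (object) in the world.
    private final Map<String, Permission> viewByGroup = new HashMap<>();

    public void setView(String group, Permission p) {
        viewByGroup.put(group, p);
    }

    /** A user may view the cell unless some group they belong to is denied. */
    public boolean canView(Set<String> userGroups) {
        for (String g : userGroups) {
            if (viewByGroup.get(g) == Permission.DENIED) {
                return false;
            }
        }
        return true;
    }
}
```

In this sketch, setting View to Denied for the default "users" group hides the cell from everyone in that group, while members of other groups (or the object's owner, who is outside "users") continue to see it.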