Machinima

Machinima (/məˈʃɪnəmə/), a portmanteau of machine cinema, is a collection of associated production techniques whereby computer-generated imagery (CGI) is rendered using real-time, interactive 3-D engines instead of professional 3D animation software. -Wikipedia

Over the past couple of weeks, we (well, mostly Nicole, actually) have spent a fair bit of time generating some machinima (the group here at Sun Labs is producing a video for an internal review that happens at this time each year). We decided to give a technical description of Wonderland in Wonderland–the team members, as avatars, talking about their parts of the project. The video turned out to be about 40 minutes long, with each team member speaking for about 5 minutes. Much credit (as always) is also due to the community: we gratefully included snippets of worlds that folks like yourself have created. (And yes, we are hoping to release this video to the public, so stay tuned.)

Machinima requires some unique skills that we had to develop. I’m still not sure I’m very good at it myself, but I wanted to at least share some of our experiences.

Camera Client

In our movie, we wanted our characters to face the camera and speak (so the camera perspective is in the third person). This is a problem if you simply do a "movie capture" of your screen, because you’ll always see the back of your avatar’s head! To capture an avatar facing the camera and speaking, we needed to use two clients: one as the avatar and one as a camera. Here, the ‘c’ key to control the camera view is really useful: it hides the avatar for the camera client, so the speaker is all that you see. Since we wanted the avatar to appear large on the screen, we found that adjusting the viewing angle helped: typically we used viewing angles between 40 and 60 degrees. All of this is, in fact, the purpose that Bernard’s Movie Recorder serves–it’s a specialized client that lets you shoot movies in the third person. For the times when you truly wish to capture the scene in the first person (seeing the back of your avatar’s head), a "movie capture" of your normal Wonderland client is what you’ll use.
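To see why a narrower viewing angle makes the avatar fill more of the frame, a little geometry helps. This is a standalone sketch (in Java, since Wonderland itself is Java-based), not Wonderland code: with a vertical field of view, the visible height of the scene at distance d is 2·d·tan(fov/2), so the fraction of the frame a subject occupies grows as the angle shrinks. The avatar height and distances are made-up illustrative numbers.

```java
// Hedged framing sketch -- not the Wonderland API, just the underlying math.
public class FramingSketch {
    /** Fraction of the vertical frame a subject of height h fills at distance d. */
    static double frameFraction(double subjectHeight, double distance, double fovDegrees) {
        // Visible scene height at this distance for the given field of view.
        double visibleHeight = 2.0 * distance * Math.tan(Math.toRadians(fovDegrees) / 2.0);
        return subjectHeight / visibleHeight;
    }

    public static void main(String[] args) {
        // A 1.8 m avatar seen from 3 m away:
        System.out.printf("60 deg: %.2f of frame%n", frameFraction(1.8, 3.0, 60)); // wider angle, smaller avatar
        System.out.printf("40 deg: %.2f of frame%n", frameFraction(1.8, 3.0, 40)); // narrower angle, larger avatar
    }
}
```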

[Image: Jordan facing front]

Having the camera move during filming was also challenging: despite being able to control the speed of motion with the ‘+’ and ‘-‘ keys, I found the movement to be somewhat jerky at times and difficult to control effectively. Oftentimes I had to practice a movement several times before getting it right. (As I write this, I’m realizing that the scripting work by community member Morris Ford may really help out here–a "movie camera motion" script could control all of this. I’m not sure whether that violates the spirit of machinima, though, where each participant should be controlled by a live human!) We often had to shoot the movie in short segments at a time–needless to say, there was a lengthy post-processing editing step in Final Cut Express.
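The core of such a "movie camera motion" script is simple interpolation between keyframed positions. Here is a minimal, hypothetical sketch of that step; it does not use the actual Wonderland scripting API (the coordinates and frame count are made up), it only shows how a script could produce the smooth dolly moves that are so hard to do by hand with ‘+’ and ‘-‘:

```java
// Hypothetical camera-path sketch: linear interpolation between two keyframes.
public class CameraPath {
    /** Linearly interpolate one coordinate between a and b at time t in [0,1]. */
    static double lerp(double a, double b, double t) {
        return a + (b - a) * t;
    }

    /** Camera position along a two-keyframe dolly move at time t in [0,1]. */
    static double[] positionAt(double[] start, double[] end, double t) {
        return new double[] {
            lerp(start[0], end[0], t),
            lerp(start[1], end[1], t),
            lerp(start[2], end[2], t)
        };
    }

    public static void main(String[] args) {
        double[] start = {0, 2, 10};  // far from the avatar
        double[] end   = {0, 2, 3};   // zoomed in close
        // Evaluate a few frames along the move; a real script would feed these
        // positions to the camera once per rendered frame.
        for (int frame = 0; frame <= 4; frame++) {
            double[] p = positionAt(start, end, frame / 4.0);
            System.out.printf("frame %d: z = %.2f%n", frame, p[2]);
        }
    }
}
```

A real version would likely want ease-in/ease-out rather than a constant speed, which is exactly the jerkiness problem the keyboard controls have.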

That leads me to my next point: placemarks were extremely useful and answered the director’s cry of "places, everyone!" Especially when filming a complex scene, it was imperative for all actors to be able to start at the same locations over and over again, particularly when these scenes were spliced together during the final editing step.

Audio

Distance attenuation is a great feature in Wonderland, except when you’re trying to film a movie! Since we recorded the audio from the camera client, the volume of the audio depended on the distance between the camera and the speaking avatar (not to mention the audio gain of the speaker’s microphone). This means the audio level can fluctuate greatly, which is not a desirable effect in a movie. Take, for instance, the pictures below, where the camera starts out far away from my avatar and zooms in over time. In this case we had to record the audio track separately from the video track, keeping the distance between the avatar and the camera fixed while recording the audio. The audio levels have proved to be one of the most challenging aspects of doing machinima in Wonderland.

[Images: Jordan far away | Jordan near]

Avatar Expressiveness

Our current avatars are noticeably expressionless, although they do sway a bit naturally. The only indication that an avatar is talking is its name flashing above its head. This indication proved as important for machinima with a single avatar present as it is for distinguishing who is speaking amongst a crowd of avatars. Even with this indication, a scene with a stationary speaking avatar can get boring after a while, so motion is very important, as Nicole discovered. Here is where editing and splicing scenes shot from different perspectives becomes very important. What would be even better is if we could have multiple invisible, motion-scripted cameras filming a scene at the same time.

Conclusions?

Overall, I’m not certain you’ll see a major motion picture filmed in Wonderland anytime soon, but we were pleased that we could film 40 minutes’ worth of cinema in it. Certainly our brand-new avatars in v0.5 will help a great deal in making our characters more, well… charismatic.

A great community project would be to enhance Wonderland to make machinima easier: clients could support both first- and third-person points of view. This becomes easier in v0.5, which lets you attach your client camera to any attach point in the world, not just the handful of camera positions possible in v0.4 today. You should also be able to "turn off" audio attenuation, similar to how the Cone of Silence operates: all avatars within a certain radius are heard at full volume. And finally, although it may violate the spirit of machinima, I’m looking forward to seeing how scripting can help us automate camera motion throughout a scene.


One Response to Machinima

  1. Nigel Simpson says:

    Interesting article, Jordan.
    There’s a solution to the audio recording problem you describe in the Audio section.
    You can tweak the audio fall off dynamically from the Wonderland Manager application. You can increase the full volume radius so that you can hear avatars at a greater distance than normal. Here’s how:
    If you have the developer features enabled, go to ‘Dev Tools->Wonderland Manager’. Click on the [Voice Manager] tab and then adjust the "Live full Vol Radius" for live avatars. To get a sense of the effect, increase the "Stationary full Vol Radius". As you increase the radius, each of the scripted avatars in the world will come into audio range. It’s kind of like having super-sensitive ears :)
