Rapid Animation Software

Why Frame-By-Frame Isn't Feasible

A bit of calculation based on the animation budgets for the Blender Open Movies ("Sintel", "Big Buck Bunny", "Elephants Dream") shows that the frame-by-frame animation techniques used in those films (and for which Blender is optimized) are simply too expensive for "Lunatics", where we plan to have 30-45 minute full-length episodes.

We'd need to raise something like $1.5 to $2 million an episode to do it that way, and I really don't think that's going to happen. Nor am I going to be able to convince animators to do that amount of work for free or on speculation of getting enough money from profit-sharing to justify their investment of time.

If that were the whole story, I'd probably have to drop the project. Fortunately, though, it's not.

Animating in Real-Time

Of course, not all animation is done by painstaking frame-by-frame adjustment. In 3D video games, for example, the animation is typically patched together by an artificial intelligence program from individual motions, which are generally created with motion capture. The same kinds of methods can be applied to creating animation for stories, and we're going to try to use them to keep our animation budget manageable (ideally it will be low enough that we ''can'' do it on a spare-time basis and just get paid on speculation from profit-sharing -- failing that, we'd at least be able to keep costs low enough to handle with a small fund-raising campaign for each "block" or "volume" of episodes).
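
Just to make that concrete, here's a toy sketch (nothing Blender-specific, and all of the clip names are made up) of the kind of logic a game engine uses: high-level movement commands get turned into a sequence of pre-recorded motion clips.

<pre>
# Conceptual sketch: a game-style controller that builds a performance
# by stitching together pre-recorded motion clips. The clip names and
# library are hypothetical stand-ins, not a real engine's API.

MOTION_LIBRARY = {
    "idle": ["idle_loop.bvh"],
    "walk": ["walk_cycle.bvh"],
    "turn": ["turn_left.bvh", "turn_right.bvh"],
}

def plan_motion(commands):
    """Turn a list of high-level commands into a sequence of clips."""
    sequence = []
    for command in commands:
        clips = MOTION_LIBRARY.get(command, MOTION_LIBRARY["idle"])
        sequence.append(clips[0])  # a real system would choose and blend here
    return sequence

# A short "performance" described at the level a director might use:
print(plan_motion(["idle", "walk", "turn", "walk"]))
</pre>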

This is probably the highest-risk issue with the production plan right now, so I'm giving it a lot of attention.

First of all, I've broken this down into three main techniques (main article: Animation Style): machinima, digital puppetry, and motion capture.

I recently published an article in Free Software Magazine, "Three Real-Time Animation Methods: Machinima, Digital Puppetry, and Motion Capture", that explores these approaches:

Machinima

Machinima, broadly, is the practice of using engines designed for video games to animate dramatic presentations. It started as simple screen captures of scenes played out within games, but has evolved to include engines optimized for machinima production. More narrowly, I would define machinima as a technique that draws heavily on simulated physics and artificial-intelligence techniques to have characters move on their own, with relatively little guidance from the animator. In that sense, even high-end systems like "Massive" (used by Peter Jackson for the big battle scenes in the "Lord of the Rings" movies) can be considered examples of "machinima". In Blender, there are some techniques that use physics and AI simulations to generate animation, as illustrated by this video:

{{#ev:youtube|duIhI4Ae8aM}}

This particular approach is probably the way to go for exterior mechanical shots, like driving a rover or flying a spacecraft. It might also be used to convert simple movement commands into actual walking patterns for characters' legs (although patching together motion capture material is another way to do that).
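
As a rough sketch of what that looks like in Blender's Python API, the snippet below steps a trivial "drive forward and slowly turn" model for a rover and bakes the result to ordinary keyframes. The "Rover" object name and the motion numbers are placeholder assumptions; a real shot would get its motion from a proper physics or AI simulation.

<pre>
import math
import bpy  # Blender's Python API

# Hypothetical scene object representing the rover.
rover = bpy.data.objects["Rover"]
scene = bpy.context.scene

speed = 0.5    # metres per frame (assumed)
heading = 0.0  # radians
x, y = 0.0, 0.0

for frame in range(scene.frame_start, scene.frame_end + 1):
    # Very simple "drive forward and slowly turn" behaviour.
    heading += 0.01
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

    rover.location = (x, y, 0.0)
    rover.rotation_euler = (0.0, 0.0, heading)

    # Record the simulated state as ordinary keyframes.
    rover.keyframe_insert(data_path="location", frame=frame)
    rover.keyframe_insert(data_path="rotation_euler", frame=frame)
</pre>

The general pattern -- let a simulation or behaviour script decide the motion, then bake it to keyframes -- is what makes these shots cheap: nobody has to hand-animate the rover's path.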

Another possibility would be to use an existing machinima engine, possibly the open-source OpenSimulator package, an independent re-implementation of the Second Life server, which is already used by a modest-sized machinima community.

Digital Puppetry

Sometimes, though, you want finer control, and that's easier to achieve without AI or physics simulation getting in the way. Instead, you directly control the character using game controllers or other control surfaces. This example video is based on an early version of Pyppet. An updated version is in the works, and funding the development and adaptation of Pyppet may well be part of the production budget for "Lunatics" (worth the cost, considering how much it could save relative to frame-by-frame animation). The author, "HartsAntler", has already expressed some interest in this:

{{#ev:youtube|P3g63sdYtig}}
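
To make the idea concrete, here's a rough sketch of what "recording a puppetry take" means in Blender's Python API: some controller input (stubbed out below, since the real input device is still an open question) is mapped onto a pose bone and keyframed as the take plays. The rig and bone names are hypothetical.

<pre>
import math
import bpy

# Hypothetical rig and bone names -- whatever our final character rig uses.
rig = bpy.data.objects["CharacterRig"]
head = rig.pose.bones["head"]
head.rotation_mode = 'XYZ'

scene = bpy.context.scene

def read_controller_axis(frame):
    # Stand-in for reading a real game controller; here we just fake
    # a gentle head turn so the sketch runs on its own.
    return 0.3 * math.sin(frame / 12.0)

for frame in range(scene.frame_start, scene.frame_end + 1):
    head.rotation_euler.z = read_controller_axis(frame)
    # Record the "live" pose as keyframes, so the take can be edited later.
    head.keyframe_insert(data_path="rotation_euler", frame=frame)
</pre>

A real puppetry tool like Pyppet would read the controller and update the pose in real time, baking to keyframes only while recording a take; the frame loop above is just the simplest way to show the mapping.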

Motion Capture

Unlike digital puppetry, which requires the operator to learn the rig and possibly control the character in artificial, learned ways, motion capture attempts to capture an actor's performance directly and map it onto a CG character. This is a fairly hard problem, but there are a couple of possible solutions.

Here's a Blender example using some motion capture data available from online archives:

{{#ev:youtube|wc00tvc8u5k}}
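
For reference, Blender ships with a BVH importer that can be driven from Python. A minimal sketch (the file path is a placeholder, and retargeting the imported skeleton onto our own character rig is the hard part this doesn't touch):

<pre>
import bpy

# Import a motion-capture take from a BVH file (path is a placeholder).
# Blender's bundled BVH import operator creates a new armature carrying
# the captured animation, which then needs to be retargeted onto the
# actual character rig.
bpy.ops.import_anim.bvh(filepath="/path/to/capture_take_01.bvh")

imported = bpy.context.active_object
print("Imported mocap armature:", imported.name,
      "bones:", len(imported.pose.bones))
</pre>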

This example of facial capture is from "Monet", a non-realtime system created by Mark Kane using Python scripting in Blender. Mark has also expressed some interest in updating and improving his code:

{{#ev:youtube|09oEfXyUQkA}}
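
Facial capture in Blender usually ends up driving "shape keys" on the face mesh, so the back end of a system like Monet boils down to writing one value per shape key per frame. Here's a sketch of that step, with faked capture data and hypothetical mesh and key names:

<pre>
import math
import bpy

# Hypothetical face mesh with shape keys named "Smile" and "JawOpen".
face = bpy.data.objects["Face"]
keys = face.data.shape_keys.key_blocks
scene = bpy.context.scene

def captured_value(name, frame):
    # Stand-in for real tracking data; a capture system would supply
    # one value per shape key per frame.
    phase = 0.0 if name == "Smile" else 1.5
    return 0.5 + 0.5 * math.sin(frame / 10.0 + phase)

for frame in range(scene.frame_start, scene.frame_end + 1):
    for name in ("Smile", "JawOpen"):
        key = keys[name]
        key.value = captured_value(name, frame)
        key.keyframe_insert(data_path="value", frame=frame)
</pre>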

Honestly, I don't know which techniques will work best for "Lunatics", whether a mixture will be better than sticking to one technique, or exactly how it will "feel" on screen. I do think these techniques will affect the visual style of the show, and I'm okay with that. Not only will they save money, but they can also lend a degree of spontaneity to the production that is difficult to achieve with traditional animation. Another advantage is that these techniques may be easier to learn, making animation a less esoteric skill.

Adapting Rapid Animation Techniques to Lunatics

For us to use these techniques, we're going to have to get the software up to a level of stability sufficient for production, and we need to make the different packages work with each other in consistent ways and work with our models and rigs. For example, it's going to be fairly awkward if our facial-capture software works entirely with "shape keys" while our digital-puppetry software is based entirely on armatures. So we pretty much have to work out a single rig that can work with all of the techniques if we want to be able to use them in combination -- which would allow for the greatest artistic flexibility. We'd also like to be able to fall back on frame-by-frame tweaking when the situation calls for it.
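
One way to think about that requirement is a thin control layer that exposes every control -- armature bones and shape keys alike -- through a single interface, so that machinima, puppetry, mocap, or hand-keyframing can all write to the same rig. A rough sketch of the idea (all of the object, bone, and key names are hypothetical):

<pre>
import bpy

# Hypothetical unified rig: some controls live on armature bones,
# others on mesh shape keys, but callers only see named "channels".
rig = bpy.data.objects["CharacterRig"]
face = bpy.data.objects["Face"]

def set_channel(name, value, frame):
    """Write one named control and keyframe it, whatever backs it."""
    if name.startswith("bone:"):
        bone = rig.pose.bones[name.split(":", 1)[1]]
        bone.rotation_mode = 'XYZ'
        bone.rotation_euler.z = value
        bone.keyframe_insert(data_path="rotation_euler", frame=frame)
    elif name.startswith("shape:"):
        key = face.data.shape_keys.key_blocks[name.split(":", 1)[1]]
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame)

# Any of the capture or puppetry tools could then drive the character with:
set_channel("bone:head", 0.2, frame=1)
set_channel("shape:Smile", 0.8, frame=1)
</pre>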

Motion capture generally works with "BVH" motion data -- so that's another element we need to be able to include.

If we work with an external machinima environment, such as OpenSim, rather than with Blender's built-in game engine, we'll probably also need to write some adapter and listener code to collect BVH animation data and combine it for import into Blender.
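
As a taste of what that adapter code involves: BVH files are plain text, with a skeleton HIERARCHY section followed by a MOTION section of per-frame values, so combining takes that share the same skeleton is mostly a matter of splicing their motion sections together. A rough sketch, assuming well-formed files with identical hierarchies:

<pre>
def read_bvh(path):
    """Split a BVH file into its skeleton header and motion frames."""
    with open(path) as f:
        text = f.read()
    header, motion = text.split("MOTION", 1)
    lines = motion.strip().splitlines()
    # lines[0] is "Frames: N", lines[1] is "Frame Time: t"
    frame_time = lines[1]
    frames = lines[2:]
    return header, frame_time, frames

def combine_bvh(paths, out_path):
    """Concatenate several BVH takes that share the same skeleton."""
    header, frame_time, frames = read_bvh(paths[0])
    for path in paths[1:]:
        _, _, more = read_bvh(path)
        frames.extend(more)
    with open(out_path, "w") as f:
        f.write(header)
        f.write("MOTION\n")
        f.write("Frames: %d\n" % len(frames))
        f.write(frame_time + "\n")
        f.write("\n".join(frames) + "\n")

# e.g. combine_bvh(["take1.bvh", "take2.bvh"], "combined.bvh")
</pre>

Anything fancier (retiming, blending between takes, trimming) would build on the same split-and-rewrite structure.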