- Animation Style for Lunatics
- Comparisons
- Animation Methods
Animation Style for Lunatics
We've decided to go with true 3D production rather than "2.5D", but I've retained some of my original thought process about this at the end of this page.
Along the way, I looked at some other (budget-conscious) productions to see how they dealt with these issues and where they land stylistically.
For interest's sake, here are some images of other productions, with my pro/con comments on the style:
I liked The Incredibles a lot. However, the characters are too exaggerated for Lunatics. For one thing, the disproportionate characters would require similar deformations of the props, clothing, and sets, which I don't like, because that would make them that much less realistic. I also just think it goes too far. On the other hand, the characters are very expressive.
This is probably my favorite style of the movies I reviewed for this (which were, admittedly, not that many). I like the way cel-shading has been used to give the 3D characters a kind of warmth that is hard to achieve with conventional 3D rendering.
This show also made me realize how much animation quality affects my perception of the modeling -- in stills like these, I find the Barbie characters are a little bit into the "uncanny valley", but because motion capture was used so effectively, they look much more satisfying when watching the show.
Although it was obviously low budget, The Infinite Quest is a great example of the digital 2.5D technique. Many backgrounds and a few characters are animated in 3D. The main characters are in 2D cels. They do seem to be using some kind of 2D skeletal animation, and there is a lot of reuse of cels, as the character animation is somewhat limited. But it's very engaging, stylish, and warm.
The best I can say about the animation in Dreamland is that the writing is good. The 3D animation is extremely wooden and the characters are in a very stylized form that I don't like that much. They'd be okay in a video game, but for dramatic presentation, they leave a lot to be desired. On the other hand, the sets and texturing are "good enough".
After considering more of the technical obstacles in animation, my opinion of "Dr. Who: Dreamland" has softened a bit.
This is a no-budget amateur animation production using a mostly 2D animation technique. It would really only be acceptable as an animatic, and again, I wouldn't want our production to fall to this level. The drawings are a little "off", with proportions and poses strangely distorted. Some of the lines are sketchy. Not apparent in the stills shown here is the creepiness of the characters' very unnatural movement.
Most people would not call Sky Captain "animation" at all, but it is almost entirely a 3D animated film. Only the characters, costumes, and a few props are physical. The actors worked on a mostly barren blue-screen stage and were then composited into the digital sets. I show this because I wouldn't be totally opposed to approaching Lunatics from this angle. However, I think it would be much more expensive, especially since it would require much more of the actors.
2.5D Pros and Cons
One method, "2.5D", is to render 3D elements into digital cels and then composite the results into a 2D animation. This has several advantages:
- Some complex sets can be eliminated, provided we shoot them from limited angles
- Easier to create warm-feeling characters as 2D art
- Can pick and choose which approach is easier on a case-by-case basis
- Can use rotoscoping techniques for animation

And it has some disadvantages (compared to full "3D"):

- Character compositing is harder
- Character animation will require a nearly constant outlay of effort -- almost every scene will need to be animated from scratch
- 2D compositing adds an extra layer of complexity to the production process
Full 3D Pros and Cons
This method eliminates 2D compositing entirely (or almost entirely). Instead, everything (characters, sets, props, clothes, etc.) is a 3D model with animated 3D rigging. This has a number of advantages:
- Easy integration of all of the various elements in Blender
- Easier to re-use character movement (can move the camera, but leave movement the same)
- Early investment in character models and rigging will eliminate many costs as the show progresses (we can increasingly rely on already developed rigs)
- Can use motion capture, digital puppetry, hand-over, face-capture techniques for rapid animation
And it has some disadvantages:
- Everything has to be modeled, even single-use/single-shot props and sets
- Characters have a tendency to look "wooden" unless the modeling and animation are done very well
- It's hard to back away from this if we change our minds (hard to mesh 2D or 2.5D material with 3D -- mind you, it's hard, but it's not impossible).
We might be able to "cheat" some shots, even in a "full 3D" production. When the camera angles are very locked-off and the animated objects are relatively flat background elements, it might be possible to use "cels" modeled in the 3D environment to represent props which would otherwise be prohibitively difficult to model. I have considered doing some of the scenes in the pilot episode this way, simply to reduce overheads on props that won't be reused later in the series.
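The key constraint for such "cheats" is that the flat "cel" has to keep facing the camera. This is the classic billboarding problem, and the math behind it is simple. Here's a minimal sketch in plain Python (no Blender dependency; the function name and Z-up convention are my own assumptions, not anything from an actual pipeline):

```python
import math

def billboard_yaw(obj_pos, cam_pos):
    """Yaw (radians) that turns a flat 'cel' plane so its front
    face points back toward the camera, in a Z-up world."""
    dx = cam_pos[0] - obj_pos[0]
    dy = cam_pos[1] - obj_pos[1]
    return math.atan2(dy, dx)

# A cel at the origin, camera looking down the +X axis:
yaw = billboard_yaw((0.0, 0.0, 0.0), (10.0, 0.0, 2.0))
print(math.degrees(yaw))  # 0.0 -- the cel needs no turn at all
```

With a locked-off camera this angle only has to be set once per shot, which is exactly why the trick is cheap.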
With cel-shading and other techniques, is it possible to give 3D characters a similar level of "warmth" as we can get with 2D characters?
I did some calculations early on and concluded that even with the "bargain-basement" rates that Blender Foundation paid animators on "Sintel", the costs for 30-45 minute "Lunatics" episodes would probably be prohibitive for our production (something like $1.5 to $2 million per episode!).
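The back-of-envelope arithmetic looks roughly like this. The per-minute rate below is a hypothetical placeholder chosen only to illustrate the scale, not an actual figure from the "Sintel" budget:

```python
# Hypothetical cost per finished minute of traditional
# frame-by-frame animation -- an illustrative number, not a quote.
COST_PER_MINUTE = 50_000  # USD

def episode_cost(minutes, rate=COST_PER_MINUTE):
    """Total animation cost for one episode of the given length."""
    return minutes * rate

for minutes in (30, 45):
    print(f"{minutes} min episode: ${episode_cost(minutes):,}")
# 30 min -> $1,500,000; 45 min -> $2,250,000 -- roughly the
# "$1.5 to $2 million" ballpark mentioned above.
```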
However, this traditional frame-by-frame approach to animation is not the only way it can be done. There are a number of methods which greatly reduce animation time -- even to the point of being real time.
I recently published an article in Free Software Magazine, "Three Real-Time Animation Methods: Machinima, Digital Puppetry, and Motion Capture", that explored solutions:
Machinima, broadly, is the practice of using engines designed for video games to animate dramatic presentations. It started as simply screen-captures of scenes played out within games, but has evolved to include engines optimized for machinima production. More narrowly and definitively, I would say that machinima is a technique that draws heavily on the use of simulated physics and artificial intelligence techniques to have characters move on their own, with relatively little guidance from the animator. In this way, even high-end systems like "Massive" (used by Peter Jackson for the big battle scenes in the "Lord of the Rings" movies) should be considered examples of "machinima". In Blender, there are some techniques that use physics and AI simulations to generate animation, as illustrated by this video:
This particular approach is probably the way to go for exterior mechanical shots, like driving a rover or flying a spacecraft. It might also be used to convert simple movement commands into actual walking patterns for characters' legs (although patching together motion capture material is another way to do that).
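The core machinima idea — the animator issues a goal and simulated behavior fills in the frames — can be sketched in a few lines. This is not how Blender's game engine or "Massive" actually work internally; it's just a toy "seek" steering behavior to make the division of labor concrete:

```python
def seek_step(pos, target, speed):
    """One frame of simple 'seek' steering: the character moves
    itself toward the target; the animator only picks the goal."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:           # close enough: snap to the goal
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

# The "animator" gives one command; the simulation generates frames.
pos, goal = (0.0, 0.0), (10.0, 0.0)
frames = [pos]
while pos != goal:
    pos = seek_step(pos, goal, speed=2.0)
    frames.append(pos)
print(len(frames) - 1)  # 5 frames of movement from a single command
```

A rover-driving or spacecraft-flying shot is the same pattern with physics added: a handful of high-level commands, and the simulation does the per-frame work.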
Another possibility would be to use some existing machinima engine, possibly the open source Open Simulator package, derived from Second Life, which is already used by a modest-sized machinima community.
Sometimes, though, you want finer control, which is easier to achieve without AI or simulation interfering. Instead, you directly control the character using game-control surfaces. This example video is based on an early version of Pyppet. An updated version is being worked on now, and funding the development and adaptation of Pyppet may well be part of the production for "Lunatics" (worth the cost, considering how much it could save relative to frame-based animation). The author, "HartsAntler", has expressed some interest in this already:
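Pyppet's actual control scheme is its own; as a generic illustration of what digital puppetry means in practice — live control-surface input mapped directly onto rig parameters — here is a hypothetical sketch (the axis range, angle limit, and deadzone are all invented for the example):

```python
def stick_to_bone_angle(axis_value, max_angle=45.0, deadzone=0.1):
    """Map one gamepad stick axis (-1.0 .. 1.0) to a bone rotation
    in degrees, with a small deadzone so the puppet holds still
    when the operator lets go of the stick."""
    if abs(axis_value) < deadzone:
        return 0.0
    return axis_value * max_angle

# Operator pushes the stick halfway: the head bone turns 22.5 degrees.
print(stick_to_bone_angle(0.5))   # 22.5
print(stick_to_bone_angle(0.05))  # 0.0 (inside the deadzone)
```

The point is that the mapping is learned by the operator, like an instrument — which is both the strength and the limitation of puppetry compared to motion capture.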
Unlike digital puppetry, which requires the operator to learn the rig and possibly control the character in artificial, learned ways, motion capture attempts to capture an actor's performance directly and allow it to be mapped onto a CG character. This is a fairly hard problem, but there are a couple of possible solutions.
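One part of that mapping problem — matching the capture skeleton's joints to the character rig's bones — can be sketched very schematically. Real retargeting also has to handle rest-pose offsets, bone roll, and scale differences; the joint and bone names below are hypothetical, loosely styled after common BVH and Blender naming:

```python
# Hypothetical name-based retargeting table: capture joint -> rig bone.
CAPTURE_TO_RIG = {
    "LeftUpLeg": "thigh.L",
    "RightUpLeg": "thigh.R",
    "Spine": "spine_01",
}

def retarget(capture_frame):
    """Map one frame of capture data {joint: rotation} onto the
    rig as {bone: rotation}; joints the rig lacks are dropped."""
    return {
        CAPTURE_TO_RIG[joint]: rot
        for joint, rot in capture_frame.items()
        if joint in CAPTURE_TO_RIG
    }

frame = {"LeftUpLeg": (0.0, 0.0, 12.5), "Head": (1.0, 0.0, 0.0)}
print(retarget(frame))  # {'thigh.L': (0.0, 0.0, 12.5)}
```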
Here's a Blender example using some motion capture data available from online archives:
This example of facial capture in Blender is from "Monet", which is a non-realtime system created by Mark Kane using Python scripting in Blender. Mark has also expressed some interest in updating and improving his code:
Honestly, I don't know which techniques will work best in Lunatics, whether a mixture will be better than sticking to one technique, or how exactly it will "feel" on screen. I do think these techniques will affect the visual style of the show, and I think I'm okay with that. Not only will they save money, but they can also lend a degree of spontaneity to the production that can be difficult to get with traditional animation. Another advantage is that these techniques may be easier to learn, making animation a less esoteric skill.