Real-time Motion Retargeting to Highly Varied User-Created Morphologies.

Hecker, C., Raabe, B., Enslow, R. W., DeWeese, J., Maynard, J., and van Prooijen, K.
in: Proceedings of ACM SIGGRAPH ’08, 2008

Abstract

Character animation in video games, whether manually key-framed or motion captured, has traditionally relied on codifying skeletons early in a game’s development, and creating animations rigidly tied to these fixed skeleton morphologies. This paper introduces a novel system for animating characters whose morphologies are unknown at the time the animation is created. Our authoring tool allows animators to describe motion using familiar posing and key-framing methods. The system records the data in a morphology-independent form, preserving both the animation’s structural relationships and its stylistic information. At runtime, the generalized data are applied to specific characters to yield pose goals that are supplied to a robust and efficient inverse kinematics solver. This system allows us to animate characters with highly varying skeleton morphologies that did not exist when the animation was authored, and, indeed, may be radically different than anything the original animator envisioned.

Review

The paper explains how motion retargeting to wildly varying creatures is done in Electronic Arts’ game Spore. The crucial point is that they devised an animation system (Spasm) in which animators do not work on a fixed body, but on meta-level descriptions of body parts, which are previewed on a small set of example bodies in the program window. Animators first select body parts by choosing descriptors like “grasper in front”. Then they can define movements in different modes, e.g. relative to the rest position, relative to an external target, or scaled by limb length. The authors say in the discussion: “However, it takes weeks to build up an intuition about which kinds of motions generalize across a wide range of characters and which don’t.”
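To make that concrete, here is how I imagine such a morphology-independent animation channel might be stored. This is purely my own sketch, not the paper’s data format, and all names (PartQuery, MotionMode, Channel) are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class MotionMode(Enum):
    REST_RELATIVE = auto()     # offsets from the limb's rest pose
    TARGET_RELATIVE = auto()   # goals relative to an external target
    LENGTH_RELATIVE = auto()   # distances scaled by the limb's own length

@dataclass
class PartQuery:
    capability: str   # e.g. "grasper"
    location: str     # e.g. "front"

@dataclass
class Keyframe:
    time: float
    value: tuple      # pose goal, interpreted according to the channel's mode

@dataclass
class Channel:
    target: PartQuery       # which body parts this motion applies to
    mode: MotionMode        # how the keyframe values are interpreted
    keys: list[Keyframe]

# At specialization time the same channel would be bound to every matching
# part of a concrete creature, however many "graspers in front" it has.
```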

In principle they have devised an animation system in which motions are described in task space instead of joint space (the standard in animation tools such as Maya). Well, it’s some kind of hybrid. In general, everything in the paper is rather ad hoc, since the main objective is to make it run in real time inside the game. Anyway, the paper is not really addressing the classical motion retargeting problem, where you observe motion on one body and try to reproduce it on another. Rather, they are concerned with representing motion from the outset in such a way that it transfers easily to a wide range of very different bodies.
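To spell out the joint-space vs. task-space contrast with a toy example (again my own illustration, nothing from the paper; the helper function is hypothetical):

```python
import numpy as np

# Joint-space key: angles for one specific two-joint arm; meaningless for a
# creature with a different number of joints or segment lengths.
joint_space_key = {"shoulder": 0.4, "elbow": 1.1}   # radians

# Task-space key: an end-effector goal expressed as a fraction of the limb's
# reach in the character's frame, so any limb can attempt it via IK.
task_space_key = np.array([0.8, 0.3, 0.0])

def to_world_goal(task_key, limb_root, limb_length):
    """Specialize a length-relative task-space key into a world-space IK
    target for one concrete limb (hypothetical helper)."""
    return limb_root + task_key * limb_length
```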

A large part of the paper is about playing back the motion stored in this way on a particular body (“specialization”). For this they use their own ad hoc IK solver. I didn’t find any interesting principles here (they sort out the position of the spine first and only then solve for constraint satisfaction of the limbs), but I also didn’t put much effort into understanding what’s going on.
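For orientation, here is a minimal sketch of that two-pass structure, spine first, then limbs. I use a generic CCD solver as a stand-in, so this is emphatically not the paper’s actual solver, and all names are my own:

```python
import numpy as np

def rotate_about(points, origin, axis, angle):
    """Rotate an (m, 3) array of points about `axis` through `origin`
    (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    p = points - origin
    c, s = np.cos(angle), np.sin(angle)
    return (p * c + np.cross(axis, p) * s
            + axis * (p @ axis)[:, None] * (1 - c)) + origin

def ccd_solve(chain, target, iters=10):
    """Plain cyclic-coordinate-descent IK for one limb: rotate each joint so
    the end effector swings toward `target`. A generic stand-in solver."""
    chain = chain.astype(float).copy()
    for _ in range(iters):
        for i in range(len(chain) - 2, -1, -1):
            to_end, to_tgt = chain[-1] - chain[i], target - chain[i]
            axis = np.cross(to_end, to_tgt)
            if np.linalg.norm(axis) < 1e-8:
                continue
            cosang = np.dot(to_end, to_tgt) / (
                np.linalg.norm(to_end) * np.linalg.norm(to_tgt))
            angle = np.arccos(np.clip(cosang, -1.0, 1.0))
            chain[i + 1:] = rotate_about(chain[i + 1:], chain[i], axis, angle)
    return chain

def specialize_pose(spine, limbs, spine_goal, limb_goals):
    """Two-pass scheme as described above: place the spine first (here just a
    translation), then solve each limb against its goal with the spine frozen."""
    offset = spine_goal - spine[0]
    spine = spine + offset
    solved = [ccd_solve(chain + offset, goal)
              for chain, goal in zip(limbs, limb_goals)]
    return spine, solved
```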