


Eurographics / ACM SIGGRAPH Symposium on Computer Animation (2006)
M.-P. Cani, J. O'Brien (Editors)

Composition of Complex Optimal Multi-Character Motions

C. Karen Liu 1,2   Aaron Hertzmann 3   Zoran Popović 1

1 University of Washington   2 University of Southern California   3 University of Toronto

Abstract
This paper presents a physics-based method for creating complex multi-character motions from short single-character sequences. We represent multi-character motion synthesis as a spacetime optimization problem where constraints represent the desired character interactions. We extend standard spacetime optimization with a novel timewarp parameterization in order to jointly optimize the motion and the interaction constraints. In addition, we present an optimization algorithm based on block coordinate descent and continuations that can be used to solve the large problems that multiple characters usually generate. This framework allows us to synthesize multi-character motion drastically different from the input motion. Consequently, a small set of input motions is sufficient to express a wide variety of multi-character motions.

1. Introduction

Multi-character interaction is critical to perceptual immersion in training simulations, tele-communication, and video games. One possible avenue to realistic multi-character motion is the reuse and editing of example motion sequences, using both physics-based and data-driven approaches. Despite the recent advent of automatic techniques for motion synthesis, most existing methods can only generate the motion of a single character performing a single task.

Physics-based motion editing requires very little input data to modify basic dynamic characteristics of a motion, such as speed or orientation. However, these methods are limited to low-level modifications of motion parameters and are unable to change the high-level content of the motion. Furthermore, physics-based methods rely on intensive computation that scales poorly to large optimization problems. In contrast, data-driven approaches take advantage of large motion databases by composing segments of example motions. This approach can create long sequences that achieve coarse-grained goals, but only if appropriate motion segments exist in the database. Clearly, data-driven methods cannot record all possible interactions among multiple characters in advance.

In this paper, we describe techniques for adapting a small collection of motions to create complex new motions with multiple characters. For example, beginning with one character walking and another character running, we can create a new animation in which one character attempts a tackle while the other tries to dodge it (Figure 1). The user can determine the scenario of the motion by specifying whether the dodge is successful, and, if so, by how much. These new motions are defined as solutions to a physics-based spacetime optimization problem.

When directly applied to multi-character motion, the standard spacetime constraints framework fails on two counts. First, existing spacetime algorithms require the user to specify detailed environment constraints for a motion, such as the specific locations and timing of all footsteps. The realism of the output motion relies largely on the user's experience in setting these environment constraints. When multiple characters interact with each other, specifying these constraints manually can be exceedingly difficult. Second, the spacetime optimization framework is prohibitively expensive for large problems and increasingly prone to local minima. In this paper, we introduce methods that automatically optimize the timing and placement of the constraints, avoiding suboptimal output motion due to unrealistic environment constraints specified by the user. This framework augments spacetime optimization with a time-warping parameterization, in which both the timing of the motion and the constraints are optimized in synchrony. To address the second problem, we extend the spacetime windows approach with block coordinate descent and continuation strategies to solve for each character separately even when the characters' motion is mutually constrained (e.g. walking hand-in-hand).

© The Eurographics Association 2006.


C. K. Liu & A. Hertzmann & Z. Popovic / Optimal Multi-Character Motions

Figure 1: Top: The input motion contains a walk and a run performed by two characters. Bottom: In the output motion, the blue character attempts to tackle while the yellow character tries to dodge the tackle.

Combining these techniques significantly expands the flexibility of physics-based animation and motion editing into the domain of complex multi-character motion. We demonstrate our algorithms on both collaborative and adversarial interactions that prior spacetime approaches have not been able to synthesize successfully. Furthermore, we believe that our techniques are general enough to be applied to all previous spacetime formulations in the literature, thus significantly enhancing their applicability.

2. Related work

Recent research in character animation has focused on two main themes: physics-based and data-driven motion synthesis. We build our work on a physics-based framework while exploiting a small data set of basic motions from the real world.

In data-driven approaches, motions are created by editing and combining example motions. One simple approach is to perform direct filtering of joint angles [BW95, WP95] and interpolation of example motions [WH97, RCB98]. As in these methods, we adopt the motion time-warping technique [WP95], and extend it to time-warp both motion and constraints. More recent approaches create motions by splicing together a collection of smaller motion segments [AF02, KGP02, LCR∗02, LWS02, PB02] or poses similar to example poses [BH00, GMHP04, LWS02]. Since all poses come from real human data, the rich nuances of the motion are easily preserved. However, only poses that exist in the motion database (or very similar ones) can be generated, thus requiring potentially vast databases of example motions; handling multi-character interactions would make the requirements even more stringent.

The spacetime constraints framework casts motion synthesis as a variational constrained optimization problem [WK88, LGC94, PSE∗00]. The algorithm minimizes a specific physical measure of energy, such as joint torques or power consumption, while satisfying dynamic or user-specified constraints. The optimal-energy movement and intuitive user control are appealing for motion synthesis in low-dimensional spaces. However, when applied to complex human motion, the optimization quickly becomes intractable and highly prone to local minima. Much research has focused on optimizing simplified models, including simplified characters [PW99], momentum constraints [LP02], aggregate force models [FP03], and data-driven parameterization [SHP04]. In each case, the results of the optimization are strongly dependent upon choosing an appropriate parameterization.

Physics-based methods for editing motions combine the realism of example motions with dynamically plausible modifications [Gle97, PR01, RGBC96, ZH99], but are limited to relatively small, user-specified changes to constraint positions or timing. Gleicher [Gle98] described a method for footprint variation when no dynamic constraints are present. Liu and Cohen [LC95] presented a technique that allows the keyframe timing to vary, thereby changing the parameterization of the motion, but holds all other constraints (e.g. footprint constraints) fixed. We introduce a novel time-warping parameterization to jointly optimize the motion and the constraints in concert. Our algorithms extend the spacetime framework to optimize not only the joint angles but also the constraints, to achieve optimal motion for complex multi-character interactions.

Despite great advances in spacetime optimization, it remains applicable only to relatively short animation sequences with a single character. Our motion-composition framework exploits the strategies of block coordinate descent and continuations [Bet01]. Cohen [Coh92] introduced a spacetime method similar to a block coordinate descent algorithm. This method breaks a large problem into a set of subproblems, optimized sequentially over windows of time. van de Panne [vL95] applied the continuation strategy to the learning of balanced locomotion by using gradually reduced guiding forces to assist the learning process. Combining the strategies of block coordinate descent and continuations, our optimization framework is able to scale to the large problems that multi-character motions usually generate.

Multi-character animation has been demonstrated using data-driven methods [AF02], controller-based methods [Rey87], or a combination of both [ZMCF05]. However, data-driven methods based on kinematic models require sufficient example motions to describe the variability of multi-character motion. Our approach is also data-driven, but uses the data to learn a dynamic model that can generalize to new dynamic configurations not present in the training data. Robot controller design for multi-character motions remains a difficult process in spite of some recent advances towards automatic controller synthesis [HP97, FvdPT01]. Moreover, these methods usually lack the intuitive control provided by the spacetime framework. The goal of our work is to show that complex multi-character dynamics can also be synthesized with spacetime optimization.

Our work is also related to motion planning, which searches for a motion sequence that satisfies complex kinematic constraints [KKKL94, YKH04]. While these methods satisfy complex constraints, they do not model the dynamic properties of motion, resulting in less realistic motions. Kuffner et al. [KKK∗02] describe a method for planning dynamically stable robot motions, but require user-specified keyframe constraints and fixed positions for either or both feet. Our method combines physics-based animation with a form of motion planning, allowing for minimally specified constraints with time-varying dynamic configurations, although our system cannot solve fully general planning problems.

3. Overview

We represent multi-character motion synthesis as a spacetime optimization problem where constraints represent the desired character interactions. Our system is composed of two major components: motion optimization and motion composition. The first component describes an extended spacetime optimization framework that solves for both motion and constraints. One distinctive feature of our framework is a novel constraint representation that allows the timing and the spatial coefficients of a constraint (e.g. the desired location of the foot for a footprint constraint) to vary during the course of the optimization.

The flexibility to vary a constraint is crucial to the synthesis of interactive motion between multiple characters. It does not, however, alleviate the poor convergence and scaling issues raised by large optimization problems. In the second part of this paper, we describe a generalized framework that synthesizes complex motion as an optimal composition of smaller optimal sequences, each of which is the solution to a subproblem. The user can then arrange the subproblems into different schedules depending on the scenarios of the interaction among multiple characters.

The input to our system comprises input motion clips, initial environment constraints (e.g. footprints of the input motion clips), and the user-specified constraints that indicate the

Figure 2: Left: A joint DOF function plotted in the warped time domain. B-spline coefficients h_q determine the joint angle values through the warped time t̃. Right: The time transformation, defined as a monotonic, non-linear function g from the warped time t̃ to the actual time t. B-spline coefficients h_g determine the shape of the time transformation.

Figure 3: Left: Time transformation from an initial identity function (orange) to an arbitrary non-linear function (green). The gray bar indicates a constraint that occurs at t̃ = 5.0. Middle: A joint DOF function plotted in the warped time domain. The time transformation does not change the timing of the constraint in the warped time domain. Right: The same joint DOF shown in the actual time domain, where the actual timing of the constraint changes to t = 3.4. Note that the length of the motion is also shortened by the time transformation.

desired interaction. Our system automatically optimizes the motion subject to the user-specified constraints, and modifies the timing and the spatial coefficients of the initial environment constraints accordingly.

4. Optimal constraints

Standard spacetime optimization usually formulates a constraint as a function of the motion variables q, with a time instance t and spatial constant p as function coefficients. For example, a foot position constraint C_p(q; p, t) would enforce the position of the foot at the desired location p at frame t. This framework lets the user determine the timing and spatial coefficients of each constraint, providing an intuitive form of control. However, manually specifying these constraints becomes difficult, if not impossible, when interactions between characters take place.

One obvious approach to this problem is to directly optimize the timing and the spatial coefficients of each constraint. However, this approach leads to a highly discontinuous search space. An optimization over both motion variables and timing would almost always prefer to modify the motion variables, leaving the timing of the constraints unchanged. For example, consider changing the timing of a running sequence so that a foot constraint occurs slightly earlier. For this modification to be feasible while reducing the energy, the entire dynamics preceding and following the modified constraint needs to be updated in a global manner. Consequently, there is a huge energy barrier between the original state and a new state that satisfies all constraints with lower energy. This large, discontinuous gap in the search space is impossible for any derivative-based optimization to cross, effectively preventing the optimization from ever changing the timing of a constraint.

To address this problem, we design a new framework by augmenting spacetime optimization with a time-warping parameterization, in which both the timing of the motion and the timing of the constraints are optimized in synchrony. This coupling is crucial to avoiding large energy barriers while improving the objective function without violating a large number of constraints. A time-warping parameterization enables us to simultaneously optimize all motion variables q, the spatial coefficients of the constraints p, and the timing of both the motion and constraints, while maintaining the temporal relation between the motion and constraints.

4.1. Motion representation

We represent a motion as a sequence of body poses through time, using a B-spline parameterization. Without time-warping, the value of the jth joint degree of freedom (DOF) at time t is written q_j(t; h_qj), where h_qj are the coefficients parameterizing the B-spline. In our formulation, we choose to represent the joint DOFs under a warped time parameterization q_j(t̃; h_qj), and define a monotonic, non-linear function representing the time transformation from the warped time t̃ to the actual time, t = g(t̃; h_g), where h_g are the one-dimensional B-spline coefficients describing the non-linear time transformation (Figure 2). We deliberately represent g as an inverse time-warping function for the ability to vary the length of the actual motion sequence during the optimization. To prevent time from being reversed, we constrain h_g to ensure the monotonicity of time:

g(t̃ + 1; h_g) − g(t̃; h_g) > 0    (1)

for all warped time indices t̃.
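The monotonicity constraint (1) is easy to make concrete. The sketch below is not the paper's implementation: a piecewise-linear warp stands in for the 1-D B-spline g, and `make_warp`, `is_monotonic`, and the knot values are hypothetical names chosen for illustration.

```python
import numpy as np

def make_warp(hg):
    """Piecewise-linear stand-in for the inverse time warp t = g(t~; hg).
    hg holds the actual-time values at integer warped-time knots."""
    hg = np.asarray(hg, dtype=float)
    knots = np.arange(len(hg))
    return lambda t_warped: np.interp(t_warped, knots, hg)

def is_monotonic(hg):
    """Constraint (1): g(t~ + 1; hg) - g(t~; hg) > 0 for all warped indices."""
    return bool(np.all(np.diff(np.asarray(hg, dtype=float)) > 0.0))

# A warp that compresses the motion: 5 warped-time units map to 3.4
# actual-time units, as in the paper's Figure 3 example.
g_fast = make_warp([0.0, 0.9, 1.7, 2.4, 3.0, 3.4])
assert is_monotonic([0.0, 0.9, 1.7, 2.4, 3.0, 3.4])
assert abs(g_fast(5.0) - 3.4) < 1e-12
```

Because g maps warped time to actual time, lengthening or shortening the motion only requires moving the h_g values, while everything expressed in warped time stays put.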

To fully determine a motion sequence, we need to specify both the motion variables h_q1, h_q2, ..., h_qn and the time variables h_g. Note that this representation is largely redundant, as h_qj can represent all possible joint variation, provided there are enough control points in the B-spline. However, the coefficients for the time transformation (h_g) allow the optimizer to vary the timing of the motion by changing a minimal set of motion variables, without disrupting the relative relationships among different joint DOFs.

4.2. Constraint representation

The constraint representation in our framework is based on standard spacetime optimization with two modifications. First, the timing of each constraint is specified by a fixed warped time instance t̃_c instead of the actual time. This modification allows us to optimize the actual timing of the constraint (t_c = g(t̃_c; h_g)) by varying the time coefficients h_g. Second, the spatial coefficients of each constraint are solved as free variables in the optimization. We represent a constraint as a function of the motion variables and spatial coefficients, with a warped time instance as a function coefficient: C(h_q, p; t̃_c). We can visualize the timing changes as constraints traveling through time together with the joint DOFs that satisfy them. Figure 3 demonstrates how the time transformation affects the timing of a constraint in the actual time domain, but leaves it unchanged in the warped time domain.

There are cases when constraints associated with actual time instances are useful. For example, the user might want the character's hand to reach a cup exactly two seconds after the motion starts. Our generative constraint representation can easily revert to a standard spacetime constraint by simply associating the constraint with an actual time instance: C(h_q, p; t_c).

4.3. Environment constraints

The ability to change the timing and spatial coefficients is essential to environment constraints in multi-character animation. Although we only demonstrate our constraint representation on positional constraints, the representation can be applied to any type of environment constraint that enforces a spatial relation between the character and the environment at a particular time instance.

Current spacetime optimization algorithms require the desired footprint locations to be fixed prior to the synthesis process. These models are not suited to the synthesis of multi-character motion, as the interactions between characters can change the footprints unexpectedly. Our coupled-warping formulation allows the optimizer to search for the best desired location and timing for a positional constraint C_p:

C_p(h_q, p; t̃_c) = d(h_q; t̃_c) − p = 0    (2)

where d(h_q; t̃_c) evaluates the position of the relevant handle on the character's body at t̃_c, and p is the desired location of the handle. Note that varying h_g does not affect the value of C_p, but changes the timing of C_p in the actual time domain.
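As a concrete illustration of Eq. (2), here is a minimal sketch in which a piecewise-linear planar curve stands in for the B-spline handle trajectory d; `handle_position` and `positional_constraint` are hypothetical names, not the paper's code.

```python
import numpy as np

def handle_position(hq, t_warped):
    """Toy stand-in for d(hq; t~): evaluate a 2-D handle trajectory,
    parameterized by per-knot coefficients hq, at warped time t~
    (piecewise-linear here instead of a B-spline)."""
    hq = np.asarray(hq, dtype=float)          # shape (num_knots, 2)
    knots = np.arange(hq.shape[0])
    return np.array([np.interp(t_warped, knots, hq[:, k]) for k in range(2)])

def positional_constraint(hq, p, tc_warped):
    """Residual of C_p(hq, p; t~_c) = d(hq; t~_c) - p = 0.  Both hq and the
    desired location p are free variables; tc_warped is a fixed warped-time
    coefficient, so re-timing the motion never changes this residual."""
    return handle_position(hq, tc_warped) - np.asarray(p, dtype=float)

hq = [[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]]
p = [1.0, 0.5]
assert np.allclose(positional_constraint(hq, p, 1.0), [0.0, 0.0])
```

Varying the time coefficients h_g changes when this residual is enforced in actual time, but never its value, which is what keeps the coupled search space smooth.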

This new constraint specification provides the flexibility to generate optimal motions that vary significantly from the initial state. For example, a single optimal walking cycle can be transformed into arbitrary walking, turning, pivoting, or other maneuvers.


4.4. Dynamic constraints

To ensure physical realism, we enforce generalized dynamic constraints at each joint. Similar to environment constraints, we represent a dynamic constraint as a function of the motion variables h_q, associated with a warped time instance t̃_c: C_d(h_q; t̃_c). In general, adjusting the timing of environment constraints can easily violate a large number of dynamic constraints distributed over time, immediately requiring very large forces with a high energy cost. Consequently, the optimization would almost never modify the time variables h_g. To avoid this problem, we let the dynamic constraints travel through the actual time domain in synchrony with the environment constraints as the time variables h_g vary.

We use the dynamics formulation described by Liu et al. [LHP05]. Here we provide a brief summary of this formulation. A dynamic constraint enforces the Lagrangian dynamics, including the effect of external and internal forces, at each joint DOF j:

∑_{i ∈ N(j)} ( d/dt ∂T_i/∂q̇_j − ∂T_i/∂q_j ) − Q_gj − Q_cj − Q_sj − Q_tj − Q_mj = 0    (3)

where T_i indicates the kinetic energy of the ith body node, N(j) is the set of body nodes in the subtree of joint DOF q_j, and the first two terms together represent the generalized net force at DOF q_j. Q_mj is the force generated by muscles, Q_gj is the generalized gravitational force, Q_cj is the generalized ground contact force, Q_sj is the generalized elastic force from the character's shoes, and Q_tj represents an aggregate spring force from the passive elements around joint q_j, such as tendons and ligaments. It is crucial to model accurate impedance parameters for the passive elements, as the passive forces Q_tj significantly affect the realism of the resulting motion. The process of computing these values from a motion capture sequence is described in [LHP05]. Since there are no muscles or tendons around the root DOFs, the root motion of the character is completely determined by the external forces Q_gj, Q_cj, and Q_sj [LHP05].

Dynamic constraints introduce two sets of additional DOFs in the optimization: generalized muscle forces Q_m and ground contact forces λ in Cartesian coordinates.
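To make the structure of Eq. (3) concrete, the sketch below evaluates the residual for a single-DOF pendulum, keeping only the gravity and muscle terms (no contact, shoe, or tendon forces); the names and the toy system are illustrative assumptions, not the paper's character model.

```python
import numpy as np

M, L, G = 1.0, 1.0, 9.81   # toy pendulum: mass, length, gravity

def dynamic_residual(q, qdd, Qm):
    """Residual of the Lagrangian constraint (3) for one DOF.
    With T = 0.5*M*L^2*qd^2 we get d/dt dT/dqd = M*L^2*qdd and dT/dq = 0;
    the generalized gravity force is Qg = -M*G*L*sin(q)."""
    Qg = -M * G * L * np.sin(q)
    return M * L**2 * qdd - Qg - Qm

# The muscle force that exactly holds the pendulum still at angle q
# (qdd = 0) must balance gravity, driving the residual to zero:
q = 0.3
Qm_hold = M * G * L * np.sin(q)
assert abs(dynamic_residual(q, 0.0, Qm_hold)) < 1e-12
```

The same bookkeeping extends to the full character: one residual per joint DOF per warped-time sample, with the contact, shoe, and tendon terms added back in.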

4.5. Optimization

We measure effort in terms of muscle force usage by summing the magnitudes of the forces at all joint DOFs j over the warped time domain:

E = ∑_j ∑_t̃ α_j ‖Q_mj(t̃)‖²    (4)

where the weights α represent the relative usage preference for each joint. These weights are also automatically determined from the motion capture data, as described in [LHP05].

In summary, we formulate an optimization for solving a motion sequence x = {h_q, h_g, p, Q_m, λ}, which minimizes the muscle force usage E while satisfying the environment constraints C_p, dynamic constraints C_d, and interaction constraints C_i. Interaction constraints are defined based on specific scenarios; we detail three examples in Section 6.

min_x E(x)   subject to   C_p = 0,  C_d = 0,  C_i = 0    (5)

5. Motion composition

The framework described in the previous section works well for a short, single-character motion, but scales poorly when extended to a large problem with multiple characters. Although researchers have partly addressed these issues with spacetime windows [Coh92] and a wavelet representation of DOFs [LGC94], these methods do not apply to multi-character motion. To solve such problems, we describe a generalization of spacetime windows together with a continuation-based optimization strategy.

We view the synthesis of complex motions as an optimal composition of smaller optimal sequences. The user selects motion clips from a small set of primitive motions and specifies the important constraints to achieve the desired interaction. Our framework solves for an optimal motion that carries out the interaction in a physically realistic manner. The main challenges of solving large multi-character motions lie in slow convergence rates and a large number of local minima, causing the optimization to halt close to the initial state. In our experience, it is virtually impossible to solve such large problems with a single, simultaneous optimization over all variables.

We describe a framework that allows us to both reduce the number of unknowns and avoid local minima. Instead of optimizing all unknowns simultaneously, we use a cyclical sequence of smaller optimizations, which we refer to as a schedule, designed for a specific problem. For example, an adversarial chase scenario determines a different schedule from that of a collaborative scenario where characters work towards a common goal. Our strategies use a combination of block coordinate descent and continuations. We outline these basic strategies in this section, and describe a number of useful schedules and their applications in the next section.

5.1. Block coordinate descent

This optimization strategy is frequently used for large optimization problems [Coh92]. The idea is to optimize with respect to a block of coordinates (or unknowns), while holding the remaining unknowns fixed. The active block of coordinates varies during the optimization. As long as the blocks cycle over all unknowns, this process will converge to a local minimum. In our framework, we use the results from


the previous optimization to construct interaction constraints that need to be satisfied in the current optimization. In a two-character animation, the interaction constraint for character A's optimization depends on the earlier optimization for character B, and vice versa. We can select the blocks by their spatial or temporal relations. For example, we can set the unknowns to be only the DOFs of a specific character, treating the motion of all other characters as constant. Similarly, we can restrict the optimization to a smaller part of the whole animation. Whichever strategy is chosen, we interleave the blocks so that all unknowns are optimized. Coordinate-descent strategies always minimize the same objective function subject to the same constraints. In our framework, we also vary the objective function and constraints through continuation strategies.
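A minimal sketch of this block structure, assuming two 1-D "characters" whose final frames must coincide; SciPy's SLSQP stands in for the paper's solver and all names are hypothetical. Each block optimization treats the other character's previously optimized trajectory as constant.

```python
import numpy as np
from scipy.optimize import minimize

N = 15

def smoothness(x):                 # stand-in for a per-character effort term
    return np.sum(np.diff(x, 2)**2)

def solve_block(start, meet, x0):
    """One block of coordinate descent: optimize a single character's
    trajectory, with the interaction constraint pinning its last frame
    to `meet` (taken from the other character's previous optimization)."""
    cons = [{"type": "eq", "fun": lambda x: x[0] - start},
            {"type": "eq", "fun": lambda x: x[-1] - meet}]
    return minimize(smoothness, x0, method="SLSQP", constraints=cons).x

xa, xb = np.zeros(N), np.full(N, 4.0)     # characters start 4 units apart
for _ in range(3):                        # cycle the blocks
    xa = solve_block(0.0, xb[-1], xa)     # A's block: B held constant
    xb = solve_block(4.0, xa[-1], xb)     # B's block: A held constant
assert abs(xa[-1] - xb[-1]) < 1e-4        # the interaction is satisfied
```

Note that the result is order-biased: character A, solved first, absorbs essentially all of the adjustment. This is exactly the bias that the continuation strategy of Section 5.2 is designed to soften.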

5.2. Continuations

The idea of continuations is to solve a sequence of modified problems that, in the limit, smoothly approach the original problem. For example, if a positional constraint is very difficult to satisfy, we may replace it with a spring of rest length zero, and slowly increase the spring coefficient during the optimization. In the limit, we strictly enforce the original positional constraint.

It is also crucial to apply block coordinate descent in conjunction with continuation to solve tightly coupled multi-character motion, such as a mother and a child walking hand-in-hand. If we were, for example, to first solve for the mother's motion and then for the child's motion, the mother would change her entire motion to accommodate the child's movement; the child would then change its motion minimally, resulting in an undesirable behavior where the mother unrealistically tracks the child's hand movement. By using continuations, such as an increasingly tightening spring constraint between the two hands, we can still use the character-decoupling block optimizations without biasing one subproblem over another due to the ordering. This approach allows us to decouple all DOFs in virtually any grouping we choose. As long as we have a way of smoothly introducing the coupling constraints, the final optimization reverts to the true problem formulation.
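The tightening-spring continuation can be sketched on two 1-D "characters" whose hands should meet; SciPy's SLSQP stands in for the paper's solver, and the names, stiffness schedule, and smoothness objective are illustrative assumptions. The hard hand-in-hand constraint is replaced by a zero-rest-length spring whose stiffness k grows across a schedule of block optimizations.

```python
import numpy as np
from scipy.optimize import minimize

N = 15

def smoothness(x):                 # stand-in for a per-character effort term
    return np.sum(np.diff(x, 2)**2)

def solve_block(start, other_hand, k, x0):
    """One character's block: its first two frames are pinned (start at
    rest), and a zero-rest-length spring of stiffness k pulls its last
    frame toward the other character's hand -- the continuation surrogate
    for the hard interaction constraint."""
    obj = lambda x: smoothness(x) + k * (x[-1] - other_hand)**2
    cons = [{"type": "eq", "fun": lambda x: x[0] - start},
            {"type": "eq", "fun": lambda x: x[1] - start}]
    return minimize(obj, x0, method="SLSQP", constraints=cons).x

xa, xb = np.zeros(N), np.full(N, 4.0)      # hands start 4 units apart
for k in [5e-4, 5e-3, 5e-2, 0.5, 5.0]:     # slowly tighten the spring
    xa = solve_block(0.0, xb[-1], k, xa)
    xb = solve_block(4.0, xa[-1], k, xb)
assert abs(xa[-1] - xb[-1]) < 0.1          # hands (nearly) meet
assert 0.1 < xa[-1] < 3.9                  # and both characters moved
```

Because the spring is weak in the early passes, neither character's block is forced to absorb the whole adjustment; in the limit of large k the original hard constraint is recovered.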

6. Results

In our implementation, we used only three basic input motion clips: a walk cycle, a run cycle, and a child's walk cycle. We intentionally restricted ourselves to this small set of input motions to demonstrate the versatility of the approach. From this very limited input motion data set (about 5 seconds in total), our framework was able to compose complex multi-character motions with interesting interactions in many different scenarios. The length of the clips ranges from 40 to 60 frames at 30 fps. We use SNOPT [GSM96] to solve the spacetime optimization problems on a 2 GHz Pentium 4 machine. Each optimization takes approximately 8 to 20 minutes.

(Figure 4 legend: pose constraint, rubber-band constraint, positional constraint, inequality constraint; panels A, B, C.)

Figure 4: Three different schedules. Left: Time-layered schedule. The gray bars indicate the pose constraints at transition points. Middle: Constrained multi-character schedule. The motions of two characters (red and blue bars) are tightly coupled by a continuation constraint. The influence of the constraint is indicated in grayscale. Right: Decreasing-horizon optimization. Two characters plan their motions according to the previous movement of their opponents.

6.1. Time-layered schedule

This schedule is deployed for the sequential composition of small problems. First, we solve two abutting optimizations A and B, and then we solve an optimization C that straddles the transition from A to B. Each optimization has pose constraints obtained from the adjacent blocks. Repeated optimizations of this schedule converge to the optimal solution over the interval of A and B, thus effectively using smaller-interval optimizations to solve for larger intervals (Figure 4).

Example: Transition synthesis. Since a complex motion is composed of smaller sequences that can be fundamentally different, there can be a large discontinuity between two consecutive sequences. Given two example sequences, our goal is to generate a realistic transition between them. First, we initialize the problem by connecting the two motions: we determine the average pose between the end of the first sequence and the start of the second, and then solve two optimization problems (A and B) so that both motions smoothly lead to this average transition pose. We then remove the transition pose and solve for the overlap (C). The transition pose is only used for initializing the problem and has little effect on the subsequent optimization. The video shows a synthesized transition between walking and running where the character correctly dips down and leans forward before reaching running speed. This example was completed in three optimizations. The total computation time was 38 minutes.
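The A/B/C schedule can be mimicked on a toy problem. In this sketch (all quantities invented), the "motion" is a 1-D discrete trajectory, the "spacetime objective" is the sum of squared first differences with fixed endpoints, and each block solve is the closed-form optimum of its window given pose constraints taken from the adjacent blocks:

```python
# Toy 1-D analogue of the time-layered schedule (illustrative only;
# trajectory length, windows, and energy are invented for this sketch).
# The global optimum of this energy with fixed endpoints is a line.
N = 20
x = [0.0] * (N + 1)
x[N] = 1.0                   # boundary pose constraints
mid = N // 2
x[mid] = 0.8                 # initial guess for the transition pose

def solve_window(x, lo, hi):
    # Exact solve over x[lo..hi] with everything else fixed: the fixed
    # neighbors x[lo-1], x[hi+1] act as pose constraints from the
    # adjacent blocks, so the optimum is linear interpolation.
    a, b = x[lo - 1], x[hi + 1]
    n = hi - lo + 2
    for i in range(lo, hi + 1):
        x[i] = a + (b - a) * (i - lo + 1) / n

for _ in range(20):                    # repeat the A, B, C schedule
    solve_window(x, 1, mid)            # block A
    solve_window(x, mid + 1, N - 1)    # block B
    solve_window(x, mid - 4, mid + 4)  # block C straddles the seam

err = max(abs(x[i] - i / N) for i in range(N + 1))
print(err < 1e-4)  # → True
```

As in the paper's schedule, the transition value at the seam only initializes the problem; repeated A, B, C solves converge geometrically to the global optimum over the whole interval.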

6.2. Constrained multi-character schedule

When the motions of two characters are mutually constrained, we employ a schedule that alternates between optimizing each character's motion. We also use a continuation strategy to represent the constraint connecting the two characters. To find the optimal constraints for both characters, we need to allow each character to slowly adjust its motion according to the behavior of the other.

Example: Hand-in-hand walking. When solving for a walk cycle with two people holding hands, we start from

© The Eurographics Association 2006.



Figure 5: Solving for the mother's motion more frequently than the child's motion would result in a final motion where the mother does all the work, while the child adjusts minimally.

two people walking independently without any body contact. We first optimize the motion of one person, treating the other person's motion as constant. During the optimization, we apply a continuation-like "rubber band" that pulls the hand of one character towards the other's. After every scheduled cycle, we increase the stiffness coefficient of the rubber band by 100, pulling the characters closer together (Figure 4). At convergence, the soft constraint is satisfied. Since we exert a spring force on the characters, the resulting force exchange across the hand constraint is dynamically accurate. By manipulating the order of the schedules, the user can control which character dominates the motion. For example, solving for character A more frequently than character B results in a motion where character A does all the work, while character B adjusts minimally (Figure 5). The example shown in the accompanying video took 8 scheduled cycles before convergence. The total computation time was 85 minutes.
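This alternating schedule with a tightening rubber band can be sketched on a 1-D toy (the preferred hand positions a and b are invented): each character's block solve trades off its own preference against a spring to the other character's current hand position. Unlike the paper's single solve per stiffness level, this crude sketch iterates the block solves to convergence at each stage before tightening:

```python
# Toy "rubber-band" continuation between two hand positions
# (purely illustrative; a and b are made-up preferred positions).
a, b = 0.0, 1.0
xA, xB = a, b                          # start from independent motions
for k in [1.0, 10.0, 100.0, 1000.0]:   # stiffen the band in stages
    for _ in range(5000):              # alternate block solves at this k
        xA = (a + k * xB) / (1.0 + k)  # min (xA-a)^2 + k(xA-xB)^2
        xB = (b + k * xA) / (1.0 + k)  # min (xB-b)^2 + k(xB-xA)^2

print(round(xA, 2), round(xB, 2))  # → 0.5 0.5  (the hands meet midway)
```

Because the spring tightens gradually, neither character absorbs all the adjustment: the symmetric result is not biased by which block is solved first.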

6.3. Decreasing-horizon optimizations

Spacetime optimization is inherently unsuited for creating animations in which characters react to unexpected events, because all constraints are known a priori and the character can always plan well ahead. When two people plan their motions in an adversarial setting, at every instant in time each character adjusts to the latest movement of the opponent. We approximate this process by interleaving the solutions for both opponents while steadily decreasing the reaction time of each character by reducing the optimization interval.

Example: Simulations of optimal adversarial behavior. We synthesized several scenarios involving a tackler and a target. The tackler's goal is to hit the torso of the target at a specific velocity and to position his hands around the torso. The target's goal is to avoid the tackler at the collision time. We represent this as an inequality constraint that prevents the target from being inside the bounding sphere of the tackler. We use one running cycle and one walking cycle from the dataset as the initial motions for the tackler and the target. In each scheduling cycle, we halve the interval of time over which we optimize (Figure 4). First, the tackler finds the optimal motion for tackling the target. Then the target solves for the optimal avoiding motion based on the tackler's final position, but only over the last half

of the animation. The tackler then plans the optimal motion based on the new avoidance strategy, again only during the last half of the animation. This process continues until the time interval is sufficiently small. In the video, we optimize each character for two or three cycles, depending on the scenario. Effectively, we have created a coarse discretization of the optimal planning process. We show two outcomes of the tackle by letting different characters have the last chance to adjust their motion. We also show an animation where the target avoids two tacklers coming from different directions.
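The scheduling logic itself can be sketched as follows (the solver is a stub, and the character names and frame counts are placeholders); each cycle alternates the characters and halves the horizon until the interval is sufficiently small:

```python
# Sketch of the decreasing-horizon schedule (illustrative only).
def optimize(character, start, end):
    # Stand-in for one spacetime solve of `character` over [start, end).
    return (character, start, end)

def decreasing_horizon(total_frames, min_interval=8):
    schedule, horizon, who = [], total_frames, "tackler"
    while horizon > min_interval:
        schedule.append(optimize(who, total_frames - horizon, total_frames))
        who = "target" if who == "tackler" else "tackler"
        horizon //= 2        # each cycle, halve the optimization interval
    return schedule

for who, s, e in decreasing_horizon(64):
    print(who, s, e)
# tackler 0 64
# target 32 64
# tackler 48 64
```

Each character re-plans only over the shrinking tail of the animation, with the opponent's latest motion treated as fixed, approximating reactive planning at progressively finer time scales.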

7. Conclusion

We have described a framework for composing complex optimal motions from optimal motion building blocks. We introduce two simple but powerful extensions to spacetime optimization that make this possible. First, we exploit a time-warped parameterization to optimize the timing and the spatial coefficients of the constraints. Second, we describe how a block coordinate descent approach can be extended to solve tightly coupled multi-character motions through the use of continuations. The combination of these two techniques greatly expands the applicability of spacetime optimization, enabling optimal transitions between different motions as well as collaborative and adversarial optimal motions for multiple characters.

This work presents only a first attempt at solving complex multiple-character animations. We have by no means described a general framework that can deal with any situation, but merely shown that, with thoughtful structuring of the problem, it is possible to solve these large problems using spacetime optimization. An important open problem is to automatically determine an appropriate schedule from high-level user specifications. Our current implementation uses inequality constraints to prevent self-penetration of a character. As the complexity and the number of characters in a scenario increase, a more sophisticated collision detection method becomes necessary. The intensive computation time also limits the applicability of our algorithms. An adaptive optimization framework could potentially yield a more efficient algorithm for motion synthesis of large groups of characters.




References

[AF02] ARIKAN O., FORSYTH D. A.: Synthesizing Constrained Motions from Examples. ACM Trans. on Graphics 21, 3 (July 2002), 483–490.

[Bet01] BETTS J. T.: Practical Methods for Optimal Control Using Nonlinear Programming. SIAM, 2001.

[BH00] BRAND M., HERTZMANN A.: Style machines. Proceedings of SIGGRAPH 2000 (July 2000), 183–192.

[BW95] BRUDERLIN A., WILLIAMS L.: Motion signal processing. In Computer Graphics (SIGGRAPH 95 Proceedings) (Aug. 1995), pp. 97–104.

[Coh92] COHEN M. F.: Interactive spacetime control for animation. In Computer Graphics (SIGGRAPH 92 Proceedings) (July 1992), vol. 26, pp. 293–302.

[FP03] FANG A. C., POLLARD N. S.: Efficient synthesis of physically valid human motion. ACM Trans. on Graphics 22, 3 (July 2003), 417–426.

[FvdPT01] FALOUTSOS P., VAN DE PANNE M., TERZOPOULOS D.: Composable Controllers for Physics-Based Character Animation. In Proceedings of SIGGRAPH 2001 (August 2001).

[Gle97] GLEICHER M.: Motion Editing with Spacetime Constraints. 1997 Symposium on Interactive 3D Graphics (Apr. 1997), 139–148.

[Gle98] GLEICHER M.: Retargeting motion to new characters. In Computer Graphics (SIGGRAPH 98 Proceedings) (July 1998), pp. 33–42.

[GMHP04] GROCHOW K., MARTIN S. L., HERTZMANN A., POPOVIĆ Z.: Style-based Inverse Kinematics. ACM Trans. on Graphics (Aug. 2004), 522–531.

[GSM96] GILL P., SAUNDERS M., MURRAY W.: SNOPT: An SQP algorithm for large-scale constrained optimization. Tech. Rep. NA 96-2, University of California, San Diego, 1996.

[HP97] HODGINS J. K., POLLARD N. S.: Adapting Simulated Behaviors for New Characters. Proc. SIGGRAPH 97 (1997), 153–162.

[KGP02] KOVAR L., GLEICHER M., PIGHIN F.: Motion Graphs. ACM Trans. on Graphics 21, 3 (July 2002), 473–482.

[KKK∗02] KUFFNER JR. J. J., KAGAMI S., NISHIWAKI K., INABA M., INOUE H.: Dynamically-stable Motion Planning for Humanoid Robots. Autonomous Robots 12 (2002), 105–118.

[KKKL94] KOGA Y., KONDO K., KUFFNER J., LATOMBE J.-C.: Planning Motions with Intentions. In Proc. SIGGRAPH 94 (July 1994), pp. 395–408.

[LC95] LIU Z., COHEN M. F.: Keyframe motion optimization by relaxing speed and timing. In Computer Animation and Simulation Eurographics Workshop (1995).

[LCR∗02] LEE J., CHAI J., REITSMA P. S. A., HODGINS J. K., POLLARD N. S.: Interactive Control of Avatars Animated With Human Motion Data. ACM Trans. on Graphics 21, 3 (July 2002), 491–500.

[LGC94] LIU Z., GORTLER S. J., COHEN M. F.: Hierarchical spacetime control. In Computer Graphics (SIGGRAPH 94 Proceedings) (July 1994).

[LHP05] LIU C. K., HERTZMANN A., POPOVIĆ Z.: Learning physics-based motion style with nonlinear inverse optimization. ACM Trans. on Graphics 24, 3 (2005), 1071–1081.

[LP02] LIU C. K., POPOVIĆ Z.: Synthesis of Complex Dynamic Character Motion from Simple Animations. ACM Trans. on Graphics 21, 3 (July 2002), 408–416.

[LWS02] LI Y., WANG T., SHUM H.-Y.: Motion Texture: A Two-Level Statistical Model for Character Motion Synthesis. ACM Trans. on Graphics 21, 3 (July 2002), 465–472.

[PB02] PULLEN K., BREGLER C.: Motion Capture Assisted Animation: Texturing and Synthesis. ACM Trans. on Graphics 21, 3 (July 2002), 501–508.

[PR01] POLLARD N. S., REITSMA P. S. A.: Animation of humanlike characters: Dynamic motion filtering with a physically plausible contact model. In Yale Workshop on Adaptive and Learning Systems (2001).

[PSE∗00] POPOVIĆ J., SEITZ S. M., ERDMANN M., POPOVIĆ Z., WITKIN A. P.: Interactive manipulation of rigid body simulations. Proceedings of SIGGRAPH 2000 (July 2000), 209–218.

[PW99] POPOVIĆ Z., WITKIN A.: Physically Based Motion Transformation. Proceedings of SIGGRAPH 99 (Aug. 1999), 11–20.

[RCB98] ROSE C., COHEN M. F., BODENHEIMER B.: Verbs and Adverbs: Multidimensional Motion Interpolation. IEEE Computer Graphics & Applications 18, 5 (1998), 32–40.

[Rey87] REYNOLDS C. W.: Flocks, Herds, and Schools: A Distributed Behavioral Model. In Proc. SIGGRAPH 87 (July 1987), vol. 21, pp. 25–34.

[RGBC96] ROSE C., GUENTER B., BODENHEIMER B., COHEN M.: Efficient generation of motion transitions using spacetime constraints. In Computer Graphics (SIGGRAPH 96 Proceedings) (1996), pp. 147–154.

[SHP04] SAFONOVA A., HODGINS J. K., POLLARD N. S.: Synthesizing Physically Realistic Human Motion in Low-Dimensional, Behavior-Specific Spaces. ACM Trans. on Graphics (Aug. 2004).

[vL95] VAN DE PANNE M., LAMOURET A.: Guided optimization for balanced locomotion. In Computer Animation and Simulation '95 (Sept. 1995), Eurographics, Springer-Verlag, pp. 165–177.

[WH97] WILEY D. J., HAHN J. K.: Interpolation Synthesis of Articulated Figure Motion. IEEE Computer Graphics & Applications 17, 6 (Nov. 1997), 39–45.

[WK88] WITKIN A., KASS M.: Spacetime constraints. Proc. SIGGRAPH 88 22, 4 (August 1988), 159–168.

[WP95] WITKIN A., POPOVIĆ Z.: Motion Warping. Proc. SIGGRAPH 95 (Aug. 1995), 105–108.

[YKH04] YAMANE K., KUFFNER J. J., HODGINS J. K.: Synthesizing animations of human manipulation tasks. ACM Trans. on Graphics 23, 3 (Aug. 2004), 532–539.

[ZH99] ZORDAN V. B., HODGINS J. K.: Tracking and modifying upper-body human motion data with dynamic simulation. In Proc. CAS '99 (September 1999).

[ZMCF05] ZORDAN V. B., MAJKOWSKA A., CHIU B., FAST M.: Dynamic response for motion capture animation. ACM Trans. on Graphics 24, 3 (July 2005), 697–701.
