TRANSCRIPT
Animation by Example
Michael Gleicher and the UW Graphics Group, University of Wisconsin-Madison
www.cs.wisc.edu/~gleicher
www.cs.wisc.edu/graphics
Animation by Example: The Summary
We can create new animations in response to situations by adapting existing animations and piecing them together as needed.
To do this we will:
Extend our work on adapting motions
Present ways of putting motions together
Adapt example-based synthesis to be appropriate for interactive applications
Virtual Experiences
For training, entertainment, design, ...
Experience a scenario, not just a space.
People are a part of it: just one part, but an interesting one.
Examples: emergency rescue training and rehearsal, crowds populating a space.
Virtual Experience Applications
Training: emergency rescue, crowd control
Design: evaluating populated spaces
Entertainment / Commerce: games, shopping malls, chat
All need people!
The Demands of Virtual Experiences
Fast, responsive streams of motion
Realistic, attractive, detailed
Directable, yet autonomous
The Problems
What do characters do? How do they choose their actions?
How do they do it? How do we generate their movements?
What do they look like? How do we draw them?
Cut to the Chase... An Example
How do you make a character sneak around?
Start with some captured motion of a person sneaking around.
Synthesize a new motion of the character "sneaking" somewhere else.
What did you just see?
A small amount of example motion: examples of what I want, in both actions and quality.
The character did something different: a new path.
The character did it the same way: the motion preserves "style" and "quality."
How to Make a Character "Sneak"?
What is sneaking? It is hard to define mathematically.
Abstract qualities matter: style, mood, realism, ...
Details matter: feet not sliding on the floor, subtle gestures.
Computer Animation 101: How do we get motion?
Create it manually (keyframing): the common method used for film; VERY talent- and labor-intensive.
Synthesize it by procedural methods: physical simulation or ad-hoc methods; you can't get exactly what you want.
Capture it from a performer: motion capture, or "animation from observation," uses markers and special cameras; tracking plus math.
Motion Capture: Optical Tracking
All examples I will show are from optical motion capture.
Mocap Pipeline (Animation from Observation)
An increasingly stable technology, but it's more than just pointing cameras at somebody!
Getting the observations is just one part of the process.
We also need to build usable representations of motion.
The pipeline: plan, shoot, process, apply.
Why Edit Motion?
What you get is not what you want!
You get observations of a performance: a specific performer, a real human, doing whatever they did, with the noise and "realism" of real sensors.
You want animation: a character, doing something, and maybe doing something else...
The General Challenge
You get a specific motion (from capture, keyframing, ...): a specific character, action, mood, ...
You want something else, but you need to preserve the original.
And we don't know what to preserve: we can't characterize motion well enough.*
*This is a working assumption of my research. I'd love to be proven wrong.
Three Problems
1) Where does X live in the data? Where X ∈ {style, personality, emotion, ...}: the things to keep or add.
2) Small artifacts can destroy realism. The eye is sensitive to certain details; it's amazing what you can't get away with. (Kovar, Schreiner and Gleicher, SCA '02)
3) How to specify what you want.
How do we handle these problems?
We don't know which details are important, so we must preserve ALL details.
We need to understand artifacts better.
We need to represent what is important.
An Approach: Constraint-Based Motion Editing
Identify specific details in motions that must be preserved: constraints such as footplants.
Make conservative changes to motions, things that generally don't cause problems: add low frequencies, blend with similar motions.
Re-establish the constraints (solve), avoiding the creation of new artifacts.
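The "solve" step can be as small as re-planting a foot after an edit has moved the root. As a toy illustration only (not the solver from any of the papers cited in this talk), here is analytic two-link IK for a planar leg; the function name and conventions are my own:

```python
import math

def two_link_ik(l1, l2, tx, ty):
    # Analytic two-link IK (e.g. thigh and shin in the sagittal plane).
    # Returns (hip_angle, knee_bend) placing the foot at target (tx, ty).
    # A hypothetical stand-in for the constraint solvers discussed here.
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-9)  # clamp unreachable targets
    # Law of cosines gives the interior angle at the knee.
    interior = math.acos((l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2))
    # Law of sines lifts the hip above the straight line to the target.
    hip = math.atan2(ty, tx) + math.asin(l2 * math.sin(interior) / d)
    return hip, math.pi - interior  # knee bend of 0 means a straight leg
```

Re-solving a footplant this way after a conservative edit is what keeps the constraint satisfied without touching the rest of the motion.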
Band-Limited Adaptation
High frequencies are important: the eye is sensitive to them, and they signify important events.
So avoid high-frequency changes: preserve the existing high frequencies and avoid adding new ones.
Band-limit the changes, not the resulting motions.
Band-limited adaptation also means we can't look at individual frames; we need to look across space and time.
Popping can be worse than skating.
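Band-limiting the change (not the motion) can be sketched as adding a smooth, low-frequency bump on top of the data; the raised-cosine falloff and function name below are my illustration, not the exact formulation from the papers:

```python
import numpy as np

def apply_band_limited_edit(motion, peak, delta, width):
    # motion: (frames, dofs) array. Add `delta` at frame `peak`, fading
    # smoothly (raised cosine, C1-continuous) to zero `width` frames away.
    # Only low frequencies are added; the motion's own high-frequency
    # detail passes through untouched underneath the bump.
    t = np.arange(len(motion))
    d = np.clip(np.abs(t - peak) / width, 0.0, 1.0)
    w = 0.5 * (1.0 + np.cos(np.pi * d))
    return motion + w[:, None] * np.asarray(delta, dtype=float)
```

Because the bump is smooth, it introduces no pops of its own, which is exactly the point of limiting the change rather than filtering the result.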
Constraint Solutions for Editing
Spacetime (a single large non-linear optimization): Gleicher '97, Gleicher '98, Popovic and Witkin '99
Hierarchical splines: Lee and Shin '99
IK + filter: Gleicher '00
Importance-based: Shin, Lee, Gleicher and Shin '01
IK + blending: Kovar, Schreiner and Gleicher '02
How to use this: editing methods just need to get close while avoiding nasty artifacts (high frequencies).
Footskate cleanup fixes many important details, which makes editing methods easier to devise and implement.
Motion Editing
Interactive methods: many are easy to implement.
Cleanup solvers: footskate, physics (Shin et al. 2003).
These alter clips, but retain their length.
Getting Beyond Clips
We want to generate a wider range of motion, and we want more control.
Applications need streams of motion, dynamically generated.
Applications need "long clips."
Idea: Put Clips Together
New motions from pieces of old ones!
Good news: this keeps the qualities of the original (with care), and can create long and novel "streams" (keep putting clips together).
Challenges: how to connect clips, and how to decide which clips to connect.
Connecting Clips: Transition Generation
Transitions between motions can be hard.
A simple method works sometimes: blend between aligned motions, then clean up the footskate artifacts.
We just need to know when "sometimes" is.
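The simple blend can be sketched as an eased crossfade between the end of one aligned clip and the start of the next; the smoothstep weighting here is a common choice, assumed for illustration:

```python
import numpy as np

def blend_transition(tail_a, head_b):
    # tail_a: last k frames of clip A; head_b: first k frames of clip B,
    # already rigidly aligned. Smoothstep weights are C1 at both ends,
    # so the transition starts and ends without a velocity pop.
    t = np.linspace(0.0, 1.0, len(tail_a))
    w = t * t * (3.0 - 2.0 * t)
    return (1.0 - w)[:, None] * tail_a + w[:, None] * head_b
```

This only works when the two clips are already similar; footskate cleanup then handles the remaining constraint violations.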
What is Similar? Factor out invariances and measure.
1) Take the initial frames
2) Extract windows
3) Convert to point clouds
4) Align the point clouds and sum squared distances
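The align-and-sum step can be sketched as follows, using unit weights for simplicity. The allowed invariances for a motion are rotation about the vertical axis and translation in the ground plane, and the optimal planar rotation has a standard closed form (Procrustes); details beyond that are my illustration:

```python
import numpy as np

def cloud_distance(a, b):
    # a, b: (n, 3) point clouds (x, y-up, z) from two motion windows.
    # Factor out the invariances a motion allows -- rotation about the
    # vertical axis, translation in the ground plane -- then sum squared
    # distances between corresponding points.
    pa = a[:, [0, 2]] - a[:, [0, 2]].mean(axis=0)   # center in ground plane
    pb = b[:, [0, 2]] - b[:, [0, 2]].mean(axis=0)
    # Closed-form optimal planar rotation of cloud b onto cloud a.
    theta = np.arctan2(np.sum(pa[:, 1] * pb[:, 0] - pa[:, 0] * pb[:, 1]),
                       np.sum(pa * pb))
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Height differences are NOT factored out: no vertical translation.
    return np.sum((pa - pb @ R.T) ** 2) + np.sum((a[:, 1] - b[:, 1]) ** 2)
```

Two windows of the same pose seen from different headings and positions score near zero; genuinely different poses do not.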
An Aside: Other Invariances (Kovar & Gleicher '03)
Different timing
Different curvature
Different constraints
An easy point to miss: motions are made similar.
"Undo" the differences from the invariances when assembling.
Rigidly transform the motions to connect them.
Motion Graphs (Kovar, Gleicher, Pighin '02)
Start with a database of motions, each with type and constraint information.
Goal: add transitions at opportune points.
Other motion graph-like projects exist elsewhere; they differ in details, and in attention to detail.
Building a Motion Graph
Find matching states in the motions.
Motion Graphs
Quality: restrict transitions.
Control: build walks that meet constraints.
Idea: automatically add transitions within a motion database.
Edge = clip; node = choice point; walk = motion.
Automatic Graph Construction
Find many matches (opportunistically).
Good: automatic. Good: lots of choices.
Using a Motion Graph
Any walk on the graph is a valid motion.
Generate walks to meet goals: random walks (screen savers), or search to meet constraints.
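Automatic construction can be sketched as: compute a frame-to-frame distance matrix between clips under some frame-similarity metric, then keep locally best matches below a quality threshold as transition edges. The thresholding and 8-neighborhood test below are an assumed simplification:

```python
import numpy as np

def transition_points(dist, threshold):
    # dist[i, j]: distance between frame i of one clip and frame j of
    # another. Candidate transitions are entries that are local minima
    # of the matrix and fall below the quality threshold -- locally best
    # matches, found opportunistically.
    edges = []
    n, m = dist.shape
    for i in range(n):
        for j in range(m):
            if dist[i, j] >= threshold:
                continue
            window = dist[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if dist[i, j] <= window.min():
                edges.append((i, j))
    return edges
```

Each edge found this way becomes a place where a transition clip can be generated between the two motions.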
The initial example: building a motion graph.
The initial example: using a motion graph.
Given a path, find the motion that minimizes distance to it: a combinatorial optimization.
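The real systems search the graph (and look ahead); as a deliberately simplified toy, here is a greedy walk where each choice point picks the outgoing clip whose endpoint lands closest to the next path target. The graph encoding is hypothetical:

```python
def follow_path(graph, start, targets):
    # graph[node]: list of (next_node, (x, y)) pairs -- each outgoing
    # clip and where it leaves the character on the ground plane.
    # Greedily pick the clip whose endpoint is closest to each successive
    # path target. (Real systems search; greedy choices can get stuck.)
    node, walk = start, [start]
    for tx, ty in targets:
        node, _ = min(graph[node],
                      key=lambda e: (e[1][0] - tx) ** 2 + (e[1][1] - ty) ** 2)
        walk.append(node)
    return walk
```

The gap between this greedy sketch and a proper search is exactly why unstructured graphs need lookahead, as discussed below.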
Synthesis by Example: Graph-Based Approaches
Kovar et al., Lee et al., Arikan and Forsyth.
Good: high-quality results; flexible.
Bad: offline (need to search); limited precomputation.
We'll fix this by trading some quality for performance.
Motion Graph Problems
Graphs are built opportunistically, leading to unstructured graphs: we don't know what's there, have no control over connectivity, have no way to know what is close, and don't know what the character can or can't do.
Lots of transitions: we can't check them all, so we must be conservative.
Why is this OK? We search the graphs for motions and look ahead to make sure we don't get stuck; we clean up motions as they are generated; we plan "around" missing transitions; and optimization gets as close as possible.
Not OK for Interactive Apps!
Snap-Together Motion
Three basic ideas:
Find "hub nodes"
Pre-process motions so they "snap" together
Build small, contrived graphs
Snapable Motions
What if motions matched exactly? Match both state and derivatives; match reasonably at a larger scale.
Make the motions match exactly: add in displacement maps (bumps we add to motions) that modify the motions to a common pose, with the changes computed at author time.
Average or Common Pose
C(1) Displacement Maps
Add displacements to make the motions match in both value and derivative.
The displacement maps themselves are smoother than the motions they modify.
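One way to sketch such a displacement (my construction, not the system's exact recipe): a per-DOF cubic Hermite bump that applies the needed value and velocity correction at the cut frame, and eases to zero with zero slope at the edges of a window:

```python
import numpy as np

def snap_displacement(n_frames, cut, window, dvalue, dvel):
    # A displacement map for one degree of freedom: change the value by
    # `dvalue` and the velocity by `dvel` at frame `cut`, easing to zero
    # (in value AND slope) `window` frames away on each side. The result
    # matches in both value and derivative at the cut, and is C1-smooth.
    def hermite(s):
        return (2 * s**3 - 3 * s**2 + 1,   # h00: value at s=0
                s**3 - 2 * s**2 + s,       # h10: tangent at s=0
                -2 * s**3 + 3 * s**2,      # h01: value at s=1
                s**3 - s**2)               # h11: tangent at s=1
    disp = np.zeros(n_frames)
    m = dvel * window  # tangent rescaled into Hermite-parameter units
    for t in range(max(cut - window, 0), min(cut + window + 1, n_frames)):
        if t <= cut:
            s = (t - (cut - window)) / window
            _, _, h01, h11 = hermite(s)
            disp[t] = h01 * dvalue + h11 * m   # ramps 0 -> dvalue
        else:
            s = (t - cut) / window
            h00, h10, _, _ = hermite(s)
            disp[t] = h00 * dvalue + h10 * m   # eases dvalue -> 0
    return disp
```

Because the bump itself carries no high frequencies, adding it preserves the clip's detail while making the cut point snap.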
(Figure: Motion 1 and Motion 2 cut together.)
Some Features of Cuts
We don't need the original motions, just the displacements, so there is no database growth.
Multi-way transitions: find a common point for several motions.
Hub Nodes
Multi-way cuts yield hub nodes, and hub nodes allow nice graphs.
Contrived Graph Structure?
Search: look ahead to get where you need to go.
React: always lots of choices, with something close to what you need.
Game developers use these structures.
Semi-Automatic Graph Construction
Pick a set of match frames: the user selects candidates, and the system picks the "best" one.
Modify the motions to build a hub node.
Check the graph and the transitions.
Common Poses for Multi-Way Hubs
All motions displace to one pose, and that pose must be consistent with all the motions.
Average "common" pose: an average of the poses, with constraints solved on the frame.
We must solve the constraints on all motions; inconsistencies are possible.
Building the Motion Graphs
Why should we care?
Precompute everything but the rigid transforms = FAST!
Nodes with high connectivity = RESPONSIVE / CONTROLLABLE.
Add in user interaction: build convenient graph structures and verify the transitions (so we can be less conservative).
Some Results
A corpus of karate motions: pick the two best poses and map the motions to a game controller. 12 minutes, including picking buttons.
Walk, and walk + karate: 20 minutes, including picking buttons.
Nothing for Free
Less variety, less "exactness," lower-quality transitions.
Displacements are not as good as blends; "big" transitions need big changes to get multi-way hubs.
We need user intervention to pick match frames and verify transitions.
In practice, the heuristics work REALLY well, but there are no guarantees.
Are we done yet?
We may not have the exact motions you need: make them using blending techniques.
The discrete nature (choices): parametric "fat" arcs.
Better matching of dissimilar motions: new and better invariances, motion-specific invariances.
And: how do characters decide what to do?
Snap-Together Motion in Action: Crowd Simulation
Snap-Together Motion: animation by example for interactive applications!
User-controlled graph structures, multi-way cut transitions, pre-computed transitions.
Still More To Do
Interactive systems: online generation (no search), low-cost runtimes.
Better synthesis: self-awareness (what can you do?), parameterized motions, better goal specifications.
Multiple interacting characters: crowds, characters, the bigger picture.
The Vision... Visual Media for Everyone!
Visual media: pictures / models, animations / interactive environments.
Everyone: not just artists, though it should be easier for artists too.
The Computer Science
Tools for the creation and delivery of visual media.
What representations to use: build them from "real-world" data, and use them for synthesis.
Some current projects…
Current Projects
Animation (virtual experiences): motion synthesis, character geometry, crowds and behavior authoring.
Vascular visualization.
Virtual videography.
Virtual Videography
We want to record classroom lectures, and more (start with something easy).
It needs to be done well: static-camera video is unwatchable, while good video holds interest and guides attention.
We can't afford a professional crew: good videography is hard, and a crew is intrusive.
So: get a computer to simulate the crew?
Virtual Videography
Cheap and unobtrusive capture: static cameras in the back of the room.
The pipeline: capture, analyze, decide, synthesize.
Virtual Videography: Media Analysis
Build a representation of what happens: computer vision, plus regions and an attention model.
Virtual Videography: Computational Cinematography
Decide what should be shown: simulate the director and editor.
Combinatorial optimization chooses the best shot sequence.
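A sketch of how such an optimization might look, as a Viterbi-style dynamic program over hypothetical per-frame shot scores (the scoring and cost model here are assumptions, not the system's actual objective):

```python
import numpy as np

def best_shot_sequence(quality, switch_cost):
    # quality[t, s]: how good shot s is at time step t (e.g. from an
    # attention model). Every cut pays `switch_cost`, discouraging
    # jittery editing. DP finds the globally best shot-per-step sequence.
    T, S = quality.shape
    score = quality[0].astype(float).copy()   # best score ending in shot s
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cut = score - switch_cost             # score after cutting away
        src = int(np.argmax(cut))             # best shot to cut from
        stay = score >= cut[src]              # does staying beat cutting in?
        back[t] = np.where(stay, np.arange(S), src)
        score = np.where(stay, score, cut[src]) + quality[t]
    seq = [int(np.argmax(score))]             # backtrack the best sequence
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]
```

Holding a shot is free while each cut costs something, so the optimum cuts only when a new shot's quality justifies it.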
Virtual Videography: Synthesis
Simulate other cameras, plan good camera movements, and create visual effects.
Vascular Visualization (w/ Garet Lahvis, UW Dept. of Surgery)
How does the vascular system develop, particularly in the brain?
How do we help a biologist study this? How do we work with the visual complexity?
The biologist is not an artist, not a film maker, not even a computer scientist.
Vascular Visualization: What if you could see every capillary?
This is ONE of HUNDREDS of slices of this brain!
From the Lahvis Lab, UW Dept. of Surgery. Image processing by the UW CS Graphics Group.
Vascular Visualization: How do you look at this?
Goal-driven: what do biologists need to see?
We need to manage immense complexity: artistic presentation (abstraction) and analytical tools.
Build representations of the 3D structure that leverage efficient display mechanisms and afford easy analysis, unlike traditional "volume" visualization.
Thanks!
To the UW graphics gang.
Animation research at UW is sponsored by the National Science Foundation, Microsoft, and the Wisconsin University and Industrial Relations program.
House of Moves, IBM, Alias/Wavefront, Discreet, Pixar, and Intel have given us stuff.
Thanks to House of Moves, Ohio State ACCAD, and Demian Gordon for data, and to all our friends in the business who have given us data and inspiration.