
REDEFINING VFX MOTION CAPTURE ENTERTAINMENT

Built from the ground up, Shōgun takes advantage of Vicon's 35 years' experience in motion capture and the improved technology available in Vicon Vantage and Vicon Vero cameras.

WHAT'S INSIDE

Introduction

Who uses Shōgun?

Key market use cases

Digital humans for all

What can you do with Shōgun?

What's new in 1.4?

Today's visual effects productions need to be achieved in real time and deliver the highest quality skeletal data in the shortest time possible. Shōgun Live and Post are designed to help studios of any size optimize capture and processing for maximum quality results.

    "We consider Vicon the gold standard for production. It is unbelievably powerful"

- Dan Pack, Founder, Silver Spoon Animation

WHO USES SHŌGUN?

SERVICE PROVIDERS

AudioMotion

Beyond Capture

House of Moves

Imaginarium

Neoscape

MOOV

Silver Spoon

The Capture Lab

GAMES COMPANIES

Activision

Bandai Namco

Digital Domain

EA

Epic Games

Ninja Theory

Myrkur Games

Plarium

Square Enix

Ubisoft

Warner Brothers

FILM PRODUCTION COMPANIES AND STUDIOS

Disney

Digic Pictures

Double Negative

Dreamworks

Framestore

ILM

Pixar

UNIVERSITY FILM & GAME DEPARTMENTS

DAVE School

Drexel University

FIA

NYU

Portsmouth University

Queen Mary University London

Savannah College of Art & Design

Staffordshire University

University of Leeds

University of Westminster

USC

KEY MARKET USE CASES

SQUARE ENIX – VISUAL WORKS

Video game titan Square Enix has built its reputation and its fan base by drawing gamers deep into immersive worlds such as those of the famous Final Fantasy series. One of the cornerstones of this worldbuilding is the use of lengthy, increasingly lifelike cutscenes that bring the games' stories and characters to life, and one of the most powerful tools for creating them is Square's Vicon motion capture stage.

    Since the groundbreaking release of Final Fantasy VII in 1997, Square's Visual Works CGI animation studio has been the driving force behind the cutscenes illuminating the publisher's games. The studio has worked on multiple entries in blockbuster franchises such as Final Fantasy.

Branching out, the production house moved into standalone CGI films in 2005 with Final Fantasy VII: Advent Children, and has taken on AAA Western series such as Tomb Raider since Square's acquisition of British publisher Eidos in 2009.

Visual Works has used Vicon technology since 2006 and currently boasts a huge motion capture volume equipped with 100 Vantage cameras, making it one of the largest mocap stages in Japan. In keeping with the epic scale of Square's games, the team often uses the large space to capture casts of 10 or more actors, while at a more granular level it is investigating Shōgun's new high-fidelity finger solver as an option for producing hand animation.

    The director of Visual Works emphasizes the speed that the studio's Vantage cameras and Shōgun software allow his team to work at. Being able to quickly process capture data in-house and output real-time animation in the studio is a powerful advantage, particularly when it comes to creating fantasy animations which might take the form of anything from a realistic medieval knight to a bobble-headed anime character.

    Whether Visual Works is subtly grounding and balancing fantastical characters or creating spectacular set-pieces, everything the studio does comes back to bringing beloved characters to life, such as Cloud in the recent Final Fantasy VII Remake.


FRAMESTORE – FINGER CAPTURE

"Most people talk with their hands a lot," says Richard Graham, CaptureLab Supervisor for VFX house Framestore. "There's a lot of gestural information that we give to each other through our hands when we speak." Despite the importance of hands in communication, finger tracking has long been a white whale in motion capture - one of the barriers stopping VFX artists from crossing the uncanny valley to create a complete 'digital human'.

"With a hand there's just so much occlusion going on at any pose," says Tim Doubleday, Entertainment Product Manager for Vicon. Markers disappear from the cameras' view and get tangled up in processing, and the sheer complexity of hands aggravates the problem further.

The result was messy data, with models often resembling pretzels more closely than hands.

To finally address the problem, Framestore collaborated with Vicon on a 2017 government-funded project through Innovate UK called ProDip. The plan was to come up with a working solution over an 18-month period.

    Vicon’s approach was to build a virtual model of a hand skeleton that Tim’s team trained to understand what a ‘real’ pose looked like. To do this, they had 22 subjects each place 58 markers on their hand and perform a range of key actions, meticulously tracking them as they went along.

    “You basically use that as training data to constrain the hand model. For each subject that came in, we knew where the joints should be placed and then also how their hand deformed during the range of motion,” says Tim.

    The model the team produced then became a reference point that Shōgun can use to interpret a hand movement tracked by a dramatically reduced number of markers.

"Using 10, six or three markers you're saying, 'Okay, 10 are in this position, we think it's going to be one of these poses, and it's most likely to be this one. So let's put the hand in that pose,'" says Richard. "So therefore it can't ever do something that the human hand can't do."
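The mechanics behind that constraint can be pictured with a little linear algebra. The sketch below is a minimal illustration of the idea rather than Vicon's actual solver: it learns a low-dimensional pose space from dense training captures, then fits whatever sparse markers survive occlusion inside that space, so the reconstructed hand can never leave the territory of plausible poses. The PCA formulation, the array shapes and every name here are assumptions for illustration only.

```python
# Illustrative sketch (not Vicon's implementation): constrain a hand solve
# to a pose space learned from dense training captures, in the spirit of
# the trained hand model described above.
import numpy as np

# Training data: P poses x (58 markers * 3 coords). A stand-in random array
# replaces the real captures from the 22 subjects.
rng = np.random.default_rng(0)
training_poses = rng.normal(size=(500, 58 * 3))

# Learn a low-dimensional pose space with PCA: any solved pose is the mean
# plus a combination of a few principal components, so the output can never
# leave the space of poses seen in training.
mean = training_poses.mean(axis=0)
_, _, components = np.linalg.svd(training_poses - mean, full_matrices=False)
basis = components[:10]  # keep 10 pose dimensions

def solve_hand(sparse_markers):
    """Fit the full 58-marker pose from a few observed markers (e.g. 3, 6 or 10).

    Least-squares solve for pose coefficients using only the observed rows,
    then reconstruct the full pose from the constrained basis.
    """
    rows = []
    for idx in sparse_markers:  # each observed marker contributes x, y, z rows
        rows.extend([3 * idx, 3 * idx + 1, 3 * idx + 2])
    observed = np.concatenate(list(sparse_markers.values()))
    A = basis[:, rows].T                 # (observed dims) x (pose dims)
    b = observed - mean[rows]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ basis         # full pose, constrained to be plausible

# Example: solve from three observed markers (marker indices are arbitrary).
pose = solve_hand({0: np.zeros(3), 20: np.ones(3), 40: np.zeros(3)})
print(pose.shape)  # (174,) -> 58 markers x 3 coords
```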

    Framestore has wasted no time in putting the new process into action. The system has been used in realtime, directly out of Shōgun, on everything from characters in blockbuster movies (including Mulan, Captain Marvel, Aeronauts, and Spider-Man: Far From Home) through to producing reference material for animators when animating creatures and animals.

"The takeaway for us is that when we offer fingers to our clients they always say yes, and now it's much less of a headache. It's been a big advantage for the last two years that we've been using it," says Richard.



DREXEL UNIVERSITY

The Animation Capture & Effects Lab (ACE-Lab) at Drexel University's Westphal College of Media Arts & Design looks, on the surface, like a training program for VFX-led entertainment industries. Under the stewardship of Nick Jushchyshyn, Program Director for VR & Immersive Media, Drexel prepares students not only for the visual effects applications that exist right now but also for those coming up five or even 10 years down the line.

    The department’s news page is full of headlines about alumni working on high-profile projects such as Star Wars and Frozen II, but the ACE-Lab takes its students down less well-trodden paths, too. In fact, it’s had a wide-ranging mission from the outset.

    Early adopters

Motion capture is a core part of the department's offering. ACE-Lab was an early adopter of Vicon's Vantage cameras, proud that its entertainment set-up was one of the first Vantage installations in the US. The lab upgraded from its T-Series system when the department moved to a larger 40ft x 40ft stage, complete with a huge green screen cyclorama.

Nick points to the value the system offers. "Price-to-performance was hands-down the best for what we needed. There was nothing at that price point that would deliver that type of performance - five-megapixel cameras running at several hundred frames per second. That was a whole new order of accuracy for us."

    A versatile approach

    Aiming to give students a dynamic mocap skillset, the department has brought in subjects ranging from martial artists to dancers to a troupe of performers in the mold of Cirque du Soleil for capture sessions.

    Collaborations have brought in the engineering college, the education college, the law school and nursing and medical students.

Virtual production is an increasingly crucial component of ACE-Lab's programs, both for entertainment and for other sectors. "Right now we're investigating and deploying virtual production technologies towards remote teaching and learning scenarios, incorporating motion capture into virtual production and leveraging that for teaching," Nick says.

    “We're looking at ways of creating virtual learning experiences that are better than in-person. What can we do in these spaces that isn't even possible when you’re here in person?”

    The diversity of industries that the program feeds into means that Nick and his colleagues are constantly looking to the future. “The trajectory is then transferring these technologies that are being driven by entertainment production into non-entertainment fields,” he says.

    “How would you use virtual production in aerospace design, automotive design? How do you use that in training law enforcement? How can you build up awareness and train people without putting them at risk? How can you create simulations that are genuinely visceral, that feel completely real, but they're completely safe and allow people to experience all sides of a scenario?

    Those are the directions that I see our practitioners moving towards in the years ahead.”

SILVER SPOON ANIMATION

    Silver Spoon was originally conceived by founder Dan Pack as a one-stop shop for visual effects support to other businesses working in the field. Motion capture was initially a small part of the equation, but grew as part of Silver Spoon’s business and evolved into real-time animation.

    "We're being much more involved in the creative end, and taking our technology and years of experience working in this format, and applying that to these new types of opportunities and new types of engagements with viewers," says Pack.

He points to developments in finger tracking as especially important to Silver Spoon's work. "Finger tracking has always been a complex issue. They are small, they cover each other, they are complicated! Vicon has always been leading the pack in pushing mocap development and they were the first to really nail down proper finger tracking."

    "So now, we're capturing unbelievable finger movement, which is such a big deal, especially when you're doing any type of real-time engagement with a client. It adds a depth and realism to characters that body language and facial expression alone can’t offer”, says Pack. Then Shōgun, plugged into Unreal Engine, enables the turnaround speed that Silver Spoon needs to generate animation in real time.

    Real-time animation on a national stage

Planters, VaynerMedia and Silver Spoon teamed up to introduce Planters' Baby Nut to the world during a 4.5-hour animated livestream running on Twitter during and after the 2020 Super Bowl. This was something that hadn't been seen at that scale before - an animated character responding live, in real time, to a worldwide audience following along through Twitter.

Silver Spoon's Vicon motion capture setup allowed actress Erica Citrin, under the direction of Marcus Perry, to play, dance and delight viewers as Baby Nut throughout the entire performance.

The team built a virtual, interactive bedroom for Baby Nut ahead of time, and then created physical props in the studio that were twice their normal size to reflect the fact that Baby Nut is only half the size of the actress. Vicon's ability to track multiple props made the integration between the two seamless.

"We can utilize this technology to tell engaging stories and to create rich interaction between viewers or consumers," says Pack. "And if we can do it in a way, like with any good VFX, that makes less of a spectacle of the technology and allows people to interact with characters in a way that's more seamless, that's what we're all about."

DIGITAL HUMANS FOR ALL

Vicon has built a pipeline designed to be used by studios of any size looking to leverage technology used on the latest games and films. The pipeline works on a small 12-camera Vero system that makes use of Shōgun 1.3's new high-fidelity finger solver and retargeting workflow. You can now stream your digital humans directly into a game engine and record the full performance directly in engine. Utilizing Apple's Face AR plugin, every nuance of the performance is captured and delivered in the game engine at 60fps (a rough data-flow sketch follows the list below).

HIGH FIDELITY OPTICAL FINGERS

RUNS IN UNREAL ENGINE 4.24

CHARACTER RIGGING SUPPLIED BY A NEW DESIGN (www.anewdesign.studio)

REAL-TIME CAPTURE WITHIN SHŌGUN LIVE

REAL-TIME FACIAL CAPTURE USING APPLE ARKIT

FULL RETARGETING PIPELINE USING SHŌGUN POST

AFFORDABLE, ENTRY-LEVEL CAMERA SYSTEM
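To make the moving parts concrete, here is a rough sketch of the per-frame data flow described above; it is not Vicon's or Apple's API. It merges a solved body frame with ARKit-style face blendshape weights into one packet pushed to a game engine listener at a 60fps cadence. The stand-in reader functions, the address, and the UDP-with-JSON transport are all assumptions for illustration.

```python
# Hypothetical sketch of the digital-human data flow: merge a 60 fps body
# stream with ARKit-style face blendshape weights into per-frame packets
# for a game engine. Names and transport are placeholders, not a real API.
import json
import socket
import time

ENGINE_ADDR = ("127.0.0.1", 54321)  # assumed listener inside the game engine
FRAME_RATE = 60.0

def read_body_frame():
    """Stand-in for one solved skeleton frame from Shōgun Live."""
    return {"root": [0.0, 0.9, 0.0], "joints": {"hand_r": [0.3, 1.0, 0.1]}}

def read_face_frame():
    """Stand-in for ARKit blendshape weights from the facial capture app."""
    return {"jawOpen": 0.15, "eyeBlinkLeft": 0.02}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for frame in range(120):  # stream two seconds as a demo
    packet = {"frame": frame,
              "body": read_body_frame(),
              "face": read_face_frame()}
    sock.sendto(json.dumps(packet).encode(), ENGINE_ADDR)
    time.sleep(1.0 / FRAME_RATE)  # hold the 60 fps cadence
```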



WHAT CAN YOU DO WITH SHŌGUN LIVE?

Unbreakable real-time solver that's best in class

Fastest time to capture (including calibration and recording of 3D data direct to disk)

Real-time retargeting direct into game engines without using third-party software

High-fidelity finger solver allowing complex hand gestures like sign language

4K SDI video camera calibration, complete with overlay

WHAT CAN YOU DO WITH SHŌGUN POST?

Automatic gap filling and data assessment, including an innovative gap list feature

Full retargeting pipeline direct onto a character FBX

Interactive solver that runs in real time and gives instantaneous results

Fully scriptable using Python or HSL (see the sketch after this list)

Only mocap provider to support USD export for viewing animation on iOS devices
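As a taste of that scriptability, the sketch below triggers a capture from Python. The module, class and method names follow the vicon_core_api and shogun_live_api packages that ship with Shōgun, but treat the exact signatures as assumptions and check the examples bundled with your version.

```python
# Hedged sketch: triggering a capture from Python. Module, class and method
# names follow the API shipped with Shōgun, but verify the exact signatures
# against the bundled examples before relying on them.
import time

from vicon_core_api import Client
from shogun_live_api import CaptureServices

client = Client("localhost")           # host running Shōgun Live
capture = CaptureServices(client)

capture.set_capture_name("Take_001")   # name the take before recording
capture.start_capture()                # begin recording
time.sleep(5.0)                        # ... the performance happens here ...
capture.stop_capture(0)                # stop and write the take to disk
```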

Pictured: Cloud (Square Enix™), Senua (Ninja Theory™), Kassandra (Ubisoft™), Siren (Epic Games™), Snoke (ILM™), Incredible Hulk (Marvel™).

WHAT'S NEW IN 1.4

IMPROVED CALIBRATED SKELETON

Live Subject Calibration skeleton hands now look more realistic and the shoulder position is a better fit. This makes the whole character more anatomically correct and helps deliver a more believable performance.

NEW TRACKING PANEL

A brand new tracking panel makes it much easier to manage and create subjects and props. This panel combines features from the subject panel and the subject calibration panel, bringing everything together in one simpler interface. Calibration, re-calibration, retargeting and color selection can all be set up there, and each subject, prop and cluster is listed in the tracking panel list.

PYTHON 3 SUPPORT

We have added support for Python 3 in both the Live and Post APIs. This allows users to create their own applications that interact with Shōgun using the latest version of Python, including full support for commands based around capture, system calibration, subject calibration and MCP review.

OCCLUSION FIXING FOR HIGH FIDELITY FINGERS

Improved occlusion fixing in Shōgun now works better when capturing fingers, allowing users to perform motion where many of the markers become hidden while the skeleton underneath still looks believable.

MCP REPROCESSING TO REMOVE DROPPED FRAMES

Reprocess an MCP file that has dropped frames using either the command line or Shōgun Post (see the sketch after this list). This is especially useful for users with large systems looking to capture lots of subjects and props at once. If you open an MCP file with dropped frames in Post, it will quickly reprocess the data before opening the file.

IMPROVED VIDEO PLAYBACK IN MCP REVIEW

We have drastically improved playback of video when reviewing data using MCP review. It's now possible to review four 4K video streams alongside the overlaid motion capture data.

IMPROVED CLUSTER ATTACHMENT

Clusters are now labeled and attached more reliably, which means they don't become unattached and can be placed anywhere on the subject's body.

MULTI-MACHINE CUSTOM IP RANGE

You can now specify the IP range you want to use for multi-machine processing. This opens up its use on existing networks and allows slave machines to help with processing tasks like reconstruction, labeling or solving.
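For studios with many takes, the dropped-frame reprocess lends itself to batching. Below is a hypothetical sketch that loops over a capture folder and invokes Shōgun Post from the command line; the executable path and the flag are placeholders, since the exact invocation is documented per Shōgun version.

```python
# Hypothetical batch sketch: reprocess every MCP file in a folder to remove
# dropped frames by invoking Shōgun Post from the command line. The executable
# path and the flag name below are placeholders - check the Shōgun
# documentation for the exact invocation on your install.
import pathlib
import subprocess

SHOGUN_POST = r"C:\Program Files\Vicon\ShogunPost\ShogunPost.exe"  # placeholder path
CAPTURE_DIR = pathlib.Path(r"D:\Captures\Session01")               # placeholder folder

for mcp_file in sorted(CAPTURE_DIR.glob("*.mcp")):
    print(f"Reprocessing {mcp_file.name} ...")
    # Placeholder flag: stands in for whatever option triggers the
    # dropped-frame reprocess in your Shōgun Post version.
    subprocess.run([SHOGUN_POST, "--reprocess-dropped-frames", str(mcp_file)],
                   check=True)
```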

VICON DENVER
7388 S. Revere Parkway, Suite 901, Centennial, CO 80112, USA
T: +1.303.799.8686 F: +1.303.799.8690

VICON LA
3750 S. Robertson Boulevard, Suite 100, Culver City, CA 90232, USA
T: +1.310.437.4499 F: +1.310.388.3200

VICON OXFORD
6 Oxford Industrial Park, Yarnton, Oxford, OX5 1QU, UK
T: +44.1865.261800

    © Copyright 2020 Vicon Motion Systems Limited. All rights reserved. Vicon® is a registered trademark of Oxford Metrics plc. Other product and company names herein may be the trademarks of their respective owners.

    For more information visit our website or contact us:

www.vicon.com/vfx www.vicon.com/shogun

    [email protected]