World War Z opens with a thrilling VFX sequence that shows a confused Gerry (played by Brad Pitt) and family in the middle
of a traffic jam in Philadelphia. A massive rubbish truck busts through the traffic, wiping out a police officer a split-second after he tells Gerry to stay in his vehicle. Zombie mayhem ensues.
This, and other key sequences in the film, used environments created by VFX outfit Cinesite, which worked on the movie for two years, along with a handful of other VFX houses including MPC, Framestore, ILM and Centroid.
Director Marc Forster wanted the film to feel real, like a documentary rather than a VFX movie, so that the audience would focus on the story, not the effects. Cinesite's environment team decided that to make the opening shot convincing, they would base the CG as much as possible on reference photography.
The opening sequence was actually shot in Glasgow, with elements of Philadelphia mixed into the footage. Cinesite took extensive reference photographs of buildings in and around Philadelphia, which were digitally projected onto CG geometry. The buildings were then placed in layouts in Nuke and fitted into the original Glasgow plates.
Once the plates were complete, the crowds of CG zombies, swarming and attacking people, were added. They had to have realistic hair, cloth and attack movements, as these effects needed to stand up in both distant shots and close-ups. "The main factor for the crowd in World War Z was that they had to work as a cast of thousands in some shots, but also had to hold up as full-screen individuals in others," says CG supervisor Anthony Zwartouw.
"Each character had to have three levels of detail. The more distant crowds use lower-resolution geometry, textures and simplified shaders, whereas those closer use high-res geometry with sculpted displacement, full three-layer subsurface scattering and fully simulated clothing and hair."
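A three-tier setup like the one Zwartouw describes is typically driven by how large a character appears on screen. As a rough illustration (the thresholds and tier names here are invented for the sketch, not Cinesite's actual values), a screen-height-based switch could look like:

```python
def select_lod(screen_height_fraction):
    """Pick an asset tier from a character's projected screen height.

    Illustrative thresholds only: full-frame characters get the hero
    build (high-res geometry, sculpted displacement, three-layer
    subsurface scattering, simulated cloth and hair); mid-ground gets
    a medium build; distant crowds get the lightest one.
    """
    if screen_height_fraction > 0.5:
        return "hero"     # full simulation and shading
    elif screen_height_fraction > 0.1:
        return "mid"      # medium geometry, simplified shading
    return "distant"      # low-res geometry and shaders

tier = select_lod(0.8)  # a full-frame character resolves to the hero build
```

In production the switch would be evaluated per character per shot, so the same zombie asset can be a silhouette in one cut and a full-frame hero in the next.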
Crowds of zombies
For the zombie movement visualisation, motion capture and keyframe animation, Cinesite used Maya and MotionBuilder. During pre-production, the animation team, led by Peter Clayton, created dozens of movement tests for Forster so that he could visualise how the zombies should move.
"These tests included the look of the zombie run, the zombie's 'teeth-first Israeli attack dog' style lunge through the air, jumps and other shot-specific actions," explains Zwartouw.
The production did three mocap sessions, each lasting two days and led by animation supervisor Andy Jones. Also on the shoot from Cinesite were crowd lead Jane Rotolo, lead animator Peter Clayton and Zwartouw himself.
"Hundreds of different motions were captured, from walks and runs to complicated vignettes which featured multiple characters for specific actions in specific shots," says Zwartouw. "The zombie mocap was then augmented with keyframe animation to create the arms-back, stooped-forward run that the director was looking for."
Although mocap was used extensively for crowd scenes, a lot of pure keyframe animation was done for hero digital double shots and for actions that were impossible to achieve using mocap.
The crowd shots and the full-frame digital doubles proved extremely challenging. In the Philadelphia traffic jam scene, for example, there were a few close-ups of fully CG doubles, which needed fully simulated clothing and hair. Six bespoke hero digital doubles were built, which had to hold up full-frame and intercut seamlessly with their live-action counterparts.
Cinesite came up with a novel and effective technique to gather the expressions. "Each hero digi-double had to have a range of expressions which needed to be captured, modelled and rigged," explains Zwartouw.
"For some of the characters we didn't have all the expressions we needed in the reference, so the head of modelling, Royston Willcocks, proposed a method of doing a shoot with several of the willing artists at Cinesite, which enabled us to extract each expression from the neutral pose and map it onto our digi-doubles."
These expressions were then split up into separate muscle groups and given to lead rigger Adam Lucas, who built all the face rigs.
Clothing the undead
The digital doubles also had to be convincing in terms of their CG hair and cloth. Some shots were too close to the camera for the normal crowd approach, so Cinesite had to simulate everything as though each character were a hero.
"We first shot video reference of several types of clothing in motion from three different angles," says Zwartouw. "We put each costume through a work-out, ranging from extreme body positions, like touching your toes and reaching up in the air, to running on a treadmill.
"We then rotomated the work-out and stress-tested all our digital costumes using the HD footage to guide us. From that, we devised base settings for garments."
Lots of time was spent refining the garments in modelling to get the best solve from the sim mesh, and on setting up how to accurately wrap the render mesh to the sim mesh. nCloth was used for most of the cloth sims, but Cinesite developed several tools to facilitate the task inside its pipeline.
"A cloth wedge tool was developed, which enabled artists to select specific parameters and vary their values over a number of versions. This would then be put on the farm overnight. In the morning we would get multiple simulations, each one slightly different from the other," explains Zwartouw.
The tool saved a lot of time and enabled the artists to fine-tune their simulations. It was later developed into an overall simulation wedge tool that would run all simulations, from fluids to hair. "Asset management tools were developed around cloth to be able to import a character from animation into the cloth pipeline already set up and ready to sim," Zwartouw continues.
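The wedge idea, varying selected parameters across versions and submitting one overnight farm job per combination, can be sketched in a few lines of Python. This is a hypothetical illustration, not Cinesite's actual tool, and the parameter names are invented:

```python
import itertools

def build_wedge_jobs(base_settings, wedged_params):
    """Expand chosen parameters into one farm job per combination.

    base_settings: dict of solver settings shared by every variant.
    wedged_params: dict mapping a parameter name to a list of values
    to try, e.g. {"stretchResistance": [10, 20, 40]}.
    Returns (version_name, settings) pairs, one per overnight sim.
    """
    names = sorted(wedged_params)
    jobs = []
    combos = itertools.product(*(wedged_params[n] for n in names))
    for i, combo in enumerate(combos):
        settings = dict(base_settings)          # copy the shared base
        settings.update(zip(names, combo))      # apply this wedge's values
        jobs.append((f"wedge_v{i:03d}", settings))
    return jobs

# Two wedged parameters with two values each yields four variants,
# each slightly different from the others, ready to review in the morning.
jobs = build_wedge_jobs(
    {"substeps": 4},
    {"stretchResistance": [10, 20], "damp": [0.1, 0.5]},
)
```

The pay-off of a wedge is that the artist reviews finished simulations side by side rather than iterating one setting at a time.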
Much like the requirements of the cloth simulation set-up, all the hero and crowd characters had their own individual grooms, sculpted to impressive detail by lead groom TD Tarkan Sarim.
For hair grooming, Cinesite acquired several licences of Peregrine Labs' Yeti while it was still in beta. As the company developed the software, Cinesite offered direct feedback to help it craft the tools.
"We created a similar asset management pipeline as we did for cloth, which enabled us to get characters from animation through hair and into lighting, with a lot of the manual labour taken out," says Zwartouw.
"It was decided to simulate only hair that was longer than a 'bob cut'," Zwartouw adds. "The shorter hair is, the stiffer it becomes, so movement would be very minimal in those cases. We developed the Cinesite curve cache tool, which made handling and visualising hair inside Maya less heavy while keeping a lot of the original curves' attributes."
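The length-based culling Zwartouw describes could be sketched as a simple split of a groom's guide curves. The 12 cm threshold and the data layout here are assumptions for illustration; the article only specifies "bob cut":

```python
BOB_CUT_CM = 12.0  # assumed threshold; short hair is stiff enough to stay static

def curves_to_simulate(groom):
    """Split a groom into simulated and static guide curves.

    groom: iterable of (curve_id, length_cm) pairs. Curves at or
    below the bob-cut threshold barely move, so only the longer
    ones are sent to the hair solver.
    """
    simulate, keep_static = [], []
    for curve_id, length_cm in groom:
        if length_cm > BOB_CUT_CM:
            simulate.append(curve_id)
        else:
            keep_static.append(curve_id)
    return simulate, keep_static

sim, static = curves_to_simulate([("fringe", 6.0), ("ponytail", 30.0)])
```

Skipping the solver for short grooms cuts simulation time across a crowd of thousands without a visible difference on screen.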
Massive was used extensively in World War Z: for characters walking along the street while talking on mobile phones and crossing roads, for hordes of zombies chasing and attacking thousands of civilians, and for populating scenes with military personnel and refugees.
"Because the crowd in WWZ had to work as a horde of thousands but also come as close as half-screen height, it was decided to create a tool that enabled the Massive TDs to export portions of the simulated crowd as animation rigs for import into Maya. There, the animation could be tweaked, the geometry could be upgraded to a higher resolution and, if need be, high-res cloth and hair sims could be run," explains Zwartouw.
Cinesite's in-house shader assignment system enabled the artists to do the look development once for both digital doubles and Massive agents. "This enabled us to precisely match a Massive agent's appearance when it is promoted to an animation rig," says Zwartouw.
They also used the system to easily assign materials to crowd agents based on various agent properties. "This came in handy for visualising demographics, for example, the distribution of humans versus zombies, when blocking out crowd shots."
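A property-driven assignment like the one described, where attributes on each crowd agent decide its material, might look something like the following sketch. The property names, rules and material names are invented for illustration:

```python
def assign_materials(agents, rules, default="default_mat"):
    """Map each crowd agent to a material based on its properties.

    agents: list of dicts of agent properties, e.g. {"type": "zombie"}.
    rules: list of (predicate, material) pairs tried in order; the
    first matching predicate wins, falling back to a default.
    """
    assignments = {}
    for i, agent in enumerate(agents):
        material = default
        for predicate, mat in rules:
            if predicate(agent):
                material = mat
                break
        assignments[i] = material
    return assignments

# Bright per-demographic materials make it easy to read the
# human/zombie distribution when blocking out a crowd shot.
rules = [
    (lambda a: a["type"] == "zombie", "zombie_skin_mat"),
    (lambda a: a["type"] == "human", "civilian_mat"),
]
crowd = [{"type": "zombie"}, {"type": "human"}, {"type": "soldier"}]
mats = assign_materials(crowd, rules)
```

Because the same rules run on both Massive agents and promoted animation rigs, look development only has to happen once.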
Rotomation of live-action performers was used to further enhance the CG zombie characters' hands and faces. For tracking, rotomation and layout, Cinesite used a combination of software: Maya, 3DEqualizer, boujou, PFTrack and Nuke.
"Specific models and rigs were built for each character whose face needed enhancing," explains Zwartouw. "3D head rigs were hand-tracked to capture the larger movements of the character's head and face.
"Witness cameras used during the shoot made the character rotomation tasks more efficient and accurate. Then the 2D department would use motion analysis in Nuke to nail down the finer twitches and integrate the make-up enhancement, which was created by the matte painting department."
Zwartouw remembers having sleepless nights over one particularly challenging shot in Philadelphia, where all the foreground and mid-ground people and zombies are CG. "The very nature of this shot is challenging. We have close-up CG people running, and as humans we are accustomed to spotting any minute detail that is wrong.
"Also, the lighting is overcast, so completely flat. Direct sunlight can be very helpful because it adds contrast, shape and detail, so we had to make sure there was enough detail in the characters to work in that environment." After a lot of experimenting, Cinesite finished the sequence. It's seamless and grounded in reality, delivering exactly what the director wanted.
Words: Kulsoom Middleton
This article originally appeared in 3D World issue 173.