5 technologies revolutionising animation in 2014

CG is moving faster than ever. Here are 5 new technologies coming to a computer near you...

Win a trip to Los Angeles!

This feature is brought to you in association with Masters of CG, a new competition that offers the chance to work with one of 2000AD's most iconic characters and win an all expenses paid trip to the SIGGRAPH conference. Find out more at the end of this article...

The fields of animation and visual effects are constantly being driven forward by new technology, and 2014 is playing host to some major advances.

There are two main threads of work: speed and automation. For the former, the power of the GPU is being applied to all areas of the production pipeline, from previs to animation to final output. For the latter, the more time-consuming aspects of character creation, such as rigging and the animation process itself, are being enhanced, simplified or totally automated.

Here are some of the areas to keep an eye on this year…

01. GPU accelerated rendering

Unbiased renderers are giving animators a serious speed kick

One of the big areas of development is the application of GPUs and CUDA (Compute Unified Device Architecture) acceleration to improve render speeds. The technology has been in use for years in specialised renderers, but is now reaching a level of maturity where off-the-shelf apps like Octane Render are being deployed in feature film production. (And Octane Render 2.0 has just launched, featuring a bunch of impressive new features.)

Not only are these unbiased renderers faster than CPU-bound processes, they also produce physically realistic results in near real-time, providing previews that are as fast as OpenGL but of far greater quality. And the more GPUs you throw at the job, the faster the end result.
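The reason the workload suits graphics hardware is that every pixel sample in an unbiased render is independent of its neighbours, so the GPU can assign one thread per pixel. Here's a minimal CUDA sketch of that mapping; the shade() routine is a hypothetical stand-in for a real path tracer, and none of this reflects Octane's actual code:

```cpp
#include <cuda_runtime.h>

// Stand-in for a real path tracer: in production this would trace a
// full light path; here it just returns a placeholder gradient value.
__device__ float shade(int x, int y, int width, int height)
{
    return (float)(x + y) / (float)(width + height);
}

// One thread per pixel: every sample is independent of its neighbours,
// which is why unbiased rendering parallelises so well on the GPU.
__global__ void renderPass(float* framebuffer, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    framebuffer[y * width + x] += shade(x, y, width, height);
}

int main()
{
    const int width = 1920, height = 1080;
    float* framebuffer = nullptr;
    cudaMalloc(&framebuffer, width * height * sizeof(float));
    cudaMemset(framebuffer, 0, width * height * sizeof(float));

    dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
    renderPass<<<grid, block>>>(framebuffer, width, height);
    cudaDeviceSynchronize();

    cudaFree(framebuffer);
    return 0;
}
```

Thousands of such threads run concurrently, which is where the speed-up over a CPU's handful of cores comes from.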


Octane Render is rapidly being joined by other renderers like V-Ray, Indigo, Gelato, LuxRender, Thea Render, Moskito, and Blender's Cycles. Meanwhile the big-hitting industry standards Arnold and RenderMan are also due to gain GPU acceleration at some point, while becoming much more accessible to non-studios: anyone is free to download and use Pixar's RenderMan for non-commercial purposes.

The beauty of the technique is its scalability: to speed up the process you simply slot in additional GPUs. As a consequence, we'll probably see a lot more multi-GPU PCs and GPU enclosures on the market, enabling users to add extra cards as necessary.
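To sketch why scaling is so straightforward (hypothetical code, not any particular renderer's API), each card can simply be handed its own band of scanlines to shade independently:

```cpp
#include <cuda_runtime.h>
#include <vector>
#include <algorithm>

__global__ void renderRows(float* slice, int width, int rows)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < rows)
        slice[y * width + x] = 0.5f;  // placeholder for real shading work
}

int main()
{
    const int width = 1920, height = 1080;
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) return 0;

    // Hand each GPU an equal band of scanlines: twice the cards,
    // half the rows per card, and roughly half the render time.
    int rowsPer = (height + deviceCount - 1) / deviceCount;
    std::vector<float*> slices(deviceCount);

    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        int rows = std::min(rowsPer, height - d * rowsPer);
        cudaMalloc(&slices[d], width * rows * sizeof(float));
        dim3 block(16, 16), grid((width + 15) / 16, (rows + 15) / 16);
        renderRows<<<grid, block>>>(slices[d], width, rows);  // runs async per device
    }
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();  // wait for this card's band to finish
        cudaFree(slices[d]);      // a real renderer would read the pixels back first
    }
    return 0;
}
```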

02. Realistic on-set previsualisation

Previs was seriously advanced during the making of Avatar

One beneficiary of GPU-accelerated rendering is previs. A key element in the creation of visual effects sequences, pre-visualisation gives the director a way to see a version of the scene with basic effects in place. The concept was advanced during the shooting of James Cameron's Avatar, where a 'virtual camera' allowed a videogame-quality version of the CG characters and environments to be viewed in real-time, enabling Cameron to frame his shots.

The ultimate goal is to refine this to the point where previs footage is near or actual final quality. Kevin Margo of Blur Studio recently demonstrated a combination of motion capture hardware with V-Ray RT for MotionBuilder, providing an almost-real-time view of the CG characters. Although the live image is a little coarse, the system can be paused at any time, at which point it rapidly resolves to show the near-final output, including motion blur and depth of field.

The system is driven using a Boxx workstation, fitted with a Quadro K6000 and two Tesla K40s - around £17,000-worth of gear.
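The pause-and-resolve behaviour comes from progressive refinement: the renderer keeps folding new samples into a running average, so the picture sharpens for as long as the camera holds still. Below is a minimal sketch of that accumulation loop, using a hash-based stand-in for real light-transport samples; it illustrates the principle only, not V-Ray RT's implementation:

```cpp
#include <cuda_runtime.h>

// Cheap integer hash so every pass draws a different pseudo-random sample.
__device__ float hashNoise(int x, int y, int pass)
{
    unsigned int h = 374761393u * (unsigned int)x
                   + 668265263u * (unsigned int)y
                   + 2246822519u * (unsigned int)pass;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFu) / 65535.0f;
}

// Each pass folds one noisy sample per pixel into a running average,
// so the buffer converges towards the final image over time.
__global__ void accumulatePass(float* accum, int width, int height, int pass)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    float sample = hashNoise(x, y, pass);          // stand-in for a traced sample
    accum[i] += (sample - accum[i]) / (pass + 1);  // incremental mean
}

int main()
{
    const int width = 960, height = 540, passes = 256;
    float* accum = nullptr;
    cudaMalloc(&accum, width * height * sizeof(float));
    cudaMemset(accum, 0, width * height * sizeof(float));

    dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);

    // While the camera moves, only the first few (noisy) passes are shown;
    // pause, and the loop keeps running until the image resolves.
    for (int pass = 0; pass < passes; ++pass)
        accumulatePass<<<grid, block>>>(accum, width, height, pass);

    cudaDeviceSynchronize();
    cudaFree(accum);
    return 0;
}
```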

03. Faster character animation

Pixar demoed Presto at Nvidia's GTC conference

It's not just the rendering pipeline that's seeing the benefits of GPU acceleration: the technology is already employed in computationally heavy tasks, from physics and fluid simulations to filters in Adobe Photoshop. The power of the GPU is also being applied to character animation, in tools like Pixar's real-time preview system, Presto.

Developed for the film Brave, this proprietary software lets an animator view their animation in real-time, enabling them to work faster and more effectively.

Driven by a £4,000 Nvidia Quadro K6000 card, the system leverages the power of the GPU to drive an OpenGL 4.0 system capable of displaying fully animatable characters with tens of millions of poseable hairs with PTex textures and real-time shadows.

Maya and Blender

The app also supports real-time character deformation with subdivision surfaces and tessellation, but this technology isn't just for high-end bespoke apps: Maya 2015 already features GPU-accelerated OpenSubdiv, and it's due to be implemented in Blender 2.72, enabling much faster previews of animated characters. Expect the tech to appear across other digital content creation apps in due course.
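Subdivision suits the GPU because every refined point depends only on a small local stencil of neighbours, so each one can be computed by its own thread. The sketch below uses a simplified linear midpoint rule with one thread per edge; full Catmull-Clark, as evaluated by OpenSubdiv, blends in adjacent face points too, but the parallel structure is the same:

```cpp
#include <cuda_runtime.h>

// One thread per edge: each refined point here is simply the midpoint
// of its edge's endpoints. Full Catmull-Clark also blends in adjacent
// face points, but each output still depends only on a local stencil.
__global__ void edgeMidpoints(const float3* verts, const int2* edges,
                              float3* newPoints, int edgeCount)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= edgeCount) return;

    float3 a = verts[edges[e].x];
    float3 b = verts[edges[e].y];
    newPoints[e] = make_float3(0.5f * (a.x + b.x),
                               0.5f * (a.y + b.y),
                               0.5f * (a.z + b.z));
}

int main()
{
    // A single quad: four corners, four boundary edges.
    float3 hVerts[4] = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    int2 hEdges[4] = { {0,1}, {1,2}, {2,3}, {3,0} };

    float3 *dVerts, *dNew; int2 *dEdges;
    cudaMalloc(&dVerts, sizeof(hVerts));
    cudaMalloc(&dEdges, sizeof(hEdges));
    cudaMalloc(&dNew, 4 * sizeof(float3));
    cudaMemcpy(dVerts, hVerts, sizeof(hVerts), cudaMemcpyHostToDevice);
    cudaMemcpy(dEdges, hEdges, sizeof(hEdges), cudaMemcpyHostToDevice);

    edgeMidpoints<<<1, 64>>>(dVerts, dEdges, dNew, 4);
    cudaDeviceSynchronize();
    cudaFree(dVerts); cudaFree(dEdges); cudaFree(dNew);
    return 0;
}
```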

Fabric Software is just about to update Fabric Engine, a development system that enables studios to build their own custom tools. The key element in the new release is the ability to write Fabric code that can run on both the CPU and GPU, without modification and without any prior knowledge of CUDA or OpenCL.

In most instances, technical directors can get an instant 5x to 10x speed-up, just by flicking the switch to use available GPUs. The end result is apps or plug-ins that enable animators to work with faster feedback and iterate more quickly – no more guessing what the end result might look like.
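Fabric's KL is its own language, so the following is only an analogy rather than Fabric's actual mechanism, but CUDA's __host__ __device__ qualifier gives a flavour of the single-source idea: one routine, written once, compiled for both processors:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Written once, callable from both CPU code and GPU kernels.
__host__ __device__ float falloff(float distance, float radius)
{
    float t = distance / radius;
    return (t >= 1.0f) ? 0.0f : (1.0f - t) * (1.0f - t);
}

__global__ void falloffKernel(float* out, int n, float radius)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = falloff((float)i, radius);
}

int main()
{
    // The same routine on the CPU...
    printf("CPU: %f\n", falloff(2.0f, 10.0f));

    // ...and on the GPU, with no changes to the routine itself.
    const int n = 1024;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    falloffKernel<<<(n + 255) / 256, 256>>>(d, n, 1024.0f);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```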

Big enabler

"The reason we're excited about what we've done with GPU compute is that it's now accessible to regular TDs," explains Paul Doyle, CEO and Founder of Fabric Software, "and that there is zero cost experimentation to go with it. GPU compute can be a big enabler once it is just treated as a readily available compute resource.

"I think the next phase is going to be distributed computation across GPU clusters," he adds, "particularly on the cloud. This becomes viable once you separate data and the processing of that data, which is happening more and more in VFX tools.

"Combined with streamed applications this can start looking very interesting indeed – for example, on-demand offloading of simulations that is completely transparent to the artist. If you look at things like GRID from Nvidia, you can see the infrastructure is in place to go crazy with this – again it's just about having a modern toolset that can take advantage of it."

04. Low-cost motion capture

iPiSoft produces a number of markerless systems which work with the Kinect and PlayStation Eye

Once the domain of very high-end studios and bespoke outfits, motion capture continues to tumble in cost, to the point where there are now free, if somewhat basic, open source solutions. Developer Jasper Brekelmans is a key driver of the democratisation of mocap, with a series of affordable Brekel tools that work in conjunction with Microsoft's Kinect or the Leap Motion gesture-recognition hardware.

Russian mocap specialist iPiSoft also produces a number of markerless systems, which work with Kinect, Sony's PlayStation Eye cameras or depth sensors from Asus and Faceshift. For $1,500 (or less) plus a handful of cheap USB cameras, you can set yourself up with a pretty effective mocap studio. The cost of professional systems continues to fall, too, with optical sensors from the likes of OptiTrack down to less than $600 apiece, and an all-in-one plug 'n' play system for just $2,500.

If the DIY approach isn't for you, Mixamo provides a complete online service for the creation, rigging and animation of characters using standard mocap files. It's been designed with indie game development in mind, but is really easy to use, and provides a great way for small studios to add realistic movement to their projects.

05. Rigging reinvented

Mixamo offers an entirely automated online rigging system

The method of rigging a character for animation hasn't substantially changed for decades: you build a character, add a skeleton of bones, apply weight maps, set up IK/FK chains, and so on. But there's now a lot of interest in reinventing the process, simplifying the task and reducing much of the drudgery.
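Under the hood, most of those rigs deform a mesh with linear blend skinning: each posed vertex is the weighted sum of the vertex transformed by each influencing bone, v' = Σ wᵢ·Mᵢ·v. Here's a minimal CUDA sketch of that evaluation, one thread per vertex; it's illustrative rather than any particular package's implementation:

```cpp
#include <cuda_runtime.h>

constexpr int MAX_INFLUENCES = 4;  // typical per-vertex bone limit

// Apply a row-major 4x4 bone matrix to a position (w = 1).
__device__ float3 transformPoint(const float* m, float3 p)
{
    return make_float3(m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
                       m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
                       m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11]);
}

// Linear blend skinning: v' = sum_i w_i * (M_i * v), one thread per vertex.
__global__ void skinVertices(const float3* rest, float3* posed,
                             const float* boneMatrices,  // 16 floats per bone
                             const int* boneIds,         // MAX_INFLUENCES ids per vertex
                             const float* weights,       // MAX_INFLUENCES weights, summing to 1
                             int vertexCount)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= vertexCount) return;

    float3 p = rest[v];
    float3 out = make_float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < MAX_INFLUENCES; ++i) {
        float w = weights[v * MAX_INFLUENCES + i];
        if (w == 0.0f) continue;
        const float* m = &boneMatrices[16 * boneIds[v * MAX_INFLUENCES + i]];
        float3 t = transformPoint(m, p);
        out.x += w * t.x; out.y += w * t.y; out.z += w * t.z;
    }
    posed[v] = out;
}

int main()
{
    // Two vertices, one identity bone: posed output equals the rest pose.
    float3 hRest[2] = { {0, 0, 0}, {1, 2, 3} };
    float hBone[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    int hIds[2 * MAX_INFLUENCES] = { 0 };                 // all influences: bone 0
    float hW[2 * MAX_INFLUENCES] = { 1,0,0,0, 1,0,0,0 };  // full weight on bone 0

    float3 *dRest, *dPosed; float *dBone, *dW; int *dIds;
    cudaMalloc(&dRest, sizeof(hRest));  cudaMalloc(&dPosed, sizeof(hRest));
    cudaMalloc(&dBone, sizeof(hBone)); cudaMalloc(&dW, sizeof(hW));
    cudaMalloc(&dIds, sizeof(hIds));
    cudaMemcpy(dRest, hRest, sizeof(hRest), cudaMemcpyHostToDevice);
    cudaMemcpy(dBone, hBone, sizeof(hBone), cudaMemcpyHostToDevice);
    cudaMemcpy(dIds, hIds, sizeof(hIds), cudaMemcpyHostToDevice);
    cudaMemcpy(dW, hW, sizeof(hW), cudaMemcpyHostToDevice);

    skinVertices<<<1, 64>>>(dRest, dPosed, dBone, dIds, dW, 2);
    cudaDeviceSynchronize();
    cudaFree(dRest); cudaFree(dPosed); cudaFree(dBone); cudaFree(dIds); cudaFree(dW);
    return 0;
}
```

The tools below differ mainly in how those weights get authored: by hand, or automatically.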

We've already mentioned Mixamo, which offers an entirely automated online rigging system, based on research by Stanford University. And it's not just for characters built using its Fuse app, but for any bipedal character you've created (in FBX or OBJ formats). You simply upload it to mixamo.com, and let the system do its thing.

Webcam action

Mixamo also offers Unplugged, an app for generating facial animation using a user's webcam. You simply film your performance and the app creates animation using a mixture of blend shapes and bones. However, it currently only works in conjunction with Unity and MotionBuilder.
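Blend-shape playback itself is simple arithmetic: each frame, the face is the neutral mesh plus a weighted sum of sculpted offsets, with the capture system supplying the weights. A hedged CUDA sketch of that evaluation (illustrative only, not Mixamo's code):

```cpp
#include <cuda_runtime.h>

// v = neutral + sum_k weight_k * delta_k, one thread per vertex.
__global__ void evalBlendShapes(const float3* neutral, const float3* deltas,
                                const float* weights, float3* out,
                                int vertexCount, int shapeCount)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= vertexCount) return;

    float3 p = neutral[v];
    for (int k = 0; k < shapeCount; ++k) {
        float3 d = deltas[k * vertexCount + v];  // shape k's offset for vertex v
        p.x += weights[k] * d.x;
        p.y += weights[k] * d.y;
        p.z += weights[k] * d.z;
    }
    out[v] = p;
}

int main()
{
    // Two vertices, one sculpted shape at 75% strength.
    float3 hNeutral[2] = { {0,0,0}, {1,0,0} };
    float3 hDeltas[2] = { {0,0.2f,0}, {0,0.5f,0} };
    float hWeights[1] = { 0.75f };  // value the capture system would supply per frame

    float3 *dNeutral, *dDeltas, *dOut; float* dWeights;
    cudaMalloc(&dNeutral, sizeof(hNeutral));
    cudaMalloc(&dDeltas, sizeof(hDeltas));
    cudaMalloc(&dOut, sizeof(hNeutral));
    cudaMalloc(&dWeights, sizeof(hWeights));
    cudaMemcpy(dNeutral, hNeutral, sizeof(hNeutral), cudaMemcpyHostToDevice);
    cudaMemcpy(dDeltas, hDeltas, sizeof(hDeltas), cudaMemcpyHostToDevice);
    cudaMemcpy(dWeights, hWeights, sizeof(hWeights), cudaMemcpyHostToDevice);

    evalBlendShapes<<<1, 64>>>(dNeutral, dDeltas, dWeights, dOut, 2, 1);
    cudaDeviceSynchronize();
    cudaFree(dNeutral); cudaFree(dDeltas); cudaFree(dOut); cudaFree(dWeights);
    return 0;
}
```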

Another new technique that's just arrived is Autodesk's geodesic voxel binding system. Introduced in a SIGGRAPH paper last year, the system automatically specifies weight maps based on a mesh and its skeleton, and binds the geometry accordingly. The tool, now featured in Maya 2015, works on meshes with less-than-perfect topology, and lets you add animation or motion capture without having to do too much adjustment after the fact.
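For contrast, the naive baseline that geodesic voxel binding improves upon weights each vertex by straight-line distance to each joint. Because Euclidean distance ignores the mesh interior, weights bleed across nearby but unconnected parts, which is exactly the failure mode the voxel approach avoids. A hypothetical sketch of the naive version:

```cpp
#include <cuda_runtime.h>
#include <math.h>

// Naive inverse-distance binding, one thread per vertex. Geodesic voxel
// binding instead measures distance through a voxelised interior of the
// mesh, so weights cannot leak across air gaps between, say, two fingers.
__global__ void bindWeights(const float3* verts, const float3* joints,
                            float* weights, int vertexCount, int jointCount)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= vertexCount) return;

    float total = 0.0f;
    for (int j = 0; j < jointCount; ++j) {
        float dx = verts[v].x - joints[j].x;
        float dy = verts[v].y - joints[j].y;
        float dz = verts[v].z - joints[j].z;
        float w = 1.0f / (sqrtf(dx * dx + dy * dy + dz * dz) + 1e-4f);
        weights[v * jointCount + j] = w;
        total += w;
    }
    for (int j = 0; j < jointCount; ++j)
        weights[v * jointCount + j] /= total;  // normalise so each vertex sums to 1
}

int main()
{
    float3 hVerts[3] = { {0,0,0}, {0.5f,0,0}, {1,0,0} };  // a tiny 'limb'
    float3 hJoints[2] = { {0,0,0}, {1,0,0} };             // two joints

    float3 *dVerts, *dJoints; float* dWeights;
    cudaMalloc(&dVerts, sizeof(hVerts));
    cudaMalloc(&dJoints, sizeof(hJoints));
    cudaMalloc(&dWeights, 3 * 2 * sizeof(float));
    cudaMemcpy(dVerts, hVerts, sizeof(hVerts), cudaMemcpyHostToDevice);
    cudaMemcpy(dJoints, hJoints, sizeof(hJoints), cudaMemcpyHostToDevice);

    bindWeights<<<1, 64>>>(dVerts, dJoints, dWeights, 3, 2);
    cudaDeviceSynchronize();
    cudaFree(dVerts); cudaFree(dJoints); cudaFree(dWeights);
    return 0;
}
```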

There's also plenty of research being done in the field of synthesising character animation, so that meshes move with a degree of intelligence rather than simply according to mocap files or keyframes. Before long (though maybe not this year), you'll simply be able to describe your character or creature and let the software imbue it with naturalistic movement.

Words: Steve Jarratt

Win a trip to Los Angeles!


Masters of CG is a competition for EU residents that offers the once-in-a-lifetime chance to work with one of 2000AD's most iconic characters: Rogue Trooper.

We invite you to form a team (of up to four participants) and tackle as many of our four categories as you wish - Title Sequence, Main Shots, Film Poster or Idents. For full details of how to enter and to get your Competition Information Pack, head to the Masters of CG website now.

Enter the competition today!