Day two at SIGGRAPH Asia 2013 was mostly spent listening to a good number of speakers showcase new techniques and tools. Here are some of the day's standout topics.
Dynamics for plants
The tools we have for generating plants are becoming increasingly efficient, with new versions giving us more control and realism. That said, new advances are always welcome and my first talk of the day was all about this subject.
Taking the volume of an object, such as a leaf, and applying natural properties gives greater realism than the typical 2D 'billboard' style we see right now. For example, a newer, fleshier leaf will react very differently from an older, less fleshy one.
4D digital plant scans are also on the way. Represented in 2D, a scan that captures size, shape and a temporal map can be applied to a 3D scene, which is an efficient way of working.
Facial performance capture is currently hard to deploy and very expensive - complex methods are needed in a controlled environment.
A new system showcased here, which is yet to be named, makes the process much simpler, and will be cheaper and more efficient for software developers to implement. Even consumer webcams will do, and only a small number of phonemes are needed to create a fully renderable model. Minimal user interaction is needed and no special lighting.
The first step is to create the face model and blend shapes, which, with a wide user base, are easily understood and their additive properties make them versatile. Normally this stage is done manually in a modeling package but this new method does it automatically by capturing the data with the cameras.
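The additive property of blend shapes mentioned above boils down to a weighted sum of per-vertex offsets from the neutral face. Here is a minimal sketch, with made-up deltas and weights rather than anything from the presented system:

```python
# A hedged sketch of additive blend shapes (illustrative data only): each
# shape stores per-vertex offsets from the neutral face, and a pose is a
# weighted sum of those offsets added back onto the neutral mesh.

NEUTRAL = [(0.0, 0.0, 0.0)] * 4      # toy 4-vertex "face mesh"
SMILE   = [(0.0, 0.1, 0.0)] * 4      # per-vertex deltas for a smile shape
JAW     = [(0.0, -0.2, 0.05)] * 4    # per-vertex deltas for an open jaw

def blend(neutral, shapes, weights):
    """neutral + sum(w_i * delta_i), applied per vertex and per axis."""
    out = []
    for vi, vertex in enumerate(neutral):
        out.append(tuple(
            c + sum(w * shape[vi][axis] for shape, w in zip(shapes, weights))
            for axis, c in enumerate(vertex)
        ))
    return out

# Half a smile combined with a slightly open jaw, in one additive pass.
pose = blend(NEUTRAL, [SMILE, JAW], [0.5, 0.25])
print(pose[0])  # (0.0, 0.0, 0.0125)
```

Because the shapes simply add together, any number of expressions can be mixed with independent sliders, which is what makes the representation so versatile.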
Sparse feature tracking uses optical flow, tracking landmarks onto the 3D mesh before adding dense shape refinement. Jitter in landmarks is overcome by a new optical flow scheme that tracks forward and backward from key frames to recognise errors. Key frame based correction is noticeably less jittery and more accurate.
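The forward and backward tracking idea can be sketched in a few lines. This is a toy illustration, not the presented system: `flow` stands in for a real optical-flow estimator, and the `drift` term fakes the accumulated error a real tracker suffers:

```python
# A hedged sketch of forward/backward consistency checking, the idea behind
# key frame based jitter correction: track a landmark forward from a key
# frame, then backward again; if it fails to return near its start, the
# track is unreliable and gets re-seeded from the key frame.

def flow(point, frame_a, frame_b, drift=0.1):
    # Toy displacement between two "frames" (here just numbers), with a
    # constant drift standing in for real tracking error.
    dx = (frame_b - frame_a) * 0.5 + drift
    return (point[0] + dx, point[1])

def forward_backward_error(point, frames):
    p = point
    for a, b in zip(frames, frames[1:]):      # track forward
        p = flow(p, a, b)
    rev = frames[::-1]
    for a, b in zip(rev, rev[1:]):            # track backward again
        p = flow(p, a, b)
    # Round-trip distance: small = trustworthy, large = re-seed.
    return ((p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2) ** 0.5

err = forward_backward_error((10.0, 5.0), frames=[0, 1, 2, 3])
needs_correction = err > 0.5   # threshold is illustrative
print(err, needs_correction)   # ~0.6 True
```

The same round-trip check is a standard way to score track quality in practice; the talk's contribution was applying it per key frame to keep the facial landmarks stable.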
The results I saw were captured using a single camera. Indoor examples were recorded on a Canon 550D with normal indoor lighting; the outside shots were captured with a GoPro Hero. The cost benefits are massive, but more importantly this brings advanced performance capture to even hobbyist-level artists. Using cameras many of us own, with less complex tracking procedures, really opens up possible applications.
Hair simulations have been around for a while now, and although they are pretty easy to get started with, the last ten percent of the realism can often take the majority of the time.
Digitizing can be accomplished by modeling in a 3D app or by capturing the geometry using a camera rig; neither is massively complex. Animating, on the other hand, is very complex, but there are new ways to simulate without capturing real hair.
You could use trial and error, but you usually get sag at the start of a sequence. You can add pre-simulation frames, but then you still lose the input geometry, you just don't see it. Accounting for gravity alone results in stiff, unnatural hair, but using super helices and friction controls along a fibre gives realistic results without the previously associated problems.
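To illustrate the sag problem, here is a toy sketch, nothing to do with the super-helix model itself: a single strand point modelled as a damped spring under gravity. Run any pre-simulation and it settles away from the authored shape, which is exactly the loss of input geometry described above. All constants are illustrative:

```python
# A minimal sketch of pre-simulation sag: a styled strand point starts at
# its authored rest position, but once the solver runs, gravity pulls it
# to a different equilibrium (rest - g/k for this spring model).

def settle(steps, rest=0.0, k=50.0, damping=4.0, g=9.8, dt=0.01):
    y, v = rest, 0.0                 # start at the authored (styled) position
    for _ in range(steps):
        a = -k * (y - rest) - damping * v - g   # spring + damping + gravity
        v += a * dt                              # semi-implicit Euler step
        y += v * dt
    return y

styled = settle(0)      # frame 0 with no pre-roll: matches input geometry
sagged = settle(2000)   # after pre-simulation frames: settled below it
print(styled, sagged)   # 0.0 vs roughly -g/k = -0.196
```

The super-helix approach sidesteps this by baking gravity and inter-fibre friction into the model, so the simulated rest state can match the styled input instead of drifting away from it.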
I'm not going to report the actual science behind this (some of it was way over my head), but seeing it in action was impressive, and as these new methods work their way into our apps we will all benefit. The application of contact between fibres that changes along the length does give great results, and the examples shown made various hairstyles behave very naturally, even complex curly hair. The simulation times were also very fast: under 20 seconds for 30,000 control splines.
An extra benefit of the method is for digital doubles. The data from a scan of an actor can be used as input data for the simulation, so when animating the digital double the hair will retain the correct styles and behave accurately.
I'm currently on a bit of a mission to understand performance capture. The last talk today was all about the latest advances, countering some of the problems inherent in current methods.
Essentially, the biggest problems facing current methods are the need for controlled input, such as Lambertian materials, the capturing environment and the apparel of the actors. This is solved via inverse rendering.
On set, using two cameras and accounting for known lighting, multiple actors are trackable, even wearing flowing natural clothing. Pose estimation and shape refinement are generated by comparing scanned models of the actors (up to around 80k vertices) lit via a light probe taken on set, with frames shot by the two cameras. This is a vastly simplified report on the process and I haven't even started to talk about how the system differentiates between foreground, background and characters to track!
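The core comparison can be caricatured as an optimisation: render the scanned model under candidate poses and keep the pose whose render best matches the camera frames. A deliberately tiny sketch, with 1D "rendering" by shifting an intensity profile and a brute-force search instead of a real solver:

```python
# A hedged caricature of inverse rendering for pose estimation (not the
# presented pipeline): synthesise the model under each candidate pose,
# compare against the observed frame, keep the best-matching pose.

def render(profile, shift):
    """Translate the profile right by `shift` samples, zero-padding the left."""
    return [0.0] * shift + profile[:len(profile) - shift]

def photometric_error(candidate, observed):
    # Sum of squared intensity differences between render and frame.
    return sum((a - b) ** 2 for a, b in zip(candidate, observed))

model = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]   # toy "scanned actor" profile
observed = render(model, 2)               # stand-in for a camera frame

# Pose estimation: pick the shift whose render best explains the observation.
best_shift = min(range(4),
                 key=lambda s: photometric_error(render(model, s), observed))
print(best_shift)  # 2
```

The real system does this over full pose and shape parameters, against two camera views, with the scanned mesh lit by the on-set light probe, but the principle is the same: search for the parameters that make the render agree with the footage.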
I think I've learned more about how our software works today than I have in the last decade. If you have any interest in understanding what is actually happening when you use your software then I highly recommend attending this kind of talk. It's amazing quite how much research goes into even small elements, and the high prices of 3D software seem far more reasonable when you see how much effort and passion, as well as skill, is involved.