Nvidia's powerful AI facial animation tool is now available for all

An example of a character facial animation created with Nvidia Audio2Face-3D
(Image credit: Nvidia)

The use of digital characters is growing fast, not only in game development but also in animation for marketing and customer service. Nvidia's Audio2Face is one of the AI advances that have been speeding up the process of creating realistic facial animation synced to audio, and the GPU giant is now making it available to all.

Nvidia is open-sourcing its Audio2Face models and software development kit (SDK) so all game and 3D app developers can use them to build and deploy high-fidelity characters with cutting-edge animation. That includes plugins for some of the best 3D modelling software and the best animation software.

Video: NVIDIA ACE | New Audio-Driven AI Facial Animation Features Coming to NVIDIA Audio2Face (YouTube)

Nvidia Audio2Face speeds up the creation of realistic digital characters by providing AI-driven facial animation and lip-sync in real time. The tool analyzes audio input for acoustic features such as phonemes and intonation, and generates a stream of animation data from them.

That data is then mapped to a character's facial poses. It can be rendered offline for pre-scripted content or streamed in real time for dynamic, AI-driven characters, providing accurate lip-sync and emotional expression.
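The SDK's actual interfaces differ, but as a rough, purely illustrative sketch of that audio-in, animation-data-out flow, the Python below uses invented function names, a toy stand-in for the AI model and an assumed ARKit-style blendshape count; it is not the Audio2Face API.

```python
import numpy as np

# Hypothetical illustration only: NOT the Audio2Face SDK API.
# Sketches the pipeline the article describes: audio features in,
# per-frame facial animation data (blendshape weights) out.

FPS = 30          # animation frame rate
NUM_SHAPES = 52   # assume an ARKit-style blendshape rig

def extract_acoustic_features(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Stand-in for a feature extractor (phoneme/intonation analysis)."""
    samples_per_frame = sample_rate // FPS
    n_frames = len(audio) // samples_per_frame
    frames = audio[: n_frames * samples_per_frame].reshape(n_frames, samples_per_frame)
    # Toy feature: per-frame signal energy.
    return np.abs(frames).mean(axis=1, keepdims=True)

def infer_blendshape_weights(features: np.ndarray) -> np.ndarray:
    """Stand-in for the AI model: maps features to per-frame blendshape weights."""
    rng = np.random.default_rng(0)
    projection = rng.random((features.shape[1], NUM_SHAPES))
    weights = features @ projection
    return np.clip(weights / (weights.max() + 1e-8), 0.0, 1.0)

if __name__ == "__main__":
    sample_rate = 16_000
    audio = np.random.default_rng(1).standard_normal(sample_rate * 2)  # 2 s of fake audio
    features = extract_acoustic_features(audio, sample_rate)
    weights = infer_blendshape_weights(features)  # shape: (frames, NUM_SHAPES)
    print(f"{weights.shape[0]} animation frames, {weights.shape[1]} blendshape channels each")
    # In a real pipeline these weights would be retargeted onto a character rig,
    # either offline for pre-scripted content or streamed frame by frame.
```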

The model is already used widely across gaming, entertainment and customer service. Game developers using it include Codemasters, GSC Game World, NetEase and Perfect World Games, while ISVs include Convai, Inworld AI, Reallusion, Streamlabs, The Farm 51 and UneeQ.

Software developer Reallusion has integrated it into its suite of tools, including iClone, Character Creator and the iClone AI Assistant.

Now open source, the Audio2Face SDK includes libraries and documentation for authoring and running facial animation on-device or in the cloud. There are also plugins for Autodesk Maya and Unreal Engine 5, which let users send audio input and receive facial animation for characters.

Nvidia is also open sourcing the Audio2Face training framework, so anyone can fine-tune and customise pre-existing models for their specific use case.
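For a rough sense of what that involves, here is a generic, heavily simplified fine-tuning loop in PyTorch; the model, the fake dataset and the checkpoint name are all placeholders, and this is not Nvidia's actual training framework.

```python
import torch
from torch import nn

# Generic illustration only: NOT Nvidia's Audio2Face training framework.
# Shows, in the broadest terms, what fine-tuning a pre-existing
# audio-to-animation model on your own data tends to involve.

class TinyAudio2AnimModel(nn.Module):
    """Placeholder for a pre-trained audio-feature -> blendshape-weights model."""
    def __init__(self, feat_dim: int = 64, num_shapes: int = 52):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_shapes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

model = TinyAudio2AnimModel()
# model.load_state_dict(torch.load("pretrained_audio2anim.pt"))  # hypothetical checkpoint

# Fake fine-tuning data: audio features and target blendshape weights for your character.
features = torch.randn(256, 64)
targets = torch.rand(256, 52)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```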

The Nvidia Audio2Face SDK can be downloaded from GitHub, where you'll also find instructions for building it. You can learn more on the Nvidia Developer site. You may need a computer with one of the best graphics cards to run it.


Joe Foley
Freelance journalist and editor

Joe is a regular freelance journalist and editor at Creative Bloq. He writes news, features and buying guides and keeps track of the best equipment and software for creatives, from video editing programs to monitors and accessories. A veteran news writer and photographer, he now works as a project manager at the London and Buenos Aires-based design, production and branding agency Hermana Creatives. There he manages a team of designers, photographers and video editors who specialise in producing visual content and design assets for the hospitality sector. He also dances Argentine tango.
