The AI bubble inflated just a little more this week as Nvidia showcased a new AI that can react to and hold a conversation with humans. It was demoed as a short, visually stunning cyberpunk video game sequence built using Unreal Engine 5. Set inside a sci-fi ramen shop, the demo lets the player actually talk to game characters.
Nvidia CEO Jensen Huang's keynote at this week's Computex 2023 in Taipei was the company's first live show in four years, and its tease of what the future of video games could look like was impressive, to an extent. We've already covered how Nvidia Picasso is powering some of the best new generative AI apps and how Nvidia is pioneering the 3D internet and the metaverse.
In the demo, called Kairos, rather than selecting dialogue choices from a familiar conversation menu, you simply hold down a button on the controller and speak. Yes, you actually just talk to the game's character, and you'll get an answer. The back and forth felt realistic, in a game kind of way where you really only want to pick up a side quest or order some noodles.
"Generative AI has the potential to revolutionise the interactivity players can have with game characters and dramatically increase immersion in games," said John Spitzer, vice president of developer and performance technology at Nvidia at Computex 2023. "Building on our expertise in AI and decades of experience working with game developers, NVIDIA is spearheading the use of generative AI in games."
Technically, I'm impressed, but the dialogue isn't great; it's stilted, formulaic and lacking any emotional context, inference or nuance. If you're an actor or game writer, you shouldn't feel too bad right now – yet. Watch the demo below for yourself and try not to giggle a little at the deadpan, lifeless delivery of the line: "I am worried about the crime around here. It’s gotten bad lately. My ramen shop got caught in the crossfire."
Now, laughable delivery aside, the back and forth between the player and the game character is genuinely impressive: the generative AI reacts to a person's natural speech and responds to what it hears in real time.
The demo was created using a suite of dev tools called Nvidia ACE (Avatar Cloud Engine) for Games, which includes Nvidia's NeMo tools for deploying large language models (LLMs), in partnership with Convai, a company that specialises in conversational AI for virtual worlds and video games.
The tech is eye-catching: Nvidia Riva handles speech-to-text and text-to-speech, Nvidia NeMo controls the conversational AI, and Audio2Face drives the AI facial animation from voice inputs. This is all wrapped up by Convai and made accessible for use in Unreal Engine 5 and MetaHuman.
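To make that pipeline a little more concrete, here's a minimal sketch of how the conversation loop might hang together. Every function name below is a hypothetical placeholder standing in for one of the stages described above; none of them are real Nvidia, Convai or Unreal Engine APIs.

```python
# Hypothetical sketch of an ACE-style conversation loop. Each function is a
# placeholder for a stage described above, not a real Nvidia/Convai API.

def speech_to_text(audio):
    # Stand-in for the speech-recognition stage (Riva's role in the demo).
    return "Hey Jin, how is the ramen shop doing?"

def generate_reply(persona, history):
    # Stand-in for the LLM stage (NeMo's role): write a line of dialogue
    # conditioned on the character's backstory and the conversation so far.
    return "Unfortunately, the crime around here has gotten bad lately."

def text_to_speech(text):
    # Stand-in for the speech-synthesis stage (Riva again).
    return b"<synthesised-audio-bytes>"

def audio_to_face(audio):
    # Stand-in for Audio2Face: derive facial-animation frames from the voice clip.
    return [{"frame": 0, "jaw_open": 0.4}, {"frame": 1, "jaw_open": 0.1}]

def handle_player_speech(mic_audio, persona, history):
    player_text = speech_to_text(mic_audio)        # 1. player speech -> text
    history.append(("player", player_text))
    npc_text = generate_reply(persona, history)    # 2. backstory + chat -> NPC reply
    history.append(("npc", npc_text))
    npc_audio = text_to_speech(npc_text)           # 3. reply text -> voice line
    face_frames = audio_to_face(npc_audio)         # 4. voice -> facial animation
    return npc_audio, face_frames                  # handed to the engine to play back

if __name__ == "__main__":
    jin = {"name": "Jin", "backstory": "runs a ramen shop in a cyberpunk city"}
    audio, frames = handle_player_speech(b"<mic-input>", jin, history=[])
    print(frames)
```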
But this Nvidia demo also really shows that AI is only as good as the datasets being fed into it; if video games using the tech want to keep players immersed, they'll need to hire good writers to ensure the dialogue is lively and natural – we humans don't talk like spell-checked, anodyne text prompts. I can also see a broader concern for animators and riggers, who could feel the heat of AI encroaching on their jobs. Or, best case, this frees up those roles to go deeper and be more creative in other areas of the pipeline.
At the conference, Nvidia revealed that some developers are already using this character AI pipeline, sharing how the highly anticipated S.T.A.L.K.E.R. 2: Heart of Chornobyl will make use of Nvidia's generative AI. Indie developer Fallen Leaf is using the tech for facial animation and conversation paths for its virtual characters.
Nvidia is on a roll at the moment: its share price has soared this year as AI takes off and everyone needs its high-end GPUs, and the company was briefly valued at $1 trillion, putting it ahead of companies like Meta. The AI gold rush is really paying off for some companies.
If you're still bewildered by this rise in AI, then read our feature that explains how AI generators compare, and my feature 'Adobe Firefly - explained', where I detail what's different about this latest generative AI for Photoshop.
Ian Dean is Editor, Digital Arts & 3D at Creative Bloq, and the former editor of many leading magazines, including ImagineFX, 3D World and the video game titles Play and Official PlayStation Magazine. Ian launched Xbox magazine X360 and edited PlayStation World. For Creative Bloq, Ian combines his experiences to bring the latest news on digital art, VFX, video games and tech, and in his spare time he doodles in Procreate, ArtRage and Rebelle while finding time to play Xbox and PS5.