You may not have heard of NeRF, but it's the hottest thing in CG and VFX right now. Put simply, it can render lifelike video scenes from video scans like no other tech. Every reflection is captured and it's, well, hard to tell real from CG. But it's (very) complex.
Which is why the news that Shutterstock is partnering with Luma Labs AI, RECON Labs, and Volinga AI to create an AI tool that makes NeRF (neural radiance fields) easier and more approachable for everyone could be one of the biggest uses of AI to shake up CG. With this news, and Autodesk adding AI to Maya, it looks like text-to-3D is the next big step in AI.
Shutterstock has been making AI announcements all year, from its Nvidia Picasso news in March to the reveal of generative AI for HDRIs, trained on TurboSquid models, as part of Nvidia's keynote at Siggraph last week. The stock library also has an ethical basis for its AI training, and offers a contributor fund that pays royalties to users who wish to sell their work. But it's the prospect of one of VFX's most complex and advanced technologies becoming available to everyone that feels genuinely exciting. (Watch the Corridor Crew's explanation of NeRF below.)
Creating 3D assets and scenes is complex and time-consuming, and making something as photoreal as a NeRF even more so. It's why Dade Orgeron, Vice President of Innovation at Shutterstock/TurboSquid, told me he believes 3D artists are becoming more open to embracing AI tools.
"As people start to use generative AI more and more, I think they're coming to the realisation that these tools still require a tremendous amount of work to get really results," he says. "So I think people are becoming more comfortable with [AI]."
He also stresses how it's "really important for us to make sure that everyone has the opportunity to opt out and that they're compensated for the contribution". Shutterstock offers an opt-out as well as royalty payments for any images or models used by others, and if you do upload your work it can be "ring-fenced" and excluded from training.
So, with this in mind, why is NeRF AI so exciting? Traditionally, realistic 3D scenes can take hours, if not days, to create and render. Using NeRF technology and AI, an artist could build a photoreal scene that's indistinguishable from video in minutes.
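For the curious, here's a rough idea of what's happening under the hood. The Python below is only an illustrative sketch of the general NeRF concept, not Luma's or Shutterstock's actual implementation: a network trained on video frames learns to map any 3D point and viewing direction to a colour and density, and each pixel is rendered by compositing samples along a camera ray. The radiance_field function here is a hypothetical stand-in for that trained network.

```python
# Toy sketch of the NeRF idea (illustrative only): a learned function maps
# a 3D point and view direction to colour + density; pixels are rendered
# by alpha-compositing many samples along each camera ray.
import numpy as np

def radiance_field(points, view_dir):
    """Stand-in for the trained neural network.

    A real NeRF trains an MLP on the input video frames, and the view
    direction is what lets it reproduce reflections. Here we just return
    arbitrary values with the right shapes.
    points: (N, 3) sample positions; view_dir: (3,) unit vector.
    """
    rgb = np.abs(np.sin(points))            # fake per-point colour in [0, 1]
    sigma = np.linalg.norm(points, axis=1)  # fake per-point density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one ray: sample points along it, then composite."""
    t = np.linspace(near, far, n_samples)        # depths along the ray
    points = origin + t[:, None] * direction     # (n_samples, 3) positions
    rgb, sigma = radiance_field(points, direction)
    delta = np.diff(t, append=far)               # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)         # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # surviving light
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel colour

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)  # one rendered RGB value
```

The AI tools in this announcement exist precisely so artists never have to touch anything like this; the training and rendering are handled for you from a phone or camera scan.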
In a press statement, Amit Jain, co-founder and CEO of Luma, explained how the company is on "a mission to democratise 3D and make Hollywood quality, photorealistic 3D accessible to everyone," adding how "realtime, high quality, and lifelike 3D scenes are now a reality for everyone using Luma, whether on the web or in Unreal Engine with the new luma scenes."
There are a lot of impressive CG tools, software and tech being announced at the moment, and AI in this space could make life easier for many artists. In a recent interview I found out how Unreal Engine 5 and Nanite have made game development faster for Lords of the Fallen, and I've been impressed by how the world-building tool CityBLD for Unreal Engine could shorten deadlines. So, what do you think? Is 3D art AI a good idea?
Ian Dean is Editor, Digital Arts & 3D at Creative Bloq, and the former editor of many leading magazines, including ImagineFX, 3D World and the video game titles Play and Official PlayStation Magazine. Ian launched Xbox magazine X360 and edited PlayStation World. For Creative Bloq, Ian combines his experience to bring you the latest news on digital art, VFX, video games and tech, and in his spare time he doodles in Procreate, ArtRage and Rebelle while finding time to play Xbox and PS5.