AI art has burst into the mainstream thanks to the likes of DALL-E 2 and MidJourney. These tools allow anyone to create almost any image they can dream of from just a short text prompt.
The results can be very, very strange, but artists, designers and brands are learning how to make the technology work for them, sometimes very successfully. But if you've been impressed so far, it seems the next advance is already on the way: AI video generators.
#stablediffusion text-to-image checkpoints are now available for research purposes upon request at https://t.co/7SFUVKoUdl. Working on a more permissive release & inpainting checkpoints. Soon™ coming to @runwayml for text-to-video-editing. pic.twitter.com/7XVKydxTeD — August 11, 2022
AI art generators work based on text prompts. You type in what you want, and the art generator will create it – or at least its interpretation of the prompt. The results can be very haphazard and the tools have been used to create some very weird AI art, but we've already seen AI art generators put to use in marketing campaigns by Heinz and to create NFT art based on Bored Ape Yacht Club NFTs.
Now one company is teasing what is perhaps the next logical step: AI-generated video. It hasn't shared much about its project – just a grainy video of a tennis match – but it seems its tool will work in a similar way to current AI art generators like DALL-E 2, MidJourney and Artbreeder-Collages: you type in a text prompt and the AI tool generates a visual output.
The video shared on Twitter shows a tennis match on a court. The text prompt is then changed to read things like 'a tennis court on the moon' or 'a tennis court in an apocalyptic desert', and the backdrop changes as a result (UPDATE: Meta AI has since entered the fray, announcing its Make-A-Video AI video generator).
Text-to-video is going to hit sooner than we all know. Have used @runwayml’s AI assisted tools and they’re mind blowing. Combine that with AI image generation as seen here and 🤯🤯🤯 https://t.co/HN1OGvKTxw — August 11, 2022
The work was shared by Patrick Esser, a research scientist at the AI-powered video editing platform Runway. The company has been involved in the development of the text-to-image AI art generator Stable Diffusion, and according to Esser's tweet, it sounds like text-to-video generation will be added to Runway's software. The stop-motion animator Kevin Parry retweeted the post, suggesting that such a feature will be coming sooner than we think.
Commenters have marvelled at how good this seems to be, though one quibbled that “the moon would have less gravity so the ball wouldn’t bounce that way!” Realism aside, however, this is a pretty big deal, particularly in the world of special effects and CGI. If you can merely type in a set of prompts to change aspects of a video, well, the mind boggles really.
A lot of questions remain about generative AI imagery and video, and there are concerns for a lot of reasons. Some people fear that such power could be misused – the same fears raised by deepfakes. We've already seen edited images shared on social media as news, but what will it be like if AI reaches the stage where it can create convincing images and video showing anything?
The other concern is for the future of jobs such as photo retouching and image editing. While the tools can be used by creatives to speed up their work or unleash their imaginations, some fear that being able to work faster will mean less work (UPDATE: the concern is understandable considering an AI artwork has won an art competition).
The freelance motion graphics artist Lawrence Chase replied to Parry's tweet: "I can't help but feel dread. Is this normal? I spent today replacing an ice lake surface with a CG one revealing a frozen object underneath. What if an AI can just do this in a few minutes?"
But then again, most brands will still need someone with the creative eye and skill to be able to get decent results from AI-powered tools. "It will kill VFX," one person replied to Esser, but some disagreed. One person wrote: "Nah, but it will definitely improve VFX workflows. Imagine making a full feature film in less than a year. Storytelling just got to a new level."