Mangled nightmares like the Coca-Cola AI advert and the withdrawn McDonald's Christmas ad show that AI-generated video is still of dubious usefulness as a finished creative asset. Meanwhile, OpenAI's closure of Sora has raised questions again about whether there's demand for it and whether it can be both safe and profitable given the amount of resources it uses.
Adobe thinks it has a solution, at least for one of the major technical problems that makes AI video so difficult to use. It's released a preview of an experimental product called MotionStream that allows users to take a more hands-on approach to controlling AI-generated footage.
Text prompts make AI video easy to generate but difficult to control, since motion is hard to describe in words. AI video generation is also slow, which means waiting for a short clip to render only to discover that the movement looks weird and unnatural, then starting over with each new generation.
Adobe's solution is to develop a way to interact with AI-generated video as it’s being created. MotionStream shifts from delayed rendering to real-time interaction, letting the user reposition objects and change camera angles using cursors and sliders as the video is generated.
The process still begins with a text prompt, but users can then click and drag objects to control their movement and adjust the camera location. Users can click to mark objects they want to remain static.
Eli Shechtman, Senior Principal Scientist and one of the researchers behind MotionStream, says the tool could be a game-changer for secondary effects that are hard to control manually.
“If you want to move an elephant, for example, you can click and move its body, but it’s a lot of work to manually make those movements look natural. This currently requires skills and specialized software to rig, and animate or keyframe the animation, following a process that typically takes hours, if not days depending on scope.
“Instead, the underlying video generator behind MotionStream is basically simulating the world in real time. So, the elephant’s legs move naturally, and the ears flap naturally as the elephant moves. The model provides you with knowledge about the world and you can interact with it.”
He thinks the same technology could also change how people edit photos and other still images.
“Once video becomes interactive, your canvas could be a video that’s always running. When you interact with it, you see a smooth video changing toward the edit you’ve specified. You can watch the transition, and you could even stop it in the middle if you like the intermediate result. There’s big promise here for both image and video.”
The paradigm shift behind MotionStream would also speed up work with AI video. Early models generated an entire video before delivering it to the user, because each frame attended to every other frame.
That improved generation quality, but Senior Research Scientist and MotionStream collaborator Richard Zhang says “knowing both the past and future isn’t how the universe works.”
Adobe Research wanted to remove that constraint, so it developed a method that generates a video in pieces, with future frames depending only on what has already been created, a process described as “autoregressive”. As users watch the first piece, the tool generates the second, making it possible to show generated video in something much closer to real time.
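MotionStream's internals haven't been published, but the autoregressive idea described above can be sketched in a few lines. Everything here is illustrative: `generate_chunk` stands in for a model call, and the frame strings are placeholders, not real output.

```python
# Toy sketch of autoregressive chunked generation: each chunk of frames
# is conditioned only on frames already produced, so finished chunks can
# be handed to the viewer while later ones are still being generated.
# All names are hypothetical; this is not Adobe's actual implementation.

def generate_chunk(history, chunk_size=4):
    """Stand-in for a model call: new frames depend only on the past."""
    start = len(history)
    return [f"frame_{i}" for i in range(start, start + chunk_size)]

def stream_video(total_frames=12, chunk_size=4):
    history = []
    while len(history) < total_frames:
        chunk = generate_chunk(history, chunk_size)
        history.extend(chunk)
        yield chunk  # the player shows this while the next chunk renders

for chunk in stream_video():
    print(chunk)
```

The contrast with earlier models is the loop structure: instead of one monolithic call that returns all frames at once, generation is a stream, which is what lets the user intervene mid-video.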
For now, MotionStream remains in development as a research project. There's no detail on if, when or how it could be added to tools like Adobe Firefly or Adobe's video-editing software Premiere.

Joe is a regular freelance journalist and editor at Creative Bloq. He writes news, features and buying guides and keeps track of the best equipment and software for creatives, from video editing programs to monitors and accessories. A veteran news writer and photographer, he now works as a project manager at the London and Buenos Aires-based design, production and branding agency Hermana Creatives. There he manages a team of designers, photographers and video editors who specialise in producing visual content and design assets for the hospitality sector. He also dances Argentine tango.
