How Anomaly is using generative AI tool Runway for animation and video
As part of a series looking at how agencies are making use of generative AI tools in their work, Anomaly’s Chris Neff explains the pros and cons of Runway.
This is just a regular runway, not a still from the Runway program. It sets the scene just as well, though / Unsplash
Runway is a generative AI program that allows users to create video and animated media the same way they can conjure images and text with Midjourney and ChatGPT.
Chris Neff, global head of emerging experience and technology at digital agency Anomaly, has been integrating Runway into his team’s toolbox since last autumn.
“It’s a video editorial tool and a generative engine for video and moving pictures,” he explains. There’s a free trial version and enterprise pricing available, but the off-the-shelf option costs between $12 and $76 a month. The latest edition, Gen-2, was released in March and added the ability to change the photographic style of an image.
There’s a suite of tools available, but Neff says its video editing features are the most useful of the bunch. And although Runway can generate video media from a text prompt, Neff says it’s not reliable enough for Anomaly’s needs.
“You put in some prompts, you generate some video, and the results are pretty wild. More often than not, you do not get what you want. I really like Runway and we talk to the company to give them feedback. It’s not a high batting average. They’re probably the New York Yankees right now.”
A more efficient and practical use of the tool, he says, is its image-to-video function. Like other generative AI image programs, Runway can use still pictures as a prompt – though in Neff’s experience, animating or translating images into video with Runway is unpredictable. “Sometimes it gets it right, but you can’t tell it to make a specific person move or make just their hair move. We don’t have that level of control yet.”
“You feed in an image and ask it to animate it. In theory, it’s extraordinary. But again, it does what you want it to pretty infrequently. But when it does do it well, you just make a scene come to life… that has a lot of practical application within our walls at Anomaly.”
The agency has used other generative AI video tools to create live client work. In November, it used the DreamStudio API to create a music video for Ally Bank, with an image generated in response to each bar of lyrics. But DreamStudio and Kyber (another similar tool) only ‘morph’ an image, Neff says, rather than producing a true animation, which limits their utility.
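The Ally Bank workflow Neff describes – one generated image per bar of lyrics – amounts to a simple loop over lines of text. Below is a minimal sketch of that shape; `generate_image` is a hypothetical stand-in for a real text-to-image API call (the actual DreamStudio API is not shown here), stubbed out so the pipeline is runnable:

```python
def generate_image(prompt: str) -> bytes:
    """Hypothetical stand-in for a text-to-image API call.

    A real implementation would send `prompt` to a hosted
    generative service and return the image it gets back;
    this stub returns placeholder bytes instead.
    """
    return f"<image for: {prompt}>".encode()


def frames_from_lyrics(lyrics: str) -> list[bytes]:
    """Generate one image per non-empty lyric line (one per bar)."""
    bars = [line.strip() for line in lyrics.splitlines() if line.strip()]
    return [generate_image(bar) for bar in bars]


frames = frames_from_lyrics("First bar\nSecond bar\n\nThird bar")
```

In a real version, the per-bar images would then be sequenced and timed against the track in an editing step – the loop above only covers the generation stage.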
Neff expects the tool to improve. It’s already come a long way, he says, from its earliest iteration as a Discord plugin used for adding animation filters to video clips. “We’re seeing it in the first stages,” he says.
With time, he suggests, Runway will be most useful for creating storyboards. Creatives could use written scripts, or images generated using other tools in response to those scripts, to produce moving storyboards. “We’re not there yet. But the idea of plugging in a script that our teams have written and generating video line-by-line is cool. We could generate stills and feed them into Runway so that you’re looking at moving boards,” Neff explains.
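The moving-storyboard idea – a written script in, stills out, stills animated into boards – can be sketched as a two-stage pipeline. Both `generate_still` and `animate_still` below are hypothetical placeholders for real generative calls (Runway’s actual interface is not shown), included only to make the shape of the workflow concrete:

```python
def generate_still(script_line: str) -> str:
    """Hypothetical text-to-image step; returns a frame label."""
    return f"still({script_line})"


def animate_still(still: str) -> str:
    """Hypothetical image-to-video step, in the role Neff
    describes for Runway; returns a clip label."""
    return f"clip({still})"


def moving_boards(script: str) -> list[str]:
    """One animated board per non-empty script line."""
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    return [animate_still(generate_still(ln)) for ln in lines]


boards = moving_boards("INT. BANK - DAY\nA customer walks in.")
```

The point of the sketch is the ordering: each script line drives a still, and each still is then animated, so the boards stay line-by-line traceable back to the script.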
“A lot of filmmakers that I’ve known and respected for a long time, they’re using it to convey film ideas,” he adds.
That wouldn’t just speed up the work of designers and artists producing the visual aids in storyboards. It could give clients an understanding of a creative idea earlier in the process – saving everyone time and money. Being able to produce more storyboards overall could also let creatives explore different ideas for shooting live-action work earlier in the ideation process.
In the interim, Anomaly’s been using Runway to create short animated clips in client decks “to give a sense of character or complement a script or a passage on the page.”
Does Runway save time?
It’s not yet clear whether Runway is actually a faster way to animate content outright than methods that don’t use generative AI. “It’s a different use of time,” says Neff. “And it’s a different skill set.” On the one hand, it’s possible to create animated media without the knowledge required to use a 3D engine such as Unity or the expertise to do 3D modeling, virtual light rigging and animation – all of which are notorious time sinks. In contrast, Neff says Runway is “quite easy” to get to grips with.
On the other hand, you might spend the equivalent hours waiting for Runway to come close to your personal vision, says Neff. “If you have to do it 15, 25, 35 times to get something that you like – and it still has six fingers or whatever – then that time was spent very differently.”
Plus, he notes, the traditional digital animation process creates reusable assets for later iteration and use – while Runway creates an animation as a single asset. “It’s a one-shot wedge,” he says.