
How VCCP is using Stable Diffusion to create brand identities and storyboards

By Sam Bradley, Journalist

July 28, 2023 | 6 min read

VCCP’s Faith team explain how they’re using Stable Diffusion in their work right now.

An image created by VCCP Faith’s team using Stable Diffusion / VCCP Faith

Stable Diffusion is a generative AI tool that creates images from written prompts. Released in August 2022, it turns a string of words or phrases entered by the user into an image. It’s open source and free to use – and is already employed by several creative agencies in design work.
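
For a sense of what that workflow looks like in code, here is a minimal text-to-image sketch using the open-source Hugging Face diffusers library. This is an illustrative assumption rather than a description of VCCP’s actual setup; the model checkpoint, prompt and settings are placeholders.

```python
# Minimal Stable Diffusion text-to-image sketch using the diffusers library.
# Model ID, prompt and settings are illustrative, not VCCP's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly available Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a friendly robot and a little girl, studio lighting, brand mascot illustration",
    num_inference_steps=30,
    guidance_scale=7.5,  # how strongly the image follows the prompt
).images[0]

image.save("mascot_concept.png")
```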

Creatives at VCCP have been using Stable Diffusion for around eight months. Innovation lead Peter Gasston explains the team used the program to create assets for the brand identity of its latest spinoff venture, AI-focused creative agency Faith. It’s a powerful tool that can be used to generate both video animations and high-fidelity artwork.

While Midjourney can create images easily and has been adopted by many VCCP staffers, creative director Morten Legarth says it lacks fine control. The time required to write a prompt that yields a precise body pose for a character, or the exact placement of an object within an image, is prohibitive, he says. As such, they’ve found themselves “hitting the limits of what Midjourney can do.”

“That’s the gap that Stable Diffusion fills,” adds Gasston. Stable Diffusion is difficult to use, but when deployed correctly it allows for more precision in image generation. “It’s one of the harder ones to get into, but it’s a lot easier to control,” Legarth says. “If you want to get more detail, and you want to have a specific idea, which is kind of what our job is – that’s hard to do with a generalist tool. Stable Diffusion is very good at more articulated control.”

Brand identity generation

There’s already a range of plugins and extensions available for Stable Diffusion that can customize its interface and capabilities. The VCCP team found one that let them generate from a specific, predetermined training set – meaning they could force the program to use specific images created via other means.

“It’s an open model, there’s a whole sea of people out there experimenting, adding new features to it that weren’t in the original training data,” explains Gasston.

An iteration of Faith’s robot mascot created using Stable Diffusion / VCCP Faith

Another extension, ControlNet, enables users to generate images of characters based on other imagery, not just text. It allows the team to ’condition’ a generation and ask for specific criteria – character poses, lighting or image composition, for example – to be used. “I could put in a stick figure and say that I want my brand character to assume this pose, in this part of the image. ControlNet would do that,” says Legarth.
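
As a rough sketch of how that kind of pose conditioning can be driven programmatically – the article doesn’t say which interface VCCP uses, so the library, model IDs and file names below are assumptions – the diffusers library exposes ControlNet like this:

```python
# Sketch of ControlNet pose conditioning via diffusers; paths and model IDs are illustrative.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A "stick figure" style pose map, e.g. drawn by hand or extracted with OpenPose.
pose_image = load_image("pose_stick_figure.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt describes the character; the pose image constrains where and how it stands.
image = pipe(
    prompt="friendly brand robot mascot, clean white background",
    image=pose_image,
    num_inference_steps=30,
).images[0]

image.save("robot_in_pose.png")
```

The same pattern swaps in other kinds of conditioning – depth maps, edge maps or segmentation masks – by changing the ControlNet checkpoint.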

VCCP also uses the software on live client briefs “in an explorative sense,” such as for storyboarding live-action shoots. “A storyboard artist could do a simple sketch, load it into Stable Diffusion, run it and generate that image in, say, a photographic style.”
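
That sketch-to-frame step corresponds to Stable Diffusion’s image-to-image mode. A hedged sketch using the diffusers library – the file names, prompt and strength value are illustrative assumptions – might look like this:

```python
# Image-to-image sketch for storyboarding: a rough drawing re-rendered in a photographic style.
# File names, prompt and strength are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

sketch = load_image("storyboard_frame_01_sketch.png")

frame = pipe(
    prompt="photographic storyboard frame, two people talking in a kitchen, natural light",
    image=sketch,
    strength=0.6,  # lower values stay closer to the original sketch
    num_inference_steps=30,
).images[0]

frame.save("storyboard_frame_01_photo.png")
```

Lowering the strength keeps the result closer to the artist’s sketch; raising it gives the model more freedom over the final frame.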

It used the tool to create Faith’s brand identity, which centers on a pair of mascots. The team used Midjourney to create a range of basic concept images of a friendly robot and a little girl – an updated version of the girl and bear featured in VCCP’s logo – and fed a small set of the strongest ones into Stable Diffusion to create usable assets (some of the different iterations from that design process are above and below).

Iterations from the Faith brand identity design process / VCCP Faith

“We got a bank of images generated using Midjourney. We tagged them all up and ran them through a training model called Kohya_SS, which trains Stable Diffusion. Then we get a whole bunch of different examples out of that.”

Faith has been using more advanced tools, such as low-rank adaptation (LoRA) models, to refine that approach.
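
The article doesn’t detail how those fine-tuned weights get applied at generation time, but a LoRA produced by a trainer such as Kohya_SS can typically be loaded on top of a base Stable Diffusion checkpoint. Here is a sketch with the diffusers library; the output directory, weights file and trigger word are hypothetical placeholders.

```python
# Sketch of applying a LoRA fine-tune (e.g. trained with Kohya_SS) on top of a base model.
# The directory, weights file, trigger word and model ID are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the small LoRA weights file produced by training on the tagged mascot images.
pipe.load_lora_weights("lora_output", weight_name="faith_mascot_lora.safetensors")

image = pipe(
    prompt="faithmascot robot waving, clean vector-style illustration",  # assumed trigger word
    num_inference_steps=30,
).images[0]

image.save("mascot_lora_sample.png")
```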

The team still uses traditional digital design tools to clean up the images or to cut out generated versions of its mascot and place them on the desired background. But Legarth said the process proved they could create usable assets without “too much work.”

Barrier to entry

Stable Diffusion is less accessible than Midjourney, in Legarth’s opinion. “It depends on how technically minded the creative is. Some will probably be more inclined to use Stable Diffusion and some more inclined to use Midjourney,” he explains.

Gasston says there’s already a variety of plugins users can employ to make the program more accessible. But, he notes, “only a small handful of people can realistically use Stable Diffusion at its full potential.” To help VCCP staffers familiarize themselves with the tech, the agency has applied plugins and designed its own graphical user interfaces (GUIs) that simplify the program; most of those interfaces integrate ControlNet into the software, too.

“We are in the process of creating an easier version of Stable Diffusion,” says Legarth. The hope is to create an alternative UI for the software, one that highlights certain features and hides others – so that users aren’t caught in an avalanche of icons and tools. “You don’t have to learn everything,” he adds.
