Text-to-image AI art generators have been one of the biggest controversies in art and design this year. And one of the most controversial of all is Stable Diffusion. That's because, as an open-source platform, Stable Diffusion hasn't taken the cautious approach to access and restrictions that other AI art generators have. From day one, it allowed anyone to use its powerful capabilities and to fine-tune their own releases of its model.
That led to the creation of images using copyrighted material and the likenesses of famous people, including for pornography. While the platform has always defended its aim to democratise access to art, the latest release of Stable Diffusion has reined things in a little. But people still aren't happy (if you're still not sure what text-to-image generators are, see our piece on how to use DALL-E 2).
"Inpainting: Finally, we also include a new text-guided inpainting model, fine-tuned on the new Stable Diffusion 2.0 base text-to-image model, which makes it super easy to switch out parts of an image intelligently and quickly," Stability AI tweeted on November 24, 2022.
Stability AI released Stable Diffusion in August, and it quickly became one of the most-used AI art generators. Unlike DALL-E 2, another of the best-known models, it didn't initially restrict access while it tested things out. Instead, it made the tool open source: anyone can access it on GitHub.
Since then, the boom in AI art has led to a string of controversies, from AI art scooping first prize in an art competition to Getty banning AI-generated images from its library over copyright concerns and people using the tools to copy specific artists' styles. Just recently, users of the online art community DeviantArt were furious to learn that their work was going to be included in the platform's AI model by default, and DeviantArt was forced to change its approach.
But now Stability AI has released a major update to Stable Diffusion. It's added the capability to produce more detailed, higher-resolution images, a new tool to swap parts of an image more easily and the ability to transfer the inferred depth of one image to another image (Depth2img), which allows users to create radically different images that have the same coherence as the original.
But it's also responded to concerns about its broad dataset and lack of restrictions. It says it's reduced its model's understanding of celebrity likenesses and removed its ability to create images in the style of specific artists – so users can no longer get convincing results using phrases like 'in the style of...'. It's also reduced the model's ability to create AI-generated nudity and porn by removing such images from its training data.
All this sounds sensible from a reputational standpoint and to avoid potential legal challenges. But not everyone's happy. Some users have blasted the update as a form of censorship, while others say that despite some of the impressive technical improvements, the updated model isn't as good. Some have created images to compare the ability of the previous version and the current version to create art in specific styles.
"They have nerfed the model," one user wrote on Reddit. "To choose to do NSFW content or not, should be in the hands of the end user, no [sic] in a limited/censored model," someone else wrote.
"Big mistake forcing the NSFW filter onto the training data. Last time I checked, it got triggered by every other classical painting. There is no reasonable justification for hindering the model and forcing puritan values onto everyone," one person replied to Stability AI on Twitter.
Someone else wrote: "There’s no way that it can possibly be as good as any prior model when the majority of the high-quality images have been removed from the dataset. No matter how good their improved architecture is, it can’t create something it has never seen before."
Stability AI says it has not, in fact, removed artists' images from its training data. Rather, it has changed the way the software encodes and retrieves data. But some question how restrictive the changes really are when third parties can create their own releases of the model that add more training data. "Do not freak out about V2.0 lack of artists/NSFW, you'll be able to generate your favorite celeb naked soon," one person wrote.
Since it's open source and developers can include it in their own apps free of charge, Stable Diffusion is one of the most influential AI imaging tools. Any changes it makes could influence the rapidly evolving technology and how it's received. See how the best AI art generators compare for more on how Stable Diffusion compares to Midjourney and DALL-E 2.
Joe is a regular freelance journalist and editor at Creative Bloq. He writes news, features and buying guides and keeps track of the best equipment and software for creatives, from video editing programs to monitors and accessories. A veteran news writer and photographer, he now works as a project manager at the London and Buenos Aires-based design, production and branding agency Hermana Creatives. There he manages a team of designers, photographers and video editors who specialise in producing visual content and design assets for the hospitality sector. He also dances Argentine tango.