Getting started with local Stable Diffusion XL AI

Current image generation AI is amazing, and Stable Diffusion is one of the best models available. It generates excellent-quality images, and because it is open source, you can run it locally, which means no privacy concerns and no additional costs. Just a few days ago, the newest and most powerful version yet, Stable Diffusion XL 1.0, was released; it works best at higher resolutions of 768×768 to 1024×1024. With a few extra steps you can set it up and use it today with stable-diffusion-webui, an easy-to-use tool that runs locally and lets you play around with various models from your browser. Here is how to set it up in about 10 minutes:

stable-diffusion-webui setup

First, if you have an Nvidia GPU, make sure you have the latest proprietary driver installed. You need it to make use of CUDA, which accelerates Stable Diffusion considerably.

Installing python3

You will need to install Python 3 if you don’t already have it. I won’t go into too much detail here because there are thousands of guides for this, but the easiest way is to use a package manager:

For Windows 11: Run winget install -e --id Python.Python.3.11 in the Windows terminal

For Arch Linux: Run sudo pacman -S python

For Ubuntu: It should already be installed on modern versions

Running python -V (or python3 -V on some systems) should now print a 3.x version number.
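As a quick sanity check from the shell, assuming the interpreter is installed under the name python3 (the usual name on Linux; on Windows it is often just python):

```shell
# Confirm a Python 3 interpreter is available before proceeding
python3 --version
```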

Install stable-diffusion-webui

Next, we are going to download and install stable-diffusion-webui, which we will later use to interact with Stable Diffusion. As of now, support for Stable Diffusion XL has not yet been merged into the master branch, so we are going to use the dev branch.

If you don’t have git, you can download the current dev state here: stable-diffusion-webui
If you do have git however, I recommend that you properly clone the repository. This way you can later update more easily:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git checkout dev
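Since the dev branch moves quickly, updating later is just a matter of pulling. A sketch, assuming you cloned into the default directory name:

```shell
# Update an existing clone to the latest dev state
cd stable-diffusion-webui
git checkout dev   # make sure we are on the dev branch
git pull           # fetch and merge the latest changes
```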

Download Stable Diffusion XL models

stable-diffusion-webui comes with Stable Diffusion 1.5. If you want to use the improved Stable Diffusion XL model, you will need to download it separately and place it in the directory stable-diffusion-webui/models/Stable-diffusion.
You can get the base model from here: Base Model
And the optional refiner model from here: Refiner Model
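If you prefer the command line, the models can also be fetched directly from the official Hugging Face repositories (URLs current as of this writing; both files are several GB, so this will take a while):

```shell
# Download the SDXL 1.0 models straight into the webui's model directory
cd stable-diffusion-webui/models/Stable-diffusion
# Base model
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
# Optional refiner model
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
```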

Running the webui and usage

Now you can run the webui. Simply start webui.sh if you are on Linux or webui.bat if you are on Windows. It may download and install some dependencies on your first launch, but when it’s done, point your web browser to http://localhost:7860/ and you should see the webui.

At the top left, make sure you select the XL base model. If you are using Stable Diffusion XL, keep your resolution between 768×768 and 1024×1024 or quality will be poor. Higher resolutions take longer to generate but look sharper. You can also play with the number of sampling steps and the sampling method, both of which can influence the final result significantly. Generally, 20-60 steps are good values, and the sampler “DPM++ SDE Karras” should yield very good results. You can learn more in this excellent comparison: https://stable-diffusion-art.com/samplers/#Evaluating_samplers
You can also move the CFG slider to influence how strictly the model interprets the prompt: a lower value may lead to less literal, more creative results.
Next, just enter a prompt and hit generate. If you encounter any issues, make sure to read the next section.

Finally, if you have installed the refiner model, you can send your generated image to the img2img section. There you can switch to the refiner model to apply modifications and tweaks to the original image. For example, you can change the subject or art style after you are happy with the basic composition.
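If you would rather script generation than click through the browser, the webui also exposes an HTTP API when launched with the --api flag. A minimal sketch using its /sdapi/v1/txt2img endpoint (the prompt and parameter values here are just illustrative):

```shell
# First start the webui with the API enabled: ./webui.sh --api
# Then request a single 1024×1024 image; the JSON response
# contains the result as base64-encoded PNG data.
curl -s -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "a photo of ancient Rome, highly detailed",
        "steps": 30,
        "sampler_name": "DPM++ SDE Karras",
        "cfg_scale": 7,
        "width": 1024,
        "height": 1024
      }'
```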

Optimizing performance and troubleshooting

Here are some tips for improving performance:
If you are on Linux, installing TCMalloc may improve generation speed, for example: sudo apt install --no-install-recommends google-perftools
If you are using CUDA, running with xformers should speed things up further: webui.sh --xformers
If you are running low on VRAM and experiencing crashes, try this option to save memory at the cost of speed: webui.sh --medvram
And finally, if you are generating black images, try this option: webui.sh --no-half-vae
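Instead of passing these flags on every launch, you can make them permanent in webui-user.sh (webui-user.bat on Windows), which the launcher reads at startup. For example, combining whichever flags apply to your setup:

```shell
# webui-user.sh — persistent launch options for stable-diffusion-webui
export COMMANDLINE_ARGS="--xformers --medvram --no-half-vae"
```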

Examples

Here are some cool images I was able to generate using Stable Diffusion XL:

Ancient Rome
Cyberpunk outfit
A weird situation happening in public
