Arki's Guides

Hi everyone! Arkitecc#0339 from the Stable Diffusion discord here! I've got two sections of guides available right now.

If you check out the Stable Diffusion Guides section below, I'll walk you through how to get started dreaming with Stable Diffusion on your local hardware or in the cloud. I also included a link to the HUGE Dreamer's Guide I put together for /r/StableDiffusion that has everything you could need to help you get better at prompting in no time!

In the DreamStudio Guides section below, I'll help you get started dreaming with Stability AI's DreamStudio ASAP, and along the way help you learn how to prompt efficiently and save credits too!

If you'd like to support me and my work, please consider dropping by my Ko-Fi. If you have a budget for some cloud compute, consider signing up for Runpod using this section of the guide: Arki's Easy SD-WebUI. It features my bash script suite, which will let you get SD-WebUI running in the cloud on powerful GPUs with a single web terminal command!

Let's get started!


Stable Diffusion Guides


In these guides I walk you through the installation of the most popular repos of Stable Diffusion:
SD-WebUI (HLKY and AUTOMATIC1111), LStein, and Basujindal.
Each of these repos has made efforts to make it easier to run on GPUs with less VRAM, but for those with 6GB of VRAM or less I recommend the Basujindal repo, as it implements more aggressive tactics to optimize VRAM usage.

Note: Stable Diffusion depends primarily on Nvidia GPUs.


AUTOMATIC1111's SD-WebUI (Windows)

Hi friends! AUTOMATIC1111’s SD-WebUI is one of the most frequently updated repos for Stable Diffusion and it’s chock full of all sorts of power-user features. I’ll cover how to get a basic installation of it up and running in this section of the guide. For more advanced functionality, check out the link above for more details.

Note: This repo in particular updates so frequently that this guide may sometimes fall out of date. I'll do my best to keep it current, but it's important to keep in mind that this space is moving at breakneck speed right now, so guides can be hard to maintain.

Install Git: https://gitforwindows.org/ Download this and accept all of the default settings it offers, except for the default editor selection. When it asks which editor to use by default, most people who are unfamiliar with this should choose Notepad, since everyone has Notepad on Windows.
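
You can quickly confirm Git installed correctly by opening a Command Prompt and running the line below; it should print the version you just installed:
git --version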

Next we need to open up a terminal. If you have the Anaconda or Miniconda Prompt installed, that will work perfectly (check the other sections of this guide for how to install them if you plan on running other forks of SD); if not, type Command Prompt into your Start Menu and open that up.

Now that we have our terminal open, we need to download the AUTOMATIC1111 SD-WebUI repository.
Type in git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui/

After downloading the repository, we need to close the prompt and install Python 3.10.6

Once prompted, make sure you check Add Python to PATH, as this is what allows your computer to use this specific version of Python for AUTOMATIC1111's SD-WebUI.
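
Once the installer finishes, you can sanity-check things by opening a new Command Prompt and running the line below; if the PATH option took effect, it should report Python 3.10.6:
python --version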

After Python has finished installing, you'll need to create a Hugging Face account: https://huggingface.co/

After you have signed up and signed in, go to this link and click on Access Repository:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

After you have authorized your account, go to this link to download the model weights:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt

Create the models folder in the folder that was created when we downloaded the repository, e.g.:
C:\Users\<username>\stable-diffusion-webui\models\ Download the model file into it and rename it to model.ckpt.
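
If you'd rather do this from the terminal, a sequence like the following should work in Command Prompt (this assumes the .ckpt file landed in your Downloads folder; skip the mkdir if the models folder already exists):
mkdir C:\Users\<username>\stable-diffusion-webui\models
move C:\Users\<username>\Downloads\sd-v1-4.ckpt C:\Users\<username>\stable-diffusion-webui\models\model.ckpt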

If you'd like to have GFPGAN face correction support in addition to the built-in support for Codeformer face correction, download this file into the C:\Users\<username>\stable-diffusion-webui\ folder.

Now you will need to run webui-user.bat, which you'll find in the C:\Users\<username>\stable-diffusion-webui\ folder.

Note: If you have a 1660 Ti, or are getting green / black images once you launch SD-WebUI, open up webui-user.bat and change the set COMMANDLINE_ARGS= line to set COMMANDLINE_ARGS=--opt-split-attention --medvram --precision full --no-half or set COMMANDLINE_ARGS=--opt-split-attention --lowvram --precision full --no-half and that should help you get it started properly.
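
For reference, the stock webui-user.bat is only a few lines long; after the edit it would look roughly like this (exact contents can vary between versions of the repo, and the only line you're changing is the COMMANDLINE_ARGS one):
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--opt-split-attention --medvram --precision full --no-half
call webui.bat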

Give it time, as it needs to install additional dependencies required to run. Once it's finished you’ll see something like the following screenshot.

Success! This means you’ve successfully installed and initialized AUTOMATIC1111’s SD-WebUI for the first time! Now you’ll want to go to http://127.0.0.1:7860 or http://localhost:7860 and you’ll be greeted with the interface below. Happy dreaming!


HLKY's SD-WebUI (Windows)

Like AUTOMATIC1111's SD-WebUI, HLKY's SD-WebUI is also one of the most feature rich repos available right now. It has a plethora of options available for those looking to jump head-first into Stable Diffusion.

Install Git: https://gitforwindows.org/ Download this and accept all of the default settings it offers, except for the default editor selection. When it asks which editor to use by default, most people who are unfamiliar with this should choose Notepad, since everyone has Notepad on Windows.

Download Miniconda3: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
Get this installed so that you have access to the Miniconda3 Prompt Console.

Open the Miniconda3 Prompt from your Start Menu after it has been installed and type:
git clone https://github.com/sd-webui/stable-diffusion-webui.git

This will create the stable-diffusion-webui directory in your Windows user folder.

Once a repo has been cloned, updating it is as easy as typing git pull inside of Miniconda while in the repo's top-level directory (the folder created by the clone command).
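
For example, a later update session would look like this, starting from your Windows user folder:
cd stable-diffusion-webui
git pull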

Below you can see I used the cd command to enter that folder. Do this, as we'll need to be in this directory for the next part of the guide.

Next you are going to want to create a Hugging Face account: https://huggingface.co/

After you have signed up and signed in, go to this link and click on Access Repository:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

After you have authorized your account, go to this link to download the model weights:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt

Download the model into this directory: C:\Users\<username>\stable-diffusion-webui\models\ldm\stable-diffusion-v1\

The stable-diffusion-v1 folder won’t exist by default with most repos, so create it if it doesn’t and save the model file to it.

Rename sd-v1-4.ckpt to model.ckpt once it is inside the stable-diffusion-v1 folder.
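
If you'd prefer to stay in Miniconda for this step, something like the following should work (assuming the weights downloaded to your Downloads folder):
mkdir C:\Users\<username>\stable-diffusion-webui\models\ldm\stable-diffusion-v1
move C:\Users\<username>\Downloads\sd-v1-4.ckpt C:\Users\<username>\stable-diffusion-webui\models\ldm\stable-diffusion-v1\model.ckpt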

Since we are already in our stable-diffusion-webui folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.

(Optional) If you already have an environment set up for an installation of Stable Diffusion named ldm, open up the environment.yaml file in \stable-diffusion-webui\ and change the environment name inside of it from ldm to ldo.
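
The line you're changing is the name field at the top of environment.yaml; after the edit it would read something like this (ldo is just an arbitrary, unused name):
name: ldo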

Type and press enter: conda env create -f environment.yaml

Wait for it to process, this could take some time. Eventually it’ll look like this:

Next, there are three more models that we need to download in order to get the most out of the functionality offered by SD-WebUI.

The first of which is GFPGAN, a model that SD-WebUI takes advantage of in order to (optionally) help improve the look of generated faces.

Download the model from here and save it into this folder:
\stable-diffusion-webui\src\gfpgan\experiments\pretrained_models

The next two models are for Real-ESRGAN, an upscaling model that you can (optionally) use to upscale your generations by 4x their original resolution.

Download the models from here and here and save them both into this folder:
\stable-diffusion-webui\src\realesrgan\experiments\pretrained_models

Next, in the \stable-diffusion-webui\ folder, you’ll see a file named webui.cmd. Double click this file to run it.

If you’ve renamed your environment due to having multiple installations of SD, you’ll need to edit this line:
set conda_env_name=ldm in webui.cmd to reflect the name of your environment before running it.

This will be how you start up and access this repo’s GUI from now on. After it finishes initializing it’ll spit out a localhost link: http://localhost:7860 that you can copy and paste into your web browser to start dreaming with!

Images created with the web interface will be saved to \stable-diffusion-webui\outputs\ in their respective folders alongside .yaml text files with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross referenced with their .yaml files easily.


SD-WebUI (Cloud)

Hi friends! I recently developed a Runpod workflow for those who are interested in running Stable Diffusion in the cloud on powerful GPUs. It covers both AUTOMATIC1111 and HLKY, so if you can afford to spend a bit on cloud compute, check out this section of the guide!

If you’d like to automate backing up your generations, check out this guide I wrote on how to install and set up overGrive!

Create a Runpod account through this link: Arki's Easy SD-WebUI

Arriving at the site for the first time you’ll see this. Click the Login / Sign-up button in the upper right hand corner. I recommend logging in with a Google account, it’s the quickest way to get started.

After logging in you’ll see this screen. In order to use Runpod you need to click the dollar amount next to your initial, and pay for some GPU time. I recommend putting in $10 to start out with.

Once you've topped up your account, you'll want to click Deploy under Secure Cloud. Community Cloud is where people or businesses share their GPUs into the Runpod system. Sometimes they are cheaper, but I go with Secure Cloud when there are slots available.

Once you're here, you'll want to select a GPU to get started with my template. Sometimes you'll see certain GPUs marked as Reserved, which means there aren't any open slots available right now for that GPU.

I usually go for the A40 or the A6000 since I like to generate landscapes with large dimensions (1280 x 512), which requires a lot of VRAM. However, you can absolutely get by with an A4000 which is much cheaper and will make your money last much longer.

The only downside is that you wouldn’t be able to do huge dimensions. I’ll be covering the optimal settings for each GPU type in another guide that I’ll link at the top of this section once I’ve finished it.

I always recommend going with the On-Demand pricing, because that means your pod can’t be interrupted and shut down by someone else. Spot pricing is cheaper because anyone can interrupt your pod by bidding a higher price to take over that spot.

After choosing a GPU you'll be at this screen; the default settings will be fine for our purposes here. It'll inform you how much you'll pay for disk space if you choose to shut down your pod and leave it in their cloud instead of deleting it and starting over again when you want to generate.

Once you’ve arrived here you’ll want to hit the Deploy On-Demand button to boot your pod. After this you’ll want to go to the My Pods section to see your pod for the next steps.

Allow your pod some time to boot up, once it’s ready you’ll see the following screen.

At this point you’ll want to hit the Connect button to open up your connection settings and then click on Start Web Terminal, followed by Connect to Web Terminal.

Connect to Web Terminal will open up a new tab in your browser that will look like this.

Once you are here you’ll want to enter chmod +x ArkiSD-I.sh && bash ArkiSD-I.sh (for HLKY’s SD-WebUI) or chmod +x ArkiSDA-I.sh && bash ArkiSDA-I.sh (for AUTOMATIC1111’s SD-WebUI) and then press enter.

Note: My scripts pull live from the latest repository updates on GitHub, so there may occasionally be breaking changes. I'll try to check for these and update the scripts / template when I have the ability to.

Link Format: https://dl.dropboxusercontent.com/s/fileidhere/model.ckpt Replace fileidhere with the string of letters and numbers from the shared link Dropbox gives you for your model.ckpt file. Press enter once you've done so.
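
As an illustration of the format (the file ID here is made up): if Dropbox's share link for your model.ckpt is https://www.dropbox.com/s/abc123xyz/model.ckpt?dl=0, the direct link you'd enter is:
https://dl.dropboxusercontent.com/s/abc123xyz/model.ckpt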

After you’ve entered your Dropbox link, all there is to do now is sit back and wait as the script does everything for you. Go get a bite to eat, or a drink, or watch some anime!

After about 15 minutes or so, you’ll see this. To access your SD-WebUI instance, you’ll want to click on the link with the random name that ends with local.lt.

Et voila! An instance of SD-WebUI hosted on a powerful GPU in the cloud is all yours thanks to the awesome folks at Runpod.

Images created with the web interface will be saved to \stable-diffusion-webui\outputs\ in their respective folders alongside .yaml text files with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross referenced with their .yaml files easily.

LStein (Windows)

I recommend the LStein repo of Stable Diffusion for beginners, since it aims to be easier to understand and use than the CompVis repo and includes a barebones, easy-to-use web GUI.

Install Git: https://gitforwindows.org/ Download this and accept all of the default settings it offers, except for the default editor selection. When it asks which editor to use by default, most people who are unfamiliar with this should choose Notepad, since everyone has Notepad on Windows.

Download Miniconda3: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
Get this installed so that you have access to the Miniconda3 Prompt Console.

Open the Miniconda3 Prompt from your Start Menu after it has been installed and type:
git clone https://github.com/lstein/stable-diffusion.git

This will create the stable-diffusion directory in your Windows user folder. Since I already have one there, I’ll show you what it looks like to install it into a different directory.

In the above screenshot I used the mkdir SD-Guide command, which makes a folder named SD-Guide that I can download the LStein repository into.

I then used the cd SD-Guide command, which moves us into our new folder.

Once a repo has been cloned, updating it is as easy as typing git pull inside of Miniconda while in the repo's top-level directory (the folder created by the clone command). Below you can see I again used the cd command to enter that folder; do this, as we will need to be in this folder for the next part of the guide.
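
Put together, the whole sequence from your user folder looks like this:
mkdir SD-Guide
cd SD-Guide
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion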

Next you are going to want to create a Hugging Face account: https://huggingface.co/

After you have signed up and signed in, go to this link and click on Access Repository:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

After you have authorized your account, go to this link to download the model weights:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt

Download the model into this directory:
C:\Users\<username>\stable-diffusion\models\ldm\stable-diffusion-v1

The stable-diffusion-v1 folder won’t exist by default with most repos, so create it and save the model file to it.

Rename sd-v1-4.ckpt to model.ckpt once it is inside the stable-diffusion-v1 folder.

Since we are already in our stable-diffusion folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.

The first time you do this, your environment will be named ldm, but if you are installing additional repos of Stable Diffusion you'll need to edit the environment name in environment.yaml (with something like Notepad) to something other than ldm to prevent conflicts. I have to do this for this guide since I have multiple installs on my computer.

Type and press enter: conda env create -f environment.yaml

Wait for it to process, this could take some time so be patient.

Once it has finished and you can type again, type in: conda activate ldm

Now you should notice in Miniconda that the prefix changed from (base) to (ldm)

Initialize LStein by typing and entering: python scripts/preload_models.py

Once this finishes you have the option of continuing to use it from the command line, or use it from your web browser with a simple GUI.

I recommend that most people use the web interface, as it is simple and easy to use. To initialize it, type in and enter: python scripts/dream.py --web

The web interface will look like the above, just minus the generations in the example. In order to access it once you’ve run the script, put this address into your browser http://localhost:9090

Images created with the web interface will be saved to:
C:\Users\<username>\stable-diffusion\outputs\txt2img-samples

The web interface will save your images to C:\Users\<username>\stable-diffusion\outputs\img-samples as well as a dream_web_log.txt with all of the details of your prompts for easy referencing later. Images will also be saved with their seed and numbered so that they can be cross referenced with dream_web_log.txt easily.

Basujindal (Windows)

Basujindal is another popular repo of Stable Diffusion. Its main advantage is that it allows people with less VRAM to still use and enjoy SD. This guide will go over how to set up the basic Gradio GUI supported by Basujindal.

Install Git: https://gitforwindows.org/ Download this and accept all of the default settings it offers, except for the default editor selection. When it asks which editor to use by default, most people who are unfamiliar with this should choose Notepad, since everyone has Notepad on Windows.

Download Miniconda3: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
Get this installed so that you have access to the Miniconda3 Prompt Console.

Open the Miniconda3 Prompt from your Start Menu after it has been installed and type:
git clone https://github.com/basujindal/stable-diffusion.git

This will create the stable-diffusion directory in your Windows user folder. Since I already have one there, I’ll show you what it looks like to install it into a different directory.

In the above screenshot I used the mkdir SD-Guide command, which makes a folder named SD-Guide that I can download the Basujindal repository into.

I then used the cd SD-Guide command, which moves us into our new folder. Do this, as we will need to be in this folder for the next part of the guide.

Once a repo has been cloned, updating it is as easy as typing git pull inside of Miniconda while in the repo's top-level directory (the folder created by the clone command). Below you can see I again used the cd command to enter that folder.
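
Put together, the whole sequence from your user folder looks like this:
mkdir SD-Guide
cd SD-Guide
git clone https://github.com/basujindal/stable-diffusion.git
cd stable-diffusion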

Next you are going to want to create a Hugging Face account: https://huggingface.co/

After you have signed up and signed in, go to this link and click on Access Repository:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

After you have authorized your account, go to this link to download the model weights:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt

Download the model into this directory:
C:\Users\<username>\stable-diffusion\models\ldm\stable-diffusion-v1

The stable-diffusion-v1 folder won’t exist by default with most repos, so create it and save the model file to it.

Rename sd-v1-4.ckpt to model.ckpt once it is inside the stable-diffusion-v1 folder.

Since we are already in our stable-diffusion folder in Miniconda, our next step is to create the environment Stable Diffusion needs to work.

The first time you do this, your environment will be named ldm, but if you are installing additional repos of Stable Diffusion you'll need to edit the environment name in environment.yaml (with something like Notepad) to something other than ldm to prevent conflicts. I have to do this for this guide since I have multiple installs on my computer.

Type and press enter: conda env create -f environment.yaml

Wait for it to process, this could take some time so be patient.

Once it has finished and you can type again, type in: conda activate ldm

Now you should notice in Miniconda that the prefix changed from (base) to (ldm)

Basujindal recently implemented its own simplified UI, but in order to use it we need to install Gradio. To do that, enter pip install gradio.

Now that we have Gradio installed we can go ahead and enter either python optimizedSD/txt2img_gradio.py or python optimizedSD/img2img_gradio.py

Enter the URL it gives you after Running on local URL: in our case this is http://127.0.0.1:7860

Images flagged in the web interface will be saved to C:\Users\<username>\stable-diffusion\flagged\ along with a log.csv spreadsheet detailing all of the relevant information associated with your prompts.

Now you're all set to start dreaming with Basujindal's repo!

DreamStudio: Tips and Tricks! (How to save credits!)

Hi everyone! Arkitecc#0339 from the Stable Diffusion discord here! I figured it would be a pretty good idea to start a Tips and Tricks guide for DreamStudio, since I know a lot of people are just now starting to experience it for the first time.

DreamStudio: Introduction

DreamStudio is run by Stability A.I., the company responsible for managing and developing Stable Diffusion alongside their industry and educational partners.

The first important thing to mention is that DreamStudio is an entirely optional service, and only one of many ways in which you can access Stable Diffusion.

Stable Diffusion has recently gone open source, which means that if you have a powerful enough GPU you can run it on your own hardware for free!

Check out the Installation sections of this guide if that interests you.

That aside, DreamStudio is currently in its lite form, which means that it is early days yet for the app and you may run into some bugs. Additionally, certain aspects of the app may not be quite as clear as you would like them to be yet.

If you encounter any credit related bugs, please be sure to submit your issue through this form!

It can be a little bit confusing the first time you open DreamStudio, so I’m aiming to clear up as much of that as I can and help enable you to have as much fun as possible!

All users get 200 Free Credits when they first sign up to DreamStudio. By purchasing additional credits, you can help support Stability A.I. as they continue to push the boundaries of creative expression with cutting edge technology.

To understand how DreamStudio consumes your credits, check out the diagram below.

As you can see here, credits are consumed at a higher rate as you increase the number of steps and the total resolution of your generation. Fortunately, there are ways to be careful about your credit usage, and you'll almost never have to go into the extreme range of 150 Steps or 1024 x 1024. Let's get into it!

Prompting Tips

Firstly, it’s important to understand the content of your prompt when starting to use DreamStudio. What is it that you are actually looking for? Diligent research will go a long way when you are prompting.

Look into things like:
Stylistic keywords.
Artists whose styles you like.
Names of specific artistic genres or disciplines.
Lighting and rendering terms etc.

Building a descriptive prompt goes a long way, and knowing how to tell SD what you’re looking for will always pay off in the end.

Credit Saving & Generation Setting Tips

An easy way to be strategic about your credit spend while working inside of DreamStudio is to first test out your prompts at low steps and low resolution settings. When generating, you’ll see a counter at the top of the page that shows how many credits your current generation will cost.

It's easy to save a boatload of credits this way, while still being able to understand how SD is interpreting your prompt as you develop it further.

I recommend doing all of your testing at 512 x 512 and 50 Steps or lower, as this was what Stable Diffusion was trained on, and will give you the greatest chance of understanding how it will interpret your prompts.

If you get something that looks like it has potential in this testing stage, copy down the prompt and the seed that you used for it. Later on regen the same prompt and seed but at higher dimensions and higher steps if you desire.

Changing the resolution of your image will lead to a different image, but by starting with lower settings first you can get an understanding of how SD is interpreting your prompt, which can help in refining it.

If you keep the same resolution settings however, and lock the seed, you can see how your current generation would be affected at higher steps or higher CFG Scale, which is tremendously useful.

Higher step count is NOT always equivalent to higher quality. Past a certain point (which will continue to change as the model evolves), all you are doing is burning through credits for diminishing returns. For example, I've almost never had to approach 100 Steps or above. Additionally, playing with CFG Scale in concert with your step count has a huge impact on your final result. CFG Scale does not affect credit consumption, so experimenting with that can also be an interesting way to learn more about how SD sees your prompt.

According to the site’s description CFG Scale affects “how much the image will be like your prompt. Higher values keep your image closer to your prompt.” I tend to enjoy working in a range between the default of 7.5 up to about 14 depending on the content and the sampler.

Seed Editing can be a lot of fun and lead to a lot of interesting results! DreamStudio offers a lot of interesting settings that can be modified by sliders, but did you know that if you manually edit the numbers in your seed you can discover wildly cool different results?

Depending on how many numbers you change in your seed and how far apart your changes are from your original generation, you can either gently affect your generation or affect it in a much greater manner.

Experiment with the different samplers, check out this example and this example of the differences each sampler can have on your image!

SAVE 👏 YOUR 👏 IMAGES 👏! DreamStudio currently stores your historical generations in your browser's cache, which means that they could disappear if you aren't careful. I recommend downloading all of your generations as soon as they come up, as well as recording your prompts, seeds, and settings in a text document. You can always delete them later if you'd like to.

Redreaming images from the history panel costs credits, since you’re asking DreamStudio to essentially make it from scratch again, which costs GPU compute.

If you end up getting a blurred result to your prompt, don’t worry! That’s just the NSFW filter doing its job. It’s a little overactive right now as it's being trained to be more accurate, but you won’t lose any credits for blurred images.

External Tools

Don’t be afraid to upsample your generations! If you generate an image, for example, that is 768 x 512 and you wish you could make it a desktop wallpaper or large enough to be printed, upsampling is your friend!

Popular upsampling solutions include: Real-ESRGAN and Gigapixel AI

Automating Runpod G-Drive Art Backups

Hi friends! I figured I'd write up a little guide about how to set up automatic backups for your art generations in the cloud, if you've used my SD-WebUI Guide for getting set up in the Cloud or are running on Linux. I'll update this guide with how to get started like this on Windows soon.

Open up the web terminal for your Runpod, or the terminal on your local instance of Linux, and CTRL + SHIFT + V these commands into it:
wget https://www.thefanclub.co.za/sites/default/files/public/overgrive/overgrive_3.4.6_all.deb
sudo dpkg -i overgrive_3.4.6_all.deb
sudo apt install -f -y

Once this all completes, you’ll want to find overGrive in your Application Finder and run it.

Once you see the above screen you’ll want to click the Change button and set it to your /home/user/stable-diffusion-webui/outputs/ folder.

Once you have your directory selected, you’ll want to make sure the settings are identical to mine, and then click on Connect account.

Go through the authorization process as shown above, and then paste your Authorization Code into the Account field back in overGrive. Hit Validate to connect it to your account.

overGrive offers a free trial for 14 days, and if you like it you can purchase a license (for Ubuntu, in our case) that will stay valid for the machines that you use it on.

Once you’ve gone through everything, hit the Start Sync button. Soon you’ll see your samples folders in your Drive. There’ll be more than just the one folder below if you’ve also used IMG2IMG.

Linux (Ubuntu) Installation Tips

If you're looking to get started on Linux, I've created a suite of bash scripts that should help you get started on Ubuntu-based distros. They were primarily developed to get people running on Runpod GPU servers, but should still functionally allow you to get going on a local installation. All testing and development was done on Ubuntu 20.04.

If you check out this section of the guide: Arki's Easy SD-WebUI after the Runpod specific segments, you should be able to get going. Right now I only support the SD-WebUI repos by AUTOMATIC1111 and HLKY, but in the future I may expand to other repos. Chances are you'll have to update the scripts as they are right now if your root directory isn't /home/user/, but the scripts are easily readable so this shouldn't be too much trouble. In the future I may update the scripts, or create a separate branch, to allow you to enter your local username and avoid having to do this. In the previous edition of my guide I included Linux installation guides for LStein and Basujindal, but I've since discovered a better way to install repos via bash scripts. I'll be providing install scripts for those repos here once I've had the chance to get to them.

In the meantime, you may want to look into the Linux install instructions for each repo:
AUTOMATIC1111's SD-WebUI - Linux
HLKY's SD-WebUI - Linux
LStein - Linux
Basujindal - Docker

Installation Troubleshooting Tips

Occasionally you may run into some trouble when trying to install Stable Diffusion, especially if you are coming to this guide after trying to install it yourself first without following the steps I’ve laid out here.

More often than not, the number one way to get Stable Diffusion running properly on your computer is by clearing out your existing conda environments, and following this guide from the top again (or for the first time).

In order to clear out your existing environments, open up your Anaconda3 / Miniconda3 Prompt and enter:
conda deactivate followed by: conda info --envs to verify the names of the environments you have installed.

Next, you’ll want to enter conda remove --name myenv --all where you replace myenv with the name of the environment you’re looking to remove.
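
For example, to remove the default ldm environment that most of these repos create:
conda remove --name ldm --all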

Enter yes when prompted to confirm the removal of your environment.

This won’t remove the folder that you have already downloaded Stable Diffusion into, but it will allow you to start this guide fresh if your environment was giving you errors when trying to get it running.

My advice, however, would be to save your model.ckpt file somewhere safe, clear out your environment, delete the folders you downloaded, and then run through the entire installation guide from scratch again to ensure no problems arise.

Anaconda3 / Miniconda3 should install everything that you need in order for each Stable Diffusion repo to work when running the conda env create -f environment.yaml command during that part of the installation process.