
How to Run Stable Diffusion on Your PC to Generate AI Images

 

First, let's define Stable Diffusion. Stable Diffusion is an open-source machine learning model that can generate images from text, alter existing images based on a text prompt, and add detail to low-resolution or low-detail images. Because it was trained on billions of images, it can produce results comparable to those of DALL-E 2 and Midjourney. The model was created by Stability AI and first released to the public on August 22, 2022.
Unlike several other AI text-to-image generators, Stable Diffusion doesn't ship with a polished user interface (yet), but it has a very permissive license, and because it is open source we can run it on our own PC and even fine-tune it for customized image-generation tasks.

What Do You Need to Run Stable Diffusion on Your Computer?

To run a Stable Diffusion model on your computer, it should meet at least the following requirements (roughly, a gaming laptop or better):
  • A GPU with at least 6 gigabytes (GB) of VRAM (this covers most modern NVIDIA GPUs)
  • Roughly 10 GB of free space on your hard drive or solid-state drive
  • The Miniconda3 installer
  • The Stable Diffusion files from GitHub
  • The latest checkpoint (version 1.4 as of this writing; version 1.5 should be released soon)
  • The Git Installer
  • Windows 8, 10, or 11
  • Stable Diffusion can also be run on Linux and macOS
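If you're not sure how much VRAM your GPU has, here is a quick sketch using NVIDIA's nvidia-smi utility, which ships with the NVIDIA driver (this assumes an NVIDIA GPU; it won't report anything useful for other vendors):

```shell
# Print each GPU's name and total memory; you want 6144 MiB (6 GB) or more.
# If the command isn't found, your NVIDIA driver is missing or outdated.
nvidia-smi --query-gpu=name,memory.total --format=csv || echo "nvidia-smi not found - install or update your NVIDIA driver"
```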
If you can't meet these requirements, consider running Stable Diffusion on a cloud platform such as Google Colab, which offers a free GPU quota. And if you just want to explore and generate a few images, you can use a web-based AI image generator, or one of the web apps built on the Stable Diffusion API.

How to Install and Run Stable Diffusion on Windows

To install Stable Diffusion on Windows, you first need to install Git and either Anaconda 3 or Miniconda 3.

1. Install Git

Git offers an easy way to download and keep up with projects like this even if you're not a developer, so that's what we'll use here. To install Git, download and run the Windows x64 installer from the Git website.
While the installer is running, you'll be offered a number of options; leave them all at their default values. The crucial one is the "Adjusting Your PATH Environment" page: make sure "Git From The Command Line And Also From 3rd-Party Software" is selected.
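To confirm the install worked, open a new Command Prompt window and ask Git for its version:

```shell
# If Git is on your PATH, this prints a line like "git version 2.41.0.windows.1"
git --version
```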

2. Install Miniconda 3

Miniconda3 is essentially a convenience tool: it manages all of the libraries Stable Diffusion needs without a lot of manual work, and it is also how we'll actually run Stable Diffusion. To get the most recent installer, visit the Miniconda3 download page and select "Miniconda3 Windows 64-bit."

Once it has downloaded, double-click the executable to launch the installation. Installing Miniconda3 requires fewer clicks than Git, but you should pay attention to one choice:

Before clicking Next and completing the installation, make sure "All Users" is selected.
After setting up Git and Miniconda3, your computer may ask you to restart. We didn't find it necessary, but restarting won't hurt.
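As with Git, you can sanity-check the Miniconda3 install by opening the Miniconda3 prompt from the Start Menu and printing conda's version:

```shell
# Prints a line like "conda 23.5.2" when Miniconda3 is installed correctly.
# The fallback message appears if you ran this outside the Miniconda3 prompt.
conda --version || echo "conda not found - make sure you opened the Miniconda3 prompt"
```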
Now that Git and Miniconda3 are installed, it's time to get the Stable Diffusion files from GitHub.

3. Download the Stable Diffusion GitHub Repository and the Latest Checkpoint

With the prerequisite software installed, we are now ready to download and install Stable Diffusion itself.
Download the most recent checkpoint first; version 1.4 is approximately 5 GB, so it may take some time. You must create an account to download the checkpoint, but all they ask for is a name and email address; the rest is up to you.
Click “sd-v1-4.ckpt” to start the download.

Next, download Stable Diffusion itself from GitHub: click the green "Code" button, then "Download ZIP".

We now need to set up a folder where we can unpack the Stable Diffusion files. Click the Start button, type "miniconda3" into the search bar, then click "Open" or press Enter.
Using the command line, we'll make a folder called "stable-diffusion". Paste the following block into the Miniconda3 window and press Enter.

cd C:/
mkdir stable-diffusion
cd stable-diffusion

We'll need Miniconda3 again in a moment, so keep it open.
Open the "stable-diffusion-main.zip" archive you downloaded from GitHub in your preferred file archiver; if you don't have one, Windows can open ZIP files on its own. Keeping the ZIP file open in the first window, open a second File Explorer window and navigate to the "C:\stable-diffusion" folder we just created.
Drag and drop the "stable-diffusion-main" folder from the ZIP archive into the "stable-diffusion" folder.

Go back to Miniconda3, then copy and paste the following commands into the window:
cd C:\stable-diffusion\stable-diffusion-main
conda env create -f environment.yaml
conda activate ldm
mkdir models\ldm\stable-diffusion-v1


Note: don't interrupt this procedure. Some of the files are more than a gigabyte in size, so they can take a while to download. If you accidentally stop the process, you must delete the half-built environment and run conda env create -f environment.yaml again: delete the "ldm" folder in "C:\Users\(Your User Account)\.conda\envs" (or run conda env remove --name ldm), then rerun the command.
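Once conda activate ldm succeeds, a quick way to confirm the environment can see your GPU is to ask the PyTorch build it installed (a sketch, assuming the environment built cleanly and is active):

```shell
# Prints "True" if PyTorch inside the ldm environment can reach a CUDA GPU,
# "False" if not; the fallback message means the environment isn't active.
python -c "import torch; print(torch.cuda.is_available())" || echo "torch not importable - is the ldm environment active?"
```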

We've reached the installation's last phase. Using File Explorer, copy the checkpoint file (sd-v1-4.ckpt) into the "C:\stable-diffusion\stable-diffusion-main\models\ldm\stable-diffusion-v1" folder.

After the file has finished transferring, right-click "sd-v1-4.ckpt", click "Rename" in the context menu, type "model.ckpt" in the highlighted box, and press Enter.
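If you prefer the command line, here is a hypothetical one-liner that does the copy and rename in a single step from a Command Prompt (an assumption: your browser saved the checkpoint to your Downloads folder; adjust the source path if not):

```shell
# Copy the checkpoint into the folder Stable Diffusion expects, renaming it
# to model.ckpt in the same step (Windows Command Prompt syntax).
copy "%USERPROFILE%\Downloads\sd-v1-4.ckpt" "C:\stable-diffusion\stable-diffusion-main\models\ldm\stable-diffusion-v1\model.ckpt" || echo "copy failed - check the source path"
```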

And with that, we are done. We are now ready to actually use Stable Diffusion.


Next steps: in the next article, we will show you how to run Stable Diffusion on your system and get some results. Stay tuned.
