Using Stable Diffusion with Python

By: Andrew Zhu (Shudong Zhu)

Overview of this book

Stable Diffusion is a game-changing AI tool that enables you to create stunning images with code. The author, a seasoned Microsoft applied data scientist and contributor to the Hugging Face Diffusers library, leverages his 15+ years of experience to help you master Stable Diffusion by understanding the underlying concepts and techniques. You’ll be introduced to Stable Diffusion, grasp the theory behind diffusion models, set up your environment, and generate your first image using diffusers. You'll optimize performance, leverage custom models, and integrate community-shared resources like LoRAs, textual inversion, and ControlNet to enhance your creations. Covering techniques such as face restoration, image upscaling, and image restoration, you’ll focus on unlocking prompt limitations, scheduled prompt parsing, and weighted prompts to create a fully customized and industry-level Stable Diffusion app. This book also looks into real-world applications in medical imaging, remote sensing, and photo enhancement. Finally, you'll gain insights into extracting generation data, ensuring data persistence, and leveraging AI models like BLIP for image description extraction. By the end of this book, you'll be able to use Python to generate and edit images and leverage solutions to build Stable Diffusion apps for your business and users.
Table of Contents (29 chapters)
  • Part 1 – A Whirlwind of Stable Diffusion
  • Part 2 – Improving Diffusers with Custom Features
  • Part 3 – Advanced Topics
  • Part 4 – Building Stable Diffusion into an Application

Usage of ControlNet

Before diving into the backend of ControlNet, in this section we will start using ControlNet to help control image generation.

In the following example, we will first generate an image using SD, extract the Canny edge image of the object, and then use that Canny image to generate a new image with the help of ControlNet.

Note

A Canny image refers to an image that has undergone Canny edge detection, which is a popular edge detection algorithm. It was developed by John F. Canny in 1986. [7]
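
To make the idea concrete, here is a minimal sketch of producing a Canny image with OpenCV; the input file name and the 100/200 thresholds are placeholder assumptions, not values from this chapter:

    import cv2
    import numpy as np
    from PIL import Image
    # load an image as a NumPy array (the file name is a placeholder)
    image = np.array(Image.open("sample.png"))
    # run Canny edge detection with typical low/high thresholds
    edges = cv2.Canny(image, 100, 200)
    # the result is a single-channel black-and-white edge map
    Image.fromarray(edges).save("sample_canny.png")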

Let’s use SD to generate an image using the following code:

  1. Generate a sample image using SD:
    import torch
    from diffusers import StableDiffusionPipeline
    # load model
    text2img_pipe = StableDiffusionPipeline.from_pretrained(
        "stablediffusionapi/deliberate-v2",
        torch_dtype=torch.float16
    ).to("cuda:0")
    # generate sample image
    prompt = """
    high resolution photo,best quality, masterpiece...

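Based on the workflow described above (generate an image, extract its Canny edges, then let ControlNet condition a new generation on those edges), a minimal sketch of the remaining steps could look like the following. The ControlNet checkpoint lllyasviel/sd-controlnet-canny, the thresholds, and the reuse of the same prompt here are assumptions for illustration, not the chapter's exact code:

    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    # assume `image` is the sample image generated by text2img_pipe above, e.g.:
    # image = text2img_pipe(prompt=prompt).images[0]
    # extract the Canny edge map and stack it into a 3-channel control image
    edges = cv2.Canny(np.array(image), 100, 200)
    canny_image = Image.fromarray(np.stack([edges] * 3, axis=2))
    # load a Canny-conditioned ControlNet and attach it to the same base model
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny",
        torch_dtype=torch.float16
    )
    cn_pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "stablediffusionapi/deliberate-v2",
        controlnet=controlnet,
        torch_dtype=torch.float16
    ).to("cuda:0")
    # generate a new image whose composition follows the Canny edges
    new_image = cn_pipe(prompt=prompt, image=canny_image).images[0]

Because the edge map preserves the outline of the original subject, the new image keeps its composition while the prompt steers style and detail.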