
How to generate Images from Images using Stable Diffusion API?

Posted in Stable Diffusion API.

Stable Diffusion API is a company that specializes in image generation through its API-as-a-Service offerings. It provides customers with access to a variety of advanced models, allowing them to create custom images with ease. With a focus on simplicity and reliability, Stable Diffusion API is the ideal solution for businesses and individuals looking to generate high-quality images quickly and efficiently. 

 

The versatility of the models offered by Stable Diffusion API allows for a wide range of potential uses. Whether you need images for a business presentation, a marketing campaign, or simply for personal use, Stable Diffusion API has you covered. Their cutting-edge technology ensures that you can create stunning images that meet your specific requirements, every time. 

 

With Stable Diffusion API, the image generation process is streamlined and efficient. The APIs are easy to use, and the results are of the highest quality. In this tutorial, we will see how to generate images from existing images using prompts.

 

Getting started with the Image-to-Image API

Creating an Account

To use the API or the playground, you need an active registered account. Sign up on the official website to get your API key. You can find the subscription plans on the pricing page. After signing up, you can see your API key on your dashboard. It looks something like below.

 

Click ‘View’ to reveal your API key. Now that you have the key, you can start making API calls to generate images. In the section below, we will see how to generate images using the Python requests module.

 

Using API calls

The Image-to-Image API takes an image as input and generates a new image based on a prompt without changing the composition of the original. We will use the Stable Diffusion API's img2img endpoint to generate these images.

 

First, we import the requests module and set the URL of the API endpoint. Next, we define the payload, which contains the parameters the API will use. Finally, we send a POST request to the endpoint and print the response. The entire code is shown below:

 

import requests

# Endpoint for image-to-image generation
url = "https://stablediffusionapi.com/api/v3/img2img"

# Request parameters: replace "Your API key" with the key from your dashboard
payload = {
    "key": "Your API key",
    "prompt": "a cat sitting on a bench",
    "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "guidance_scale": 7.5,
    "strength": 0.7
}
headers = {}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)

 

Running the above code prints the API's JSON response, which contains a link to the generated image.
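If you want to pull that link out of the response programmatically, a minimal sketch is shown below. It assumes the response is JSON and that the image URLs are returned under an "output" field; the exact field name may differ depending on your API version, so adjust it to match the response you receive.

# Minimal sketch: extract the generated image link from the response.
# Assumption: the response is JSON and the image URLs appear under an
# "output" field; adjust the field name if your response differs.
result = response.json()

image_links = result.get("output", [])
if image_links:
    print("Generated image:", image_links[0])
else:
    # The request may still be processing or may have failed;
    # inspect the full response for its status.
    print("No image link found:", result)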

Essentially, we provided an image as input and asked the AI to generate a new image based on the prompt. The input image we used looks like the one below:

 

We want a cat sitting on the bench, in the same style as the image above, so we used the prompt ‘a cat sitting on a bench’. The output is as follows:

 

Generating images with Stable Diffusion models usually takes a long time, but with the Stable Diffusion API you can generate images within seconds. You can also make the same request with Python's built-in http.client module.
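Here is a minimal sketch of the same call using http.client. The payload mirrors the requests example above; sending it as a JSON body with a Content-Type header is an assumption about what the endpoint accepts.

import http.client
import json

# Open a connection to the API host
conn = http.client.HTTPSConnection("stablediffusionapi.com")

# Same payload as in the requests example above
payload = json.dumps({
    "key": "Your API key",
    "prompt": "a cat sitting on a bench",
    "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "guidance_scale": 7.5,
    "strength": 0.7
})
headers = {"Content-Type": "application/json"}

conn.request("POST", "/api/v3/img2img", body=payload, headers=headers)
res = conn.getresponse()
print(res.read().decode("utf-8"))

Now let's look at the different parameters that this endpoint uses.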

Parameters for the Image-to-Image endpoint

The Image-to-Image endpoint accepts several parameters for generating images in production and for customizing the output. The following parameters are available for this endpoint:

 

{
  "key": Your API key.
  "prompt": A sentence describing the image to generate.
  "negative_prompt": A sentence describing the things you don't want in the image.
  "init_image": The base image from which the new image is generated.
  "width": Width of the output image. Maximum size is 1024x768 or 768x1024.
  "height": Height of the output image. Maximum size is 1024x768 or 768x1024.
  "samples": The number of images to generate.
  "num_inference_steps": The number of denoising steps (minimum: 1, maximum: 50).
  "guidance_scale": Scale for classifier-free guidance (minimum: 1, maximum: 20).
  "safety_checker": A filter for NSFW content.
  "enhance_prompt": Enhances the prompt for better results.
  "strength": Corresponds to the prompt strength.
  "seed": A random seed used to generate the image.
  "webhook": A webhook to receive the generated image.
  "track_id": A tracking ID to track your API call.
}
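To illustrate how these parameters fit together, here is a sketch of a payload that adds a negative prompt and a fixed seed to the earlier request. The values and the init_image URL are placeholders rather than recommendations.

# Illustrative payload combining several of the parameters above.
# All values are placeholders, not recommendations.
payload = {
    "key": "Your API key",
    "prompt": "a cat sitting on a bench",
    "negative_prompt": "blurry, low quality, distorted",  # things to avoid in the image
    "init_image": "https://example.com/your-base-image.png",  # placeholder URL
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "guidance_scale": 7.5,
    "strength": 0.7,        # how strongly the prompt transforms the init image
    "seed": 12345,          # fixed seed for reproducible results
    "safety_checker": "yes",
    "enhance_prompt": "yes"
}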

 

Advantages of Using the Stable Diffusion API for Image-to-Image Generation

The main advantages of using the img2img endpoint are:

  • Efficiency: With the image-to-image generation endpoint, customers can quickly and easily generate images, saving time and resources compared to manual processes.
  • Flexibility: The API offers a range of options for customizing images, giving customers the flexibility to specify and change the desired output to meet their unique requirements.
  • Integration: The API is designed for easy integration into a wide range of systems and applications, providing customers with seamless access to image generation capabilities (see the sketch after this list).
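As one way to wire the endpoint into an application, the sketch below wraps the call in a small helper function. The function name and its defaults are hypothetical, not part of the API.

import requests

IMG2IMG_URL = "https://stablediffusionapi.com/api/v3/img2img"

def generate_image_from_image(api_key, prompt, init_image, **extra_params):
    """Hypothetical helper: post a request to the img2img endpoint
    and return the parsed JSON response."""
    payload = {
        "key": api_key,
        "prompt": prompt,
        "init_image": init_image,
        "width": "512",
        "height": "512",
        "samples": "1",
    }
    payload.update(extra_params)  # pass through any other endpoint parameters
    response = requests.post(IMG2IMG_URL, data=payload)
    response.raise_for_status()
    return response.json()

A caller can then pass any of the parameters listed above, for example generate_image_from_image("Your API key", "a cat sitting on a bench", image_url, strength=0.7).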

Conclusion

Image-to-image generation is a powerful tool for a variety of applications, and the img2img endpoint provided by Stable Diffusion API makes it easy to access this capability. With its scalability, customization options, and ease of integration, the img2img endpoint is an ideal solution for organizations looking to streamline their image generation processes.

 

If you're interested in learning more about the img2img endpoint and how it can benefit your organization, be sure to check out Stable Diffusion API. And don't hesitate to reach out to us for more information or to get started with a subscription. With the img2img endpoint, you can take your image generation capabilities to the next level!