
Enterprise: Inpainting Endpoint

Overview

The Enterprise Inpainting endpoint is used to change (inpaint) part of an image according to specific requirements, using either your own trained models or public models. Pass the appropriate request parameters to the endpoint.

You can also describe the desired result by passing a prompt and a negative prompt.

Inpainting endpoint result

Request

curl --request POST 'https://stablediffusionapi.com/api/v1/enterprise/inpaint'

Make a POST request to the https://stablediffusionapi.com/api/v1/enterprise/inpaint endpoint and pass the required parameters in the request body.

Watch the how-to video to see it in action.

Attributes

  • key: Your enterprise API key, used for request authorization.
  • model_id: The ID of the model to use. It can be a public model or your own trained model.
  • prompt: Text prompt describing what you want in the generated image.
  • negative_prompt: Items you do not want in the image.
  • init_image: Link to the initial image.
  • mask_image: Link to the mask image for inpainting.
  • width: Width of the image. Maximum dimensions are 1024x1024.
  • height: Height of the image. Maximum dimensions are 1024x1024.
  • samples: Number of images to return in the response. The maximum value is 4.
  • num_inference_steps: Number of denoising steps (minimum: 1; maximum: 50).
  • safety_checker: A checker for NSFW images. If such an image is detected, it will be replaced by a blank image.
  • safety_checker_type: How to modify the image if NSFW content is found; default: sensitive_content_text, options: blur/sensitive_content_text/pixelate/black.
  • enhance_prompt: Enhance the prompt for better results; default: yes, options: yes/no.
  • guidance_scale: Scale for classifier-free guidance (minimum: 1; maximum: 20).
  • strength: Prompt strength when using an init image. 1.0 corresponds to full destruction of information in the init image.
  • tomesd: Enable tomesd to generate images; gives very fast results; default: yes, options: yes/no.
  • use_karras_sigmas: Use Karras sigmas to generate images; gives nice results; default: yes, options: yes/no.
  • algorithm_type: Used with the DPMSolverMultistepScheduler scheduler; default: none, options: dpmsolver++.
  • vae: Use a custom VAE for generating images; default: null.
  • lora_strength: Strength of the LoRA model you are using. If using multiple LoRA models, pass the values comma-separated.
  • lora_model: Multi LoRA is supported; pass comma-separated values, for example contrast-fix,yae-miko-genshin.
  • scheduler: Use it to set a scheduler from the list below.
  • seed: Seed used to reproduce results; the same seed returns the same image again. Pass null for a random number.
  • webhook: Set a URL to receive a POST call once image generation is complete (see the receiver sketch after the info note below).
  • track_id: This ID is returned in the response to the webhook call and is used to identify the webhook request.
  • loadbalancer: Enable the load balancer; options: yes/no, default: no.
  • clip_skip: Clip Skip (minimum: 1; maximum: 8).
  • base64: Get the response as a base64 string; pass init_image and mask_image as base64 strings to get a base64 response; default: "no", options: yes/no. See the sketch after this list.
  • temp: Create a temporary image link that is valid for 24 hours; options: yes/no.
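
If you prefer to send images inline rather than as URLs, pass init_image and mask_image as base64 strings together with base64: "yes". The snippet below is a minimal Node.js sketch under that assumption (Node 18+ for the global fetch API); the local file names are placeholders, and it assumes plain base64 strings without a data-URI prefix are accepted, which is worth verifying against your server.

// Minimal sketch: sending the init and mask images as base64 strings.
// "cat.png" and "cat_mask.png" are placeholder local files.
const fs = require("fs");

const body = {
  key: "enterprise_api_key",
  model_id: "your_model_id",
  prompt: "a cat sitting on a bench",
  init_image: fs.readFileSync("cat.png").toString("base64"),
  mask_image: fs.readFileSync("cat_mask.png").toString("base64"),
  base64: "yes", // also return the result as base64
  width: "512",
  height: "512",
  samples: "1"
};

fetch("https://stablediffusionapi.com/api/v1/enterprise/inpaint", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body)
})
  .then(response => response.json())
  .then(result => console.log(result))
  .catch(error => console.log("error", error));
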
info

To use the load balancer, you need more than one server. Pass the first server's API key, and that server will handle load balancing across the other servers.
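
When a webhook URL is set, the API makes a POST call to that URL once generation is complete, and the track_id you sent is returned so you can match the callback to the original request. Below is a minimal receiver sketch using Express (an assumption; any HTTP framework works). The route path is hypothetical, and callback fields other than track_id and output are not documented here, so inspect a real callback before relying on them.

// Minimal webhook receiver sketch (Express assumed; route path is hypothetical).
const express = require("express");
const app = express();
app.use(express.json());

app.post("/sd-webhook", (req, res) => {
  // track_id identifies which of your requests this callback belongs to;
  // other payload fields are assumptions to verify against a real callback.
  const { track_id, output } = req.body;
  console.log("generation finished for track_id:", track_id);
  console.log("image URLs:", output);
  res.sendStatus(200); // acknowledge the callback
});

app.listen(3000, () => console.log("webhook receiver listening on port 3000"));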

Schedulers

This endpoint also supports schedulers. Use the "scheduler" parameter in the request body to pass a specific scheduler from the list below:

  • DDPMScheduler
  • DDIMScheduler
  • PNDMScheduler
  • LMSDiscreteScheduler
  • EulerDiscreteScheduler
  • EulerAncestralDiscreteScheduler
  • DPMSolverMultistepScheduler
  • HeunDiscreteScheduler
  • KDPM2DiscreteScheduler
  • DPMSolverSinglestepScheduler
  • KDPM2AncestralDiscreteScheduler
  • UniPCMultistepScheduler
  • DDIMInverseScheduler
  • DEISMultistepScheduler
  • IPNDMScheduler
  • KarrasVeScheduler
  • ScoreSdeVeScheduler
  • LCMScheduler
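
Since the same seed reproduces the same image, a practical way to choose between schedulers is to fix the seed and vary only the scheduler value. Below is a rough sketch that reuses the request body from the Example section; the seed value and the three schedulers picked here are arbitrary.

// Sketch: compare schedulers on an identical request by fixing the seed.
const baseBody = {
  key: "enterprise_api_key",
  model_id: "your_model_id",
  prompt: "a cat sitting on a bench",
  init_image: "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
  mask_image: "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png",
  width: "512",
  height: "512",
  samples: "1",
  seed: 12345 // fixed seed so only the scheduler changes between runs
};

for (const scheduler of ["PNDMScheduler", "EulerDiscreteScheduler", "DPMSolverMultistepScheduler"]) {
  fetch("https://stablediffusionapi.com/api/v1/enterprise/inpaint", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...baseBody, scheduler })
  })
    .then(response => response.json())
    .then(result => console.log(scheduler, result.output))
    .catch(error => console.log("error", error));
}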

Example

Body

Body Raw
{
  "key": "enterprise_api_key",
  "model_id": "your_model_id",
  "prompt": "a cat sitting on a bench",
  "negative_prompt": null,
  "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
  "mask_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "strength": 0.7,
  "scheduler": "PNDMScheduler",
  "seed": null,
  "lora_model": null,
  "tomesd": "yes",
  "use_karras_sigmas": "yes",
  "vae": null,
  "lora_strength": null,
  "embeddings_model": null,
  "webhook": null,
  "track_id": null
}

Request

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

// Request body: see the Attributes section above for each parameter.
var raw = JSON.stringify({
  "key": "",
  "model_id": "your_model_id",
  "prompt": "a cat sitting on a bench",
  "negative_prompt": null,
  "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
  "mask_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "guidance_scale": 7.5,
  "strength": 0.7,
  "scheduler": "PNDMScheduler",
  "lora_model": null,
  "tomesd": "yes",
  "use_karras_sigmas": "yes",
  "vae": null,
  "lora_strength": null,
  "embeddings_model": null,
  "seed": null,
  "webhook": null,
  "track_id": null
});

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("https://stablediffusionapi.com/api/v1/enterprise/inpaint", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));

Response

Example Response
{
  "status": "success",
  "generationTime": 20.970642805099487,
  "id": 13446970,
  "output": [
    "https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/dc639bd6-d605-42c7-950e-48c531124d0d-0.png"
  ],
  "meta": {
    "prompt": " a cat sitting on a bench DSLR photography, sharp focus, Unreal Engine 5, Octane Render, Redshift, ((cinematic lighting)), f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame",
    "model_id": "midjourney-v4-painta",
    "scheduler": "PNDMScheduler",
    "safetychecker": "no",
    "negative_prompt": " ((out of frame)), ((extra fingers)), mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), (((tiling))), ((naked)), ((tile)), ((fleshpile)), ((ugly)), (((abstract))), blurry, ((bad anatomy)), ((bad proportions)), ((extra limbs)), cloned face, glitchy, ((extra breasts)), ((double torso)), ((extra arms)), ((extra hands)), ((mangled fingers)), ((missing breasts)), (missing lips), ((ugly face)), ((fat)), ((extra legs))",
    "W": 512,
    "H": 512,
    "guidance_scale": 7.5,
    "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
    "mask_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png",
    "multi_lingual": "no",
    "steps": 50,
    "n_samples": 1,
    "full_url": "no",
    "upscale": "no",
    "seed": 1343687916,
    "outdir": "out",
    "file_prefix": "dc639bd6-d605-42c7-950e-48c531124d0d"
  }
}
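
The output array holds URLs of the generated images, and generationTime reports how long the request took. Below is a minimal Node.js sketch (Node 18+ assumed for the global fetch API) that saves the first result to disk; the output file name is a placeholder. You can use it with the fetch call from the Request section by switching response.text() to response.json().

// Sketch: save the first generated image from a parsed API response.
const fs = require("fs");

async function saveFirstImage(apiResponse) {
  if (apiResponse.status !== "success" || !apiResponse.output || apiResponse.output.length === 0) {
    throw new Error("unexpected response status: " + apiResponse.status);
  }
  const imageResponse = await fetch(apiResponse.output[0]); // download the generated image
  const buffer = Buffer.from(await imageResponse.arrayBuffer());
  fs.writeFileSync("inpaint-result.png", buffer); // placeholder output path
  console.log("saved image generated in", apiResponse.generationTime, "seconds");
}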